From varnish-bugs at varnish-cache.org Wed Jul 1 12:03:42 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 01 Jul 2015 12:03:42 -0000 Subject: [Varnish] #1757: If-Match header wrong format Message-ID: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> #1757: If-Match header wrong format ---------------------------------+---------------------- Reporter: vko | Type: defect Status: new | Priority: normal Milestone: Varnish 4.0 release | Component: varnishd Version: 4.0.3 | Severity: normal Keywords: If-Match | ---------------------------------+---------------------- We have some trouble with the If-Match header. So far we understand If-Match or If-None-Match should be in this format If-None-Match: "xyzzy" If-None-Match: W/"xyzzy" If-None-Match: "xyzzy", "r2d2xxxx", "c3piozzzz" If-None-Match: W/"xyzzy", W/"r2d2xxxx", W/"c3piozzzz" If-None-Match: * (Examples from http://www.freesoft.org/CIE/RFC/2068/187.htm) In the varnishlog we see the format BereqHeader If-Match: W/0 As you can see, it is without quotation marks. Our expectation is BereqHeader If-Match: W/"0" It seems to be a bug in varnish. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jul 1 13:16:14 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 01 Jul 2015 13:16:14 -0000 Subject: [Varnish] #1757: If-Match header wrong format In-Reply-To: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> References: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> Message-ID: <056.7ec83b3f050f6e034d03340e188b2ab0@varnish-cache.org> #1757: If-Match header wrong format ----------------------+---------------------------------- Reporter: vko | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: If-Match | ----------------------+---------------------------------- Comment (by fgsch): Works for me.
Is your backend sending Etag: "0" or Etag: 0? The entity-tag definition is: entity-tag = [ weak ] opaque-tag weak = "W/" opaque-tag = quoted-string -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jul 1 14:13:07 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 01 Jul 2015 14:13:07 -0000 Subject: [Varnish] #1757: If-Match header wrong format In-Reply-To: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> References: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> Message-ID: <056.a004e0de8cf29e0e5b39bb2e9ee8c9ab@varnish-cache.org> #1757: If-Match header wrong format ----------------------+---------------------------------- Reporter: vko | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: If-Match | ----------------------+---------------------------------- Description changed by fgsch: Old description: > We have some trouble with the If-Match header. So far we understand If- > Match or If-None-Match should be like this format > > If-None-Match: "xyzzy" > If-None-Match: W/"xyzzy" > If-None-Match: "xyzzy", "r2d2xxxx", "c3piozzzz" > If-None-Match: W/"xyzzy", W/"r2d2xxxx", W/"c3piozzzz" > If-None-Match: * > (Examples from http://www.freesoft.org/CIE/RFC/2068/187.htm) > > In the varnishlog we see the format like > > BereqHeader If-Match: W/0 > > As you can see without quotation marks. Our expectation is like > > BereqHeader If-Match: W/"0" > > It seems to be a bug in varnish. New description: We have some trouble with the If-Match header. 
So far we understand If-Match or If-None-Match should be in this format {{{ If-None-Match: "xyzzy" If-None-Match: W/"xyzzy" If-None-Match: "xyzzy", "r2d2xxxx", "c3piozzzz" If-None-Match: W/"xyzzy", W/"r2d2xxxx", W/"c3piozzzz" If-None-Match: * (Examples from http://www.freesoft.org/CIE/RFC/2068/187.htm) }}} In the varnishlog we see the format BereqHeader If-Match: W/0 As you can see, it is without quotation marks. Our expectation is BereqHeader If-Match: W/"0" It seems to be a bug in varnish. -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jul 2 09:23:30 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 02 Jul 2015 09:23:30 -0000 Subject: [Varnish] #1757: If-Match header wrong format In-Reply-To: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> References: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> Message-ID: <056.ba1d76ab6c9e191159e04feedbacd1e8@varnish-cache.org> #1757: If-Match header wrong format ----------------------+---------------------------------- Reporter: vko | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: If-Match | ----------------------+---------------------------------- Comment (by vko): Ok. After researching with our developers, we now know what exactly the problem is. The If-Match header is not the problem, but ETag, because If-Match gets its value from ETag. The problem is that varnish sets ETag in this format ETag: W/7 According to RFC2616 (http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.19, see Examples) the right format should be ETag: W/"7" Should I open a new bug, since the problem is ETag and not the If-Match header?
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jul 2 09:46:19 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 02 Jul 2015 09:46:19 -0000 Subject: [Varnish] #1757: If-Match header wrong format In-Reply-To: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> References: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> Message-ID: <056.3f12d2376db5662d478b77bb9a0b4e1d@varnish-cache.org> #1757: If-Match header wrong format ----------------------+---------------------------------- Reporter: vko | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: If-Match | ----------------------+---------------------------------- Comment (by scoof): The ETag is set by your backend, not varnish. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jul 2 10:31:06 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 02 Jul 2015 10:31:06 -0000 Subject: [Varnish] #1757: If-Match header wrong format In-Reply-To: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> References: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> Message-ID: <056.2cb6f932a56eed5df8c3905685efba2a@varnish-cache.org> #1757: If-Match header wrong format ----------------------+---------------------------------- Reporter: vko | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: If-Match | ----------------------+---------------------------------- Comment (by vko): Yes, the backend sets the ETag, but varnish changes it if it should be "weak". In our case the backend sets the ETag to "7". When varnish sees that the ETag has to be modified because of "weakness", it modifies the ETag in the wrong format. It should be ETag: W/"7", but we see W/7 in the varnishlog.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jul 2 10:49:53 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 02 Jul 2015 10:49:53 -0000 Subject: [Varnish] #1757: If-Match header wrong format In-Reply-To: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> References: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> Message-ID: <056.7d4e3f152dc2e61923f251d63e5325fc@varnish-cache.org> #1757: If-Match header wrong format ----------------------+---------------------------------- Reporter: vko | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: If-Match | ----------------------+---------------------------------- Comment (by fgsch): Please provide a varnishlog capture. In my tests if the backend sends the ETag in the correct format, varnish will respect it and the resulting header is W/"". -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jul 2 13:55:39 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 02 Jul 2015 13:55:39 -0000 Subject: [Varnish] #1757: If-Match header wrong format In-Reply-To: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> References: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> Message-ID: <056.8a0b03a7a8e389f8a9c6670544eef766@varnish-cache.org> #1757: If-Match header wrong format ----------------------+---------------------------------- Reporter: vko | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: If-Match | ----------------------+---------------------------------- Comment (by vko): Sorry, the information I provided to you about Etag seems to be not right. Our developers are checking if backend sets Etag to "7" or to 7. I'll write it later. Sorry. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jul 3 18:15:48 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 03 Jul 2015 18:15:48 -0000 Subject: [Varnish] #1755: The (struct backend).n_conn counter is never decremented In-Reply-To: <043.496bde3b5afd5f126177e7dbadac9b7c@varnish-cache.org> References: <043.496bde3b5afd5f126177e7dbadac9b7c@varnish-cache.org> Message-ID: <058.6edb0113a79114da958d62376236fc32@varnish-cache.org> #1755: The (struct backend).n_conn counter is never decremented ----------------------+-------------------- Reporter: Dridi | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: blocker | Resolution: Keywords: | ----------------------+-------------------- Comment (by Dridi): I have a patch that solves this issue, but also maintains consistency between the {{{refcount}}} and {{{n_conn}}} fields, and also with the {{{(struct tcp_pool).n_used}}} field. It also solves another issue in pending work on dynamic backends. The code is currently not ready to be submitted, it needs a bit of cleaning, especially with regards to locking operations. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Jul 5 23:59:28 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 05 Jul 2015 23:59:28 -0000 Subject: [Varnish] #1635: Completed bans keep accumulating In-Reply-To: <043.d9a46f331b8fae5fc5502a3059307d4c@varnish-cache.org> References: <043.d9a46f331b8fae5fc5502a3059307d4c@varnish-cache.org> Message-ID: <058.8bcb3c1dcfc6f422542891338dfc2254@varnish-cache.org> #1635: Completed bans keep accumulating ----------------------+-------------------- Reporter: Sesse | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: lurker | ----------------------+-------------------- Comment (by karlvr): Registering my interest in this ticket. I have exactly the same scenario on my Varnish 4.0.3 instance. bans_deleted sits at 1. Always. bans and bans_completed are 505395 and 450786 respectively (currently). The Varnish server becomes slower and slower to serve requests, so I restart it at least once per day. I'm happy to run tests, and to try out patches. Should bans_deleted be going up? It seems like it should be? 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 6 03:41:05 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jul 2015 03:41:05 -0000 Subject: [Varnish] #1635: Completed bans keep accumulating In-Reply-To: <043.d9a46f331b8fae5fc5502a3059307d4c@varnish-cache.org> References: <043.d9a46f331b8fae5fc5502a3059307d4c@varnish-cache.org> Message-ID: <058.6b70a64766067d59f25917c6c779105c@varnish-cache.org> #1635: Completed bans keep accumulating ----------------------+-------------------- Reporter: Sesse | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: lurker | ----------------------+-------------------- Comment (by karlvr): Attempting to debug this: in `cache_ban.c`: `ban_cleantail` is being called, as expected. However the ban at the `ban_head` has a `refcount` > 0, which I guess means that there are objects in the cache that still refer to it? So am I correct in assuming that bans can only be deleted once there are no older objects in the cache? In which case perhaps the behaviour I'm observing, and others are observing above, is expected? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 6 03:48:59 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jul 2015 03:48:59 -0000 Subject: [Varnish] #1758: Open ended range request results in truncated images Message-ID: <042.59608046e2c72aa2a96dbec825189975@varnish-cache.org> #1758: Open ended range request results in truncated images -----------------------------+-------------------- Reporter: jafa | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 4.0.3 | Severity: normal Keywords: range truncated | -----------------------------+-------------------- Symptom: client (Kodi) getting truncated images. Client (Kodi) sent requests HEAD, HEAD, GET. 
The GET request included "Range: bytes=0-". Varnish responded: HEAD: "Content-Length: 50726" HEAD: "Content-Length: 50726" GET: "Content-Range: bytes 0-14157/14158" and "Content-Length: 14158" A few minutes later I hit it from a web browser (no range request). Varnish responded with "Content-Length: 50726" and delivered the complete data. Full headers in attached files. Workaround: "unset req.http.Range". This avoids the problem; images are delivered correctly. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 6 06:52:39 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jul 2015 06:52:39 -0000 Subject: [Varnish] #1755: The (struct backend).n_conn counter is never decremented In-Reply-To: <043.496bde3b5afd5f126177e7dbadac9b7c@varnish-cache.org> References: <043.496bde3b5afd5f126177e7dbadac9b7c@varnish-cache.org> Message-ID: <058.56b68f8ad67b25f5daeb21e35408954d@varnish-cache.org> #1755: The (struct backend).n_conn counter is never decremented ----------------------+-------------------- Reporter: Dridi | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: blocker | Resolution: Keywords: | ----------------------+-------------------- Comment (by Dridi): Patch submitted: https://www.varnish-cache.org/lists/pipermail/varnish-dev/2015-July/008389.html -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 6 09:59:08 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jul 2015 09:59:08 -0000 Subject: [Varnish] #1759: VMOD abi check is always lenient Message-ID: <046.c750f774ffe7fd6b92acc7efadf37dd3@varnish-cache.org> #1759: VMOD abi check is always lenient ----------------------+------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords:
----------------------+------------------- In 4.0, the VMOD version check is currently always a check against the VRT_(MAJOR|MINOR)_VERSION fields. If I write a VMOD that uses internal structures (like ctx->bo->synth_body), and Martin goes off and adds something further up into struct busyobj (or struct http_conn, to pick something "completely at random"), the VMOD compiled for the old version goes sideways when loaded into the new version. There is no way for the VMOD writer to signal it, and there is no error indication for the sysadmin. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 6 11:06:31 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jul 2015 11:06:31 -0000 Subject: [Varnish] #1759: VMOD abi check is always lenient In-Reply-To: <046.c750f774ffe7fd6b92acc7efadf37dd3@varnish-cache.org> References: <046.c750f774ffe7fd6b92acc7efadf37dd3@varnish-cache.org> Message-ID: <061.ac32cc138b0feca831185d06d440ea05@varnish-cache.org> #1759: VMOD abi check is always lenient ----------------------+----------------------- Reporter: lkarsten | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Changes (by lkarsten): * owner: => lkarsten Comment: Discussed at bugwash. Suggestion is to add a "VRT | full" flag to vmod.vcc, so the vmod writer can decide. Write a patch.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 6 11:09:03 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jul 2015 11:09:03 -0000 Subject: [Varnish] #1758: Open ended range request results in truncated images In-Reply-To: <042.59608046e2c72aa2a96dbec825189975@varnish-cache.org> References: <042.59608046e2c72aa2a96dbec825189975@varnish-cache.org> Message-ID: <057.14d78e7b0d563a1abc90ed9aa4c0c8e8@varnish-cache.org> #1758: Open ended range request results in truncated images -----------------------------+-------------------- Reporter: jafa | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: range truncated | -----------------------------+-------------------- Comment (by lkarsten): This should be fixed in git master. Please try that. We need to look if this should be ported back to 4.0. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 6 11:14:15 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jul 2015 11:14:15 -0000 Subject: [Varnish] #1756: /etc/init.d/varnish reload always returns 0 on debian/ubuntu In-Reply-To: <044.b6984289b41a3c6207dadd00b7d1e379@varnish-cache.org> References: <044.b6984289b41a3c6207dadd00b7d1e379@varnish-cache.org> Message-ID: <059.77893212684cf03d0c8226bb69e03ae4@varnish-cache.org> #1756: /etc/init.d/varnish reload always returns 0 on debian/ubuntu --------------------------------------------+----------------------- Reporter: fleish | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: Keywords: init init.d reload exit return | --------------------------------------------+----------------------- Changes (by lkarsten): * owner: => lkarsten -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org 
Mon Jul 6 11:20:56 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jul 2015 11:20:56 -0000 Subject: [Varnish] #1754: n_objecthead increase just before oomkill In-Reply-To: <046.913b54b05f4537fbf8fe340cac2f9093@varnish-cache.org> References: <046.913b54b05f4537fbf8fe340cac2f9093@varnish-cache.org> Message-ID: <061.f3ab508f6575d5b3acfffff1f94e6f5a@varnish-cache.org> #1754: n_objecthead increase just before oomkill ----------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Changes (by lkarsten): * version: trunk => 4.0.3 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 6 11:22:09 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jul 2015 11:22:09 -0000 Subject: [Varnish] #1754: n_objecthead increase just before oomkill In-Reply-To: <046.913b54b05f4537fbf8fe340cac2f9093@varnish-cache.org> References: <046.913b54b05f4537fbf8fe340cac2f9093@varnish-cache.org> Message-ID: <061.c6e7380676d6cacb78a62162a462d2f6@varnish-cache.org> #1754: n_objecthead increase just before oomkill ----------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Changes (by slink): * cc: nils.goroll@? 
(added) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 6 11:31:15 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jul 2015 11:31:15 -0000 Subject: [Varnish] #1754: n_objecthead increase just before oomkill In-Reply-To: <046.913b54b05f4537fbf8fe340cac2f9093@varnish-cache.org> References: <046.913b54b05f4537fbf8fe340cac2f9093@varnish-cache.org> Message-ID: <061.c41c09d0dd910ab56d9d717aa28b009e@varnish-cache.org> #1754: n_objecthead increase just before oomkill ----------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by slink): Out of curiosity, is this on bare metal or virtualized? If yes, please name the hypervisor. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 6 11:31:57 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jul 2015 11:31:57 -0000 Subject: [Varnish] #1754: n_objecthead increase just before oomkill In-Reply-To: <046.913b54b05f4537fbf8fe340cac2f9093@varnish-cache.org> References: <046.913b54b05f4537fbf8fe340cac2f9093@varnish-cache.org> Message-ID: <061.ed0afeccf8db97299cbb6b20587335b7@varnish-cache.org> #1754: n_objecthead increase just before oomkill ----------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by slink): Also, following a suspicion: Does this also happen with khugepaged disabled? 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 6 11:59:37 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jul 2015 11:59:37 -0000 Subject: [Varnish] #1760: varnishstat -w dies if varnish is restarted. Message-ID: <046.e8f2d93bd8d8e1b13b5e3d08d51989fc@varnish-cache.org> #1760: varnishstat -w dies if varnish is restarted. -------------------------+------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishstat | Version: trunk Severity: normal | Keywords: -------------------------+------------------- The -w (wait) functionality in varnishstat is there to write a continuous feed of varnishstat counters to stdout. If varnishd is restarted, there is very little error handling in varnishstat, just an exit(1). It is usually in these conditions that the numbers output by varnishstat are the most useful. Expected: varnishstat gracefully handles varnish going away and coming back. After discussing this with Martin for a bit, I think we should either: 1) Fix varnishstat to be a proper daemon (write to a logfile by itself, handle SIGHUP, example init script to start/stop collection), or 2) remove -w and say that anyone needing this can use a shell loop instead.
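Option 2 is cheap for users. An editor's sketch of such a shell loop follows; the `INTERVAL` and `ITERATIONS` variables are inventions of this sketch (with `ITERATIONS` only there to keep the example finite), while `varnishstat -1`, the existing one-shot mode, does the actual work. Because each pass reattaches to shared memory from scratch, a varnishd restart costs at most one missed sample instead of killing the feed.

```shell
#!/bin/sh
# Editor's stand-in for 'varnishstat -w': print one batch of counters
# per interval, surviving varnishd restarts because every iteration
# reattaches to the (possibly new) shared memory segment.
INTERVAL=${INTERVAL:-1}
ITERATIONS=${ITERATIONS:-3}   # a real deployment would loop forever

i=0
while [ "$i" -lt "$ITERATIONS" ]; do
    # -1 prints all counters once and exits non-zero on failure;
    # when varnishd is down we just note it and retry next interval.
    varnishstat -1 2>/dev/null || echo "varnishd unavailable" >&2
    sleep "$INTERVAL"
    i=$((i + 1))
done
```

Logging to a file, timestamping, and rotation would be the wrapper's job, which is also the main argument for option 1's proper daemon.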
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 6 15:14:32 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jul 2015 15:14:32 -0000 Subject: [Varnish] #1761: 204 responses intermittently delivered as chunk-encoded with length byte = 0 Message-ID: <043.4c8744d010240f820f7b2440cfafa2fc@varnish-cache.org> #1761: 204 responses intermittently delivered as chunk-encoded with length byte = 0 ---------------------------------+---------------------- Reporter: geoff | Type: defect Status: new | Priority: normal Milestone: Varnish 4.0 release | Component: varnishd Version: 4.0.3 | Severity: normal Keywords: | ---------------------------------+---------------------- We are occasionally seeing responses with code 204 as chunked-encoded with one length byte `0` in the response body: {{{ HTTP/1.1 204 No Content [...] Vary: Accept-Encoding Transfer-Encoding: chunked Connection: keep-alive 0 }}} These are not gzipped, as there is no Content-Encoding header, but they look very much as if they were set up to be delivered as gzipped -- in every example we've seen, they are always delivered for requests with `Accept-Encoding: gzip`. By definition, a 204 response should have no body at all. Instead, a response as shown above is chunked-encoded with a single chunk of length 0. Unfortunately we haven't found a way to reliably reproduce the problem -- we are seeing these cases in error reports from the load balancer in front of Varnish, and they always seem to happen during load tests. When we test exactly the same request manually, so far we always get a response with no body, as expected. After trying for a while to make it reproducible, I'd like to report the problem now while we keep looking. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 6 16:26:04 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jul 2015 16:26:04 -0000 Subject: [Varnish] #1758: Open ended range request results in truncated images In-Reply-To: <042.59608046e2c72aa2a96dbec825189975@varnish-cache.org> References: <042.59608046e2c72aa2a96dbec825189975@varnish-cache.org> Message-ID: <057.cfbf10b084fc37399e3d64d7cb503bf8@varnish-cache.org> #1758: Open ended range request results in truncated images -----------------------------+-------------------- Reporter: jafa | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: range truncated | -----------------------------+-------------------- Comment (by jafa): Thanks! Backporting... we can live with the "unset Range" workaround until 4.1 is released if needed; however, given that it is a data corruption bug, it might make sense to backport for everyone. If the decision is made to backport to 4.0.x, I am happy to test in a live environment.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jul 7 22:49:53 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 07 Jul 2015 22:49:53 -0000 Subject: [Varnish] #1755: The (struct backend).n_conn counter is never decremented In-Reply-To: <043.496bde3b5afd5f126177e7dbadac9b7c@varnish-cache.org> References: <043.496bde3b5afd5f126177e7dbadac9b7c@varnish-cache.org> Message-ID: <058.63bc7f32660ba7f8208cc9b1c8d27bfd@varnish-cache.org> #1755: The (struct backend).n_conn counter is never decremented ----------------------+---------------------------------------- Reporter: Dridi | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: blocker | Resolution: fixed Keywords: | ----------------------+---------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * owner: => Poul-Henning Kamp * resolution: => fixed Comment: In [35716c713ca505a135101e40f32670ca5671bfd1]: {{{ #!CommitTicketReference repository="" revision="35716c713ca505a135101e40f32670ca5671bfd1" Add testcase for this ticket, the error was fixed in the previous commit. Fixes: #1755 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jul 8 07:22:41 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 08 Jul 2015 07:22:41 -0000 Subject: [Varnish] #1760: varnishstat -w dies if varnish is restarted. In-Reply-To: <046.e8f2d93bd8d8e1b13b5e3d08d51989fc@varnish-cache.org> References: <046.e8f2d93bd8d8e1b13b5e3d08d51989fc@varnish-cache.org> Message-ID: <061.5923f54295559426a006101b8dacfd6c@varnish-cache.org> #1760: varnishstat -w dies if varnish is restarted. 
-------------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishstat | Version: trunk Severity: normal | Resolution: Keywords: | -------------------------+-------------------- Comment (by fgsch): My personal preference is # 2. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jul 10 17:34:04 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 10 Jul 2015 17:34:04 -0000 Subject: [Varnish] #1757: If-Match header wrong format In-Reply-To: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> References: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> Message-ID: <056.634891defe84e34c238c3aa230a394ed@varnish-cache.org> #1757: If-Match header wrong format ----------------------+---------------------------------- Reporter: vko | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: If-Match | ----------------------+---------------------------------- Comment (by fgsch): vko, any update on this? 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Jul 12 20:50:51 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 12 Jul 2015 20:50:51 -0000 Subject: [Varnish] #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions Message-ID: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions ---------------------------------+---------------------- Reporter: geoff | Type: defect Status: new | Priority: normal Milestone: Varnish 4.0 release | Component: varnishd Version: 4.0.3 | Severity: critical Keywords: | ---------------------------------+---------------------- We have been experiencing a problem with a VSL client -- under certain conditions the dispatcher callback is no longer called, and the process resident size increases slowly over a long period of time, until all system RAM is consumed. {{{ #6 0x00007f55851be85d in VAS_Fail_default ( func=0x7f55851ddd10 "vtx_synth_rec", file=0x7f55851dd28a "vsl_dispatch.c", line=1008, cond=0x7f55851dda96 "(synth) != 0", err=12, kind=VAS_ASSERT) at ../libvarnish/vas.c:67 #7 0x00007f55851ce903 in vtx_synth_rec (vtx=0x7d24a0, tag=76, fmt=0x7f55851ddafc "%s (%u:%s \"%.*s\")") at vsl_dispatch.c:1008 #8 0x00007f55851cee17 in vtx_diag_tag (vtx=0x7d24a0, ptr=0x125823fe80, reason=0x7f55851dda23 "vxid mismatch") at vsl_dispatch.c:1065 #9 0x00007f55851cdefb in vtx_scan (vslq=0x73d0d0, vtx=0x7d24a0) at vsl_dispatch.c:863 #10 0x00007f55851ce1c2 in vtx_force (vslq=0x73d0d0, vtx=0x7d24a0, reason=0x7f55851ddc87 "timeout") at vsl_dispatch.c:911 #11 0x00007f55851cffd4 in VSLQ_Dispatch (vslq=0x73d0d0, func=0x40a5cd , priv=0x0) at vsl_dispatch.c:1345 }}} That's err=12==ENOMEM for the ALLOC_OBJ() at the beginning of vtx_synth_rec(). As you can see, VSL forced out a transaction due to timeout, and vtx_scan() also found a VXID mismatch. 
The vtx_scan() call is the second one in vtx_force(). We don't yet know why the VXID mismatch happened, and it's obviously not OK, we're still investigating, but I suspect that vtx_scan() ran into an endless loop here: {{{ while (!(vtx->flags & VTX_F_COMPLETE) && vslc_vtx_next(&vtx->c.cursor) == 1) { ptr = vtx->c.cursor.rec.ptr; if (VSL_ID(ptr) != vtx->key.vxid) { (void)vtx_diag_tag(vtx, ptr, "vxid mismatch"); continue; } }}} I can confirm from the core dump that `VSL_ID(ptr) != vtx->key.vxid` is true here, so it goes into vtx_diag_tag() to synthesize "vxid mismatch". There was already a synthetic record added by vtx_force(), and vtx_diag_tag() causes a new one to be added. Then the while loop goes back to call vslc_vtx_next(), which advances the cursor to the synth record that vtx_diag_tag() just added. The VXID mismatch still holds, so once again vtx_diag_tag() gets called. Again a synth record is added, and again vslc_vtx_next() advances to that record. Thus we have an endless loop, and a struct synth gets allocated every time. That's only 104 bytes, but slowly and surely all of RAM gets filled. Our VSL client has a statistic that is incremented every time the dispatcher is called, and we see in the log that this counter stops increasing when memory size starts to increase. The counter is stopped for about 10 to 20 minutes while the resident size increases, until all of RAM is full. The process finally took up about 75 GB of RAM, meaning that it had to be about 750 million allocations, assuming that it was allocating the 104 bytes of struct synth every time, which is probably why it takes so long. We can see the large mapping is in the heap (low addresses), so it must have been something that was malloc'ed (we've also seen this in pmap): {{{ (gdb) info files [...] 
0x0000000000400000 - 0x0000000000401000 is load1a 0x0000000000401000 - 0x0000000000401000 is load1b 0x0000000000613000 - 0x0000000000614000 is load2 0x0000000000614000 - 0x000000000061d000 is load3 0x0000000000703000 - 0x0000000000724000 is load4 0x0000000000724000 - 0x0000001258261000 is load5 [...] }}} 0x1258261000 - 0x724000 == 75GB. And we can see that the last pointer of the synth list goes deep into that large mapping. {{{ (gdb) p vtx->synth $87 = {vtqh_first = 0x866840, vtqh_last = 0x125823ff48} }}} The synth list apparently has a long cascade of "vxid mismatch" records: {{{ (gdb) printf "%s\n", vtx->synth.vtqh_first.data[0] :VSL "vxid mismatch (6489548:VSL "vxid mi (gdb) printf "%s\n", vtx->synth.vtqh_first.list.vtqe_next.data[2] h (6489548:VSL "vxid mismatch (6489548:VSL "vxid mi (gdb) printf "%s\n", vtx->synth.vtqh_first.list.vtqe_next.list.vtqe_next.data[2] 6489548:VSL "vxid mi (gdb) printf "%s\n", vtx->synth.vtqh_first.list.vtqe_next.list.vtqe_next.list.vtqe_next.data[2] id mismatch (6489548:VSL "vxid mi (gdb) printf "%s\n", vtx->synth.vtqh_first.list.vtqe_next.list.vtqe_next.list.vtqe_next.list.vtqe_next.data[2] id mismatch (6489548:VSL "vxid mi (gdb) printf "%s\n", vtx->synth.vtqh_first.list.vtqe_next.list.vtqe_next.list.vtqe_next.list.vtqe_next.list.vtqe_next.list.vtqe_next.data[2] id mismatch (6489548:VSL "vxid mi (gdb) printf "%s\n", vtx->synth.vtqh_first.list.vtqe_next.list.vtqe_next.list.vtqe_next.list.vtqe_next.list.vtqe_next.list.vtqe_next.list.vtqe_next.data[2] id mismatch (6489548:VSL "vxid mi (gdb) printf "%s\n", vtx->synth.vtqh_first.list.vtqe_next.list.vtqe_next.list.vtqe_next.list.vtqe_next.list.vtqe_next.list.vtqe_next.list.vtqe_next.list.vtqe_next.data[2] id mismatch (6489548:VSL "vxid mi }}} We see all of this as evidence that vtx_scan() is trapped in the while loop. 
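The failure mode described in the report can be re-enacted in miniature: an iterator that appends a diagnostic record to the very list it is scanning will never run out of records to visit. The sketch below is hypothetical, self-contained C (none of these names are Varnish's), and the scan is bounded by a step limit that the real vtx_scan() loop did not have:

```c
#include <assert.h>
#include <stdlib.h>

struct rec {
	int vxid;
	struct rec *next;
};

/* Append a synthetic record at the tail, as vtx_diag_tag() does;
 * the synth record carries the same mismatching vxid.
 * (Nodes are deliberately leaked, mirroring the accumulating
 * struct synth allocations.) */
static struct rec *
append_synth(struct rec *tail, int vxid)
{
	struct rec *r = malloc(sizeof *r);

	assert(r != NULL);
	r->vxid = vxid;
	r->next = NULL;
	tail->next = r;
	return (r);
}

/* Bounded re-enactment of the scan: every mismatch appends a record,
 * and the cursor then advances onto exactly that record, so each step
 * manufactures the next one to visit.  Returns the number of synth
 * records created before hitting `limit`; the real loop had no limit. */
static int
scan(struct rec *head, int want_vxid, int limit)
{
	struct rec *cur = head, *tail = head;
	int synths = 0;

	while (cur != NULL && synths < limit) {
		if (cur->vxid != want_vxid) {
			tail = append_synth(tail, cur->vxid);
			synths++;
		}
		cur = cur->next;
	}
	return (synths);
}
```

A single mismatching record is enough: the scan always exhausts whatever step budget it is given, which matches the observed behaviour of memory growing until exhaustion.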
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 13 06:56:47 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Jul 2015 06:56:47 -0000 Subject: [Varnish] #1763: tolerate EINTR in accept() ? Message-ID: <043.6d09ee992e18ddcfd4d5c149e6684cb6@varnish-cache.org> #1763: tolerate EINTR in accept() ? ----------------------+------------------- Reporter: slink | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+------------------- Accidentally seen when sending make check into the background, do we want to tolerate EINTR in accept() ? {{{ *** v1 12.7 debug| Child (31710) died signal=6\n *** v1 12.7 debug| Child (31710) Panic message:\n *** v1 12.7 debug| Assert error in vwe_thread(), waiter/cache_waiter_epoll.c line 114:\n *** v1 12.7 debug| Condition(n >= 0) not true.\n *** v1 12.7 debug| errno = 4 (Interrupted system call)\n *** v1 12.7 debug| thread = (cache-epoll)\n *** v1 12.7 debug| version = varnish-trunk revision 0dd8c0b\n *** v1 12.7 debug| ident = Linux,3.16.0-4-amd64,x86_64,-jnone,-smalloc,-smalloc,-hcritbit,epoll\n *** v1 12.7 debug| Backtrace:\n *** v1 12.7 debug| 0x43245f: pan_ic+0x12f\n *** v1 12.7 debug| 0x462b2e: vwe_thread+0x47e\n *** v1 12.7 debug| 0x7f6c3161b0a4: libpthread.so.0(+0x80a4) [0x7f6c3161b0a4]\n *** v1 12.7 debug| 0x7f6c3135004d: libc.so.6(clone+0x6d) [0x7f6c3135004d]\n *** v1 12.7 debug| \n *** v1 12.7 debug| \n *** v1 12.7 debug| Child cleanup complete\n **** v1 12.7 vsl| 3 Debug - Accept failed: Interrupted system call **** v1 12.7 vsl| 3 Debug - Accept failed: Interrupted system call **** v1 12.7 vsl| 0 Backend_health - vcl2.default Still sick ------- 1 3 8 0.000000 0.000000 **** v1 12.7 vsl| 0 CLI - Rd ping **** v1 12.7 vsl| 0 CLI - Wr 200 19 PONG 1436770414 1.0 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 13 
11:07:21 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Jul 2015 11:07:21 -0000 Subject: [Varnish] #1757: If-Match header wrong format In-Reply-To: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> References: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> Message-ID: <056.b6a74a8907b89549b108b6098599bff2@varnish-cache.org> #1757: If-Match header wrong format ----------------------+---------------------------------- Reporter: vko | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: If-Match | ----------------------+---------------------------------- Comment (by vko): Today I got confirmation that our backend sets the ETag header to 7, without quotation marks. I'm sorry again. The problem was in our backend, not in Varnish. Please close this bug report. Thank you. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 13 11:10:07 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Jul 2015 11:10:07 -0000 Subject: [Varnish] #1757: If-Match header wrong format In-Reply-To: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> References: <041.94040406ae891048e69223b99b9a3079@varnish-cache.org> Message-ID: <056.bb9b328ad8b4c44291a947bbcebc9d0d@varnish-cache.org> #1757: If-Match header wrong format ----------------------+---------------------------------- Reporter: vko | Owner: Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: invalid Keywords: If-Match | ----------------------+---------------------------------- Changes (by fgsch): * status: new => closed * resolution: => invalid Comment: Reporter confirmed the problem was on their backend. 
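For reference, the entity-tag grammar cited in this ticket (weak indicator plus quoted opaque-tag) can be checked with a few lines of C. This is a hypothetical, simplified helper, not Varnish code: it verifies only the quoting structure and does not validate individual etagc characters.

```c
#include <stdbool.h>
#include <string.h>

/* Simplified entity-tag validity check:
 *   entity-tag = [ weak ] opaque-tag
 *   weak       = "W/"
 *   opaque-tag = DQUOTE *etagc DQUOTE
 * (hypothetical helper for illustration only) */
static bool
etag_valid(const char *s)
{
	size_t len;

	if (s[0] == 'W' && s[1] == '/')
		s += 2;			/* skip the weak indicator */
	len = strlen(s);
	if (len < 2 || s[0] != '"' || s[len - 1] != '"')
		return (false);		/* opaque-tag must be quoted */
	for (size_t i = 1; i < len - 1; i++)
		if (s[i] == '"')
			return (false);	/* no embedded DQUOTE */
	return (true);
}
```

Run against the values from this ticket, `"0"` and `W/"0"` pass while the backend's bare `7` and the resulting `W/0` fail, which is consistent with the conclusion that the backend, not Varnish, produced the malformed header.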
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 13 11:34:46 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Jul 2015 11:34:46 -0000 Subject: [Varnish] #1760: varnishstat -w dies if varnish is restarted. In-Reply-To: <046.e8f2d93bd8d8e1b13b5e3d08d51989fc@varnish-cache.org> References: <046.e8f2d93bd8d8e1b13b5e3d08d51989fc@varnish-cache.org> Message-ID: <061.5ba8f48ed887ad7c1d3d4d25480006a8@varnish-cache.org> #1760: varnishstat -w dies if varnish is restarted. -------------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishstat | Version: trunk Severity: normal | Resolution: Keywords: | -------------------------+-------------------- Comment (by martin): #2 decided in bugwash -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 13 11:36:42 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Jul 2015 11:36:42 -0000 Subject: [Varnish] #1760: varnishstat -w dies if varnish is restarted. In-Reply-To: <046.e8f2d93bd8d8e1b13b5e3d08d51989fc@varnish-cache.org> References: <046.e8f2d93bd8d8e1b13b5e3d08d51989fc@varnish-cache.org> Message-ID: <061.4ba8b16454fc6511e721fcc001bce1a2@varnish-cache.org> #1760: varnishstat -w dies if varnish is restarted. -------------------------+-------------------- Reporter: lkarsten | Owner: dridi Type: defect | Status: new Priority: normal | Milestone: Component: varnishstat | Version: trunk Severity: normal | Resolution: Keywords: | -------------------------+-------------------- Changes (by lkarsten): * owner: => dridi Comment: Dridi will take a look in August. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 13 11:40:25 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Jul 2015 11:40:25 -0000 Subject: [Varnish] #1760: varnishstat -w dies if varnish is restarted. In-Reply-To: <046.e8f2d93bd8d8e1b13b5e3d08d51989fc@varnish-cache.org> References: <046.e8f2d93bd8d8e1b13b5e3d08d51989fc@varnish-cache.org> Message-ID: <061.b07f5f75cebb824fdd4e776708558a8d@varnish-cache.org> #1760: varnishstat -w dies if varnish is restarted. -------------------------+-------------------- Reporter: lkarsten | Owner: dridi Type: defect | Status: new Priority: normal | Milestone: Component: varnishstat | Version: trunk Severity: normal | Resolution: Keywords: | -------------------------+-------------------- Comment (by Dridi): Lasse, can you please take care of removing -w in 4.1? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 13 11:50:06 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Jul 2015 11:50:06 -0000 Subject: [Varnish] #1758: Open ended range request results in truncated images In-Reply-To: <042.59608046e2c72aa2a96dbec825189975@varnish-cache.org> References: <042.59608046e2c72aa2a96dbec825189975@varnish-cache.org> Message-ID: <057.8acacfebab556674419c3e3fae52e5de@varnish-cache.org> #1758: Open ended range request results in truncated images -----------------------------+-------------------- Reporter: jafa | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: range truncated | -----------------------------+-------------------- Comment (by lkarsten): To sum up: you can't test the fix, but if we backport you can test the new code. Keeping this ticket open until we have a chance to look at the amount of work to backport. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 13 12:05:57 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Jul 2015 12:05:57 -0000 Subject: [Varnish] #1763: tolerate EINTR in accept() ? In-Reply-To: <043.6d09ee992e18ddcfd4d5c149e6684cb6@varnish-cache.org> References: <043.6d09ee992e18ddcfd4d5c149e6684cb6@varnish-cache.org> Message-ID: <058.f4e0146c0ffa5759cf7339652a5edeb0@varnish-cache.org> #1763: tolerate EINTR in accept() ? ----------------------+-------------------- Reporter: slink | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by martin): There is another use case for this. When attempting to attach gdb to a running varnishd, it asserts on the same issue. Would've been useful to be able to do that in some debugging scenarios. Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 13 14:56:49 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Jul 2015 14:56:49 -0000 Subject: [Varnish] #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions In-Reply-To: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> References: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> Message-ID: <058.88b4f12c4ce03b0031b4a1d53236045c@varnish-cache.org> #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions ----------------------+---------------------------------- Reporter: geoff | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: critical | Resolution: Keywords: | ----------------------+---------------------------------- Comment (by slink): The interesting bit is that the vtx vxid has the client bit set: {{{ (gdb) frame 8 #8 
0x00007f55851cee17 in vtx_diag_tag (vtx=0x7d24a0, ptr=0x125823fe80, reason=0x7f55851dda23 "vxid mismatch") at vsl_dispatch.c:1065 1065 in vsl_dispatch.c (gdb) print /x ptr[1] $25 = 0xc06305cc (gdb) print /x vtx->key.vxid $26 = 0x406305cc }}} Reviewing the code, I am pretty sure that the root cause is not in the VSL API, but rather the fact that 4.0 logs SLT_Link records with the client/backend bits. This is fixed in master cfb309cad60e0239bc7168082d73b4ab6b53744a, so I'll just port this back to 4.0 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 13 14:57:12 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Jul 2015 14:57:12 -0000 Subject: [Varnish] #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions In-Reply-To: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> References: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> Message-ID: <058.78d2f7c305a146a38a646eeef526a9c8@varnish-cache.org> #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions ----------------------+---------------------------------- Reporter: geoff | Owner: Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: critical | Resolution: fixed Keywords: | ----------------------+---------------------------------- Changes (by slink): * status: new => closed * resolution: => fixed Comment: Fixed in ca37d86639b90351f98484f2e6eee17a5f3638fa, but I forgot the fix marker in the commit. 
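The two gdb values above differ only in the top bits. Assuming the marker layout from vapi/vsl_int.h (client marker at bit 30, backend marker at bit 31; worth verifying against the actual tree), masking both values makes them compare equal, which is exactly what the VXID fix restores:

```c
#include <assert.h>
#include <stdint.h>

/* Marker bit layout assumed from vapi/vsl_int.h; verify against the
 * source tree before relying on the exact values. */
#define VSL_CLIENTMARKER	(1U << 30)
#define VSL_BACKENDMARKER	(1U << 31)
#define VSL_IDENTMASK		(~(VSL_CLIENTMARKER | VSL_BACKENDMARKER))

/* Strip the client/backend markers, leaving the bare transaction id. */
static uint32_t
vxid_mask(uint32_t raw)
{
	return (raw & VSL_IDENTMASK);
}
```

With this mask, the record id 0xc06305cc and the vtx key 0x406305cc both reduce to the same ident, so no spurious "vxid mismatch" would be reported.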
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 13 15:24:20 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Jul 2015 15:24:20 -0000 Subject: [Varnish] #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions In-Reply-To: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> References: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> Message-ID: <058.8bcaf7d920baf1b56c9228574f1f158f@varnish-cache.org> #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions ----------------------+---------------------------------- Reporter: geoff | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: critical | Resolution: Keywords: | ----------------------+---------------------------------- Changes (by geoff): * status: closed => reopened * resolution: fixed => Comment: This problem is not solved, since the endless loop and ENOMEM could happen again if a VXID mismatch occurs for any other reason. It's good to have the fix backported to 4.0, that would remove the specific cause of the problem that we experienced recently. But I assume that the check for VXID mismatch is there on the assumption that it might possibly happen, and if it does, I don't see how the same showstopper can be avoided. If we're certain that the VXID mismatch cannot happen under any circumstances now, then we should remove the check. But we certainly should not leave a ticking time bomb in the code. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jul 14 02:58:43 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 14 Jul 2015 02:58:43 -0000 Subject: [Varnish] #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions In-Reply-To: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> References: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> Message-ID: <058.cc0679e4a735d05a87f3d0a86e4904db@varnish-cache.org> #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions ----------------------+---------------------------------- Reporter: geoff | Owner: Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: critical | Resolution: fixed Keywords: | ----------------------+---------------------------------- Changes (by slink): * status: reopened => closed * resolution: => fixed Comment: Replying to [comment:3 geoff]: > This problem is not solved, since the endless loop and ENOMEM could happen again if a VXID mismatch occurs for any other reason. The specific issue was with this section of the code: {{{ vtx_scan(struct VSLQ *vslq, struct vtx *vtx) { // [....] if (VSL_ID(ptr) != vtx->key.vxid) { (void)vtx_diag_tag(vtx, ptr, "vxid mismatch"); continue; } }}} If we find a log chunk linked to our vtx with a vxid other than the vtx'es vxid, vtx_diag_tag will get called and nothing is wrong with that. vtx_diag_tag -> vtx_synth_rec will use the vtx'es vxid for the synth records. I don't see a risk of an infinite loop here if vtx->key.vxid is correct. The specific issue here was that vtx->key.vxid was _not_ correct in 4.0.3 without the VXID macro fix: For instance Link records could contain vxids with the client/backend bit set, so vtxes with these bad vxids could get created. 
The comparison above between the vtx vxid and VSL_ID() assumes that the vtx vxid never contains client/backend bits, so it always reported a mismatch for this case. So I think we have good reason to assume that this issue's root cause is really fixed. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jul 14 10:14:49 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 14 Jul 2015 10:14:49 -0000 Subject: [Varnish] #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions In-Reply-To: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> References: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> Message-ID: <058.26e7ca329556b5f36a435b0d0a9f88fc@varnish-cache.org> #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions ----------------------+---------------------------------- Reporter: geoff | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: critical | Resolution: Keywords: | ----------------------+---------------------------------- Changes (by slink): * status: closed => reopened * resolution: fixed => Comment: Reopened until agreement with Geoff is reached. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jul 14 13:31:19 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 14 Jul 2015 13:31:19 -0000 Subject: [Varnish] #1764: nuke_limit is not honored Message-ID: <043.8f7d377a5bb814fd1d00cd338be85d9d@varnish-cache.org> #1764: nuke_limit is not honored ----------------------+------------------- Reporter: scoof | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+------------------- nuke_limit doesn't seem to have any effect anymore. 
It looks like stv_alloc_obj is called multiple times per object, and only does one allocation, so it never hits nuke_limit. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jul 14 13:58:52 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 14 Jul 2015 13:58:52 -0000 Subject: [Varnish] #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions In-Reply-To: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> References: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> Message-ID: <058.12921af6d2e8e9f0414039abcb05f093@varnish-cache.org> #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions ----------------------+---------------------------------- Reporter: geoff | Owner: slink Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: critical | Resolution: Keywords: | ----------------------+---------------------------------- Changes (by slink): * status: reopened => new * owner: => slink -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jul 14 13:59:19 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 14 Jul 2015 13:59:19 -0000 Subject: [Varnish] #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions In-Reply-To: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> References: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> Message-ID: <058.b2afa8dd64378dc22e72788d5c5ca411@varnish-cache.org> #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions ----------------------+---------------------------------- Reporter: geoff | Owner: slink Type: defect | Status: assigned Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: critical | Resolution: Keywords: | 
----------------------+---------------------------------- Changes (by slink): * status: new => assigned -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jul 14 15:17:28 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 14 Jul 2015 15:17:28 -0000 Subject: [Varnish] #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions In-Reply-To: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> References: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> Message-ID: <058.4723932dd8ff835f6849717c4442bcbd@varnish-cache.org> #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions ----------------------+---------------------------------- Reporter: geoff | Owner: slink Type: defect | Status: assigned Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: critical | Resolution: Keywords: | ----------------------+---------------------------------- Comment (by slink): commits * 90e87e30fb92e525016ce1975fa61ad744bdc41d (master) * 9119bfd659bc51a60d9f0bc1c84a6fd29b889b97 (4.0) add an assertion that parsed vxids do not contain the client/server bit -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jul 14 16:49:53 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 14 Jul 2015 16:49:53 -0000 Subject: [Varnish] #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions In-Reply-To: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> References: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> Message-ID: <058.a6eefbdce47371c84bac13f9fad8fb47@varnish-cache.org> #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions ----------------------+---------------------------------- Reporter: geoff | Owner: slink Type: defect | Status: assigned Priority: normal | 
Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: critical | Resolution: Keywords: | ----------------------+---------------------------------- Comment (by slink): Further investigation into root-cause scenarios in order to write a regression test yielded the following insights: * the bad vxid must have got into vtx->key.vxid by way of `vtx_parse_link` * which is only called for `SLT_Begin` (`vtx_scan_begin()`) and `SLT_Link` (`vtx_scan_link()`) (actually this was known before, but I am now confident that these are the only cases) There is no case in the code as of the 4.0.3 release where `SLT_Begin` is emitted with an unmasked vxid, so the issue must have its root cause in an `SLT_Link` link record. In both cases where unmasked vxids are emitted for `SLT_Link`, the id comes directly from `VXID_Get()`: * `cache_fetch.c` {{{ wid = VXID_Get(&wrk->vxid_pool); VSLb(bo->vsl, SLT_Link, "bereq %u retry", wid); }}} * `cache_req_fsm.c` {{{ wid = VXID_Get(&wrk->vxid_pool); // XXX: ReqEnd + ReqAcct ? VSLb_ts_req(req, "Restart", W_TIM_real(wrk)); VSLb(req->vsl, SLT_Link, "req %u restart", wid); }}} So unless I have overlooked anything significant, the root cause must have been a vxid spill, which was fixed with 0dd8c0b864a9574df0f2891824b4581d0e846613 (master) / 171f3ac585f2bda639f526c31ad0689aecb8f8b4 (4.0). `VXID()` masking would have prevented the issue from surfacing. This insight is consistent with two observations: * the issue only surfaced after `varnishd` had been running for longer periods of time * the issue didn't go away after a restart of the vsl client; a `varnishd` restart was required This gives confidence that the issue has really been understood completely and that the root cause has been fixed. 
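The spill scenario and the two-part fix (wrap the counter before the marker bits, assert on parse) can be sketched as follows. `vxid_get` is a hypothetical stand-in for `VXID_Get()`, and the mask value is an assumption, not a quote from the tree:

```c
#include <assert.h>
#include <stdint.h>

#define VXID_MASK	0x3FFFFFFFU	/* assumed ident range; bits 30/31 are markers */

/* Hypothetical stand-in for VXID_Get(): without the masking step a
 * long-running counter eventually spills into the marker bits, which
 * matches the observation that the bug only surfaced after varnishd
 * had been running for a long time. */
static uint32_t
vxid_get(uint32_t *counter)
{
	*counter = (*counter + 1) & VXID_MASK;	/* wrap before the markers */
	return (*counter);
}

/* The regression guard added for this ticket, in spirit: a parsed
 * vxid must never carry the client/server bit. */
static void
vxid_check(uint32_t vxid)
{
	assert((vxid & ~VXID_MASK) == 0);
}
```

Starting the counter at the top of the ident range shows the wrap: the next id is 0 rather than a value with bit 30 set, so vtx keys built from it can never trip the mismatch path.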
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jul 14 17:00:09 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 14 Jul 2015 17:00:09 -0000 Subject: [Varnish] #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions In-Reply-To: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> References: <043.7e4033f2b41b2a9ac26a953b52c4e27d@varnish-cache.org> Message-ID: <058.e5f99c1f16db28fbb7e410c6cf9a1d0b@varnish-cache.org> #1762: VSL API: endless loop and out of memory in vtx_scan() on forced synthetic transactions ----------------------+---------------------------------- Reporter: geoff | Owner: slink Type: defect | Status: assigned Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: critical | Resolution: Keywords: | ----------------------+---------------------------------- Comment (by slink): regression test added: * master 20362bf807eb6b0772ccdeb221b05173484d5dc3 * 4.0 9bffd9eac5baa4ef0e3a7f5d1fcab455d5980d61 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jul 16 13:31:20 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 16 Jul 2015 13:31:20 -0000 Subject: [Varnish] #1760: varnishstat -w dies if varnish is restarted. In-Reply-To: <046.e8f2d93bd8d8e1b13b5e3d08d51989fc@varnish-cache.org> References: <046.e8f2d93bd8d8e1b13b5e3d08d51989fc@varnish-cache.org> Message-ID: <061.67934642fc5886826c32badc7b7f54b9@varnish-cache.org> #1760: varnishstat -w dies if varnish is restarted. -------------------------+-------------------- Reporter: lkarsten | Owner: dridi Type: defect | Status: new Priority: normal | Milestone: Component: varnishstat | Version: trunk Severity: normal | Resolution: Keywords: | -------------------------+-------------------- Comment (by fgsch): While looking at diff I realised that -w also specified the refresh interval for the curses interface. 
Is defaulting to 1 second good enough, or should we have some way to change it? I'm starting to think we should handle case 1 without becoming a daemon (i.e. varnishstat keeps retrying until varnishd is back). -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jul 16 13:59:26 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 16 Jul 2015 13:59:26 -0000 Subject: [Varnish] #1760: varnishstat -w dies if varnish is restarted. In-Reply-To: <046.e8f2d93bd8d8e1b13b5e3d08d51989fc@varnish-cache.org> References: <046.e8f2d93bd8d8e1b13b5e3d08d51989fc@varnish-cache.org> Message-ID: <061.178fccff256f6659c9c5fda236e51330@varnish-cache.org> #1760: varnishstat -w dies if varnish is restarted. -------------------------+-------------------- Reporter: lkarsten | Owner: dridi Type: defect | Status: new Priority: normal | Milestone: Component: varnishstat | Version: trunk Severity: normal | Resolution: Keywords: | -------------------------+-------------------- Comment (by Dridi): I didn't mention it, but I'm in favor of solution 1, with a rename of -w to something else like -W (wait) or -i (interval) so that -w could be reused and have the same meaning as in other tools. Either way, wrapper script or proper daemon, I need -w to go away in 4.1 to re-introduce it later regardless of the chosen solution. 
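The "keep retrying until varnishd is back" behaviour discussed for solution 1 amounts to a small polling loop. The sketch below is deliberately generic C (no VSM API calls, since those vary between versions); the callback, its parameters, and `demo_try_open` are all hypothetical:

```c
#include <stdbool.h>
#include <unistd.h>

/* Generic retry loop for solution 1: poll the open attempt once per
 * interval instead of exiting when varnishd goes away.  try_open is a
 * hypothetical callback; real code would retry the VSM attach here. */
static bool
wait_for_reattach(bool (*try_open)(void *), void *priv,
    unsigned interval_sec, unsigned max_tries)
{
	for (unsigned i = 0; i < max_tries; i++) {
		if (try_open(priv))
			return (true);
		sleep(interval_sec);
	}
	return (false);
}

/* Demo callback (hypothetical): succeeds on the third attempt,
 * standing in for varnishd coming back after a restart. */
static bool
demo_try_open(void *priv)
{
	int *attempts = priv;

	return (++*attempts >= 3);
}
```

The interval parameter is where the 1-second default discussed above would live, and `max_tries` could be unbounded for a true wait-forever mode.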
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jul 17 07:16:21 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 17 Jul 2015 07:16:21 -0000 Subject: [Varnish] #1765: Assert error in HTTP_GetHdrPack() Message-ID: <045.4b17d56307161491ce5b016c887e550e@varnish-cache.org> #1765: Assert error in HTTP_GetHdrPack() --------------------------+---------------------- Reporter: llavaud | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 4.1.0-TP1 | Severity: major Keywords: Assert error | --------------------------+---------------------- {{{ Child (991401) Panic message: Assert error in HTTP_GetHdrPack(), cache/cache_http.c line 930: Condition(vct_issp(*ptr)) not true. thread = (cache-worker) version = varnish-4.1.0-tp1 revision 0e4e1bc ident = Linux,3.2.0-4-amd64,x86_64,-junix,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x434324: pan_ic+0x134 0x42e39f: HTTP_GetHdrPack+0x29f 0x424891: vbf_fetch_thread+0x1161 0x44940a: WRK_Thread+0x48a 0x44997b: pool_thread+0x2b 0x7f4f44004b50: libpthread.so.0(+0x6b50) [0x7f4f44004b50] 0x7f4f43d4e95d: libc.so.6(clone+0x6d) [0x7f4f43d4e95d] busyobj = 0x7f3624c06020 { ws = 0x7f3624c060e0 { id = "bo", {s,f,r,e} = {0x7f3624c07f98,+832,(nil),+254048}, }, refcnt = 2 retries = 0 failed = 0 state = 0 flags = { do_stream } director_req = 0x7f4f43202080 { vcl_name = webssi01 name = backend display_name = 8bf50d44-c3e5-4563-b070-58802ae1562c.webssi01 ipv4 = 172.20.0.11 port = 80 hosthdr = 172.20.0.11 health=healthy, admin_health=probe, changed=1437098584.3 } objcore (FETCH) = 0x7f361ca487c0 { refcnt = 1 flags = 0x2 objhead = 0x7f361e757860 stevedore = (nil) } objcore (IMS) = 0x7f36223de100 { refcnt = 3 flags = 0x0 objhead = 0x7f361e757860 stevedore = 0x7f4f430e3180 (file s0) } } -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jul 20 10:49:37 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 20 
Jul 2015 10:49:37 -0000 Subject: [Varnish] #1766: Provide an access to the request stack for ESI subrequests Message-ID: <061.1d349b0d533297d66974ace897555d62@varnish-cache.org> #1766: Provide an access to the request stack for ESI subrequests ---------------------------------+------------------------- Reporter: lisachenko.it@? | Type: enhancement Status: new | Priority: normal Milestone: | Component: varnishd Version: 4.0.3 | Severity: normal Keywords: ESI, request, stack | ---------------------------------+------------------------- In some cases, this feature can be very useful in combination with the req.esi_level variable. For example, to detect the Last-Modified date for a page with ESI blocks (we just update the main response header within sub-responses). Here is an example: {{{ req.stack <== Stack of requests and subrequests req.stack[0] <== Always first, main request req.stack[req.esi_level-1] <== Parent request for the current ESI sub-request. resp.stack <== Stack of responses resp.stack[0] <== Main response if (req.esi_level > 0 && (!resp.stack[0].http.Last-Modified || resp.stack[0].http.Last-Modified < resp.http.Last-Modified)) { // Update our main response headers with information about current response set resp.stack[0].http.Last-Modified = resp.http.Last-Modified; } }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jul 24 13:15:10 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 24 Jul 2015 13:15:10 -0000 Subject: [Varnish] #1767: configtest is broken in 4.1 Message-ID: <046.5ba567a3d5cec0f0e44a0a8a38029771@varnish-cache.org> #1767: configtest is broken in 4.1 ----------------------+----------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 4.1.0-TP1 Severity: normal | Keywords: ----------------------+----------------------- In master/4.1, we seem to bind to the listening ports even when we're just parsing the 
configuration. This behaviour has changed since 4.0. This affects how the "configtest" functionality of the init script works. {{{ root at sierra:~# /opt/varnish/sbin/varnishd -a :80 -C \ -f /etc/varnish/default.vcl -n /tmp Error: Could not bind to address :80: Address already in use root at sierra:~# varnishd -V varnishd (varnish-trunk revision 6179e31) }}} Expected: -C does not depend on a local listening port, and should do its thing without trying to bind to any ports. Workaround: removing any -a entries in the command line seem to fix it. {{{ root at sierra:~# /opt/varnish/sbin/varnishd -f /etc/varnish/default.vcl -n /tmp -C (this works, C-code is output) root at sierra:~# /opt/varnish/sbin/varnishd -f /etc/varnish/default.vcl -n /tmp Error: Could not bind to address *:80: Address already in use (as expected) }}} Teaching the different init scripts how to parse and filter the contents of $DAEMON_ARGS feels like the wrong solution. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Jul 26 15:40:32 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 26 Jul 2015 15:40:32 -0000 Subject: [Varnish] #1767: configtest is broken in 4.1 In-Reply-To: <046.5ba567a3d5cec0f0e44a0a8a38029771@varnish-cache.org> References: <046.5ba567a3d5cec0f0e44a0a8a38029771@varnish-cache.org> Message-ID: <061.3048f55ec4054230da4d4c938ac4d61a@varnish-cache.org> #1767: configtest is broken in 4.1 ----------------------+---------------------------------------- Reporter: lkarsten | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 4.1.0-TP1 Severity: normal | Resolution: fixed Keywords: | ----------------------+---------------------------------------- Changes (by Poul-Henning Kamp ): * owner: => Poul-Henning Kamp * status: new => closed * resolution: => fixed Comment: In [1ff5897b6d18954fc209b68270c637a510f30e79]: {{{ #!CommitTicketReference repository="" 
 revision="1ff5897b6d18954fc209b68270c637a510f30e79"

 Make -a argument checking a two-step process:

 We always do the DNS resolution when we hit -a arguments, but the test
 that we can bind to the address is postponed until after the -C argument
 processing.

 Fixes: #1767
 }}}

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Sun Jul 26 15:40:32 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Sun, 26 Jul 2015 15:40:32 -0000
Subject: [Varnish] #1761: 204 responses intermittently delivered as
 chunk-encoded with length byte = 0
In-Reply-To: <043.4c8744d010240f820f7b2440cfafa2fc@varnish-cache.org>
References: <043.4c8744d010240f820f7b2440cfafa2fc@varnish-cache.org>
Message-ID: <058.b83d26d4f85f9f62e6c228047c776ad1@varnish-cache.org>

#1761: 204 responses intermittently delivered as chunk-encoded with length byte = 0
----------------------+----------------------------------------
 Reporter:  geoff     |       Owner:  Poul-Henning Kamp
     Type:  defect    |      Status:  closed
 Priority:  normal    |   Milestone:  Varnish 4.0 release
Component:  varnishd  |     Version:  4.0.3
 Severity:  normal    |  Resolution:  fixed
 Keywords:            |
----------------------+----------------------------------------
Changes (by Poul-Henning Kamp):

 * owner:   => Poul-Henning Kamp
 * status:  new => closed
 * resolution:   => fixed

Comment:

 In [4df51cf26691e875682a39b279cd03f7e202fae8]:
 {{{
 #!CommitTicketReference repository=""
 revision="4df51cf26691e875682a39b279cd03f7e202fae8"

 Try to sort out the delivery of resp.body for special resp.status cases.

 Treat C-L or A-E in 204 backend responses as fetch_error.

 This hopefully fixes #1761
 }}}

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Jul 27 06:16:51 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 27 Jul 2015 06:16:51 -0000
Subject: [Varnish] #1763: tolerate EINTR in accept() ?
In-Reply-To: <043.6d09ee992e18ddcfd4d5c149e6684cb6@varnish-cache.org>
References: <043.6d09ee992e18ddcfd4d5c149e6684cb6@varnish-cache.org>
Message-ID: <058.47d9327741d3d4360c21490c3129bb91@varnish-cache.org>

#1763: tolerate EINTR in accept() ?
----------------------+--------------------
 Reporter:  slink     |       Owner:
     Type:  defect    |      Status:  new
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  trunk
 Severity:  normal    |  Resolution:
 Keywords:            |
----------------------+--------------------
Comment (by phk):

 I've poked at this a bit and found that we would have to ignore EINTR in
 a largish number of places for this to become reliable. That feels a bit
 on the "fast & loose" side to me, but I'll look at it some more.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Wed Jul 29 13:11:21 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 29 Jul 2015 13:11:21 -0000
Subject: [Varnish] #1768: Problem with related unset http headers on req
Message-ID: <043.e49f5eb052d823ef5060f1d4a6d644fd@varnish-cache.org>

#1768: Problem with related unset http headers on req
-------------------+----------------------
 Reporter:  boris  |       Type:  defect
   Status:  new    |   Priority:  normal
Milestone:         |  Component:  varnishd
  Version:  4.0.3  |   Severity:  normal
 Keywords:         |
-------------------+----------------------
 With the following VCL config file I get an error from the C compiler
 when compiling it:

 {{{
 vcl 4.0;

 backend default {
     .host = "localhost";
     .port = "8080";
 }

 sub vcl_recv {
     unset req.http.TEST_VALUE;
     unset req.http.TEST-VALUE;
 }
 }}}

 The message is:

 {{{
 Message from C-compiler:
 ./vcl.CCNf2VxL.c:542:30: error: redefinition of 'VGC_HDR_REQ_TEST_VALUE'
  static const struct gethdr_s VGC_HDR_REQ_TEST_VALUE =
                               ^
 ./vcl.CCNf2VxL.c:540:30: note: previous definition of 'VGC_HDR_REQ_TEST_VALUE'
 was here
  static const struct gethdr_s VGC_HDR_REQ_TEST_VALUE =
 }}}

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Fri Jul 31 19:09:37 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 31 Jul 2015 19:09:37 -0000
Subject: [Varnish] #1769: RH7 rpm missing
Message-ID: <043.d7dc6c1662a1be420b6269b668069f8b@varnish-cache.org>

#1769: RH7 rpm missing
-----------------------+---------------------
 Reporter:  fgsch      |      Owner:
     Type:  defect     |     Status:  new
 Priority:  normal     |  Milestone:
Component:  packaging  |    Version:  unknown
 Severity:  normal     |   Keywords:
-----------------------+---------------------
 The doc at https://www.varnish-cache.org/installation/redhat mentions
 that RH7 is beta, but we've been shipping it for quite a while. We should
 add the rpm and instructions to the repository / that page.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator