From varnish-bugs at varnish-cache.org Sun Mar 1 17:11:39 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 01 Mar 2015 17:11:39 -0000 Subject: [Varnish] #1506: Make better use of Content-Length information: Avoid chunked responses, more control over Range handling In-Reply-To: <050.7446d258f6b1af112a619a4b721885a7@varnish-cache.org> References: <050.7446d258f6b1af112a619a4b721885a7@varnish-cache.org> Message-ID: <065.b4802a90beac55a1ebef6aa8acf4505e@varnish-cache.org> #1506: Make better use of Content-Length information: Avoid chunked responses, more control over Range handling --------------------------+---------------------------------- Reporter: DonMacAskill | Owner: phk Type: defect | Status: reopened Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.0 Severity: critical | Resolution: Keywords: | --------------------------+---------------------------------- Comment (by valdemon): Hi, although it's a different use case, it seems to be related to the ticket subject. I'm getting a 'Transfer-encoding: chunked' header on a '204 No Content' response, which is wrong by definition. I guess this could be easily fixed by extending the 'if' statement in the following line: https://github.com/varnish/Varnish-Cache/blob/master/bin/varnishd/http1/cache_http1_deliver.c#L92 to: {{{ #!c } else if (http_IsStatus(req->resp, 304) || http_IsStatus(req->resp, 204)) { }}} Checked with the latest 4.0.3. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 2 12:08:48 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 02 Mar 2015 12:08:48 -0000 Subject: [Varnish] #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response In-Reply-To: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> References: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> Message-ID: <058.16c53a1ea000ee1381994bba1c0d5e9b@varnish-cache.org> #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response ----------------------------------------+---------------------------------- Reporter: geoff | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: major | Resolution: Keywords: esi include synthetic gzip | ----------------------------------------+---------------------------------- Changes (by phk): * owner: => phk -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 2 12:09:46 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 02 Mar 2015 12:09:46 -0000 Subject: [Varnish] #1689: Child not responding to CLI In-Reply-To: <042.96a3d3fcbeb10850167e110902a2e21a@varnish-cache.org> References: <042.96a3d3fcbeb10850167e110902a2e21a@varnish-cache.org> Message-ID: <057.4bcb103eb1099788814c19c16985a9e9@varnish-cache.org> #1689: Child not responding to CLI --------------------+------------------------- Reporter: cepi | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: worksforme Keywords: | --------------------+------------------------- Changes (by lkarsten): * status: new => closed * resolution: => worksforme Comment: Hi. This is cli_timeout firing off. In your default/varnish file you've reduced the 4.0 default of 60s down to 25s. 
This is probably left over from some 3.0 tuning, where the default was 10s. Unless you did this for good reasons, please remove that override and report back if the problem persists. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 2 12:34:20 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 02 Mar 2015 12:34:20 -0000 Subject: [Varnish] #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 In-Reply-To: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> References: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> Message-ID: <064.c634b71f59c48cf2b7a7c16e43f567bf@varnish-cache.org> #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 -------------------------+---------------------------------------- Reporter: cache_layer | Owner: Poul-Henning Kamp Type: defect | Status: reopened Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | -------------------------+---------------------------------------- Changes (by geoff): * status: closed => reopened * resolution: fixed => -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 2 14:07:19 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 02 Mar 2015 14:07:19 -0000 Subject: [Varnish] #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response In-Reply-To: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> References: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> Message-ID: <058.ba3075c727a1c85e0406a9cfe5699f2d@varnish-cache.org> #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response ----------------------------------------+---------------------------------- Reporter: geoff | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 
Severity: major | Resolution: fixed Keywords: esi include synthetic gzip | ----------------------------------------+---------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: In [a61cf0d1a693add65432d8a796a6630af0df5956]: {{{ #!CommitTicketReference repository="" revision="a61cf0d1a693add65432d8a796a6630af0df5956" NUL terminate the ungzip'ed body so we can expect on it. Fixes #1688 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 2 14:14:06 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 02 Mar 2015 14:14:06 -0000 Subject: [Varnish] #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response In-Reply-To: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> References: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> Message-ID: <058.41cdd26f10316f1b8e1590ddbc3af5f6@varnish-cache.org> #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response ----------------------------------------+---------------------------------- Reporter: geoff | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: major | Resolution: fixed Keywords: esi include synthetic gzip | ----------------------------------------+---------------------------------- Comment (by phk): The test-case you provided only pointed out that varnishtest forgot to NUL terminate the gunzip'ed body, which made "expect resp.body" fall off the end of the world; varnishd does the right thing. 
You will see the synth response uncompressed in the middle of the gzip'ed data in this case, but it will be prefixed by five bytes that make it a "literal" gzip block: {{{ **** c1 1.2 chunk| \x002\x00\xcd\xffthis is the body of an included synthetic response }}} Notice that the '2' is ASCII two, not part of \x00, so in this case the header says type=0x00, length = 0x0032, ~length = 0xffcd. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 2 15:07:26 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 02 Mar 2015 15:07:26 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.c49fc1a63d458e27a8a020adaab49dda@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by zaterio@?): I have attached varnishlog output, while varnishlog was logging (raw), varnishadm reported the following error: Last panic at: Mon, 02 Mar 2015 15:01:59 GMT Assert error in tcp_handle(), cache/cache_backend_tcp.c line 96: Condition((vbc->in_waiter) != 0) not true. 
thread = (cache-epoll) version = varnish-trunk revision 5fe64d6 ident = Linux,3.2.0-0.bpo.1-amd64,x86_64,-junix,-smalloc,-smalloc,-hclassic,epoll Backtrace: 0x433c9a: pan_ic+0x14a 0x4142e5: tcp_handle+0x255 0x465b20: Wait_Handle+0x90 0x466db0: vwe_thread+0x100 0x7f0e85fca8ca: libpthread.so.0(+0x68ca) [0x7f0e85fca8ca] 0x7f0e85d31b6d: libc.so.6(clone+0x6d) [0x7f0e85d31b6d] Regards -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 2 15:10:41 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 02 Mar 2015 15:10:41 -0000 Subject: [Varnish] #1609: Reset MGT.child_panic after a panic.clear In-Reply-To: <046.f6154daa50f9ea4015af5bbbbd99f173@varnish-cache.org> References: <046.f6154daa50f9ea4015af5bbbbd99f173@varnish-cache.org> Message-ID: <061.af427f10ea25caa65b3f2647d9fbc77a@varnish-cache.org> #1609: Reset MGT.child_panic after a panic.clear -------------------------+-------------------- Reporter: coredump | Owner: fgsch Type: enhancement | Status: new Priority: normal | Milestone: Component: build | Version: 4.0.2 Severity: normal | Resolution: Keywords: | -------------------------+-------------------- Comment (by razvanphp): I have a similar feature request, as we also monitor some values of varnishstat: - MAIN.backend_fail - MAIN.sess_drop - MAIN.sess_fail Currently the only way to clear the counters is to restart Varnish, which is not so nice because requests are going to fail. So maybe the possibility to clear all stats is also useful. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 2 16:22:11 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 02 Mar 2015 16:22:11 -0000 Subject: [Varnish] #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 In-Reply-To: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> References: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> Message-ID: <064.9d44c48384e232ca62ba6dba2c761677@varnish-cache.org> #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 -------------------------+---------------------------------------- Reporter: cache_layer | Owner: Poul-Henning Kamp Type: defect | Status: reopened Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -------------------------+---------------------------------------- Changes (by fgsch): * version: trunk => 4.0.3 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 2 16:32:10 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 02 Mar 2015 16:32:10 -0000 Subject: [Varnish] #1618: The "Range" header is not honored for a cache miss. In-Reply-To: <047.7e50d570f16c3170f5bb0bb63e9ff04d@varnish-cache.org> References: <047.7e50d570f16c3170f5bb0bb63e9ff04d@varnish-cache.org> Message-ID: <062.3605651371c665a3cc9360138eecc145@varnish-cache.org> #1618: The "Range" header is not honored for a cache miss. -------------------------------+--------------------- Reporter: jeffawang | Owner: martin Type: defect | Status: new Priority: high | Milestone: Component: build | Version: 4.0.2 Severity: normal | Resolution: Keywords: range header miss | -------------------------------+--------------------- Comment (by fgsch): I've deleted my comments and test - they were wrong. As Martin pointed out this is due to streaming and not having the Content Length. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 2 22:44:05 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 02 Mar 2015 22:44:05 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.19c2cfe19b777cc8120f09db483b8b6d@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by phk): I think I may have spotted something, and I've tried to fix it, but I'm not entirely sure. Can I get you to test -trunk with or later than bbac365619c995096d6a690bf531227e095424d4 ? 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Mar 3 15:24:14 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 03 Mar 2015 15:24:14 -0000 Subject: [Varnish] #1683: varnishncsa should log the last value in vcl sub In-Reply-To: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> References: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> Message-ID: <062.cd11e5f2df1f4ddeb1f7b7c0db01260e@varnish-cache.org> #1683: varnishncsa should log the last value in vcl sub -----------------------+--------------------- Reporter: razvanphp | Owner: aondio Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -----------------------+--------------------- Comment (by razvanphp): I just want to mention that there is another case where the original value is better in the logs: for example, I modify the `req.url` a few times for normalization, but in the log I think `%r` and `%U%q` should still contain the original URL. Is this case affected by the proposed patch? 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Mar 3 17:24:15 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 03 Mar 2015 17:24:15 -0000 Subject: [Varnish] #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response In-Reply-To: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> References: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> Message-ID: <058.d46519a6d6d8e53f9050ee2d62f81f5e@varnish-cache.org> #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response ----------------------------------------+---------------------------------- Reporter: geoff | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: major | Resolution: fixed Keywords: esi include synthetic gzip | ----------------------------------------+---------------------------------- Comment (by geoff): OK, I've tested this patch with 4.0.3 now (git checkout varnish-4.0.3 and then build), and I'm still getting the error. varnishtest fails with a gunzip error -- I'll attach another log of the test run. 
The synthetic part does not appear within a chunk in the midst of gzipped data, but rather in a separate chunk, without any other binary data in that chunk: {{{ **** c1 0.5 len| 0015\r\n **** c1 0.5 chunk| \x1f\x8b\x08\x00\x00\x00\x00\x00\x02\x03\xb2\x89\x88\x8c\xb2\x03\x00\x00\x00\xff\xff **** c1 0.5 len| 0032\r\n **** c1 0.5 chunk| this is the body of an included synthetic response **** c1 0.5 len| 0019\r\n **** c1 0.5 chunk| \xb2\xd1\x8f\x88\x8c\xb2\x03\x00\x00\x00\xff\xff\x01\x00\x00\xff\xff\xa3\x8d\xb7\xdd\x0b\x00\x00\x00 **** c1 0.5 len| 0\r\n **** c1 0.5 bodylen = 96 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Mar 3 17:25:18 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 03 Mar 2015 17:25:18 -0000 Subject: [Varnish] #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response In-Reply-To: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> References: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> Message-ID: <058.4d8ad97e7e316559cff8ecb932e6a43f@varnish-cache.org> #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response ----------------------------------------+---------------------------------- Reporter: geoff | Owner: phk Type: defect | Status: reopened Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: major | Resolution: Keywords: esi include synthetic gzip | ----------------------------------------+---------------------------------- Changes (by geoff): * status: closed => reopened * resolution: fixed => -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Mar 3 18:45:20 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 03 Mar 2015 18:45:20 -0000 Subject: [Varnish] #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response In-Reply-To: 
<043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> References: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> Message-ID: <058.0bda641d7f3fa111a490934f59780a9c@varnish-cache.org> #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response ----------------------------------------+---------------------------------- Reporter: geoff | Owner: phk Type: defect | Status: reopened Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: major | Resolution: Keywords: esi include synthetic gzip | ----------------------------------------+---------------------------------- Comment (by phk): Ok, this is clearly a bug in 4.0 and it will probably require a 4.0 specific fix. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Mar 4 16:10:14 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 04 Mar 2015 16:10:14 -0000 Subject: [Varnish] #1664: Assert error in VFP_Error(), cache/cache_fetch_proc.c line 61 In-Reply-To: <043.782dcf41379c991d410f1ae63ea4edff@varnish-cache.org> References: <043.782dcf41379c991d410f1ae63ea4edff@varnish-cache.org> Message-ID: <058.99ac7a4226c74c6221be4ed53d6c48a5@varnish-cache.org> #1664: Assert error in VFP_Error(), cache/cache_fetch_proc.c line 61 ----------------------+---------------------- Reporter: daghf | Owner: aondio Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: unknown Severity: normal | Resolution: Keywords: | ----------------------+---------------------- Changes (by slink): * cc: nils.goroll@? 
(added) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Mar 4 17:51:18 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 04 Mar 2015 17:51:18 -0000 Subject: [Varnish] #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 In-Reply-To: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> References: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> Message-ID: <064.2818a9ae0f51e2e173a42bf272e46d01@varnish-cache.org> #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 -------------------------+---------------------------------------- Reporter: cache_layer | Owner: Poul-Henning Kamp Type: defect | Status: reopened Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -------------------------+---------------------------------------- Comment (by geoff): A brief update (because I'm not sure if I'm up-to-date with fgsch at the moment): I can confirm that with fgsch's patch of February 26th, both versions of the VTC test pass with 4.0.3. Nevertheless, I can get a running Varnish with the patch to crash on the same assertion failure. Right now I'm frankly at a loss as to how to construct a VTC that reproduces the error. The offending response that causes the crash is still very similar: * ESI-included (at ESI level 4, as it happens) * Response status 204 and empty response body * No Content-Length header (Content-Length==0 follows implicitly from 204) * Accept-Encoding:gzip in the request, so Content-Encoding:gzip is in the response All of that can be reproduced in a VTC ('txresp -nolen' causes the beresp to have no Content-Length header), but nevertheless I haven't been able to get a VTC to cause the crash after the patch has been applied (and fgsch has told me the same). I have sent varnishlogs and the panic message to fgsch (haven't attached them here because I'd have to anonymize them). 
We are able to work around the problem in VCL: * In vcl_backend_response(), if the response is ESI-included and beresp.status==204, then unset beresp.http.Content-Encoding, and set beresp.do_gzip to false. * For synthetic, ESI-included responses: In vcl_synth(), if req.can_gzip is true and resp.http.Content-Encoding does not include "gzip", then set resp.http.Content-Encoding to "gzip". (Don't remember at the moment why this part was necessary, just that it didn't work without it.) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Mar 6 17:03:05 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 06 Mar 2015 17:03:05 -0000 Subject: [Varnish] #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 In-Reply-To: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> References: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> Message-ID: <064.68cb64ca3275abe1a8077837b31067bc@varnish-cache.org> #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 -------------------------+---------------------------------------- Reporter: cache_layer | Owner: Poul-Henning Kamp Type: defect | Status: reopened Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -------------------------+---------------------------------------- Comment (by fgsch): I've reviewed this issue with geoff during vdd15q1. Attached is a second version of the patch that fixes the remaining case. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 9 09:51:15 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Mar 2015 09:51:15 -0000 Subject: [Varnish] #1684: Retried bereqs don't log BereqMethod, BereqURL, BereqProtocol or BereqHeader In-Reply-To: <043.d90b1b8d19f2203e577f7e0ebefd25bb@varnish-cache.org> References: <043.d90b1b8d19f2203e577f7e0ebefd25bb@varnish-cache.org> Message-ID: <058.3e9cb76a05a046889637cb5697effa0c@varnish-cache.org> #1684: Retried bereqs don't log BereqMethod, BereqURL, BereqProtocol or BereqHeader --------------------+---------------------- Reporter: scoof | Owner: aondio Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: Keywords: | --------------------+---------------------- Changes (by aondio): * owner: => aondio -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 9 10:26:40 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Mar 2015 10:26:40 -0000 Subject: [Varnish] #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 In-Reply-To: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> References: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> Message-ID: <064.ac80690a628bfa64c91ee150edcc9e13@varnish-cache.org> #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 -------------------------+---------------------------------------- Reporter: cache_layer | Owner: Poul-Henning Kamp Type: defect | Status: reopened Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -------------------------+---------------------------------------- Comment (by geoff): I can confirm that the new version of the VTC successfully reproduces the crash, and that the patch of March 6th resolves it, for both the test case and for our installation in 
the test environment. Since the problem is not present in master, I suggest that the patch be applied to the next version in 4.x -- 4.0.4 if there is such a thing, or else 4.1. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 9 12:11:54 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Mar 2015 12:11:54 -0000 Subject: [Varnish] #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response In-Reply-To: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> References: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> Message-ID: <058.6f6442f2d7325ec2261d3e0f8fd42ada@varnish-cache.org> #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response ----------------------------------------+---------------------------------- Reporter: geoff | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: major | Resolution: Keywords: esi include synthetic gzip | ----------------------------------------+---------------------------------- Changes (by martin): * owner: phk => martin * status: reopened => new -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 9 13:10:44 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Mar 2015 13:10:44 -0000 Subject: [Varnish] #1683: varnishncsa should log the last value in vcl sub In-Reply-To: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> References: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> Message-ID: <062.9ee146a363f30615d49bc809ac15f5d6@varnish-cache.org> #1683: varnishncsa should log the last value in vcl sub -----------------------+--------------------- Reporter: razvanphp | Owner: aondio Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: | 
-----------------------+--------------------- Comment (by aondio): Hi, it should not be affected by the proposed patch. I'm trying to reproduce the issue. In which part of the VCL do you modify req.url? Replying to [comment:6 razvanphp]: > I just want to mention, that there is another case where the original value is better in the logs, for example I modify the `req.url` few times for normalization, but in the log I think `%r` and `%U%q` should still contain the original URL. Is this case affected by the proposed patch? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Mar 10 03:40:16 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 10 Mar 2015 03:40:16 -0000 Subject: [Varnish] #1690: CentOS 7 init script is broken due to two separate pidfiles Message-ID: <047.01e68cf1abaae063ed09786b9e8d679f@varnish-cache.org> #1690: CentOS 7 init script is broken due to two separate pidfiles -----------------------+----------------------- Reporter: alexzorin | Type: defect Status: new | Priority: normal Milestone: | Component: packaging Version: 3.0.5 | Severity: normal Keywords: | -----------------------+----------------------- RPM from https://repo.varnish-cache.org/redhat/varnish-3.0/el7/x86_64/varnish/ (3.0.6) ships with an init script that describes two separate pid files (varnish.pid and varnishd.pid). Due to this discrepancy, controlling the service via systemd/systemctl does not work. The pid files should be the same. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Mar 10 10:21:52 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 10 Mar 2015 10:21:52 -0000 Subject: [Varnish] #1690: CentOS 7 init script is broken due to two separate pidfiles In-Reply-To: <047.01e68cf1abaae063ed09786b9e8d679f@varnish-cache.org> References: <047.01e68cf1abaae063ed09786b9e8d679f@varnish-cache.org> Message-ID: <062.0d288f4c547be89fb926771fb577caeb@varnish-cache.org> #1690: CentOS 7 init script is broken due to two separate pidfiles -----------------------+--------------------------------------------- Reporter: alexzorin | Owner: Federico G. Schwindt Type: defect | Status: closed Priority: normal | Milestone: Component: packaging | Version: 3.0.5 Severity: normal | Resolution: fixed Keywords: | -----------------------+--------------------------------------------- Changes (by Federico G. Schwindt ): * owner: => Federico G. Schwindt * status: new => closed * resolution: => fixed Comment: In [9190770d54322567d3d4fa4b5bd0f1dc091c63c4]: {{{ #!CommitTicketReference repository="" revision="9190770d54322567d3d4fa4b5bd0f1dc091c63c4" Make both pidfiles names match Fixes #1690 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Mar 10 10:33:03 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 10 Mar 2015 10:33:03 -0000 Subject: [Varnish] #1683: varnishncsa should log the last value in vcl sub In-Reply-To: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> References: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> Message-ID: <062.9b92c11342501f3df3516ecd223327f2@varnish-cache.org> #1683: varnishncsa should log the last value in vcl sub -----------------------+--------------------- Reporter: razvanphp | Owner: aondio Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: fixed Keywords: | 
-----------------------+--------------------- Changes (by Arianna Aondio ): * status: new => closed * resolution: => fixed Comment: In [cada82833d07362a7fc908ea6e15373024c032a3]: {{{ #!CommitTicketReference repository="" revision="cada82833d07362a7fc908ea6e15373024c032a3" Varnishncsa doesn't pick the first header's value,but the last one (more precise if a header is set more then once or overwritten). Fixes #1683 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Mar 10 10:59:02 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 10 Mar 2015 10:59:02 -0000 Subject: [Varnish] #1683: varnishncsa should log the last value in vcl sub In-Reply-To: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> References: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> Message-ID: <062.909f7debd499ce4eb0f6a45a591a00ad@varnish-cache.org> #1683: varnishncsa should log the last value in vcl sub -----------------------+--------------------- Reporter: razvanphp | Owner: aondio Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: fixed Keywords: | -----------------------+--------------------- Comment (by razvanphp): As I posted before, in `vcl_recv` I call `detect_device` stub where the headers are changed. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Mar 11 09:54:46 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 11 Mar 2015 09:54:46 -0000 Subject: [Varnish] #1664: Assert error in VFP_Error(), cache/cache_fetch_proc.c line 61 In-Reply-To: <043.782dcf41379c991d410f1ae63ea4edff@varnish-cache.org> References: <043.782dcf41379c991d410f1ae63ea4edff@varnish-cache.org> Message-ID: <058.e4d142f7826ac92687fe70a5ba8b6947@varnish-cache.org> #1664: Assert error in VFP_Error(), cache/cache_fetch_proc.c line 61 ----------------------+---------------------- Reporter: daghf | Owner: aondio Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: unknown Severity: normal | Resolution: fixed Keywords: | ----------------------+---------------------- Changes (by Arianna Aondio ): * status: new => closed * resolution: => fixed Comment: In [d7695e488a003bd7b89279c50990864d4763dd8f]: {{{ #!CommitTicketReference repository="" revision="d7695e488a003bd7b89279c50990864d4763dd8f" If VRB_cache() is called with a POST body larger than the provided size limitation, the request fails and the connection is closed. 
Fixes #1664 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Mar 11 12:05:18 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 11 Mar 2015 12:05:18 -0000 Subject: [Varnish] #1691: varnish-4.0.3 admits bogus content-length header Message-ID: <044.5326386d60caf1d74424aba9ec3d8194@varnish-cache.org> #1691: varnish-4.0.3 admits bogus content-length header --------------------+--------------------- Reporter: ingvar | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Keywords: --------------------+--------------------- As seen at http://seclists.org/oss-sec/2015/q1/776: While still unable to trigger the crash described, varnish seems to accept a bogus Content-Length header. When the backend sets Content-Length to a bogus value, like "dupa" in the oss-sec post above, it seems that varnish enters v1f_pull_straight(), while it shouldn't. From my IRC log: 10:47 < phk> ingvar, if you runs something like this against 4.0.3 what do you get ? http://phk.freebsd.dk/misc/a.vtc 10:48 < phk> perbu, did changing the umask help ? 10:49 < phk> ingvar, this might actually be an off-by one thing... (...) 10:56 < ingvar> phk: http://fpaste.org/196148/59813651/ 10:59 < phk> ingvar, try putting "non-fatal" in top of the servers s1 stuff 11:00 < ingvar> phk: # top TEST tests/a.vtc passed (1.513) 11:02 < ingvar> phk: I can add verbose output as well, sec 11:03 < ingvar> phk: http://ur1.ca/jvrka 11:06 < phk> ingvar, the fact that they're in v1f_pull_straight() means that the bogus C-L somehow got accepted. 11:06 < phk> ingvar, I have a really hard time understanding how that happened. Consider the test results attached.
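A bogus value like "dupa" reaching v1f_pull_straight() suggests the Content-Length string was never validated digit-by-digit. A strict parser of the general kind the eventual fix for this ticket introduced (reject anything that is not a plain run of ASCII digits, and guard against overflow) can be sketched as follows — an illustration written for this thread, not the actual parser from the Varnish tree:

```c
#include <stdint.h>

/* Parse a Content-Length value strictly: at least one digit, nothing
 * but ASCII digits, no overflow.  Returns the length, or -1 on any
 * malformed input (e.g. "dupa" from the report).  Illustrative sketch
 * only; function name and error convention are invented here. */
static int64_t
parse_content_length(const char *p)
{
    int64_t cl = 0;

    if (*p == '\0')
        return (-1);
    for (; *p != '\0'; p++) {
        if (*p < '0' || *p > '9')
            return (-1);    /* reject "dupa", "12x", " 12", "12 " */
        if (cl > (INT64_MAX - (*p - '0')) / 10)
            return (-1);    /* would overflow int64_t */
        cl = cl * 10 + (*p - '0');
    }
    return (cl);
}
```

Failing the fetch on a -1 return here, rather than falling through to a straight-read code path, is the point of the hardening: a response whose length cannot be parsed cannot be safely framed.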
I'm not quite sure I've got this straight, and if this really is a problem, why it is not triggered by varnishtest/tests/r01356.vtc -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Mar 11 13:35:12 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 11 Mar 2015 13:35:12 -0000 Subject: [Varnish] #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response In-Reply-To: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> References: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> Message-ID: <058.b01c26fa3047ad577e31e556f49666d3@varnish-cache.org> #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response ----------------------------------------+---------------------------------- Reporter: geoff | Owner: martin Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: major | Resolution: fixed Keywords: esi include synthetic gzip | ----------------------------------------+---------------------------------- Changes (by Martin Blix Grydeland ): * status: new => closed * resolution: => fixed Comment: In [36f01d5b81ae449ecd532f1eafae32a11e8231c3]: {{{ #!CommitTicketReference repository="" revision="36f01d5b81ae449ecd532f1eafae32a11e8231c3" Add a VDP_pretend_gzip for use with synth bodies in ESI includes with gzip Varnishtest: NUL terminate the ungzip'ed body so we can expect on it.
Fixes #1688 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Mar 11 15:23:17 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 11 Mar 2015 15:23:17 -0000 Subject: [Varnish] #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 In-Reply-To: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> References: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> Message-ID: <064.1e287a78890e7753810fed0fee201430@varnish-cache.org> #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 -------------------------+---------------------------------------- Reporter: cache_layer | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: fixed Keywords: | -------------------------+---------------------------------------- Changes (by Martin Blix Grydeland ): * status: reopened => closed * resolution: => fixed Comment: In [fee70166ca0c520b2ce46f9dc540e5a6dd1f9063]: {{{ #!CommitTicketReference repository="" revision="fee70166ca0c520b2ce46f9dc540e5a6dd1f9063" Deal with known zero length objects properly when handling do_gzip/do_gunzip Original patch: fgs Fixes: #1602 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Mar 11 16:38:56 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 11 Mar 2015 16:38:56 -0000 Subject: [Varnish] #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response In-Reply-To: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> References: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> Message-ID: <058.887642f46b79346085d05769e74cd7d6@varnish-cache.org> #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response ----------------------------------------+---------------------------------- Reporter: geoff | Owner: 
martin Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: major | Resolution: fixed Keywords: esi include synthetic gzip | ----------------------------------------+---------------------------------- Comment (by geoff): I can confirm that the patch resolves our issue, thx martin. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Mar 11 23:16:36 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 11 Mar 2015 23:16:36 -0000 Subject: [Varnish] #1691: varnish-4.0.3 admits bogus content-length header In-Reply-To: <044.5326386d60caf1d74424aba9ec3d8194@varnish-cache.org> References: <044.5326386d60caf1d74424aba9ec3d8194@varnish-cache.org> Message-ID: <059.670d7002e051b107df67629119eccb6d@varnish-cache.org> #1691: varnish-4.0.3 admits bogus content-length header --------------------+----------------------- Reporter: ingvar | Owner: fgsch Type: defect | Status: assigned Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: Keywords: | --------------------+----------------------- Changes (by fgsch): * owner: => fgsch * status: new => assigned -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Mar 11 23:17:09 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 11 Mar 2015 23:17:09 -0000 Subject: [Varnish] #1691: varnish admits bogus content-length header (was: varnish-4.0.3 admits bogus content-length header) In-Reply-To: <044.5326386d60caf1d74424aba9ec3d8194@varnish-cache.org> References: <044.5326386d60caf1d74424aba9ec3d8194@varnish-cache.org> Message-ID: <059.0cec93d46193fde8e5a0aef847165bdc@varnish-cache.org> #1691: varnish admits bogus content-length header --------------------+----------------------- Reporter: ingvar | Owner: fgsch Type: defect | Status: assigned Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: 
normal | Resolution: Keywords: | --------------------+----------------------- Changes (by fgsch): * version: unknown => 4.0.3 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Mar 13 11:58:51 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 13 Mar 2015 11:58:51 -0000 Subject: [Varnish] #1684: Retried bereqs don't log BereqMethod, BereqURL, BereqProtocol or BereqHeader In-Reply-To: <043.d90b1b8d19f2203e577f7e0ebefd25bb@varnish-cache.org> References: <043.d90b1b8d19f2203e577f7e0ebefd25bb@varnish-cache.org> Message-ID: <058.08279c25db9acc62539de98b311efb60@varnish-cache.org> #1684: Retried bereqs don't log BereqMethod, BereqURL, BereqProtocol or BereqHeader --------------------+---------------------- Reporter: scoof | Owner: aondio Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: fixed Keywords: | --------------------+---------------------- Changes (by Arianna Aondio ): * status: new => closed * resolution: => fixed Comment: In [f993d444e97b73536058db456cf2f4e7f2c11396]: {{{ #!CommitTicketReference repository="" revision="f993d444e97b73536058db456cf2f4e7f2c11396" On retries a complete request is logged. Previously BereqMethod,BereqUrl,BereqProtocol and BereqHeader(s) were not logged. 
Fixes #1684 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Mar 13 15:00:34 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 13 Mar 2015 15:00:34 -0000 Subject: [Varnish] #1691: varnish admits bogus content-length header In-Reply-To: <044.5326386d60caf1d74424aba9ec3d8194@varnish-cache.org> References: <044.5326386d60caf1d74424aba9ec3d8194@varnish-cache.org> Message-ID: <059.6d8d4ae9f2c083e25f92d4d5aac9aaf4@varnish-cache.org> #1691: varnish admits bogus content-length header --------------------+--------------------- Reporter: ingvar | Owner: fgsch Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: fixed Keywords: | --------------------+--------------------- Changes (by Martin Blix Grydeland ): * status: assigned => closed * resolution: => fixed Comment: In [9d61ea4d722549a984d912603902fccfac473824]: {{{ #!CommitTicketReference repository="" revision="9d61ea4d722549a984d912603902fccfac473824" Fail fetch on malformed Content-Length header Add a common content length parser that is being used by both client and backend side. Original patch by: fgs Fixes: #1691 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 16 12:03:02 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 16 Mar 2015 12:03:02 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.01898bb93ca3417f706cab27041e37ef@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? 
| Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Changes (by slink): * status: new => needinfo -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 16 15:10:52 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 16 Mar 2015 15:10:52 -0000 Subject: [Varnish] #1627: Response where you have do_gzip and do_stream enabled return a wrong content-length to HTTP/1.0 clients In-Reply-To: <041.a4ee8870cc795a7f4ced77cec3172663@varnish-cache.org> References: <041.a4ee8870cc795a7f4ced77cec3172663@varnish-cache.org> Message-ID: <056.ee4d22cb434392ed63572d306fe359ab@varnish-cache.org> #1627: Response where you have do_gzip and do_stream enabled return a wrong content-length to HTTP/1.0 clients --------------------+----------------------------------------------- Reporter: tnt | Owner: Martin Blix Grydeland Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.6 Severity: normal | Resolution: fixed Keywords: | --------------------+----------------------------------------------- Changes (by Martin Blix Grydeland ): * status: new => closed * owner: => Martin Blix Grydeland * resolution: => fixed Comment: In [72981734a141a0a52172b85bae55f8877f69ff42]: {{{ #!CommitTicketReference repository="" revision="72981734a141a0a52172b85bae55f8877f69ff42" Only emit passed Content_Length header when response mode is RES_LEN Original patch and test case by: tnt Fixes: #1627 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Mar 17 16:21:28 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 17 Mar 2015 16:21:28 -0000 Subject: [Varnish] #1608: Varnish does not return 413 or 414 In-Reply-To: 
<043.855604c38eb44e6eba395fc067ec62e2@varnish-cache.org> References: <043.855604c38eb44e6eba395fc067ec62e2@varnish-cache.org> Message-ID: <058.bc4a1a677d149ea636790dc0360812c1@varnish-cache.org> #1608: Varnish does not return 413 or 414 --------------------+-------------------- Reporter: fgsch | Owner: fgsch Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+-------------------- Comment (by fgsch): We have a counter for this now. Should we return 413/414, or is closing the session as we currently do enough? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Mar 19 11:47:35 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 19 Mar 2015 11:47:35 -0000 Subject: [Varnish] #1517: HTTPS Redirects In-Reply-To: <051.0f52524c4a4d7a95edfd419cba0e2b07@varnish-cache.org> References: <051.0f52524c4a4d7a95edfd419cba0e2b07@varnish-cache.org> Message-ID: <066.0c37bdc110b88916688f068bde4ca451@varnish-cache.org> #1517: HTTPS Redirects ---------------------------+---------------------------------- Reporter: shredtechular | Owner: Type: documentation | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: documentation | Version: 4.0.0 Severity: normal | Resolution: invalid Keywords: | ---------------------------+---------------------------------- Comment (by jmrepetti): Hello shredtechular. I'm facing the same problem. This example seems to be written in Varnish 3 syntax: https://www.varnish-cache.org/trac/wiki/VCLExampleRedirectInVCL. How did you fix it? I appreciate it.
Thanks -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Mar 20 13:09:02 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 20 Mar 2015 13:09:02 -0000 Subject: [Varnish] #1692: Assert error in vep_emit_common Message-ID: <044.b0926c147f799f3c2119ecd58287e758@varnish-cache.org> #1692: Assert error in vep_emit_common --------------------+--------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Keywords: --------------------+--------------------- During varnish cache plus testing, we found that r01184.vtc would fail occasionally in vep_emit_common. Backtrace: {{{ **** s1 0.6 macro undef s1_sock **** v1 0.6 vsl| 1005 Begin c sess 0 HTTP/1 **** v1 0.6 vsl| 1005 SessOpen c 127.0.0.1 40315 127.0.0.1:0 127.0.0.1 43229 1426852631.268863 13 **** v1 0.6 vsl| 1005 Link c req 1006 rxreq **** v1 0.6 vsl| 1007 Begin b bereq 1006 fetch **** v1 0.6 vsl| 1007 Timestamp b Start: 1426852631.269114 0.000000 0.000000 **** v1 0.6 vsl| 1007 BereqMethod b GET **** v1 0.6 vsl| 1007 BereqURL b /c **** v1 0.6 vsl| 1007 BereqProtocol b HTTP/1.1 **** v1 0.6 vsl| 1007 BereqHeader b X-Forwarded-For: 127.0.0.1 **** v1 0.6 vsl| 1007 BereqHeader b Accept-Encoding: gzip **** v1 0.6 vsl| 1007 BereqHeader b X-Varnish: 1007 **** v1 0.6 vsl| 1007 VCL_call b BACKEND_FETCH **** v1 0.6 vsl| 1007 VCL_return b fetch **** v1 0.6 vsl| 1007 BackendClose b 15 s1(127.0.0.1,,36613) toolate **** v1 0.6 vsl| 1007 BackendOpen b 15 s1(127.0.0.1,,36613) 127.0.0.1 37296 **** v1 0.6 vsl| 1007 Backend b 15 s1 s1(127.0.0.1,,36613) **** v1 0.6 vsl| 1007 BereqHeader b Host: 127.0.0.1 **** v1 0.6 vsl| 1007 Timestamp b Bereq: 1426852631.269525 0.000411 0.000411 **** v1 0.6 vsl| 1007 Timestamp b Beresp: 1426852631.270038 0.000923 0.000512 **** v1 0.6 vsl| 1007 BerespProtocol b HTTP/1.1 **** v1 0.6 vsl| 1007 BerespStatus b 200 **** v1 0.6 vsl| 1007 BerespReason b OK 
**** v1 0.6 vsl| 1007 BerespHeader b Content-Encoding: gzip **** v1 0.6 vsl| 1007 BerespHeader b Transfer-Encoding: Chunked **** v1 0.6 vsl| 1007 BerespHeader b Date: Fri, 20 Mar 2015 11:57:11 GMT **** v1 0.6 vsl| 1007 TTL b RFC 120 -1 -1 1426852631 1426852631 1426852631 0 0 **** v1 0.6 vsl| 1007 VCL_call b BACKEND_RESPONSE **** v1 0.6 vsl| 1007 VCL_return b deliver **** v1 0.6 vsl| 1007 STV_Alloc b 456 456 456 0 0 **** v1 0.6 vsl| 1007 Storage b malloc s0 **** v1 0.6 vsl| 1007 ObjProtocol b HTTP/1.1 **** v1 0.6 vsl| 1007 ObjStatus b 200 **** v1 0.6 vsl| 1007 ObjReason b OK **** v1 0.6 vsl| 1007 ObjHeader b Content-Encoding: gzip **** v1 0.6 vsl| 1007 ObjHeader b Date: Fri, 20 Mar 2015 11:57:11 GMT **** v1 0.6 vsl| 1007 Fetch_Body b 2 chunked - **** v1 0.6 vsl| 1007 STV_Alloc b 16384 16384 16384 0 0 **** v1 0.6 vsl| 1007 Gzip b Gunzip error: -3 (invalid distance too far back) **** v1 0.6 vsl| 1007 FetchError b Invalid Gzip data: invalid distance too far back **** v1 0.6 vsl| 1007 Gzip b U F - 99 81 80 80 0 *** v1 1.6 debug| Child (29250) died signal=6\n *** v1 1.6 debug| Child (29250) Panic message:\n *** v1 1.6 debug| Assert error in vep_emit_common(), cache/cache_esi_parse.c line 300:\n *** v1 1.6 debug| Condition(l > 0) not true.\n *** v1 1.6 debug| thread = (cache-worker)\n *** v1 1.6 debug| version = varnish-plus-4.0.3r1-rc1 revision 80f5e17\n *** v1 1.6 debug| ident = Linux,3.16.0-4-amd64,x86_64,-smalloc,-smalloc,-hcritbit,epoll\n *** v1 1.6 debug| Backtrace:\n *** v1 1.6 debug| 0x455bc9: pan_backtrace+0x19\n *** v1 1.6 debug| 0x455ac0: pan_ic+0x330\n *** v1 1.6 debug| 0x428b54: vep_emit_common+0x64\n *** v1 1.6 debug| 0x4289f2: VEP_Finish+0x282\n *** v1 1.6 debug| 0x424793: vfp_esi_end+0x1a3\n *** v1 1.6 debug| 0x423d64: vfp_esi_gzip_pull+0x304\n *** v1 1.6 debug| 0x436534: vfp_call+0xb4\n *** v1 1.6 debug| 0x43641e: VFP_Suck+0x2ae\n *** v1 1.6 debug| 0x4369b9: VFP_Fetch_Body+0x479\n *** v1 1.6 debug| 0x433f2d: vbf_stp_fetch+0xe5d\n *** v1 1.6 debug| busyobj 
= 0x7ff55dc8e020 {\n *** v1 1.6 debug| ws = 0x7ff55dc8e0e0 {\n *** v1 1.6 debug| id = "bo",\n *** v1 1.6 debug| {s,f,r,e} = {0x7ff55dc90008,+528,(nil),+57368},\n *** v1 1.6 debug| },\n *** v1 1.6 debug| refcnt = 2\n *** v1 1.6 debug| retries = 0\n *** v1 1.6 debug| failed = 1\n *** v1 1.6 debug| state = 1\n *** v1 1.6 debug| is_do_esi\n *** v1 1.6 debug| is_is_gzip\n *** v1 1.6 debug| bodystatus = 2 (chunked),\n *** v1 1.6 debug| },\n *** v1 1.6 debug| http[bereq] = {\n *** v1 1.6 debug| ws = 0x7ff55dc8e0e0[bo]\n *** v1 1.6 debug| "GET",\n *** v1 1.6 debug| "/c",\n *** v1 1.6 debug| "HTTP/1.1",\n *** v1 1.6 debug| "X-Forwarded-For: 127.0.0.1",\n *** v1 1.6 debug| "Accept-Encoding: gzip",\n *** v1 1.6 debug| "X-Varnish: 1007",\n *** v1 1.6 debug| "Host: 127.0.0.1",\n *** v1 1.6 debug| },\n *** v1 1.6 debug| http[beresp] = {\n *** v1 1.6 debug| ws = 0x7ff55dc8e0e0[bo]\n *** v1 1.6 debug| "HTTP/1.1",\n *** v1 1.6 debug| "200",\n *** v1 1.6 debug| "OK",\n *** v1 1.6 debug| "Content-Encoding: gzip",\n *** v1 1.6 debug| "Transfer-Encoding: Chunked",\n *** v1 1.6 debug| "Date: Fri, 20 Mar 2015 11:57:11 GMT",\n *** v1 1.6 debug| },\n *** v1 1.6 debug| ws = 0x7ff55dc8e258 {\n *** v1 1.6 debug| id = "obj",\n *** v1 1.6 debug| {s,f,r,e} = {0x7ff56a13c368,+96,(nil),+96},\n *** v1 1.6 debug| },\n *** v1 1.6 debug| objcore (FETCH) = 0x7ff55c8100c0 {\n *** v1 1.6 debug| refcnt = 2\n *** v1 1.6 debug| flags = 0x2\n *** v1 1.6 debug| objhead = 0x7ff55c80e0e0\n *** v1 1.6 debug| }\n *** v1 1.6 debug| obj (FETCH) = 0x7ff56a13c200 {\n *** v1 1.6 debug| meta = 0x7ff56a13c238 {\n *** v1 1.6 debug| vxid = 2147484655,\n *** v1 1.6 debug| http[obj] = {\n *** v1 1.6 debug| ws = (nil)[]\n *** v1 1.6 debug| "HTTP/1.1",\n *** v1 1.6 debug| "200",\n *** v1 1.6 debug| "OK",\n *** v1 1.6 debug| "Content-Encoding: gzip",\n *** v1 1.6 debug| "Date: Fri, 20 Mar 2015 11:57:11 GMT",\n *** v1 1.6 debug| },\n *** v1 1.6 debug| len = 15,\n *** v1 1.6 debug| }\n *** v1 1.6 debug| store = {\n *** v1 1.6 
debug| 15 {\n *** v1 1.6 debug| 1f 8b 08 00 00 00 00 00 00 03 00 00 00 ff ff |...............|\n *** v1 1.6 debug| },\n *** v1 1.6 debug| },\n *** v1 1.6 debug| },\n *** v1 1.6 debug| }\n *** v1 1.6 debug| \n *** v1 1.6 debug| \n }}} varnishtest -j20 -n50 would usually trigger it. We haven't been able to reproduce it on master/4.0, but changes in -plus with regard to buffer handling could explain why it behaves differently. Attached patch fixes the issue for us, and it looks to be applicable for master/4.0 as well. -Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Mar 20 18:00:01 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 20 Mar 2015 18:00:01 -0000 Subject: [Varnish] #1693: Hash is not logged even if Hash is set Message-ID: <043.7f49e344a8b7c8303a859e021707a4cd@varnish-cache.org> #1693: Hash is not logged even if Hash is set --------------------+------------------- Reporter: fgsch | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Keywords: --------------------+------------------- Commit 7fd3aa3f8fdf97185af22ce9e71c338933ee1cc0 broke logging the hash. Test and fix attached. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Mar 22 02:03:57 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 22 Mar 2015 02:03:57 -0000 Subject: [Varnish] #1694: How to limit RAM for varnish Message-ID: <047.07db56535616226a122b5df2d1dc634a@varnish-cache.org> #1694: How to limit RAM for varnish -----------------------------+------------------------- Reporter: phongthvn | Type: enhancement Status: new | Priority: normal Milestone: Varnish 3.0 dev | Component: build Version: 3.0.5 | Severity: normal Keywords: memory usage | -----------------------------+------------------------- Hi, my Varnish service runs alongside other services (such as mysql, php, java...), and Varnish uses a lot of RAM: 29GB / 32GB.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 5931 varnish 20 0 2988g 28g 28g S 72.1 92.6 694:23.60 varnishd 276 root 20 0 0 0 0 S 99.0 0.0 594:20.92 kswapd0 277 root 20 0 0 0 0 S 99.4 0.0 594:20.92 kswapd1 Can I limit Varnish's memory usage to about 8GB - 16GB? Thanks -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 23 09:52:00 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Mar 2015 09:52:00 -0000 Subject: [Varnish] #1674: Debian: varnishncsa fails to start during boot In-Reply-To: <043.c33d34c6dbef7fb5306b98989678d5f6@varnish-cache.org> References: <043.c33d34c6dbef7fb5306b98989678d5f6@varnish-cache.org> Message-ID: <058.4c478bf9d8be715b919ec1781cfa8b6d@varnish-cache.org> #1674: Debian: varnishncsa fails to start during boot -----------------------+-------------------- Reporter: idl0r | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: packaging | Version: 4.0.2 Severity: normal | Resolution: Keywords: | -----------------------+-------------------- Comment (by idl0r): Hm, it looks like I was wrong. The stale socket might still be present, so we need a check to verify whether it's a stale socket or not.
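One common way to tell a stale socket file from a live one — sketched here purely as an illustration of the check the comment asks for, not as the actual Debian init-script fix — is to try connecting to it: a live listener accepts the connection, while a leftover socket file from a dead process typically fails with ECONNREFUSED. The function name and error convention below are invented for the example:

```c
#include <errno.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Probe a UNIX-domain socket path.  Returns 1 if a listener answers,
 * 0 if the path looks stale (connection refused), -1 on other errors
 * (e.g. the path does not exist).  A real init script would also need
 * to think about permissions and races. */
static int
unix_socket_alive(const char *path)
{
    struct sockaddr_un sun;
    int fd, ret;

    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return (-1);
    memset(&sun, 0, sizeof sun);
    sun.sun_family = AF_UNIX;
    strncpy(sun.sun_path, path, sizeof sun.sun_path - 1);
    if (connect(fd, (struct sockaddr *)&sun, sizeof sun) == 0)
        ret = 1;            /* someone is listening */
    else if (errno == ECONNREFUSED)
        ret = 0;            /* stale socket file, safe to unlink */
    else
        ret = -1;           /* ENOENT, EACCES, ... */
    close(fd);
    return (ret);
}
```

A startup script could then unlink the path only when the probe reports it stale, instead of unconditionally removing it.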
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 23 12:09:29 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Mar 2015 12:09:29 -0000 Subject: [Varnish] #1694: How to limit RAM for varnish In-Reply-To: <047.07db56535616226a122b5df2d1dc634a@varnish-cache.org> References: <047.07db56535616226a122b5df2d1dc634a@varnish-cache.org> Message-ID: <062.1b480f08c60b9d803b59ce9f0b79b099@varnish-cache.org> #1694: How to limit RAM for varnish --------------------------+------------------------------ Reporter: phongthvn | Owner: Type: enhancement | Status: closed Priority: normal | Milestone: Varnish 3.0 dev Component: build | Version: 3.0.5 Severity: normal | Resolution: invalid Keywords: memory usage | --------------------------+------------------------------ Changes (by daghf): * status: new => closed * resolution: => invalid Comment: This is not a bug. Please consult the user documentation and use the varnish-misc mailing list for general questions. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 23 12:52:45 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Mar 2015 12:52:45 -0000 Subject: [Varnish] #1682: If file-storage pre-created and more than 50% of the file-system space, Varnish does not start. In-Reply-To: <042.0ac59dafc60bb4439b0e09c88a89895c@varnish-cache.org> References: <042.0ac59dafc60bb4439b0e09c88a89895c@varnish-cache.org> Message-ID: <057.8a1f37b642665ac297345587ce80b255@varnish-cache.org> #1682: If file-storage pre-created and more than 50% of the file-system space, Varnish does not start. 
----------------------+--------------------- Reporter: xcir | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: fixed Keywords: | ----------------------+--------------------- Changes (by martin): * status: new => closed * resolution: => fixed Comment: These commits are in the 4.0 git branch. Closing ticket. Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 23 15:57:27 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Mar 2015 15:57:27 -0000 Subject: [Varnish] #1693: Hash is not logged even if Hash is set In-Reply-To: <043.7f49e344a8b7c8303a859e021707a4cd@varnish-cache.org> References: <043.7f49e344a8b7c8303a859e021707a4cd@varnish-cache.org> Message-ID: <058.1a00ff2e2931516ccd5bda8bd3fdf081@varnish-cache.org> #1693: Hash is not logged even if Hash is set --------------------+--------------------------------------------- Reporter: fgsch | Owner: Federico G. Schwindt Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | --------------------+--------------------------------------------- Changes (by Federico G. Schwindt ): * status: new => closed * owner: => Federico G. Schwindt * resolution: => fixed Comment: In [7cbfb2d7f650538a01225329723ed51bc9f5db80]: {{{ #!CommitTicketReference repository="" revision="7cbfb2d7f650538a01225329723ed51bc9f5db80" Log hash_data() input when the Hash bit is set Fixes #1693. 
}}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 23 17:14:06 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Mar 2015 17:14:06 -0000 Subject: [Varnish] #1609: Reset MGT.child_panic after a panic.clear In-Reply-To: <046.f6154daa50f9ea4015af5bbbbd99f173@varnish-cache.org> References: <046.f6154daa50f9ea4015af5bbbbd99f173@varnish-cache.org> Message-ID: <061.9891ba2c2de1c11306236b8decf3ff27@varnish-cache.org> #1609: Reset MGT.child_panic after a panic.clear -------------------------+--------------------- Reporter: coredump | Owner: fgsch Type: enhancement | Status: closed Priority: normal | Milestone: Component: build | Version: 4.0.2 Severity: normal | Resolution: fixed Keywords: | -------------------------+--------------------- Changes (by Federico G. Schwindt ): * status: new => closed * resolution: => fixed Comment: In [b3a74ff9a50a7a658edbfce2860a598faa286ae4]: {{{ #!CommitTicketReference repository="" revision="b3a74ff9a50a7a658edbfce2860a598faa286ae4" Allow to reset the child_panic counter panic.clear gains a -z optional parameter. Fixes #1609. }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Mar 25 09:49:54 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 25 Mar 2015 09:49:54 -0000 Subject: [Varnish] #1642: Assert error in VGZ_Ibuf() In-Reply-To: <045.985190b764fa9780a1b24d2b8ab90094@varnish-cache.org> References: <045.985190b764fa9780a1b24d2b8ab90094@varnish-cache.org> Message-ID: <060.9d5587126043bb9ee4ee2fe34d8088dd@varnish-cache.org> #1642: Assert error in VGZ_Ibuf() --------------------------+--------------------- Reporter: llavaud | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: Keywords: assert error | --------------------------+--------------------- Comment (by llavaud): Hello, Any news about this issue ? 
I can't switch to varnish 4 due to this problem... :( Regards. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Mar 25 14:28:45 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 25 Mar 2015 14:28:45 -0000 Subject: [Varnish] #1695: "Unknown protocol" startup error Message-ID: <055.5b43860b724cc45d70453967235be82c@varnish-cache.org> #1695: "Unknown protocol" startup error -----------------------------------------------+---------------------- Reporter: zaterio@? | Type: defect Status: new | Priority: normal Milestone: Varnish 4.0 release | Component: varnishd Version: trunk | Severity: normal Keywords: Error: Unknown protocol 'IP:port' | -----------------------------------------------+---------------------- Varnish startup config: varnishd -a 190.196.162.29:80,10.0.0.241:80,190.96.94.221:80,190.96.94.194:80 -T 10.0.0.241:6088 -f /etc/varnish/default.vcl -h classic,16383 -s malloc,10G -p thread_pools=2 -p thread_pool_min=100 -p thread_pool_max=3000 -p thread_pool_add_delay=1 -p auto_restart=on with revision e25c648 starts OK: varnishd -v varnishd (varnish-trunk revision e25c648) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2015 Varnish Software AS but with revision : varnishd -V varnishd (varnish-trunk revision b1f8725) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2015 Varnish Software AS we have the following error: Error: Unknown protocol '10.0.0.241:80' -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Mar 25 14:32:25 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 25 Mar 2015 14:32:25 -0000 Subject: [Varnish] #1695: "Unknown protocol" startup error In-Reply-To: <055.5b43860b724cc45d70453967235be82c@varnish-cache.org> References: <055.5b43860b724cc45d70453967235be82c@varnish-cache.org> Message-ID: <070.7b8f5895c32ab7c8572f01080209dc93@varnish-cache.org> #1695: "Unknown protocol" startup error 
-----------------------------------------------+--------------------------- Reporter: zaterio@? | Owner: Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 Component: varnishd | release Severity: normal | Version: trunk Keywords: Error: Unknown protocol 'IP:port' | Resolution: worksforme -----------------------------------------------+--------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: This is part of ongoing development. You need to specify -a for each address now: -a 190.196.162.92 -a 10.0.0.241:80 ... -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Mar 25 14:48:28 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 25 Mar 2015 14:48:28 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.7971f6d3109120db80037c98aa986733@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? | Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by zaterio@?): Dear phk, sorry for the delay, I was on holiday. Now running: varnishd -V varnishd (varnish-trunk revision 63bf572) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2015 Varnish Software AS I will report any issue. Regards -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Mar 25 15:09:08 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 25 Mar 2015 15:09:08 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true.
In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.4b3e92953e879d0c696c52b20c496fcc@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? | Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by zaterio@?): panic.show 200 Last panic at: Wed, 25 Mar 2015 14:49:02 GMT Assert error in tcp_handle(), cache/cache_backend_tcp.c line 95: Condition((vbc->in_waiter) != 0) not true. thread = (cache-epoll) version = varnish-trunk revision 63bf572 ident = Linux,2.6.32-5-amd64,x86_64,-junix,-smalloc,-smalloc,-hclassic,epoll Backtrace: 0x433f8a: pan_ic+0x14a 0x4140fd: tcp_handle+0x3bd 0x465ed0: Wait_Handle+0x90 0x467160: vwe_thread+0x100 0x7fd69f5358ca: libpthread.so.0(+0x68ca) [0x7fd69f5358ca] 0x7fd69f29cb6d: libc.so.6(clone+0x6d) [0x7fd69f29cb6d] -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Mar 26 12:20:46 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 26 Mar 2015 12:20:46 -0000 Subject: [Varnish] #1696: Varnish leaves connections in CLOSE_WAIT Message-ID: <045.9d3cd635ef30cb25d4bc2d87bd479cf7@varnish-cache.org> #1696: Varnish leaves connections in CLOSE_WAIT ---------------------+---------------------- Reporter: rwimmer | Type: defect Status: new | Priority: high Milestone: | Component: build Version: 3.0.5 | Severity: critical Keywords: | ---------------------+---------------------- We've seen this problem two weeks ago in a completely different project (in this case with Varnish 3.0.6) but it disappeared suddenly. Now it's back on another project with Varnish 3.0.5. 
Varnish is running for some hours when it suddenly doesn't accept some requests. Further investigations with lsof and netstat showed that there were around 100000 connections in CLOSE_WAIT state. An example lsof output (note that Varnish listens on port 16081): lsof -n 2>&1 | grep varnish | grep CLOSE_WAIT COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME varnishd 12682 nobody 3u IPv4 788349978 0t0 TCP 172.18.97.170:16081->185.15.111.18:35694 (CLOSE_WAIT) varnishd 12682 nobody 4u IPv4 788793838 0t0 TCP 172.18.97.170:16081->77.22.87.38:38329 (CLOSE_WAIT) varnishd 12682 nobody 5u IPv4 788365096 0t0 TCP 172.18.97.170:16081->46.164.62.152:23263 (CLOSE_WAIT) varnishd 12682 nobody 6u IPv4 789230766 0t0 TCP 172.18.97.170:16081->78.131.7.202:27775 (CLOSE_WAIT) varnishd 12682 nobody 10u IPv4 789129001 0t0 TCP 172.18.97.170:16081->46.164.62.152:31629 (CLOSE_WAIT) varnishd 12682 nobody 11u IPv4 789352359 0t0 TCP 172.18.97.170:16081->188.103.5.90:48825 (CLOSE_WAIT) varnishd 12682 nobody 18u IPv4 788518839 0t0 TCP 172.18.97.170:16081->92.50.98.10:59357 (CLOSE_WAIT) varnishd 12682 nobody 19u IPv4 788720641 0t0 TCP 172.18.97.170:16081->37.188.135.151:35951 (CLOSE_WAIT) varnishd 12682 nobody 20u IPv4 788581054 0t0 TCP 172.18.97.170:16081->81.182.85.246:54467 (CLOSE_WAIT) varnishd 12682 nobody 22u IPv4 789159713 0t0 TCP 172.18.97.170:16081->78.63.204.139:46028 (CLOSE_WAIT) varnishd 12682 nobody 23u IPv4 789059708 0t0 TCP 172.18.97.170:16081->213.192.25.210:24825 (CLOSE_WAIT) .... Similar with netstat -tonp 2>&1 Active Internet connections (w/o servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name Timer ... 
tcp 1 0 172.18.97.172:16081 84.184.219.117:14371 CLOSE_WAIT 26536/varnishd off (0.00/0/0) tcp 1 0 172.18.97.172:16081 46.39.187.224:52043 CLOSE_WAIT 26536/varnishd off (0.00/0/0) tcp 1 0 172.18.97.172:16081 83.208.43.226:36693 CLOSE_WAIT 26536/varnishd off (0.00/0/0) tcp 1 0 172.18.97.172:16081 188.6.166.158:39417 CLOSE_WAIT 26536/varnishd off (0.00/0/0) tcp 1 0 172.18.97.172:16081 89.233.129.36:16291 CLOSE_WAIT 26536/varnishd off (0.00/0/0) tcp 1 0 172.18.97.172:16081 37.203.126.22:60019 CLOSE_WAIT 26536/varnishd off (0.00/0/0) tcp 1 0 172.18.97.172:16081 37.188.135.151:57363 CLOSE_WAIT 26536/varnishd off (0.00/0/0) ... We see the number of CLOSE_WAIT connections from clients to Varnish growing every minute until it reaches about 100000. Then only a restart releases the connections. With dmesg we see messages like [4395699.833458] TCP: too many orphaned sockets [4395699.857379] TCP: too many orphaned sockets [4395700.035472] TCP: too many orphaned sockets [4395700.062136] TCP: too many orphaned sockets [4395700.212538] TCP: too many orphaned sockets [4395700.215065] TCP: too many orphaned sockets [4395701.105658] TCP: too many orphaned sockets [4395702.603942] TCP: too many orphaned sockets [4395707.474580] TCP: too many orphaned sockets after a Varnish restart. If we look in the proc filesystem at how many fds the Varnish worker process has open, we see the same number of sockets that netstat or lsof shows: ls -al /proc/26536/fd/ | more total 0 dr-x------ 2 nobody nogroup 0 Mar 26 10:48 . dr-xr-xr-x 9 nobody nogroup 0 Mar 26 10:47 ..
lr-x------ 1 nobody nogroup 64 Mar 26 10:48 0 -> /dev/null l-wx------ 1 nobody nogroup 64 Mar 26 10:48 1 -> pipe:[785486723] lrwx------ 1 nobody nogroup 64 Mar 26 10:48 10 -> socket:[785975005] lrwx------ 1 nobody nogroup 64 Mar 26 10:48 100 -> socket:[787172993] lrwx------ 1 nobody nogroup 64 Mar 26 10:50 1000 -> socket:[786915607] lrwx------ 1 nobody nogroup 64 Mar 26 11:30 10000 -> socket:[786585448] lrwx------ 1 nobody nogroup 64 Mar 26 11:30 10001 -> socket:[786708112] lrwx------ 1 nobody nogroup 64 Mar 26 11:30 10002 -> socket:[787832286] lrwx------ 1 nobody nogroup 64 Mar 26 11:30 10003 -> socket:[786695486] lrwx------ 1 nobody nogroup 64 Mar 26 11:30 10004 -> socket:[787427586] lrwx------ 1 nobody nogroup 64 Mar 26 11:30 10005 -> socket:[787944180] lrwx------ 1 nobody nogroup 64 Mar 26 11:30 10006 -> socket:[786718487] lrwx------ 1 nobody nogroup 64 Mar 26 11:30 10007 -> socket:[786607488] lrwx------ 1 nobody nogroup 64 Mar 26 11:30 10008 -> socket:[786620992] lrwx------ 1 nobody nogroup 64 Mar 26 11:30 10009 -> socket:[786874970] .... Additionally we see a growing number of Varnish open files with "can't identify protocol" in the NAME column of the lsof output (but only about 10% of the number in CLOSE_WAIT): lsof -n | grep varn | grep identify | more varnishd 12682 nobody 21u sock 0,7 0t0 788513770 can't identify protocol varnishd 12682 nobody 44u sock 0,7 0t0 789511210 can't identify protocol varnishd 12682 nobody 52u sock 0,7 0t0 789358574 can't identify protocol varnishd 12682 nobody 66u sock 0,7 0t0 789329635 can't identify protocol varnishd 12682 nobody 74u sock 0,7 0t0 789469904 can't identify protocol varnishd 12682 nobody 77u sock 0,7 0t0 788574512 can't identify protocol varnishd 12682 nobody 102u sock 0,7 0t0 789022929 can't identify protocol ... The problem starts at about 100000 connections in CLOSE_WAIT. It certainly has something to do with the Varnish session_max parameter, whose default is 100000.
We've now raised this to 190000 and also increased the max open files to 250000 (from 130000): cat /proc/26536/limits Limit Soft Limit Hard Limit Units Max cpu time unlimited unlimited seconds Max file size unlimited unlimited bytes Max data size unlimited unlimited bytes Max stack size 524288 524288 bytes Max core file size 0 unlimited bytes Max resident set unlimited unlimited bytes Max processes 641255 641255 processes Max open files 250000 250000 files Max locked memory 83968000 83968000 bytes Max address space unlimited unlimited bytes Max file locks unlimited unlimited locks Max pending signals 641255 641255 signals Max msgqueue size 819200 819200 bytes Max nice priority 0 0 Max realtime priority 0 0 Max realtime timeout unlimited unlimited us But why doesn't Varnish ever release these sockets/connections? All sources we contacted or googled told us to fix the application - which in this case is Varnish... There are no CLOSE_WAIT timeouts in the Linux kernel (we use Ubuntu 12.04 LTS with kernel 3.13, btw). With the default limit of session_max=100000 we currently need to restart Varnish 3-4 times a day. We've tweaked a lot of parameters, tried to eliminate keep-alive everywhere, ... but nothing changed. Only at night, when requests decrease, does Varnish seem able to release more connections/sockets than it opens. We used this Varnish configuration and version unmodified for several months without problems, but since yesterday Varnish leaves connections in CLOSE_WAIT until we restart it (though even 3.0.6 had the problem, as mentioned above). In the changelog of 3.0.7 I can't find any bug fix related to our issue. I've attached a Munin graph which shows how fast the number of Varnish open files grows. Any hints as to what could cause this issue?
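[A hedged aside, not from the ticket: rather than eyeballing raw lsof/netstat dumps, the leak above can be tracked by tallying connection states per dump. The sample lines below are fabricated stand-ins for the netstat output shown earlier; column 6 of `netstat -tn` output is the connection state.]

```shell
# Tally TCP states in a captured `netstat -tn`-style dump.
# The three sample lines are fabricated stand-ins for the real dump above.
printf '%s\n' \
  'tcp        1      0 172.18.97.172:16081  84.184.219.117:14371  CLOSE_WAIT' \
  'tcp        1      0 172.18.97.172:16081  46.39.187.224:52043   CLOSE_WAIT' \
  'tcp        0      0 172.18.97.172:16081  83.208.43.226:36693   ESTABLISHED' |
awk '{count[$6]++} END {for (s in count) print s, count[s]}'
```

On a live machine the same filter can be fed directly, e.g. netstat -tn | awk '$6 == "CLOSE_WAIT"' | wc -l, and sampled every minute to watch the growth the reporter describes, much like the attached Munin graph does for open files.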
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Mar 27 10:27:37 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 27 Mar 2015 10:27:37 -0000 Subject: [Varnish] #1696: Varnish leaves connections in CLOSE_WAIT In-Reply-To: <045.9d3cd635ef30cb25d4bc2d87bd479cf7@varnish-cache.org> References: <045.9d3cd635ef30cb25d4bc2d87bd479cf7@varnish-cache.org> Message-ID: <060.32d37305d823fa032c923d670b67122d@varnish-cache.org> #1696: Varnish leaves connections in CLOSE_WAIT ----------------------+-------------------- Reporter: rwimmer | Owner: Type: defect | Status: new Priority: high | Milestone: Component: build | Version: 3.0.5 Severity: critical | Resolution: Keywords: | ----------------------+-------------------- Comment (by rwimmer): We've investigated the issue a little bit further. Here you can see the output of "ss" (a utility to dump socket statistics): ss -t -o -p {{{ State Recv-Q Send-Q Local Address:Port Peer Address:Port CLOSE-WAIT 1 0 172.18.97.170:16081 217.160.127.250:52031 users:(("varnishd",17527,7217)) CLOSE-WAIT 1 0 172.18.97.170:16081 89.176.201.80:23299 users:(("varnishd",17527,7485)) CLOSE-WAIT 1 0 172.18.97.170:16081 178.24.33.119:14975 users:(("varnishd",17527,5532)) CLOSE-WAIT 1 0 172.18.97.170:16081 212.95.7.209:45404 users:(("varnishd",17527,11271)) CLOSE-WAIT 1 0 172.18.97.170:16081 178.24.33.119:2733 users:(("varnishd",17527,7944)) CLOSE-WAIT 1 0 172.18.97.170:16081 158.195.75.102:55999 users:(("varnishd",17527,5198)) CLOSE-WAIT 1 0 172.18.97.170:16081 188.6.166.158:61071 users:(("varnishd",17527,15792)) CLOSE-WAIT 1 0 172.18.97.170:16081 2.247.148.115:4851 users:(("varnishd",17527,14971)) CLOSE-WAIT 1 0 172.18.97.170:16081 83.77.186.26:27446 users:(("varnishd",17527,12737)) CLOSE-WAIT 1 0 172.18.97.170:16081 37.201.240.119:49509 users:(("varnishd",17527,7343)) CLOSE-WAIT 1 0 172.18.97.170:16081 77.22.87.38:40169 users:(("varnishd",17527,10276)) CLOSE-WAIT 1 0 172.18.97.170:16081 
78.139.25.24:31947 users:(("varnishd",17527,5613)) .... }}} As you can see, all the connections have a receive queue size of 1. All services besides Varnish have their Recv-Q at 0 as long as the state isn't ESTABLISHED. We think this is why the sockets never get released: the Recv-Q is not empty. Additionally, we've tried to lower the "tcp_keepalive_time" kernel parameter from the default of 7200 to 600: {{{ echo 600 > /proc/sys/net/ipv4/tcp_keepalive_time }}} We hoped that the kernel would detect that these connections don't do anything useful and would kick them. But as expected nothing happened. The socket is still owned by Varnish and there is still something in the Recv-Q. Today I restarted Varnish at 10:27. Now it's 11:00. If you have a look at the /proc fs {{{ ls -ld /proc/9455 dr-xr-xr-x 9 nobody nogroup 0 Mar 27 10:27 /proc/9455 ls -alrt /proc/9455/fd/ | more total 0 dr-xr-xr-x 9 nobody nogroup 0 Mar 27 10:27 .. dr-x------ 2 nobody nogroup 0 Mar 27 10:27 . lrwx------ 1 nobody nogroup 64 Mar 27 10:28 99 -> socket:[812049314] lrwx------ 1 nobody nogroup 64 Mar 27 10:28 98 -> socket:[812187431] lrwx------ 1 nobody nogroup 64 Mar 27 10:28 97 -> socket:[812301205] lrwx------ 1 nobody nogroup 64 Mar 27 10:28 96 -> socket:[812036401] lrwx------ 1 nobody nogroup 64 Mar 27 10:28 95 -> socket:[812083869] lrwx------ 1 nobody nogroup 64 Mar 27 10:28 94 -> socket:[812773476] lrwx------ 1 nobody nogroup 64 Mar 27 10:28 93 -> socket:[812231242] lrwx------ 1 nobody nogroup 64 Mar 27 10:28 92 -> socket:[812021756] lrwx------ 1 nobody nogroup 64 Mar 27 10:28 91 -> socket:[812498505] lrwx------ 1 nobody nogroup 64 Mar 27 10:28 90 -> socket:[812351724] ... }}} you can see that the oldest socket connection is already 30 minutes old (before I restarted Varnish today I could see thousands of sockets that had existed for over 10 hours).
If you now grep for the oldest socket id with lsof {{{ lsof -n | grep 812049314 varnishd 9455 nobody 99u IPv4 812049314 0t0 TCP 172.18.97.170:16081->84.42.225.26:37078 (CLOSE_WAIT) }}} you see that this connection is in CLOSE_WAIT (like most of the Varnish sockets). If I look again hours later, this socket will still be there, until Varnish is restarted. We've now set Varnish session_max=500000 and the number of max open files for Varnish to 600000: {{{ cat /proc/9455/limits Limit Soft Limit Hard Limit Units Max cpu time unlimited unlimited seconds Max file size unlimited unlimited bytes Max data size unlimited unlimited bytes Max stack size 524288 524288 bytes Max core file size 0 unlimited bytes Max resident set unlimited unlimited bytes Max processes 641254 641254 processes Max open files 600000 600000 files Max locked memory 83968000 83968000 bytes Max address space unlimited unlimited bytes Max file locks unlimited unlimited locks Max pending signals 641254 641254 signals Max msgqueue size 819200 819200 bytes Max nice priority 0 0 Max realtime priority 0 0 Max realtime timeout unlimited unlimited us }}} But is this REALLY the solution? If connections double, we need to set these values to over a million at least. I don't know if this has any side effects. We have a peak in traffic and connections at about 10 p.m.; then traffic and connections decrease to a low rate at 12 p.m. and start to increase again at 6 a.m. The crazy thing is that at 12 p.m. the number of open files also starts to decrease, but it takes 5 hours to go from 50000 open files down to a few hundred. If you search Google for this topic you can find a lot of unanswered threads and no real solution. It would be cool if we could find some kind of solution to this issue. We've had this issue now on two completely different sites with different kernels, different OSes and different Varnish versions.
At least a hint as to whether raising session_max and the number of max open files to a very high number is okay would be very helpful! -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Mar 30 10:20:22 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 30 Mar 2015 10:20:22 -0000 Subject: [Varnish] #1696: Varnish leaves connections in CLOSE_WAIT In-Reply-To: <045.9d3cd635ef30cb25d4bc2d87bd479cf7@varnish-cache.org> References: <045.9d3cd635ef30cb25d4bc2d87bd479cf7@varnish-cache.org> Message-ID: <060.51d0e9aeb38f61c9e6d78deb2cdc8300@varnish-cache.org> #1696: Varnish leaves connections in CLOSE_WAIT ----------------------+-------------------- Reporter: rwimmer | Owner: Type: defect | Status: new Priority: high | Milestone: Component: build | Version: 3.0.5 Severity: critical | Resolution: Keywords: | ----------------------+-------------------- Comment (by rwimmer): We were able to nail the problem down further, but for us it's a little bit weird because it's exactly one URI causing the problem, and it looks like this: /foo/bar/suggest/heroTeaser/ And ONLY this syntax! "/foo/bar/suggest/heroTeaser" (without the trailing /) and "/foo/bar/suggest/heroTeaser/?some_query_string=value" work as expected. The cause seems to be in vcl_fetch. The original code looks like this: {{{ ... if (beresp.http.Content-Type ~ "text/html" || beresp.http.Content-Type ~ "text/xml" || beresp.http.Content-Type ~ "application/json") { if ((beresp.http.Set-Cookie ~ "NO_CACHE=") || (beresp.ttl < 1s)) { set beresp.ttl = 0s; return (hit_for_pass); } ... }}} With this VCL code we see connections in CLOSE_WAIT steadily growing (ok, we now know that this code isn't perfect, but it shouldn't matter). If we change the code above to {{{ ... 
if (beresp.http.Content-Type ~ "text/html" || beresp.http.Content-Type ~ "text/xml" || beresp.http.Content-Type ~ "application/json") { if ((beresp.http.Set-Cookie ~ "NO_CACHE=") || (beresp.ttl < 1s)) { set beresp.ttl = 0s; if (req.url ~ "^/foo/bar/suggest/heroTeaser/$") { return (deliver); } return (hit_for_pass); } ... }}} everything is ok. No further growth in CLOSE_WAIT connections! We looked at the Varnish flow graph (https://www.varnish-cache.org/trac/attachment/wiki/VCLExampleDefault/varnish_flow_3_0.png) to see what part could cause this behavior, and finally discovered that we need to add some code to vcl_miss to prevent vcl_fetch from being called next for this URI; vcl_pass should be called instead. So we added the following VCL code: {{{ sub vcl_miss { if (req.url ~ "^/foo/bar/suggest/heroTeaser/$") { return (pass); } } }}} Additionally we switched back to the original code in vcl_fetch mentioned above: {{{ ... if (beresp.http.Content-Type ~ "text/html" || beresp.http.Content-Type ~ "text/xml" || beresp.http.Content-Type ~ "application/json") { if ((beresp.http.Set-Cookie ~ "NO_CACHE=") || (beresp.ttl < 1s)) { set beresp.ttl = 0s; return (hit_for_pass); } ... }}} If you look at the Varnish flow graph, the main difference is that after vcl_pass an anon object is created, which is skipped entirely if vcl_fetch is called after vcl_miss. So by adding the vcl_miss code we changed the execution path, and that worked. Without vcl_miss and the "if" clause in vcl_miss, a simple "curl http://somdomain.tld/foo/bar/suggest/heroTeaser/" caused a timeout after 5 minutes (tcp timeout) and no request to the backend. "curl http://domain.tld/foo/bar/suggest/heroTeaser" and "curl http://domain.tld/foo/bar/suggest/heroTeaser/?some_query_string=value" returned immediately. It shouldn't matter how wrong the VCL code is in this case. At the very least an error should occur, not tens of thousands of connections in CLOSE_WAIT. 
We suspect it has something to do with "hit_for_pass" and the anon object, which is only created after vcl_pass is called, but not if vcl_miss is called and vcl_fetch executes next. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Mar 31 12:35:20 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 31 Mar 2015 12:35:20 -0000 Subject: [Varnish] #1697: Discard a configuration with bad character is impossible Message-ID: <045.e592b764f80a19b4c80257a8cd55750b@varnish-cache.org> #1697: Discard a configuration with bad character is impossible ---------------------------------+------------------------ Reporter: vrobert | Type: defect Status: new | Priority: low Milestone: Varnish 4.0 release | Component: varnishadm Version: 4.0.3 | Severity: normal Keywords: discard | ---------------------------------+------------------------ Hi, we regularly have a problem with some configurations that we cannot discard. These configurations contain a bad character (we don't know why) and they are not matched by the discard command. varnishadm -T localhost:6082 -S /etc/varnish/secret vcl.list available 0 20150311_ofaDrupal?2 available 0 20150327?_maville360_3 active 127 20150331_devicedetect varnishadm -T localhost:6082 -S /etc/varnish/secret vcl.discard 20150311_ofaDrupal?2 No configuration named 20150311_ofaDrupal?2 known. Command failed with error code 106 varnishadm -T localhost:6082 -S /etc/varnish/secret vcl.discard 20150311_ofaDrupal No configuration named 20150311_ofaDrupal known. Command failed with error code 106 The big problem is that the health checks of the old configurations are still running, putting unnecessary load on the backends. Do you have a workaround? 
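[A hedged aside, not from the ticket: one possible explanation is that the '?' shown by vcl.list is a display-side replacement for a non-printable byte in the real configuration name, so typing a literal '?' to vcl.discard can never match. The sketch below fabricates such a name (the \001 byte is an assumption) to show the display/identity mismatch; it does not touch varnishadm.]

```shell
# Fabricated example: a name containing a control byte (\001 here).
name=$(printf '20150311_ofaDrupal\0012')
# A terminal listing might render the non-printable byte as '?':
shown=$(printf '%s' "$name" | tr -c '[:print:]' '?')
echo "$shown"   # what the operator sees and then types into vcl.discard
[ "$shown" = "$name" ] || echo "displayed name differs from real name"
# cat -v makes the actual byte visible (here as ^A):
printf '%s\n' "$name" | cat -v
```

If the stray byte can be identified this way, it might be possible to reproduce it verbatim in the vcl.discard argument; whether varnishadm accepts that is untested here.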
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Mar 31 12:37:05 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 31 Mar 2015 12:37:05 -0000 Subject: [Varnish] #1697: Discard a configuration with bad character is impossible In-Reply-To: <045.e592b764f80a19b4c80257a8cd55750b@varnish-cache.org> References: <045.e592b764f80a19b4c80257a8cd55750b@varnish-cache.org> Message-ID: <060.0ad2cb2768cc681d8931206a375417ca@varnish-cache.org> #1697: Discard a configuration with bad character is impossible ------------------------+---------------------------------- Reporter: vrobert | Owner: Type: defect | Status: new Priority: low | Milestone: Varnish 4.0 release Component: varnishadm | Version: 4.0.3 Severity: normal | Resolution: Keywords: discard | ------------------------+---------------------------------- Comment (by vrobert): {{{ varnishadm -T localhost:6082 -S /etc/varnish/secret vcl.list available 0 20150311_ofaDrupal?2 available 0 20150327?_maville360_3 active 127 20150331_devicedetect varnishadm -T localhost:6082 -S /etc/varnish/secret vcl.discard 20150311_ofaDrupal?2 No configuration named 20150311_ofaDrupal?2 known. Command failed with error code 106 varnishadm -T localhost:6082 -S /etc/varnish/secret vcl.discard 20150311_ofaDrupal No configuration named 20150311_ofaDrupal known. Command failed with error code 106 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator