From varnish-bugs at varnish-cache.org Tue Sep 1 09:22:28 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 01 Sep 2015 09:22:28 -0000 Subject: [Varnish] #1780: Missing errorhandling code in HSH_Purge() Message-ID: <045.1510b9ff08f133373af2d982235f6896@varnish-cache.org> #1780: Missing errorhandling code in HSH_Purge() ---------------------------+---------------------- Reporter: llavaud | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 4.1.0-TP1 | Severity: normal Keywords: errorhandling | ---------------------------+---------------------- {{{ Child (675522) Panic message: Missing errorhandling code in HSH_Purge(), cache_hash.c line 557: Condition(spc >= sizeof *ocp) not true.errno = 32 (Broken pipe) thread = (cache-worker) ident = Linux,3.2.0-4-amd64,x86_64,-sfile,-sfile,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x432205: /usr/sbin/varnishd() [0x432205] 0x42bc9c: /usr/sbin/varnishd(HSH_Purge+0x4ac) [0x42bc9c] 0x7f4224d445c5: ./vcl.5ETuLgy3.so(VGC_function_vcl_miss+0x6f) [0x7f4224d445c5] 0x439418: /usr/sbin/varnishd(VCL_miss_method+0x48) [0x439418] 0x41805a: /usr/sbin/varnishd() [0x41805a] 0x41a7a5: /usr/sbin/varnishd(CNT_Session+0x9b5) [0x41a7a5] 0x433f79: /usr/sbin/varnishd() [0x433f79] 0x7f5c98026b50: /lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7f5c98026b50] 0x7f5c97d7095d: /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f5c97d7095d] sp = 0x7f3f87002008 { fd = 595, id = 595, xid = 1110637145, client = 10.114.7.45 40555, step = STP_MISS, handling = deliver, restarts = 0, esi_level = 0 flags = bodystatus = 4 ws = 0x7f3f87002080 { id = "sess", {s,f,r,e} = {0x7f3f87002c78,+304,(nil),+524288}, }, http[req] = { ws = 0x7f3f87002080[sess] "XPURGE", "my_uri", "HTTP/1.1", "Accept: */*", "Host: my_host", "Surrogate-Capability: abc=ESI/1.0", }, worker = 0x7f41c7313b30 { ws = 0x7f41c7313d68 { id = "wrk", {s,f,r,e} = {0x7f41c72f3a70,+24,+65536,+65536}, }, http[bereq] = { ws = 0x7f41c7313d68[wrk] "GET", "my_uri", 
"HTTP/1.1", "Accept: */*", "Host: my_host", "Surrogate-Capability: abc=ESI/1. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 1 10:40:27 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 01 Sep 2015 10:40:27 -0000 Subject: [Varnish] #1666: make init script check config before restarting In-Reply-To: <050.d39f2f7511b733263cce920f8a427047@varnish-cache.org> References: <050.d39f2f7511b733263cce920f8a427047@varnish-cache.org> Message-ID: <065.f528aa937b1c564a5e73d707ea993b1d@varnish-cache.org> #1666: make init script check config before restarting --------------------------+----------------------- Reporter: KlavsKlavsen | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: Keywords: | --------------------------+----------------------- Comment (by gquintard): I looked at how to do it in systemd, and stumbled upon this thread: http://lists.freedesktop.org/archives/systemd-devel/2014-July/021633.html TL;DR: there is no ExecRestartPre/ExecRestartCheck that would allow us to do it. Restart is just a stop and a start with no idea of context. Maybe we can tweak varnish_vcl_reload to have a dryrun/check/fake option that would just validate the VCL? However, that wouldn't catch errors in the command line, but I'm not sure anything can.
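A pre-restart check along the lines discussed above can live in the init script itself even without systemd support. A minimal sketch of the control flow, assuming the real checker would be something like `varnishd -C -f /etc/varnish/default.vcl` (hypothetical path); only trivial stand-in checkers are used below:

```shell
#!/bin/sh
# Run an arbitrary config-check command; only proceed with the restart
# when the check succeeds, otherwise leave the running instance alone.
check_then_restart() {
    if "$@"; then
        echo "config check passed; restarting"
    else
        echo "config check failed; not restarting" >&2
        return 1
    fi
}

# Stand-in checkers for illustration; a real script would pass e.g.
# "varnishd -C -f /etc/varnish/default.vcl" (hypothetical path) instead.
check_then_restart true
check_then_restart false || echo "restart was skipped"
```

The same shape also fits a systemd `ExecStartPre=` line, which runs on the start half of a restart.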
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 1 11:29:33 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 01 Sep 2015 11:29:33 -0000 Subject: [Varnish] #1780: Missing errorhandling code in HSH_Purge() In-Reply-To: <045.1510b9ff08f133373af2d982235f6896@varnish-cache.org> References: <045.1510b9ff08f133373af2d982235f6896@varnish-cache.org> Message-ID: <060.611fa569efcf14c56e7be40f547a4535@varnish-cache.org> #1780: Missing errorhandling code in HSH_Purge() ---------------------------+------------------------ Reporter: llavaud | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.1.0-TP1 Severity: normal | Resolution: invalid Keywords: errorhandling | ---------------------------+------------------------ Changes (by fgsch): * status: new => closed * resolution: => invalid Comment: You are running out of workspace_thread: {{{ {s,f,r,e} = {0x7f41c72f3a70,+24,+65536,+65536}, }}} Only 24 bytes left, which is not enough. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 1 11:52:58 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 01 Sep 2015 11:52:58 -0000 Subject: [Varnish] #1759: VMOD abi check is always lenient In-Reply-To: <046.c750f774ffe7fd6b92acc7efadf37dd3@varnish-cache.org> References: <046.c750f774ffe7fd6b92acc7efadf37dd3@varnish-cache.org> Message-ID: <061.872523172c95eb7ab87ed87a542047e8@varnish-cache.org> #1759: VMOD abi check is always lenient ----------------------+----------------------- Reporter: lkarsten | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Comment (by lkarsten): Discussed during VDD15Q3. We'll have a look at this after 4.1 is out. The vrt|full stuff is a partial fix/hack, and we should do this right instead.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 2 09:10:35 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 02 Sep 2015 09:10:35 -0000 Subject: [Varnish] #1781: Error with gzip content on Varnish4 Message-ID: <045.b1c3367938319f9ec0d4c40a960292f9@varnish-cache.org> #1781: Error with gzip content on Varnish4 --------------------------+---------------------- Reporter: llavaud | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 4.1.0-TP1 | Severity: major Keywords: gzip corrupt | --------------------------+---------------------- Hello, I have a problem with some gzip content: I request a URL on my Varnish 3 server and get a correct gzip response, but if I do the same request on my Varnish 4 server I get a corrupt gzip response. - request on Varnish 3 (3.0.5 on debian wheezy): {{{ curl -H 'Accept-Encoding: gzip' -H 'Host: www.parents.fr' http://10.114.3.12/Beaute-Forme/Beaute --silent > /tmp/content.gz }}} {{{ stat /tmp/content.gz File: '/tmp/content.gz' Size: 11804 Blocks: 24 IO Block: 4096 regular file Device: 801h/2049d Inode: 175099 Links: 1 Access: (0664/-rw-rw-r--) Uid: ( 1000/laurentl) Gid: ( 1000/laurentl) Access: 2015-09-01 18:25:01.632840870 +0200 Modify: 2015-09-01 18:25:33.733209689 +0200 Change: 2015-09-01 18:25:33.733209689 +0200 Birth: - }}} {{{ gunzip -t /tmp/content.gz }}} - request on Varnish 4 (4.1.0-TP1 on debian wheezy): {{{ curl -H 'Accept-Encoding: gzip' -H 'Host: www.parents.fr' http://10.114.3.14/Beaute-Forme/Beaute --silent > /tmp/content2.gz }}} file is smaller: {{{ stat /tmp/content2.gz File: '/tmp/content2.gz'
Size: 11794 Blocks: 24 IO Block: 4096 regular file Device: 801h/2049d Inode: 175055 Links: 1 Access: (0664/-rw-rw-r--) Uid: ( 1000/laurentl) Gid: ( 1000/laurentl) Access: 2015-09-01 18:26:38.481953633 +0200 Modify: 2015-09-01 18:26:38.509953954 +0200 Change: 2015-09-01 18:26:38.509953954 +0200 Birth: - }}} and corrupt: {{{ gunzip -t /tmp/content2.gz gzip: /tmp/content2.gz: invalid compressed data--crc error gzip: /tmp/content2.gz: invalid compressed data--length error }}} I have attached the full varnishlog. Thanks in advance. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 2 10:27:53 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 02 Sep 2015 10:27:53 -0000 Subject: [Varnish] #1782: CHANGELOG doc/changes.rst in master does not contain any reference to 3.0.7 Message-ID: <046.b8751b3a647defb0cd3b8869eacf5179@varnish-cache.org> #1782: CHANGELOG doc/changes.rst in master does not contain any reference to 3.0.7 ----------------------+--------------------------- Reporter: regilero | Type: documentation Status: new | Priority: low Milestone: | Component: documentation Version: unknown | Severity: minor Keywords: | ----------------------+--------------------------- Release 3.0.7 occurred on 23 March 2015.
Changelog is edited on the git 3.0 branch https://www.varnish-cache.org/trac/browser/doc/changes.rst?rev=f544cd8129e3a5b500ea4a706faf6cb4208ad983 But the doc/changes.rst document on master and 4.x branches has no information after 3.0.6. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 2 11:36:33 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 02 Sep 2015 11:36:33 -0000 Subject: [Varnish] #1782: CHANGELOG doc/changes.rst in master does not contain any reference to 3.0.7 In-Reply-To: <046.b8751b3a647defb0cd3b8869eacf5179@varnish-cache.org> References: <046.b8751b3a647defb0cd3b8869eacf5179@varnish-cache.org> Message-ID: <061.58698f0f58469396893ae168f84a3bad@varnish-cache.org> #1782: CHANGELOG doc/changes.rst in master does not contain any reference to 3.0.7 ---------------------------+-------------------------------------------- Reporter: regilero | Owner: Lasse Karstensen Type: documentation | Status: closed Priority: low | Milestone: Component: documentation | Version: unknown Severity: minor | Resolution: fixed Keywords: | ---------------------------+-------------------------------------------- Changes (by Lasse Karstensen ): * status: new => closed * owner: => Lasse Karstensen * resolution: => fixed Comment: In [a6162d5baeff4dbcd8d7ac39aa24ec8c0c46b288]: {{{ #!CommitTicketReference repository="" revision="a6162d5baeff4dbcd8d7ac39aa24ec8c0c46b288" Add contents for 3.0.7 from 3.0 branch.
Fixes: #1782 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 3 13:51:14 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 03 Sep 2015 13:51:14 -0000 Subject: [Varnish] #1783: POST with empty body and absent content-length returns 503 Message-ID: <051.09aed746d894bc757cda1db58c0bdf64@varnish-cache.org> #1783: POST with empty body and absent content-length returns 503 -----------------------------+-------------------- Reporter: fascinatedcow | Type: defect Status: new | Priority: normal Milestone: Varnish 4.1-TP1 | Component: build Version: 4.1.0-TP1 | Severity: normal Keywords: | -----------------------------+-------------------- Hi, I think I have found a bug in 4.1.0-tp1. As per subject, if I POST without specifying a body and without setting a content-length header, I'm getting "503 Backend fetch failed" from varnish, after a few seconds pause. I can see that varnish is sending the request to the backend and getting a response back. I'm using curl like so: {{{ curl -v -XPOST http:/// -H 'Host: ' }}} This doesn't happen with varnish 3 or 4.0.3. Please let me know if you need more information. 
Regards, Matt -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 3 14:31:13 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 03 Sep 2015 14:31:13 -0000 Subject: [Varnish] #1783: POST with empty body and absent content-length returns 503 In-Reply-To: <051.09aed746d894bc757cda1db58c0bdf64@varnish-cache.org> References: <051.09aed746d894bc757cda1db58c0bdf64@varnish-cache.org> Message-ID: <066.07f445fb7f3fb48fa3b82faf0414dd6c@varnish-cache.org> #1783: POST with empty body and absent content-length returns 503 ---------------------------+------------------------------ Reporter: fascinatedcow | Owner: Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.1-TP1 Component: build | Version: 4.1.0-TP1 Severity: normal | Resolution: invalid Keywords: | ---------------------------+------------------------------ Changes (by phk): * status: new => closed * resolution: => invalid Comment: A POST without Content-Length or Transfer-Encoding is not legal. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 3 16:30:58 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 03 Sep 2015 16:30:58 -0000 Subject: [Varnish] #1783: POST with empty body and absent content-length returns 503 In-Reply-To: <051.09aed746d894bc757cda1db58c0bdf64@varnish-cache.org> References: <051.09aed746d894bc757cda1db58c0bdf64@varnish-cache.org> Message-ID: <066.57f8aef08e7d1b56411d15f83d8d33a1@varnish-cache.org> #1783: POST with empty body and absent content-length returns 503 ---------------------------+------------------------------ Reporter: fascinatedcow | Owner: Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.1-TP1 Component: build | Version: 4.1.0-TP1 Severity: normal | Resolution: invalid Keywords: | ---------------------------+------------------------------ Comment (by fascinatedcow): Hi PHK, OK, you're the expert so I'll take your word for it. 
But then why not just reject the request? Why is there a pause, and why send the request to the backend and report a backend error? It's not a backend error. Cheers, Matt -- Ticket URL: Varnish The Varnish HTTP Accelerator From phk at phk.freebsd.dk Thu Sep 3 17:01:46 2015 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 03 Sep 2015 17:01:46 +0000 Subject: [Varnish] #1783: POST with empty body and absent content-length returns 503 In-Reply-To: <066.57f8aef08e7d1b56411d15f83d8d33a1@varnish-cache.org> References: <051.09aed746d894bc757cda1db58c0bdf64@varnish-cache.org> <066.57f8aef08e7d1b56411d15f83d8d33a1@varnish-cache.org> Message-ID: <75077.1441299706@critter.freebsd.dk> -------- In message <066.57f8aef08e7d1b56411d15f83d8d33a1 at varnish-cache.org>, "Varnish" writes: > OK, you're the expert so I'll take your word for it. But then why not just > reject the request? Why is there a pause, and why send the request to the > backend and report a backend error? It's not a backend error. The pause is there because if it had been an HTTP/1.0 request, the body could have followed without Content-Length/Transfer-Encoding and been terminated by the client closing the socket. Doing that fall-back is probably overdoing it, if the client explicitly sent an HTTP/1.1 header. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From fgsch at lodoss.net Thu Sep 3 23:55:50 2015 From: fgsch at lodoss.net (Federico Schwindt) Date: Fri, 4 Sep 2015 00:55:50 +0100 Subject: Functions in backend variables interpreted literally In-Reply-To: <84619A39D0EA2742A35E6A47CC81A7329139D06C@HQ-BP-EXH1.spotmain.com> References: <84619A39D0EA2742A35E6A47CC81A7329139D06C@HQ-BP-EXH1.spotmain.com> Message-ID: Hi, I suspect you are using an old version. if2dd2d2f works fine in 4.0.3 and master. Cheers.
PS: in the future please use varnish-dev or trac for bug reports. On Wed, Aug 12, 2015 at 4:17 PM, Chris Monteiro < chris.monteiro at spotlight.com> wrote: > Hi there > > > > I used dynamically named servers on EC2; my varnish bootstrap setup > configures the vcl according to the server name, e.g. > > i-f2dd2d2f > > > > Varnish doesn't like the hyphen in the back end name, so I replace it out, > fair enough: > > if2dd2d2f > > > > Now it fails because the back end server name has the function 'if' in it! > > > > I now replace this out to: > > lol2dd2d2f > > > > I think it may be a bug that the 'if' function is interpreted in a back > end variable definition like this. > > _______________________________________________ > varnish-bugs mailing list > varnish-bugs at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-bugs > -------------- next part -------------- An HTML attachment was scrubbed...
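Sanitizing on the bootstrap side avoids both the hyphen and the keyword collision without waiting for a parser change. A hedged sketch (the `srv_` prefix is my own convention, not something Varnish requires; any alphabetic prefix keeps the generated name from ever matching a VCL reserved word such as `if`):

```shell
#!/bin/sh
# Turn an EC2 instance id into a VCL-safe backend name: strip every
# character that is not alphanumeric, then add a fixed prefix so the
# result can never collide with a VCL keyword.
instance_id="i-f2dd2d2f"
backend_name="srv_$(printf '%s' "$instance_id" | tr -cd 'a-zA-Z0-9')"
echo "$backend_name"    # prints "srv_if2dd2d2f"
```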
URL: From varnish-bugs at varnish-cache.org Fri Sep 4 09:35:41 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 04 Sep 2015 09:35:41 -0000 Subject: [Varnish] #1783: POST with empty body and absent content-length returns 503 In-Reply-To: <051.09aed746d894bc757cda1db58c0bdf64@varnish-cache.org> References: <051.09aed746d894bc757cda1db58c0bdf64@varnish-cache.org> Message-ID: <066.86fd9f1df43ab1f3cee278f504b0b79d@varnish-cache.org> #1783: POST with empty body and absent content-length returns 503 ---------------------------+------------------------------ Reporter: fascinatedcow | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Varnish 4.1-TP1 Component: build | Version: 4.1.0-TP1 Severity: normal | Resolution: Keywords: | ---------------------------+------------------------------ Changes (by phk): * status: closed => reopened * resolution: invalid => Comment: Hmm, it seems that this was "clarified" in the HTTPbis RFCs, so POST without a body now is valid. Reopening. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 7 13:43:46 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 07 Sep 2015 13:43:46 -0000 Subject: [Varnish] #1784: Varnish 4 - googlbot response 403 Message-ID: <045.a4ab8bfd2bd5222bc5683af25875adb0@varnish-cache.org> #1784: Varnish 4 - googlbot response 403 ---------------------------------+-------------------- Reporter: morphey | Type: defect Status: new | Priority: normal Milestone: Varnish 4.0 release | Component: build Version: unknown | Severity: normal Keywords: | ---------------------------------+-------------------- Hello, I have a problem with varnish. Basically it works all right. But for a few months Googlebot has been reporting many 403 errors that do not originate from the WordPress site underneath. Has anyone encountered similar problems?
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 7 20:04:54 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 07 Sep 2015 20:04:54 -0000 Subject: [Varnish] #1632: Debian: postinst broken In-Reply-To: <043.bdb6966f18eda35af6376ddaeb977125@varnish-cache.org> References: <043.bdb6966f18eda35af6376ddaeb977125@varnish-cache.org> Message-ID: <058.bff1846e48e888eec09792f1317b3674@varnish-cache.org> #1632: Debian: postinst broken -----------------------+----------------------- Reporter: idl0r | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: packaging | Version: unknown Severity: normal | Resolution: Keywords: | -----------------------+----------------------- Comment (by lkarsten): Having worked on the init scripts for 4.1 today, I don't have any specific input. I'll try to get some upgrade testing done later in the week, but as far as I've seen so far it works as it is. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 7 20:06:30 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 07 Sep 2015 20:06:30 -0000 Subject: [Varnish] #1666: make init script check config before restarting In-Reply-To: <050.d39f2f7511b733263cce920f8a427047@varnish-cache.org> References: <050.d39f2f7511b733263cce920f8a427047@varnish-cache.org> Message-ID: <065.be98abcad156716f48e80d8142fd2ae9@varnish-cache.org> #1666: make init script check config before restarting --------------------------+----------------------- Reporter: KlavsKlavsen | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: Keywords: | --------------------------+----------------------- Comment (by lkarsten): Varnish 4.1 on el6/el7/non-systemd debians/ubuntus will do this check, starting from today. I haven't checked how our systemd service files handle it.
Leaving this open until we decide if this should be put into 4.0 as well. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 7 20:09:49 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 07 Sep 2015 20:09:49 -0000 Subject: [Varnish] #1756: /etc/init.d/varnish reload always returns 0 on debian/ubuntu In-Reply-To: <044.b6984289b41a3c6207dadd00b7d1e379@varnish-cache.org> References: <044.b6984289b41a3c6207dadd00b7d1e379@varnish-cache.org> Message-ID: <059.383b096206f08f45f8937b915f756e43@varnish-cache.org> #1756: /etc/init.d/varnish reload always returns 0 on debian/ubuntu --------------------------------------------+----------------------- Reporter: fleish | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: Keywords: init init.d reload exit return | --------------------------------------------+----------------------- Comment (by lkarsten): Thanks for reporting this. The scripts for Varnish 4.1 now return an error code on a failing reload, and will write the error description to stdout/stderr. Varnish 3 is end of life and will not get any updates. Leaving this open until we figure out if we should apply this to 4.0 as well.
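The difference the new scripts make boils down to not swallowing the reload command's exit status. A toy sketch of both shapes; `fake_reload` is purely illustrative and stands in for a failing `varnish_vcl_reload`:

```shell
#!/bin/sh
fake_reload() { return 1; }    # stand-in for a failing varnish_vcl_reload

# Old shape: the reload status is ignored, so the caller always sees 0.
reload_silent() {
    fake_reload || true
    echo "Reloading... done"
}

# New shape: the failure is reported on stderr and propagated.
reload_checked() {
    if ! fake_reload; then
        echo "reload failed: VCL did not compile" >&2
        return 1
    fi
    echo "Reloading... done"
}

reload_silent
reload_checked || echo "the failure is now visible to the caller"
```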
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 7 20:22:48 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 07 Sep 2015 20:22:48 -0000 Subject: [Varnish] #1586: Can still lose headers when running out of workspace in set xxx.http In-Reply-To: <043.4d2be4d8abd2b08850fd215fc93be98e@varnish-cache.org> References: <043.4d2be4d8abd2b08850fd215fc93be98e@varnish-cache.org> Message-ID: <058.04162f2b192aaf3d5537fcb0d2e5398e@varnish-cache.org> #1586: Can still lose headers when running out of workspace in set xxx.http ----------------------+-------------------- Reporter: slink | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by lkarsten): Had a look at this in the context of the new client WS overflow handling. On master as of today, the header assignment leads to a LostHeader log line, but the overflow marker is not set. We should find out if this is significant enough to fail the request, or keep going as we've been doing so far.
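As a toy illustration of the behaviour described above (all numbers invented; this is not Varnish code): a header that does not fit in the remaining workspace is merely logged as LostHeader, and nothing marks the request as overflowed.

```shell
#!/bin/sh
# Toy model of a fixed-size workspace: setting a header either stores it
# or is only logged as LostHeader, mirroring the current behaviour where
# the request itself carries no overflow marker.
ws_free=24
set_header() {
    hdr="$1"
    if [ "${#hdr}" -gt "$ws_free" ]; then
        echo "LostHeader: ${hdr%%:*}"    # dropped; processing continues
    else
        ws_free=$(( ws_free - ${#hdr} ))
        echo "stored: ${hdr%%:*}"
    fi
}

set_header "X-Short: ok"
set_header "X-Custom: a-header-value-that-does-not-fit"
```

The second call prints only a LostHeader line; the caller has no other way to notice the drop, which is exactly the question the ticket raises.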
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 7 20:23:48 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 07 Sep 2015 20:23:48 -0000 Subject: [Varnish] #1702: RHEL6 Init-script: startup errors piped to /dev/null In-Reply-To: <044.81cb7ae807b7e679852f34b4fe523747@varnish-cache.org> References: <044.81cb7ae807b7e679852f34b4fe523747@varnish-cache.org> Message-ID: <059.4144d705523af2b009dedb7c0102c03f@varnish-cache.org> #1702: RHEL6 Init-script: startup errors piped to /dev/null -----------------------+---------------------- Reporter: denisb | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: packaging | Version: unknown Severity: normal | Resolution: Keywords: | -----------------------+---------------------- Comment (by lkarsten): This is fixed as of today in Varnish 4.1 init scripts for EL6/EL7. Leaving this open until we figure out if it should be added to 4.0 as well. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 7 20:26:45 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 07 Sep 2015 20:26:45 -0000 Subject: [Varnish] #1784: Varnish 4 - googlbot response 403 In-Reply-To: <045.a4ab8bfd2bd5222bc5683af25875adb0@varnish-cache.org> References: <045.a4ab8bfd2bd5222bc5683af25875adb0@varnish-cache.org> Message-ID: <060.4ac945ab567c53b074c03af888940791@varnish-cache.org> #1784: Varnish 4 - googlbot response 403 ---------------------+---------------------------------- Reporter: morphey | Owner: Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: build | Version: unknown Severity: normal | Resolution: invalid Keywords: | ---------------------+---------------------------------- Changes (by lkarsten): * status: new => closed * resolution: => invalid Comment: Hi. Please use the varnish-misc at varnish-cache.org email list for support inquiries. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 8 09:44:55 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 08 Sep 2015 09:44:55 -0000 Subject: [Varnish] #1781: Error with gzip content on Varnish4 In-Reply-To: <045.b1c3367938319f9ec0d4c40a960292f9@varnish-cache.org> References: <045.b1c3367938319f9ec0d4c40a960292f9@varnish-cache.org> Message-ID: <060.9be0bb20dfc65bc61ececc0332a5710d@varnish-cache.org> #1781: Error with gzip content on Varnish4 --------------------------+------------------------ Reporter: llavaud | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 4.1.0-TP1 Severity: major | Resolution: Keywords: gzip corrupt | --------------------------+------------------------ Comment (by martin): Hi, and thanks for the detailed bug report. My attempts at reproducing the problem, though, have so far been unsuccessful. As it looks like you can easily reproduce the issue, I wondered if I could ask you to redo it once more. What I notice in the logs you attached is that the objects being stitched together appear to be empty objects. Or at least they end up producing zero bytes' worth of payload in the final result. This is contrary to the logs from Varnish 3, where things are working. So likely something went wrong during the fetch into cache of the numerous ESI fragments. In your log, though, most of these fragments are already in the cache, so they are cache HITs and the logs thus won't show the steps to fetch them. Could I ask you to please redo the test on Varnish 4, with the same logging as before, but make sure that the cache is completely clean at the time you run it? (e.g. just started). This would allow us to see the nature of the objects as they enter the cache (e.g. if they are gzipped before entering or gzipped by Varnish, etc.)
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 8 11:27:24 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 08 Sep 2015 11:27:24 -0000 Subject: [Varnish] #1785: r01576.vtc fails on jessie armv7l. Message-ID: <046.3796e0bf19bc6694851715fe36fe2682@varnish-cache.org> #1785: r01576.vtc fails on jessie armv7l. ----------------------+----------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 4.1.0-TP1 Severity: normal | Keywords: ----------------------+----------------------- Running on a 4-core scaleway.com armv7l, all tests in 4.1.0-beta1 pass with the exception of r01576.vtc ("Test recursive regexp's fail before consuming all the stack"). {{{ **** s1 1.6 bodylen = 0 ** s1 1.6 === expect req.http.found == ---- s1 1.6 EXPECT req.http.found (1) == "" failed }}} {{{ $ dpkg -l | grep pcre ii libpcre3:armhf 2:8.35-3.3 armhf Perl 5 Compatible Regular Expression Library - runtime files ii libpcre3-dev:armhf 2:8.35-3.3 armhf Perl 5 Compatible Regular Expression Library - development files ii libpcrecpp0:armhf 2:8.35-3.3 armhf Perl 5 Compatible Regular Expression Library - C++ runtime files $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 16148 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 65536 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 16148 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited }}} Attaching varnishtest output.
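Two knobs bound how deep a PCRE match may recurse before the test's guard kicks in: the process stack limit shown in the ulimit output above, and varnishd's pcre_match_limit_recursion parameter (a Varnish 4.x parameter; inspecting it needs a running instance, so that part is shown as a comment only). A small sketch for collecting the first:

```shell
#!/bin/sh
# Print the stack rlimit the test suite ran under; the value is in
# kbytes, or the word "unlimited".
stack_kb=$(ulimit -s)
echo "stack size limit: ${stack_kb}"
# With a running varnishd one would also compare (4.x parameter name):
#   varnishadm param.show pcre_match_limit_recursion
```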
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 8 11:33:50 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 08 Sep 2015 11:33:50 -0000 Subject: [Varnish] #1779: Test std.ip (m00011) failing on OS X In-Reply-To: <045.c141d2673f9da9e64a8a20409a51199a@varnish-cache.org> References: <045.c141d2673f9da9e64a8a20409a51199a@varnish-cache.org> Message-ID: <060.dcdd3e4b114ec6c3ca4ca1c170c66a86@varnish-cache.org> #1779: Test std.ip (m00011) failing on OS X ---------------------+-------------------- Reporter: espebra | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | ---------------------+-------------------- Comment (by lkarsten): Probably related: #1520 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 8 11:55:49 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 08 Sep 2015 11:55:49 -0000 Subject: [Varnish] #1786: Web documentation push for 4.1 fails. Message-ID: <046.aaa8a81b85a50305618ae83ea644c119@varnish-cache.org> #1786: Web documentation push for 4.1 fails. ---------------------------+----------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: documentation | Version: 4.1.0-TP1 Severity: normal | Keywords: ---------------------------+----------------------- When pushing code to the 4.1 repository, uploading the documentation to the web fails: {{{ remote: make[1]: Leaving directory `/tmp/tmp.iiE3HZtNmg/varnish-cache/doc/graphviz' remote: sphinx-build -W -q -N -b html -d build/doctrees -D latex_paper_size=a4 . build/html remote: Making output directory... remote: remote: Build finished. The HTML pages are in build/html.
remote: make: Leaving directory `/tmp/tmp.iiE3HZtNmg/varnish-cache/doc/sphinx' remote: sending incremental file list remote: rsync: mkdir "/srv/varnish-cache.org/www/docs/4.1" failed: Permission denied (13) remote: rsync error: error in file IO (code 11) at main.c(605) [Receiver=3.0.9] remote: rsync: connection unexpectedly closed (9 bytes received so far) [sender] remote: rsync error: error in rsync protocol data stream (code 12) at io.c(605) [sender=3.0.9] remote: failure (exit 12) To git at git.varnish-cache.org:varnish-cache 8993e19..2cb715a 4.1 -> 4.1 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 8 12:27:41 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 08 Sep 2015 12:27:41 -0000 Subject: [Varnish] #1786: Web documentation push for 4.1 fails. In-Reply-To: <046.aaa8a81b85a50305618ae83ea644c119@varnish-cache.org> References: <046.aaa8a81b85a50305618ae83ea644c119@varnish-cache.org> Message-ID: <061.13becf978dd7b6b2a65a6396ab9b06b1@varnish-cache.org> #1786: Web documentation push for 4.1 fails. ---------------------------+------------------------ Reporter: lkarsten | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: documentation | Version: 4.1.0-TP1 Severity: normal | Resolution: fixed Keywords: | ---------------------------+------------------------ Changes (by lkarsten): * status: new => closed * resolution: => fixed Comment: [14:15:09] < espen> scn: please try again Appears to work now. {{{ remote: Build finished. The HTML pages are in build/html.
remote: make: Leaving directory `/tmp/tmp.j1KD7VaIJF/varnish-cache/doc/sphinx' remote: sending incremental file list remote: ./ remote: .buildinfo remote: genindex.html remote: index.html [cut] remote: whats-new/upgrade-4.0.html remote: whats-new/upgrading.html remote: remote: sent 1981757 bytes received 4111 bytes 3971736.00 bytes/sec remote: total size is 1967477 speedup is 0.99 remote: success To git at git.varnish-cache.org:varnish-cache 2cb715a..72a0d5d 4.1 -> 4.1 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 8 12:40:18 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 08 Sep 2015 12:40:18 -0000 Subject: [Varnish] #1781: Error with gzip content on Varnish4 In-Reply-To: <045.b1c3367938319f9ec0d4c40a960292f9@varnish-cache.org> References: <045.b1c3367938319f9ec0d4c40a960292f9@varnish-cache.org> Message-ID: <060.d258eeae01753d552035bc6e3d067066@varnish-cache.org> #1781: Error with gzip content on Varnish4 --------------------------+------------------------ Reporter: llavaud | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 4.1.0-TP1 Severity: major | Resolution: Keywords: gzip corrupt | --------------------------+------------------------ Comment (by llavaud): I have attached a new varnishlog (varnishlog_v4_full.txt) with all ESI fragments fetched from the backend.
I forgot to say that I use version 4.1.0-TP1 with the following two patches (maybe it will make a difference): * https://www.varnish-cache.org/trac/changeset/b26f9fff4877fbe1001c7f995df98f9241914e12 * https://www.varnish-cache.org/trac/changeset/0a800c5167b6e8ccd0df483bb69136c30199aac2 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 9 06:48:39 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 Sep 2015 06:48:39 -0000 Subject: [Varnish] #1682: If file-storage pre-created and more than 50% of the file-system space, Varnish does not start. In-Reply-To: <042.0ac59dafc60bb4439b0e09c88a89895c@varnish-cache.org> References: <042.0ac59dafc60bb4439b0e09c88a89895c@varnish-cache.org> Message-ID: <057.e95c8f99a44a16ebb46940ef7a475b5a@varnish-cache.org> #1682: If file-storage pre-created and more than 50% of the file-system space, Varnish does not start. ----------------------+----------------------- Reporter: xcir | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Changes (by xcir): * status: closed => reopened * resolution: fixed => Comment: Hi, I got the error message "Error: (-sfile) allocation error: No space left on device" when restarting Varnish. Environment {{{ varnishd (varnish-4.1.0-beta1 revision 017ec21) and varnishd (varnish-trunk revision ca88bce) Linux cache03 3.10.0-229.4.2.el7.x86_64 #1 SMP Wed May 13 10:06:09 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux CentOS Linux release 7.1.1503 (Core) filesystem is xfs.
}}} The cause is that fallocate fails in libvarnish/vfil.c (even if "(st.st_blocks * 512) + fsspace >= size") --fail log {{{ logging syslog(6,"VFIL:st_blocks*512:%ld st_size:%ld free-space:%ld require-size:%ld",st.st_blocks * 512, st.st_size, fsspace ,size); syslog: Sep 9 13:38:45 cache03 varnishd: VFIL:st_blocks*512:46170902528 st_size:46170898432 free-space:3188826112 require-size:46170898432 stdout: Error: (-sfile) allocation error: No space left on device }}} https://github.com/varnish/Varnish-Cache/blob/117c2bdcfbb8c11f48bd9137e76edfc746dfbe71/lib/libvarnish/vfil.c#L188 I think fallocate is not required if "st_blocks * 512" is greater than st_size, because the space is already allocated. I have sent a patch: https://www.varnish-cache.org/trac/attachment/ticket/1682/0001-If-st_blocks-512-is-greater-than-st_size-is-not-requ.patch -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 9 07:38:47 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 Sep 2015 07:38:47 -0000 Subject: [Varnish] #1787: Varnish fails to start because of unused subs in VCL and bug in Red Hat systemd service Message-ID: <044.861e7bf77273f84687c24d76ef0875df@varnish-cache.org> #1787: Varnish fails to start because of unused subs in VCL and bug in Red Hat systemd service ---------------------------------+----------------------- Reporter: anders | Type: defect Status: new | Priority: normal Milestone: Varnish 4.0 release | Component: packaging Version: 4.0.3 | Severity: normal Keywords: redhat | ---------------------------------+----------------------- I have vcc_err_unref set to off in varnish.params: [root at no000010sapit0 ~]# grep ^DAE /etc/varnish/varnish.params DAEMON_OPTS="-p vcc_err_unref=off" But when I try to start Varnish it fails with unused subs: [root at no000010sapit0 ~]# service varnish start Redirecting to /bin/systemctl start varnish.service Job for varnish.service failed.
See 'systemctl status varnish.service' and 'journalctl -xn' for details. [root at no000010sapit0 ~]# journalctl -xn | cat -- Logs begin at Sun 2015-09-06 15:40:04 CEST, end at Wed 2015-09-09 09:35:22 CEST. -- Sep 09 09:35:22 no000010sapit0.moller.local varnishd[19404]: Message from VCC-compiler: Sep 09 09:35:22 no000010sapit0.moller.local varnishd[19404]: Unused sub pass_if_gethead, defined: Sep 09 09:35:22 no000010sapit0.moller.local varnishd[19404]: ('input' Line 48 Pos 5) Sep 09 09:35:22 no000010sapit0.moller.local varnishd[19404]: sub pass_if_gethead The problem is that /usr/lib/systemd/system/varnish.service has an ExecStartPre that does not use/consider $DAEMON_OPTS. It needs to, particularly for that vcc_err_unref=off setting. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 9 14:20:54 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 Sep 2015 14:20:54 -0000 Subject: [Varnish] #1682: If file-storage pre-created and more than 50% of the file-system space, Varnish does not start. In-Reply-To: <042.0ac59dafc60bb4439b0e09c88a89895c@varnish-cache.org> References: <042.0ac59dafc60bb4439b0e09c88a89895c@varnish-cache.org> Message-ID: <057.c8361c21e6fdc592ba9bd6d1a29519ec@varnish-cache.org> #1682: If file-storage pre-created and more than 50% of the file-system space, Varnish does not start. ----------------------+----------------------- Reporter: xcir | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Comment (by xcir): I'm sorry. I misunderstood VFIL_allocate has been backported to 4.0. I create new ticket for 4.1, if necessary. 
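[Editor's note] The check proposed in this thread can be sketched as a tiny stand-alone function. The name `fallocate_needed` and the free-standing framing are hypothetical, for illustration only; the real logic lives in VFIL_allocate() in lib/libvarnish/vfil.c:

```c
#include <assert.h>

/*
 * Hypothetical helper sketching the proposed fix: st_blocks counts
 * 512-byte blocks actually allocated on disk (see stat(2)), so when
 * st_blocks * 512 already covers both the current file size and the
 * requested storage size, the fallocate() call can be skipped.
 */
int
fallocate_needed(long long st_blocks, long long st_size, long long size)
{
	if (st_blocks * 512 >= st_size && st_size >= size)
		return (0);	/* file is already fully allocated */
	return (1);		/* allocation still needed */
}
```

With the figures from the syslog line quoted in this thread (st_blocks*512 = 46170902528, st_size = require-size = 46170898432) the function returns 0, i.e. a restart over the pre-created file would skip the failing fallocate().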
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 9 19:29:09 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 Sep 2015 19:29:09 -0000 Subject: [Varnish] #1781: Error with gzip content on Varnish4 In-Reply-To: <045.b1c3367938319f9ec0d4c40a960292f9@varnish-cache.org> References: <045.b1c3367938319f9ec0d4c40a960292f9@varnish-cache.org> Message-ID: <060.b571b91d2e9993ae9c432e4d8fbe1477@varnish-cache.org> #1781: Error with gzip content on Varnish4 --------------------------+---------------------------------------- Reporter: llavaud | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.1.0-TP1 Severity: major | Resolution: fixed Keywords: gzip corrupt | --------------------------+---------------------------------------- Changes (by Poul-Henning Kamp ): * owner: => Poul-Henning Kamp * status: new => closed * resolution: => fixed Comment: In [723f8a5219e622c10302a70c0596b34d1116418b]: {{{ #!CommitTicketReference repository="" revision="723f8a5219e622c10302a70c0596b34d1116418b" Propagate gzip CRC upwards from nested ESI includes. 
Detected by: Laurent Lavaud Patched by: martin (with minor shuffle by me) Fixes: #1781 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 10 03:55:31 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Sep 2015 03:55:31 -0000 Subject: [Varnish] #1635: Completed bans keep accumulating In-Reply-To: <043.d9a46f331b8fae5fc5502a3059307d4c@varnish-cache.org> References: <043.d9a46f331b8fae5fc5502a3059307d4c@varnish-cache.org> Message-ID: <058.8285b007cae75935775a52a56c1551b7@varnish-cache.org> #1635: Completed bans keep accumulating ----------------------+-------------------- Reporter: Sesse | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: lurker | ----------------------+-------------------- Comment (by daniel.s): Hi, I'm having some issues with bans that might be related to this ticket. I'm running Varnish 4.0.3 with a configuration where we end up with lots of bans or with bans that affect lots of objects. After a day of use we have more than 400k bans in the list, most of them are completed but they are still in the list. The issue I'm seeing is that varnish eventually becomes really slow with an increased time to first byte from varnishncsa logs, the number of threads spikes until it hits the max and then we start dropping connections. After a while, the situation recovers. But this keeps happening until we restart varnish. 
I haven't been able to find a strong correlation yet, but it seems to be related to having a huge amount of bans_lurker_tests_tested (which I'm not sure what it does) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 10 09:17:23 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Sep 2015 09:17:23 -0000 Subject: [Varnish] #1783: POST with empty body and absent content-length returns 503 In-Reply-To: <051.09aed746d894bc757cda1db58c0bdf64@varnish-cache.org> References: <051.09aed746d894bc757cda1db58c0bdf64@varnish-cache.org> Message-ID: <066.78e0b432711151cd6ec6e0719dcc7d58@varnish-cache.org> #1783: POST with empty body and absent content-length returns 503 ---------------------------+---------------------------------------- Reporter: fascinatedcow | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.1-TP1 Component: build | Version: 4.1.0-TP1 Severity: normal | Resolution: fixed Keywords: | ---------------------------+---------------------------------------- Changes (by Poul-Henning Kamp ): * status: reopened => closed * owner: => Poul-Henning Kamp * resolution: => fixed Comment: In [a6a6e97a4db6e94b14f81af55fea4d07aa06d0af]: {{{ #!CommitTicketReference repository="" revision="a6a6e97a4db6e94b14f81af55fea4d07aa06d0af" Align code with RFC7230 section 3.3.3 which allows POST without a body. 
Fixes: #1783 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 10 13:35:11 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Sep 2015 13:35:11 -0000 Subject: [Varnish] #1778: Varnishstat oddity In-Reply-To: <041.23ee325c745521249f4c080cd31166ff@varnish-cache.org> References: <041.23ee325c745521249f4c080cd31166ff@varnish-cache.org> Message-ID: <056.8f9d3caaaac002caf6812efeffe0fbd1@varnish-cache.org> #1778: Varnishstat oddity -------------------------+----------------------------------------------- Reporter: phk | Owner: Martin Blix Grydeland Type: defect | Status: closed Priority: normal | Milestone: Component: varnishstat | Version: trunk Severity: normal | Resolution: fixed Keywords: | -------------------------+----------------------------------------------- Changes (by Martin Blix Grydeland ): * owner: => Martin Blix Grydeland * status: new => closed * resolution: => fixed Comment: In [399aab778dec87bae5212e1ef65c9f48bf692c23]: {{{ #!CommitTicketReference repository="" revision="399aab778dec87bae5212e1ef65c9f48bf692c23" Cast to integer to prevent negative values messing the statistics Negative values can occur intermittently due to per worker caching of changes to counters. Fixes: #1778 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 10 14:28:03 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Sep 2015 14:28:03 -0000 Subject: [Varnish] #1788: ObjIter has terrible performance profile when busyobj != NULL Message-ID: <041.000537049e394809a235d0799213d1da@varnish-cache.org> #1788: ObjIter has terrible performance profile when busyobj != NULL -------------------+-------------------- Reporter: tnt | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 4.0.3 | Severity: normal Keywords: | -------------------+-------------------- The loop of ObjIter when there is a busyobj (i.e. 
dl from the backend isn't over), will do a linear scan of all the 'struct storage' for every chunk served to the client. This means O(N^2) complexity, which is _really_ bad. This causes issues when the file is largish (i.e. hundreds of MBytes): the download time is several orders of magnitude larger through Varnish than when hitting the backend directly, especially when the client connection is about the same speed as the backend connection (and so busyobj is != NULL for a while). I tested this on 4.1-TP1 and 4.0.3. To reproduce, use this .vtc {{{ vcl 4.0; backend be { .host = "127.0.0.1"; .port = "5000"; } sub vcl_recv { return (pass); } sub vcl_backend_response { set beresp.do_stream = true; } }}} and download a 1G file through varnish. In my case it takes 2m30s to download (from 127.0.0.1) and my cpu is at 100% the whole time, while hitting the backend directly takes 3s. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 10 14:44:07 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Sep 2015 14:44:07 -0000 Subject: [Varnish] #1788: ObjIter has terrible performance profile when busyobj != NULL In-Reply-To: <041.000537049e394809a235d0799213d1da@varnish-cache.org> References: <041.000537049e394809a235d0799213d1da@varnish-cache.org> Message-ID: <056.45472f6d714596be970c28a03c2a0e4a@varnish-cache.org> #1788: ObjIter has terrible performance profile when busyobj != NULL --------------------+-------------------- Reporter: tnt | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: | --------------------+-------------------- Comment (by tnt): Actually that vtc is not needed ... (pass) is not needed for that behavior and the do_stream is default.
So just starting varnish with : ./varnishd -a 127.0.0.1:5001 -b 127.0.0.1:5000 -s malloc,2G is enough to reproduce -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Sep 11 07:14:53 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 11 Sep 2015 07:14:53 -0000 Subject: [Varnish] #1789: Disable probe, Varnish 4 continues to use it Message-ID: <044.d3e5ddfc1008c57c32d6ab1525a85fea@varnish-cache.org> #1789: Disable probe, Varnish 4 continues to use it --------------------+---------------------- Reporter: anders | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 4.0.3 | Severity: major Keywords: probe | --------------------+---------------------- If you have a failing probe and you want to simply disable it, Varnish continues to probe it using data from previous VCLs even though it is not present in the current VCL. The state is also preserved, so Varnish will respond with 503 due to "no backend connection" because of failing probes, when none is present in the current config. How to reproduce: 1) set up a backend and a probe, with a separate probe using an url path that gives a failing response from the backend: {{{ probe mbap_bilhold { .url = "/bilhold/failing_page.html"; .interval = 5s; .timeout = 10s; .window = 5; .threshold = 3; } backend no000010smbap0 { .host = "10.111.9.10"; .port = "8080"; .probe = mbap_bilhold; } }}} 2) disable the probe to try to get the site back up again: {{{ #probe mbap_bilhold { # .url = "/bilhold/failing_page.html"; # .interval = 5s; # .timeout = 10s; # .window = 5; # .threshold = 3; #} backend no000010smbap0 { .host = "10.111.9.10"; .port = "8080"; # .probe = mbap_bilhold; } }}} 3) reload vcl 4) varnishadm backend.list shows backend no000010smbap0 still being probed and failing. 5) Site is still down, but should not! 
:-) 6) Workaround: restart Varnish, site is up :-) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Sep 11 07:40:22 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 11 Sep 2015 07:40:22 -0000 Subject: [Varnish] #1790: Assert error in smp_find_so(), storage/storage_persistent_silo.c Message-ID: <046.e77678b919be3ea5ef9a55ef80511934@varnish-cache.org> #1790: Assert error in smp_find_so(), storage/storage_persistent_silo.c ----------------------+----------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: low | Milestone: Component: varnishd | Version: 4.1.0-TP1 Severity: normal | Keywords: ----------------------+----------------------- When preparing for 4.1.0-beta1 one of the persistence tests threw this assert when building on a jenkins node. {{{ Assert error in smp_find_so(), storage/storage_persistent_silo.c line 332: Condition(priv2 <= sg->p.lobjlist) not true. thread = (cache-worker) version = varnish-4.1.0-beta1 revision 0776128 ident = Linux,2.6.32-504.12.2.el6.x86_64,x86_64,-jnone,-sdeprecated_persistent,-smalloc,-hcritbit,epoll Backtrace: 0x43be32: pan_backtrace+0x1d 0x43c228: pan_ic+0x2a6 0x47a553: smp_find_so+0x6e 0x47a843: smp_oc_getobj+0x17d 0x43804d: obj_getobj+0x11c 0x439e4f: ObjSetattr+0x134 0x43a56c: ObjSetU32+0x38 0x426145: vbf_beresp2obj+0x284 0x428048: vbf_stp_fetch+0x636 0x429642: vbf_fetch_thread+0x392 busyobj = 0x7f80d8090970 { ws = 0x7f80d8090a30 { id = "bo", {s,f,r,e} = {0x7f80d80928e8,+384,(nil),+57440}, }, refcnt = 2, retries = 0, failed = 0, state = 1, flags = {do_stream, is_gunzip}, http_conn = 0x7f80d8092960 { fd = 16, doclose = NULL, ws = 0x7f80d8090a30, {rxbuf_b, rxbuf_e} = {0x7f80d8092a10, 0x7f80d8092a3c}, {pipeline_b, pipeline_e} = {(nil), (nil)}, content_length = -1, body_status = chunked, first_byte_timeout = 60.000000, between_bytes_timeout = 60.000000, }, director_req = 0x2581c58 { vcl_name = s1, type = backend { display_name = vcl1.s1, 
ipv4 = 127.0.0.1, port = 41959, hosthdr = 127.0.0.1, health=healthy, admin_health=probe, changed=1441956434.3, n_conn = 1, }, }, director_resp = director_req, http[bereq] = 0x7f80d8090ff8 { ws[bo] = 0x7f80d8090a30, hdrs { "GET", "/1", "HTTP/1.1", "X-Forwarded-For: 127.0.0.1", "Accept-Encoding: gzip", "X-Varnish: 1002", "Host: 127.0.0.1", }, }, http[beresp] = 0x7f80d8091470 { ws[bo] = 0x7f80d8090a30, hdrs { "HTTP/1.1", "200", "OK", "Transfer-encoding: chunked", "Date: Fri, 11 Sep 2015 07:27:14 GMT", }, }, objcore[fetch] = 0x7f80c0000950 { refcnt = 2, flags = 0x2, objhead = 0x7f80c0000a00, stevedore = 0x2573130 (deprecated_persistent s0), }, vcl = { temp = warm srcname = { "input", "Builtin", }, }, }, }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Sep 11 08:44:32 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 11 Sep 2015 08:44:32 -0000 Subject: [Varnish] #1789: Disable probe, Varnish 4 continues to use it In-Reply-To: <044.d3e5ddfc1008c57c32d6ab1525a85fea@varnish-cache.org> References: <044.d3e5ddfc1008c57c32d6ab1525a85fea@varnish-cache.org> Message-ID: <059.92a09b760fe626056a485f6249cba998@varnish-cache.org> #1789: Disable probe, Varnish 4 continues to use it ----------------------+------------------------- Reporter: anders | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: major | Resolution: worksforme Keywords: probe | ----------------------+------------------------- Changes (by lkarsten): * status: new => closed * resolution: => worksforme Comment: As discussed on IRC, this is most likely a case of old VCLs still running the health probes. Overriding this after the fact may be possible with backend.set_health backendname healthy. Closing. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Sep 11 16:24:36 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 11 Sep 2015 16:24:36 -0000 Subject: [Varnish] #1791: Varnish send request to wrong servers without any logging Message-ID: <045.1127c148a3c966fe522b21a16dd3839d@varnish-cache.org> #1791: Varnish send request to wrong servers without any logging ---------------------+---------------------- Reporter: webmons | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.7 | Severity: critical Keywords: | ---------------------+---------------------- We route requests to different backend based on whether it's mobile and its url. We start seeing infrequent but steady stream of requests being routed to a wrong backend. We turned on varnishlog but found no trace of those requests but our backend log show that they did go through varnish Anyone has experience this issue Here is detail info '''Which varnish version:''' 3.0.7 ''' Which type of CPU ?''' 2 CPU processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 62 model name : Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz stepping : 4 microcode : 0x415 cpu MHz : 2500.106 cache size : 25600 KB '''32 or 64 bit mode ?''' 64 '''how much RAM ?''' 14G '''Which OS/kernel version ?''' Ubuntu Kernel 3.13.0-48-generic '''VCL''' {{{ import std; import ipcast; import header; include "devicedetect.vcl"; acl block_ips { } backend zfront1 { .host = "10.0.1.70"; .port = "80"; .connect_timeout = 600s; .first_byte_timeout = 600s; .between_bytes_timeout = 600s; } backend api1 { .host = "10.0.1.251"; .port = "80"; .connect_timeout = 600s; .first_byte_timeout = 600s; .between_bytes_timeout = 600s; } backend mobile1 { .host = "10.0.1.249"; .port = "80"; .connect_timeout = 600s; .first_byte_timeout = 600s; .between_bytes_timeout = 600s; } backend quoted { .host = "10.0.1.250"; .port = "80"; .connect_timeout = 600s; .first_byte_timeout = 600s; 
.between_bytes_timeout = 600s; } director zfrontCluster round-robin { { .backend = zfront1; } } director apiCluster round-robin { { .backend = api1; } } director mobileCluster round-robin { { .backend = mobile1; } } director quotedCluster round-robin { { .backend = quoted; } } sub vcl_recv { # Pull the IP address from X-Forwarded-For. If we can't parse the IP, default to 127.0.0.1 so that we let it through set req.http.xff = regsub(req.http.X-Forwarded-For, "^(^[^,]+),?.*$", "\1"); if (ipcast.ip(req.http.xff, "127.0.0.1") ~ block_ips) { error 403 "Forbidden"; } call devicedetect; if (req.http.host == "thezebra.com") { set req.http.host = "www.thezebra.com"; error 750 "https://" + req.http.host + req.url; } if (req.url ~ "^/insurance-news/") { set req.url = regsub(req.url, "^/insurance-news/", "/"); set req.backend = quotedCluster; } else if (req.url ~ "^/(api|admin|be|al|el|il)/") { set req.backend = apiCluster; } else { if (req.http.host ~ "[A-Z]" || req.url ~ "[A-Z]") { error 750 "https://" + std.tolower(req.http.host + req.url); } if ((req.url ~ "^/(\?.*)?$" || req.url ~ "^/compare/.*") && req.http.X-UA-Device ~ "^mobile") { set req.backend = mobileCluster; } else { set req.backend = zfrontCluster; } } } sub vcl_hash { if (req.backend == quotedCluster) { hash_data("/insurance-news/"); } } sub vcl_fetch { # do not cache anything that's not 200. if (beresp.status != 200) { set beresp.ttl = 0 s; } # mobile-related: Google requires adding vary: user-agent header to the response to make it mobile-friendly. # The problem is if varnish see vary: user-agent it'll create different cache item per user-agent so the following # code will add vary header with X-UA-Device and vcl_deliver will replace it with User-agent # this is assuming backend doesn't add vary: user-agent. 
If it does, we'd need to remove it here if (req.http.X-UA-Device) { if (!beresp.http.Vary) { # no Vary at all set beresp.http.Vary = "X-UA-Device"; } elseif (beresp.http.Vary !~ "X-UA-Device") { # add to existing Vary set beresp.http.Vary = beresp.http.Vary + ", X-UA-Device"; } } } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } # to keep any caches in the wild from serving wrong content to client #2 # behind them, we need to transform the Vary on the way out. if ((req.http.X-UA-Device) && (resp.http.Vary)) { set resp.http.Vary = regsub(resp.http.Vary, "X-UA-Device", "User- Agent"); } # click stream stuff # extract subid from the cookie or request url if (req.http.cookie ~ "subid") { std.log("subid:" + regsub(req.http.cookie, ".*subid=([^;]*).*", "\1")); } elseif (req.url ~ "[&\?]subid=") { std.log("subid:" + regsub(req.url, ".*[&\?]subid=([^&]*).*", "\1")); } # now extract the channel id. if the channel id isn't specified, then we want to use the organic channel id if (req.http.cookie ~ "channel_id") { std.log("channelid:" + regsub(req.http.cookie, ".*channel\_id=([^;]*).*", "\1")); } elseif (req.url ~ "[&\?]channelid=") { std.log("channelid:" + regsub(req.url, ".*[&\?]channelid=([^&]*).*", "\1")); } else { std.log("channelid:3acf01"); } # session id should only be in cookie if (req.http.cookie ~ "sessionid") { std.log("sessionid:" + regsub(req.http.cookie, ".*sessionid=([^;]*).*", "\1")); } else { std.log("sessionid:" + regsub(header.get(resp.http.set- cookie,"sessionid="), ".*sessionid=([^;]*).*", "\1")); } if (req.http.X-UA-Device ~ "^mobile") { std.log("mobile:true"); } else { std.log("mobile:false"); } } sub vcl_error { if (obj.status == 750) { set obj.http.Location = obj.response; set obj.status = 301; return(deliver); } } }}} '''Behavior:''' We'd see requests not meant for API cluster routed to it. 
Most of these are from bot, like google adsbot or pingdom but quite a few are from legitimate users Here is an nginx access log entry from an api server in API cluster 10.0.1.100 0.052 0.052 - [11/Sep/2015:10:56:41 -0500] "GET /compare/start/bid811/ut?utm_source=google&utm_medium=cpc&utm_campaign =car-insurance-ut&utm_term=geico-auto-insurance-policy&utm_content=the- zebra-car-insurance&channelid=ajf201&subid=ut-big-brand-carriers-exact HTTP/1.1" 404 6655 "-" "AdsBot-Google (+http://www.google.com/adsbot.html)" "66.249.92.33" The url path doesn't match this regex ^/(api|admin|be|al|el|il)/, so I'm completely lost as to why it'd show up there Neither Varnishlog nor varnishncsa shows anything about this request. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Sep 12 14:40:50 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 12 Sep 2015 14:40:50 -0000 Subject: [Varnish] #1787: Varnish fails to start because of unused subs in VCL and bug in Red Hat systemd service In-Reply-To: <044.861e7bf77273f84687c24d76ef0875df@varnish-cache.org> References: <044.861e7bf77273f84687c24d76ef0875df@varnish-cache.org> Message-ID: <059.d5d8182697629febb7f9423d4444bb67@varnish-cache.org> #1787: Varnish fails to start because of unused subs in VCL and bug in Red Hat systemd service -----------------------+---------------------------------- Reporter: anders | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: packaging | Version: 4.0.3 Severity: normal | Resolution: Keywords: redhat | -----------------------+---------------------------------- Description changed by fgsch: Old description: > I have vcc_err_unref set to off in varnish.params: > > [root at no000010sapit0 ~]# grep ^DAE /etc/varnish/varnish.params > DAEMON_OPTS="-p vcc_err_unref=off" > > But when I try to start Varnish it fails with unused subs: > > [root at no000010sapit0 ~]# service varnish start > Redirecting to 
/bin/systemctl start varnish.service > Job for varnish.service failed. See 'systemctl status varnish.service' > and 'journalctl -xn' for details. > [root at no000010sapit0 ~]# journalctl -xn | cat > -- Logs begin at Sun 2015-09-06 15:40:04 CEST, end at Wed 2015-09-09 > 09:35:22 CEST. -- > Sep 09 09:35:22 no000010sapit0.moller.local varnishd[19404]: Message from > VCC-compiler: > Sep 09 09:35:22 no000010sapit0.moller.local varnishd[19404]: Unused sub > pass_if_gethead, defined: > Sep 09 09:35:22 no000010sapit0.moller.local varnishd[19404]: ('input' > Line 48 Pos 5) > Sep 09 09:35:22 no000010sapit0.moller.local varnishd[19404]: sub > pass_if_gethead > > The problem is that /usr/lib/systemd/system/varnish.service has an > ExecStartPre > that does not use/consider $DAEMON_OPTS. It needs to, particularly for > that vcc_err_unref=off setting. New description: I have vcc_err_unref set to off in varnish.params: {{{ [root at no000010sapit0 ~]# grep ^DAE /etc/varnish/varnish.params DAEMON_OPTS="-p vcc_err_unref=off" }}} But when I try to start Varnish it fails with unused subs: {{{ [root at no000010sapit0 ~]# service varnish start Redirecting to /bin/systemctl start varnish.service Job for varnish.service failed. See 'systemctl status varnish.service' and 'journalctl -xn' for details. [root at no000010sapit0 ~]# journalctl -xn | cat -- Logs begin at Sun 2015-09-06 15:40:04 CEST, end at Wed 2015-09-09 09:35:22 CEST. -- Sep 09 09:35:22 no000010sapit0.moller.local varnishd[19404]: Message from VCC-compiler: Sep 09 09:35:22 no000010sapit0.moller.local varnishd[19404]: Unused sub pass_if_gethead, defined: Sep 09 09:35:22 no000010sapit0.moller.local varnishd[19404]: ('input' Line 48 Pos 5) Sep 09 09:35:22 no000010sapit0.moller.local varnishd[19404]: sub pass_if_gethead }}} The problem is that /usr/lib/systemd/system/varnish.service has an ExecStartPre that does not use/consider $DAEMON_OPTS. It needs to, particularly for that vcc_err_unref=off setting. 
-- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Sep 12 14:42:01 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 12 Sep 2015 14:42:01 -0000 Subject: [Varnish] #1787: Varnish fails to start because of unused subs in VCL and bug in Red Hat systemd service In-Reply-To: <044.861e7bf77273f84687c24d76ef0875df@varnish-cache.org> References: <044.861e7bf77273f84687c24d76ef0875df@varnish-cache.org> Message-ID: <059.096ee01408c1747ad13bc36c8bb396d1@varnish-cache.org> #1787: Varnish fails to start because of unused subs in VCL and bug in Red Hat systemd service -----------------------+---------------------------------- Reporter: anders | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: packaging | Version: 4.0.3 Severity: normal | Resolution: Keywords: redhat | -----------------------+---------------------------------- Comment (by fgsch): Fixed in master in commit 28198ab847c60382576c91a916ee4657484dd44f. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 14 07:23:48 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Sep 2015 07:23:48 -0000 Subject: [Varnish] #1792: If few free space and file-storage on xfs, Varnish does not restart. Message-ID: <042.122a5dd3faba83c625d8e4a9f361bf81@varnish-cache.org> #1792: If few free space and file-storage on xfs, Varnish does not restart. -------------------------+---------------------- Reporter: xcir | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 4.1.0-beta1 | Severity: normal Keywords: | -------------------------+---------------------- Hi, I got an error message that "Error: (-sfile) allocation error: No space left on device" at the time of varnish restart. 
Environment: {{{ varnishd (varnish-4.1.0-beta1 revision 1628f0b) Linux varnish-trunk 3.13.0-49-generic #83-Ubuntu SMP Fri Apr 10 20:11:33 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux (Ubuntu 14.04.2 LTS) Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/xvdh xfs 15718400 32928 15685472 1% /mnt/xfs }}} Log(xfs / Fail) {{{ root at varnish-trunk:~/varnish-4.1.0-beta1# df -T /mnt/xfs Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/xvdh xfs 15718400 32928 15685472 1% /mnt/xfs root at varnish-trunk:~/varnish-4.1.0-beta1# /etc/init.d/varnish start * Starting HTTP accelerator varnishd ...done. root at varnish-trunk:~/varnish-4.1.0-beta1# df -T /mnt/xfs Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/xvdh xfs 15718400 10518688 5199712 67% /mnt/xfs root at varnish-trunk:~/varnish-4.1.0-beta1# du -sb /mnt/xfs/varnish_storage.bin 10737418240 /mnt/xfs/varnish_storage.bin root at varnish-trunk:~/varnish-4.1.0-beta1# ps axut|grep varn varnish 14687 0.0 0.2 124768 5384 ? Ss 14:43 0:00 /usr/local/sbin/varnishd -P /run/varnishd.pid -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -p thread_pool_stack=512k -s malloc,256m -s xfs=file,/mnt/xfs/varnish_storage.bin,10G varnish 14689 0.0 5.9 11074296 119456 ? 
Sl 14:43 0:00 /usr/local/sbin/varnishd -P /run/varnishd.pid -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -p thread_pool_stack=512k -s malloc,256m -s xfs=file,/mnt/xfs/varnish_storage.bin,10G root 15168 0.0 0.0 8860 644 pts/1 R+ 14:49 0:00 grep --color=auto varn root at varnish-trunk:~/varnish-4.1.0-beta1# /etc/init.d/varnish restart Error: (-sfile) allocation error: No space left on device * Syntax check failed, not restarting }}} Log(ext4 / Success) {{{ root at varnish-trunk:~/varnish-4.1.0-beta1# df -T /mnt/ext4 Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/xvdi ext4 15350768 38384 14509568 1% /mnt/ext4 root at varnish-trunk:~/varnish-4.1.0-beta1# /etc/init.d/varnish start * Starting HTTP accelerator varnishd ...done. root at varnish-trunk:~/varnish-4.1.0-beta1# df -T /mnt/ext4 Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/xvdi ext4 15350768 10524148 4023804 73% /mnt/ext4 root at varnish-trunk:~/varnish-4.1.0-beta1# du -sb /mnt/xfs/varnish_storage.bin 10737418240 /mnt/xfs/varnish_storage.bin root at varnish-trunk:~/varnish-4.1.0-beta1# ps axut|grep varn varnish 15639 0.0 0.2 124768 5376 ? Ss 14:56 0:00 /usr/local/sbin/varnishd -P /run/varnishd.pid -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -p thread_pool_stack=512k -s malloc,256m -s ext4=file,/mnt/ext4/varnish_storage.bin,10G varnish 15641 0.3 5.8 11074296 117260 ? Sl 14:56 0:00 /usr/local/sbin/varnishd -P /run/varnishd.pid -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -p thread_pool_stack=512k -s malloc,256m -s ext4=file,/mnt/ext4/varnish_storage.bin,10G root 15904 0.0 0.0 8860 640 pts/1 S+ 14:56 0:00 grep --color=auto varn root at varnish-trunk:~/varnish-4.1.0-beta1# /etc/init.d/varnish restart ... * Stopping HTTP accelerator varnishd ...done. * Starting HTTP accelerator varnishd ...done. 
}}}

The cause is that fallocate fails in lib/libvarnish/vfil.c, even when "(st.st_blocks * 512) + fsspace >= size":
https://github.com/varnish/Varnish-Cache/blob/117c2bdcfbb8c11f48bd9137e76edfc746dfbe71/lib/libvarnish/vfil.c#L188

I think fallocate should not be required if "st_blocks * 512" is greater than st_size, because the space is already allocated.

Log(patched)
{{{
root@varnish-trunk:~/varnish-4.1.0-beta1# df -T /mnt/xfs
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/xvdh xfs 15718400 32928 15685472 1% /mnt/xfs
root@varnish-trunk:~/varnish-4.1.0-beta1# /etc/init.d/varnish start
* Starting HTTP accelerator varnishd ...done.
root@varnish-trunk:~/varnish-4.1.0-beta1# df -T /mnt/xfs
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/xvdh xfs 15718400 10518688 5199712 67% /mnt/xfs
root@varnish-trunk:~/varnish-4.1.0-beta1# du -sb /mnt/xfs/varnish_storage.bin
10737418240 /mnt/xfs/varnish_storage.bin
root@varnish-trunk:~/varnish-4.1.0-beta1# ps axut|grep varn
varnish 20804 0.0 0.2 124768 5380 ? Ss 16:20 0:00 /usr/local/sbin/varnishd -P /run/varnishd.pid -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -p thread_pool_stack=512k -s malloc,256m -s xfs=file,/mnt/xfs/varnish_storage.bin,10G
varnish 20806 0.1 5.9 11074296 119308 ? Sl 16:20 0:00 /usr/local/sbin/varnishd -P /run/varnishd.pid -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -p thread_pool_stack=512k -s malloc,256m -s xfs=file,/mnt/xfs/varnish_storage.bin,10G
root 21026 0.0 0.0 8860 648 pts/1 S+ 16:21 0:00 grep --color=auto varn
root@varnish-trunk:~/varnish-4.1.0-beta1# /etc/init.d/varnish restart
...
* Stopping HTTP accelerator varnishd ...done.
* Starting HTTP accelerator varnishd ...done.
}}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 14 07:35:19 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Sep 2015 07:35:19 -0000 Subject: [Varnish] #1682: If file-storage pre-created and more than 50% of the file-system space, Varnish does not start. In-Reply-To: <042.0ac59dafc60bb4439b0e09c88a89895c@varnish-cache.org> References: <042.0ac59dafc60bb4439b0e09c88a89895c@varnish-cache.org> Message-ID: <057.d6d301c2d4c009849224c6f6f671948f@varnish-cache.org> #1682: If file-storage pre-created and more than 50% of the file-system space, Varnish does not start. ----------------------+----------------------- Reporter: xcir | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Comment (by xcir): I have created another ticket (#1792). Could you close this one? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 14 07:35:54 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Sep 2015 07:35:54 -0000 Subject: [Varnish] #1791: Varnish send request to wrong servers without any logging In-Reply-To: <045.1127c148a3c966fe522b21a16dd3839d@varnish-cache.org> References: <045.1127c148a3c966fe522b21a16dd3839d@varnish-cache.org> Message-ID: <060.93e3d94715559df885e2ef8088968cac@varnish-cache.org> #1791: Varnish send request to wrong servers without any logging ----------------------+------------------------- Reporter: webmons | Owner: Type: defect | Status: closed Priority: high | Milestone: Component: varnishd | Version: 3.0.7 Severity: critical | Resolution: worksforme Keywords: | ----------------------+------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: This sounds like some requests are getting piped.
When that happens, all subsequent requests on the same TCP connection just get passed through Varnish byte for byte and will not appear in the log. Check your varnishlog for entries like:
{{{
VCL_call b PIPE
VCL_return b pipe
}}}
If this is not the case, feel free to reopen this ticket. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 14 08:31:44 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Sep 2015 08:31:44 -0000 Subject: [Varnish] #1793: Assert error in default_oc_getobj() Message-ID: <045.597d8a2e6b21666e8f19cff1d3e7a950@varnish-cache.org> #1793: Assert error in default_oc_getobj() --------------------------+---------------------- Reporter: llavaud | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 4.1.0-TP1 | Severity: normal Keywords: assert error | --------------------------+----------------------
{{{
webcache09:~# varnishadm panic.show
Last panic at: Sun, 13 Sep 2015 17:04:10 GMT
Assert error in default_oc_getobj(), storage/stevedore.c line 60: Condition(((o))->magic == (0x32851d42)) not true.
thread = (cache-timeout)
version = varnish-4.1.0-tp1 revision 0e4e1bc
ident = Linux,3.2.0-4-amd64,x86_64,-junix,-sfile,-smalloc,-hcritbit,epoll
Backtrace:
0x4343a4: pan_ic+0x134
0x45ca28: default_oc_getobj+0x78
0x433033: ObjGetattr+0xa3
0x433922: ObjGetU32+0x12
0x433980: ObjGetXID+0x10
0x421c84: exp_thread+0x444
0x448faf: wrk_bgthread+0x5f
0x7f5516bd2b50: libpthread.so.0(+0x6b50) [0x7f5516bd2b50]
0x7f551691c95d: libc.so.6(clone+0x6d) [0x7f551691c95d]
}}}
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 14 11:07:26 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Sep 2015 11:07:26 -0000 Subject: [Varnish] #1682: If file-storage pre-created and more than 50% of the file-system space, Varnish does not start.
In-Reply-To: <042.0ac59dafc60bb4439b0e09c88a89895c@varnish-cache.org> References: <042.0ac59dafc60bb4439b0e09c88a89895c@varnish-cache.org> Message-ID: <057.f959c2b95d53a44c688851ebfdde366b@varnish-cache.org> #1682: If file-storage pre-created and more than 50% of the file-system space, Varnish does not start. ----------------------+------------------------ Reporter: xcir | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: duplicate Keywords: | ----------------------+------------------------ Changes (by phk): * status: reopened => closed * resolution: => duplicate -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 14 11:11:29 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Sep 2015 11:11:29 -0000 Subject: [Varnish] #1787: Varnish fails to start because of unused subs in VCL and bug in Red Hat systemd service In-Reply-To: <044.861e7bf77273f84687c24d76ef0875df@varnish-cache.org> References: <044.861e7bf77273f84687c24d76ef0875df@varnish-cache.org> Message-ID: <059.d0cc59cd9bad8da061f6fcf1687bf171@varnish-cache.org> #1787: Varnish fails to start because of unused subs in VCL and bug in Red Hat systemd service -----------------------+---------------------------------- Reporter: anders | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: packaging | Version: 4.0.3 Severity: normal | Resolution: Keywords: redhat | -----------------------+---------------------------------- Changes (by lkarsten): * owner: => lkarsten Comment: Summing this up: we need DAEMON_OPTS in the pre-command. Not in place in 4.1, haven't checked 4.0. Assuming ownership. 
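For context, the fix lkarsten is describing can be sketched as a systemd drop-in: the pre-start VCL check must run varnishd with the same DAEMON_OPTS as the real start, otherwise a configuration that is only valid with those options fails the check. The file path, environment file, and variable names below are illustrative assumptions, not the exact packaged unit:

```ini
# /etc/systemd/system/varnish.service.d/check.conf (hypothetical drop-in)
[Service]
EnvironmentFile=-/etc/varnish/varnish.params
# Compile-check the VCL with the *same* options the daemon will use;
# "varnishd -C" compiles the VCL to C and exits non-zero on errors.
ExecStartPre=/usr/sbin/varnishd -C -f /etc/varnish/default.vcl $DAEMON_OPTS
```

This mirrors the idea of the eventual commit (adding DAEMON_OPTS to ExecStartPre) without claiming to reproduce it verbatim.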
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 14 11:54:18 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Sep 2015 11:54:18 -0000 Subject: [Varnish] #1787: Varnish fails to start because of unused subs in VCL and bug in Red Hat systemd service In-Reply-To: <044.861e7bf77273f84687c24d76ef0875df@varnish-cache.org> References: <044.861e7bf77273f84687c24d76ef0875df@varnish-cache.org> Message-ID: <059.eb1c56f34c2ed5178b3849a024d128c4@varnish-cache.org> #1787: Varnish fails to start because of unused subs in VCL and bug in Red Hat systemd service -----------------------+---------------------------------- Reporter: anders | Owner: lkarsten Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: packaging | Version: 4.0.3 Severity: normal | Resolution: fixed Keywords: redhat | -----------------------+---------------------------------- Changes (by Lasse Karstensen ): * status: new => closed * resolution: => fixed Comment: In [f8cdd8fcab29bc53cfc72964482ecafaaa85f94f]: {{{ #!CommitTicketReference repository="" revision="f8cdd8fcab29bc53cfc72964482ecafaaa85f94f" Add DAEMON_OPTS to ExecStartPre. Fixes: #1787 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 14 15:24:24 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Sep 2015 15:24:24 -0000 Subject: [Varnish] #1792: If few free space and file-storage on xfs, Varnish does not restart. In-Reply-To: <042.122a5dd3faba83c625d8e4a9f361bf81@varnish-cache.org> References: <042.122a5dd3faba83c625d8e4a9f361bf81@varnish-cache.org> Message-ID: <057.afc4906d662ccf0ed4c0bfbc2590ec50@varnish-cache.org> #1792: If few free space and file-storage on xfs, Varnish does not restart. 
----------------------+-------------------------- Reporter: xcir | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 4.1.0-beta1 Severity: normal | Resolution: Keywords: | ----------------------+-------------------------- Comment (by xcir): --Test code
{{{
root@varnish-trunk:~/varnish-4.1.0-beta1# diff -up lib/libvarnish/vfil.c.org lib/libvarnish/vfil.c
--- lib/libvarnish/vfil.c.org 2015-09-14 23:59:00.228175000 +0900
+++ lib/libvarnish/vfil.c 2015-09-15 00:09:22.568175000 +0900
@@ -56,6 +56,7 @@
 #include "vdef.h"
 #include "vfil.h"
+#include <syslog.h>

 char *
 VFIL_readfd(int fd, ssize_t *sz)
 {
@@ -182,9 +183,14 @@ VFIL_allocate(int fd, off_t size, int in
 		errno = ENOSPC;
 		return (-1);
 	}
+syslog(6,"VFIL:A:st_blocks*512:%ld st_size:%ld free-space:%ld require-size:%ld errno:%d",st.st_blocks * 512, st.st_size, fsspace ,size,errno);
+//	if(st.st_blocks * 512 >= st.st_size)
+//		return (0);
 #ifdef HAVE_FALLOCATE
 	if (!fallocate(fd, 0, 0, size))
 		return (0);
+syslog(6,"VFIL:B:st_blocks*512:%ld st_size:%ld free-space:%ld require-size:%ld errno:%d",st.st_blocks * 512, st.st_size, fsspace ,size,errno);
+
 	if (errno == ENOSPC)
 		return (-1);
 #endif
}}}
-xfs[[BR]]
--Start(1st)
{{{
Sep 15 00:04:00 varnish-trunk varnishd: VFIL:A:st_blocks*512:0 st_size:10737418240 free-space:16061923328 require-size:10737418240 errno:0
Sep 15 00:04:00 varnish-trunk varnishd[9463]: Platform: Linux,3.13.0-49-generic,x86_64,-junix,-smalloc,-sfile,-smalloc,-hcritbit
Sep 15 00:04:00 varnish-trunk varnishd[9463]: VFIL:A:st_blocks*512:0 st_size:84934656 free-space:95130189824 require-size:84934656 errno:0
Sep 15 00:04:00 varnish-trunk varnishd[9463]: child (9465) Started
Sep 15 00:04:00 varnish-trunk varnishd[9463]: Child (9465) said Child starts
Sep 15 00:04:00 varnish-trunk varnishd[9463]: Child (9465) said SMF.xfs mmap'ed 10737418240 bytes of 10737418240
}}}
--Restart(Fail)
{{{
Sep 15 00:04:17 varnish-trunk varnishd: VFIL:A:st_blocks*512:10737418240 st_size:10737418240 free-space:5324505088 require-size:10737418240 errno:17
Sep 15 00:04:17 varnish-trunk varnishd: VFIL:B:st_blocks*512:10737418240 st_size:10737418240 free-space:5324505088 require-size:10737418240 errno:28
}}}
--Stop and Start(Fail)[[BR]]
---Stop
{{{
Sep 15 00:04:36 varnish-trunk varnishd[9463]: Manager got SIGINT
Sep 15 00:04:36 varnish-trunk varnishd[9463]: Stopping Child
Sep 15 00:04:37 varnish-trunk varnishd[9463]: Child (9465) ended
Sep 15 00:04:37 varnish-trunk varnishd[9463]: VFIL:A:st_blocks*512:0 st_size:84934656 free-space:95130189824 require-size:84934656 errno:0
Sep 15 00:04:37 varnish-trunk varnishd[9463]: Child (9465) said Child dies
Sep 15 00:04:37 varnish-trunk varnishd[9463]: Child cleanup complete
}}}
---Start(Fail)
{{{
Sep 15 00:04:40 varnish-trunk varnishd: VFIL:A:st_blocks*512:10737418240 st_size:10737418240 free-space:5324505088 require-size:10737418240 errno:17
Sep 15 00:04:40 varnish-trunk varnishd: VFIL:B:st_blocks*512:10737418240 st_size:10737418240 free-space:5324505088 require-size:10737418240 errno:28
}}}
-ext4[[BR]]
--Start(1st)
{{{
Sep 15 00:12:35 varnish-trunk varnishd: VFIL:A:st_blocks*512:0 st_size:10737418240 free-space:14857797632 require-size:10737418240 errno:0
Sep 15 00:12:35 varnish-trunk varnishd[10056]: Platform: Linux,3.13.0-49-generic,x86_64,-junix,-smalloc,-sfile,-smalloc,-hcritbit
Sep 15 00:12:35 varnish-trunk varnishd[10056]: VFIL:A:st_blocks*512:0 st_size:84934656 free-space:95130181632 require-size:84934656 errno:0
Sep 15 00:12:35 varnish-trunk varnishd[10056]: child (10058) Started
Sep 15 00:12:35 varnish-trunk varnishd[10056]: Child (10058) said Child starts
Sep 15 00:12:35 varnish-trunk varnishd[10056]: Child (10058) said SMF.ext4 mmap'ed 10737418240 bytes of 10737418240
}}}
--Restart(Success)
{{{
Sep 15 00:12:51 varnish-trunk varnishd: VFIL:A:st_blocks*512:10737422336 st_size:10737418240 free-space:4120375296 require-size:10737418240 errno:17
Sep 15 00:12:51 varnish-trunk varnishd[10056]: Manager got SIGINT
Sep 15 00:12:51 varnish-trunk varnishd[10056]: Stopping Child
Sep 15 00:12:52 varnish-trunk varnishd[10056]: Child (10058) ended
Sep 15 00:12:52 varnish-trunk varnishd[10056]: VFIL:A:st_blocks*512:0 st_size:84934656 free-space:95130173440 require-size:84934656 errno:0
Sep 15 00:12:52 varnish-trunk varnishd[10056]: Child (10058) said Child dies
Sep 15 00:12:52 varnish-trunk varnishd[10056]: Child cleanup complete
Sep 15 00:12:52 varnish-trunk varnishd: VFIL:A:st_blocks*512:10737422336 st_size:10737418240 free-space:4120375296 require-size:10737418240 errno:17
Sep 15 00:12:52 varnish-trunk varnishd[10350]: Platform: Linux,3.13.0-49-generic,x86_64,-junix,-smalloc,-sfile,-smalloc,-hcritbit
Sep 15 00:12:52 varnish-trunk varnishd[10350]: VFIL:A:st_blocks*512:0 st_size:84934656 free-space:95130173440 require-size:84934656 errno:0
Sep 15 00:12:52 varnish-trunk varnishd[10350]: child (10352) Started
Sep 15 00:12:52 varnish-trunk varnishd[10350]: Child (10352) said Child starts
Sep 15 00:12:52 varnish-trunk varnishd[10350]: Child (10352) said SMF.ext4 mmap'ed 10737418240 bytes of 10737418240
}}}
--Stop and Start(Success)[[BR]]
---Stop
{{{
Sep 15 00:14:38 varnish-trunk varnishd[10350]: Manager got SIGINT
Sep 15 00:14:38 varnish-trunk varnishd[10350]: Stopping Child
Sep 15 00:14:39 varnish-trunk varnishd[10350]: Child (10352) ended
Sep 15 00:14:39 varnish-trunk varnishd[10350]: VFIL:A:st_blocks*512:0 st_size:84934656 free-space:95130181632 require-size:84934656 errno:0
Sep 15 00:14:39 varnish-trunk varnishd[10350]: Child (10352) said Child dies
Sep 15 00:14:39 varnish-trunk varnishd[10350]: Child cleanup complete
}}}
---Start(Success)
{{{
Sep 15 00:14:52 varnish-trunk varnishd: VFIL:A:st_blocks*512:10737422336 st_size:10737418240 free-space:4120375296 require-size:10737418240 errno:17
Sep 15 00:14:52 varnish-trunk varnishd[10662]: Platform: Linux,3.13.0-49-generic,x86_64,-junix,-smalloc,-sfile,-smalloc,-hcritbit
Sep 15 00:14:52 varnish-trunk varnishd[10662]: VFIL:A:st_blocks*512:0
st_size:84934656 free-space:95130181632 require-size:84934656 errno:0
Sep 15 00:14:52 varnish-trunk varnishd[10662]: child (10664) Started
Sep 15 00:14:52 varnish-trunk varnishd[10662]: Child (10664) said Child starts
Sep 15 00:14:52 varnish-trunk varnishd[10662]: Child (10664) said SMF.ext4 mmap'ed 10737418240 bytes of 10737418240
}}}
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 15 04:36:33 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 15 Sep 2015 04:36:33 -0000 Subject: [Varnish] #405: Varnish problem with purge requests In-Reply-To: <044.47f190ea6c840896f7a2f9a99bae95cf@varnish-cache.org> References: <044.47f190ea6c840896f7a2f9a99bae95cf@varnish-cache.org> Message-ID: <059.bb70af6ec0cf7df489966ae4888281c1@varnish-cache.org> #405: Varnish problem with purge requests ----------------------+--------------------- Reporter: anders | Owner: phk Type: defect | Status: closed Priority: high | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: fixed Keywords: | ----------------------+--------------------- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 15 04:40:04 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 15 Sep 2015 04:40:04 -0000 Subject: [Varnish] #374: Implement restart in vcl_error In-Reply-To: <045.6ff5ff1d5b0b27391daa2825a1f14ca3@varnish-cache.org> References: <045.6ff5ff1d5b0b27391daa2825a1f14ca3@varnish-cache.org> Message-ID: <060.66b3a985fc718c8b5e0044fc46fcb785@varnish-cache.org> #374: Implement restart in vcl_error -------------------------------+---------------------------------- Reporter: tcouery | Owner: kristian Type: defect | Status: closed Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version:
trunk Severity: normal | Resolution: fixed Keywords: restart vcl_error | -------------------------------+---------------------------------- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 15 04:41:29 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 15 Sep 2015 04:41:29 -0000 Subject: [Varnish] #145: Issues whild loading VCL code on the fly In-Reply-To: <044.c0a386f204d0b1ed9fd739db3d9fc75e@varnish-cache.org> References: <044.c0a386f204d0b1ed9fd739db3d9fc75e@varnish-cache.org> Message-ID: <059.a08acc6c7e3ab001fc87554b66026d69@varnish-cache.org> #145: Issues whild loading VCL code on the fly -------------------------------+--------------------- Reporter: anders | Owner: petter Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 1.1 Severity: normal | Resolution: fixed Keywords: varnishd load VCL | -------------------------------+--------------------- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 15 13:37:51 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 15 Sep 2015 13:37:51 -0000 Subject: [Varnish] #1792: If few free space and file-storage on xfs, Varnish does not restart.
In-Reply-To: <042.122a5dd3faba83c625d8e4a9f361bf81@varnish-cache.org> References: <042.122a5dd3faba83c625d8e4a9f361bf81@varnish-cache.org> Message-ID: <057.40e31e1a512cc0d016a89d53e097e62c@varnish-cache.org> #1792: If few free space and file-storage on xfs, Varnish does not restart. ----------------------+----------------------------------------------- Reporter: xcir | Owner: Martin Blix Grydeland Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.1.0-beta1 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------- Changes (by Martin Blix Grydeland ): * owner: => Martin Blix Grydeland * status: new => closed * resolution: => fixed Comment: In [9f38e1a8909baeb55f6d4e570948d3f904146303]:
{{{
#!CommitTicketReference repository="" revision="9f38e1a8909baeb55f6d4e570948d3f904146303"
fallocate will for some filesystems (e.g. xfs) not take the already
allocated blocks of the file into account. This will cause fallocate to
report ENOSPC when called on an existing fully allocated file unless the
filesystem has enough free space to accommodate the complete new file size.

Because of this we enable fallocate only on filesystems that are known to
work as we expect.
Fixes: #1792
}}}
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 16 16:04:46 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 16 Sep 2015 16:04:46 -0000 Subject: [Varnish] #1791: Varnish send request to wrong servers without any logging In-Reply-To: <045.1127c148a3c966fe522b21a16dd3839d@varnish-cache.org> References: <045.1127c148a3c966fe522b21a16dd3839d@varnish-cache.org> Message-ID: <060.abedbb5bd2f1de39d8fbb23006a3d477@varnish-cache.org> #1791: Varnish send request to wrong servers without any logging ----------------------+----------------------- Reporter: webmons | Owner: Type: defect | Status: reopened Priority: high | Milestone: Component: varnishd | Version: 3.0.7 Severity: critical | Resolution: Keywords: | ----------------------+----------------------- Changes (by webmons): * status: closed => reopened * resolution: worksforme => Comment: Thanks for the response. We found something like this in the log around the same time as the backend server received the request:
{{{
16 SessionOpen c 10.0.0.179 51226 :80
16 SessionClose c pipe
}}}
Our VCL doesn't have any "return (pipe);", and the default VCL only pipes if the request method is not standard. I'm not sure what could be the cause of this.
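As a hedged illustration for cases like this (a sketch, not taken from the ticket): making `vcl_pipe` close the backend connection limits the logging bypass to a single request, and a marker header makes piped traffic identifiable on the backend. The header name is hypothetical; the syntax is Varnish 3.x VCL:

```vcl
sub vcl_pipe {
    # Close the backend connection after this request, so later requests
    # on the same client connection are not silently passed through.
    set bereq.http.Connection = "close";
    # Hypothetical marker so the backend can spot piped traffic.
    set bereq.http.X-Varnish-Piped = "true";
    return (pipe);
}
```

With this in place, any request the backend sees without going through varnishlog should carry the marker header, narrowing down whether piping is really the cause.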
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 21 10:28:15 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 Sep 2015 10:28:15 -0000 Subject: [Varnish] #1643: corrupt range response In-Reply-To: <041.113f5e6ae1e1582a5da6631379ba3e37@varnish-cache.org> References: <041.113f5e6ae1e1582a5da6631379ba3e37@varnish-cache.org> Message-ID: <056.bb0d584d93c243f6cc6a91a4c98d9409@varnish-cache.org> #1643: corrupt range response --------------------------------------+----------------------- Reporter: Jay | Owner: phk Type: defect | Status: reopened Priority: normal | Milestone: Later Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: byte-range request range | --------------------------------------+----------------------- Changes (by onovy): * status: closed => reopened * resolution: fixed => Comment: this bug is still present in Varnish 4.0.3 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 21 10:37:34 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 Sep 2015 10:37:34 -0000 Subject: [Varnish] #1643: corrupt range response In-Reply-To: <041.113f5e6ae1e1582a5da6631379ba3e37@varnish-cache.org> References: <041.113f5e6ae1e1582a5da6631379ba3e37@varnish-cache.org> Message-ID: <056.3e935425606bad45de9764a05617cd70@varnish-cache.org> #1643: corrupt range response --------------------------------------+-------------------- Reporter: Jay | Owner: Type: defect | Status: new Priority: normal | Milestone: Later Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: byte-range request range | --------------------------------------+-------------------- Changes (by phk): * owner: phk => * status: reopened => new * version: trunk => 4.0.3 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 21 10:39:21 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date:
Mon, 21 Sep 2015 10:39:21 -0000 Subject: [Varnish] #1791: Varnish send request to wrong servers without any logging In-Reply-To: <045.1127c148a3c966fe522b21a16dd3839d@varnish-cache.org> References: <045.1127c148a3c966fe522b21a16dd3839d@varnish-cache.org> Message-ID: <060.1835d602ebee54fdc551893a81033a08@varnish-cache.org> #1791: Varnish send request to wrong servers without any logging ----------------------+------------------------- Reporter: webmons | Owner: Type: defect | Status: closed Priority: high | Milestone: Component: varnishd | Version: 3.0.7 Severity: critical | Resolution: worksforme Keywords: | ----------------------+------------------------- Changes (by phk): * status: reopened => closed * resolution: => worksforme Comment: You should be able to see the request in your varnishlog. In general it is a good idea to add a "Connection: close" to all piped requests going to the backend, so that only that single request goes to the backend. I think we have something in the docs about this, but I can't remember where. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 21 11:04:47 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 Sep 2015 11:04:47 -0000 Subject: [Varnish] #1643: corrupt range response In-Reply-To: <041.113f5e6ae1e1582a5da6631379ba3e37@varnish-cache.org> References: <041.113f5e6ae1e1582a5da6631379ba3e37@varnish-cache.org> Message-ID: <056.b9791b07b3989f07f368be13155bb36f@varnish-cache.org> #1643: corrupt range response --------------------------------------+-------------------- Reporter: Jay | Owner: Type: defect | Status: new Priority: normal | Milestone: Later Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: byte-range request range | --------------------------------------+-------------------- Comment (by lkarsten): Comment for VS: Look up how much effort it would be to port this to 4.0. Since 4.1 is right around the corner, that may be an option.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 21 11:05:59 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 Sep 2015 11:05:59 -0000 Subject: [Varnish] #1643: corrupt range response In-Reply-To: <041.113f5e6ae1e1582a5da6631379ba3e37@varnish-cache.org> References: <041.113f5e6ae1e1582a5da6631379ba3e37@varnish-cache.org> Message-ID: <056.3f62785b253a767eef911201cb2a8972@varnish-cache.org> #1643: corrupt range response --------------------------------------+----------------------- Reporter: Jay | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Later Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: byte-range request range | --------------------------------------+----------------------- Changes (by lkarsten): * owner: => lkarsten -- Ticket URL: Varnish The Varnish HTTP Accelerator