From varnish-bugs at varnish-cache.org Mon Jan 3 13:19:32 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 03 Jan 2011 13:19:32 -0000 Subject: [Varnish] #566: varnishncsa segfault during (very) high load In-Reply-To: <045.329d8cea0c94b900f994347642286b5d@varnish-cache.org> References: <045.329d8cea0c94b900f994347642286b5d@varnish-cache.org> Message-ID: <054.dbd873e38c5a7b85ce650c387edd2908@varnish-cache.org> #566: varnishncsa segfault during (very) high load -------------------------+-------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishncsa | Version: trunk Severity: normal | Resolution: fixed Keywords: | -------------------------+-------------------------------------------------- Changes (by tfheen): * status: new => closed * resolution: => fixed Comment: No response from submitter; closing. Please reopen if you still see this problem. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 3 13:39:05 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 03 Jan 2011 13:39:05 -0000 Subject: [Varnish] #822: Repository URLs are wrong on the wiki page UsingSvnSvk In-Reply-To: <047.5ff850794de151226124161e23165988@varnish-cache.org> References: <047.5ff850794de151226124161e23165988@varnish-cache.org> Message-ID: <056.aafefccc0383e89046215757a5d7d76c@varnish-cache.org> #822: Repository URLs are wrong on the wiki page UsingSvnSvk ---------------------------+------------------------------------------------ Reporter: bonetruck2 | Owner: tfheen Type: documentation | Status: closed Priority: low | Milestone: Component: website | Version: trunk Severity: minor | Resolution: fixed Keywords: | ---------------------------+------------------------------------------------ Changes (by tfheen): * status: new => closed * resolution: => fixed Comment: I've updated the page now. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Tue Jan 4 10:54:19 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Tue, 04 Jan 2011 10:54:19 -0000
Subject: [Varnish] #841: varnish 2.1.4
Message-ID: <043.6d2854f79c943132b9245f23e64c5b29@varnish-cache.org>

#841: varnish 2.1.4
-------------------------------+--------------------------------------------
 Reporter:  sdz007             |       Type:  defect
   Status:  new                |   Priority:  normal
Milestone:  After Varnish 2.1  |  Component:  build
  Version:  2.1.4              |   Severity:  normal
 Keywords:                     |
-------------------------------+--------------------------------------------
 I recently upgraded to version 2.1.4 from the 2.1 release that ships by
 default with Ubuntu 10.04. After installing Varnish 2.1.4, my SVN, which I
 use along with the svn_dav module of Apache, failed to do exports or
 checkouts. I always got the following error: "Invalid Content-Length".

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Thu Jan 6 11:51:11 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Thu, 06 Jan 2011 11:51:11 -0000
Subject: [Varnish] #842: assert error in hcb_deref
Message-ID: <042.af9d139c0a165d44a64eea5d9b695272@varnish-cache.org>

#842: assert error in hcb_deref
--------------------+-------------------------------------------------------
 Reporter:  perbu   |      Owner:  phk
     Type:  defect  |     Status:  new
 Priority:  normal  |  Milestone:
Component:  build   |    Version:  trunk
 Severity:  normal  |   Keywords:
--------------------+-------------------------------------------------------
 Running trunk as of this morning.
 Jan 6 12:22:11 odd varnishd[14082]: Child (14083) died signal=6
 Jan 6 12:22:11 odd varnishd[14082]: Child (14083) Panic message: Assert error in hcb_deref(), hash_critbit.c line 411:#012 Condition((oh->waitinglist) == 0 ) not true.#012thread = (cache-timeout)#012ident = Linux,2.6.32-27-generic,x86_64,-sfile,-smalloc,-hcritbit,epoll#012Backtrace:#012 0x427c48: pan_ic+b8#012 0x435e14: hcb_deref+264#012 0x42095f: HSH_Deref+21f#012 0x41d29c: exp_timer+32c#012 0x42a15b: wrk_bgthread+bb#012 0x7f9abd26c9ca: _end+7f9abcbfea32#012 0x7f9abcfc970d: _end+7f9abc95b775#012
 Jan 6 12:22:11 odd varnishd[14082]: Child cleanup complete
 Jan 6 12:22:11 odd varnishd[14082]: child (24347) Started
 Jan 6 12:22:11 odd varnishd[14082]: Child (24347) said
 Jan 6 12:22:11 odd varnishd[14082]: Child (24347) said Child starts
 Jan 6 12:22:11 odd varnishd[14082]: Child (24347) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Mon Jan 10 08:36:32 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 10 Jan 2011 08:36:32 -0000
Subject: [Varnish] #837: varnishadm purge.list fails frequently
In-Reply-To: <043.a2d15772321998574922abae75b7be94@varnish-cache.org>
References: <043.a2d15772321998574922abae75b7be94@varnish-cache.org>
Message-ID: <052.689f540b0b45697c749fb34ffc3e81eb@varnish-cache.org>

#837: varnishadm purge.list fails frequently
----------------------+-----------------------------------------------------
 Reporter:  jelder    |      Owner:  kristian
     Type:  defect    |     Status:  assigned
 Priority:  normal    |  Milestone:  After Varnish 2.1
Component:  varnishd  |    Version:  trunk
 Severity:  normal    |   Keywords:  purge.list
----------------------+-----------------------------------------------------
Changes (by kristian):

 * status:  new => assigned

Comment:

 Any further update on this?
-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Mon Jan 10 08:38:27 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 10 Jan 2011 08:38:27 -0000
Subject: [Varnish] #835: Varnish stops receiving incoming connections, but the process is still up
In-Reply-To: <043.0b306324d23af01d5f961c07ada4c811@varnish-cache.org>
References: <043.0b306324d23af01d5f961c07ada4c811@varnish-cache.org>
Message-ID: <052.3e1bb0fc826c2be19030b89a5ea835d6@varnish-cache.org>

#835: Varnish stops receiving incoming connections, but the process is still up
----------------------+-----------------------------------------------------
 Reporter:  blamer    |      Owner:  kristian
     Type:  defect    |     Status:  assigned
 Priority:  normal    |  Milestone:  Varnish 2.1 release
Component:  varnishd  |    Version:  2.1.4
 Severity:  major     |   Keywords:  broken pipe, freeze, crash
----------------------+-----------------------------------------------------
Changes (by kristian):

 * status:  new => assigned

Comment:

 Have you had a chance to look further at this? I'm unable to replicate the
 issue, and thus rely on your feedback if it is to be fixed by anything
 other than random chance...
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 10 08:40:41 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 10 Jan 2011 08:40:41 -0000 Subject: [Varnish] #841: varnish 2.1.4 In-Reply-To: <043.6d2854f79c943132b9245f23e64c5b29@varnish-cache.org> References: <043.6d2854f79c943132b9245f23e64c5b29@varnish-cache.org> Message-ID: <052.a58db7df8031c39bf1f7f6b3583d1835@varnish-cache.org> #841: varnish 2.1.4 --------------------+------------------------------------------------------- Reporter: sdz007 | Owner: kristian Type: defect | Status: new Priority: normal | Milestone: After Varnish 2.1 Component: build | Version: 2.1.4 Severity: normal | Keywords: --------------------+------------------------------------------------------- Changes (by kristian): * owner: => kristian Comment: Please attach output of varnishlog -o when doing the svn checkout. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 10 08:42:12 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 10 Jan 2011 08:42:12 -0000 Subject: [Varnish] #310: WS_Reserve panic + error In-Reply-To: <040.75b83f2a5390779c25adf146cd0b53d5@varnish-cache.org> References: <040.75b83f2a5390779c25adf146cd0b53d5@varnish-cache.org> Message-ID: <049.d6208a9d3db96c7921ccb407c34d0f67@varnish-cache.org> #310: WS_Reserve panic + error ----------------------+----------------------------------------------------- Reporter: sky | Owner: kristian Type: defect | Status: new Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: 2.1.4 Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Changes (by kristian): * owner: phk => kristian * status: reopened => new -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 10 08:42:20 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: 
Mon, 10 Jan 2011 08:42:20 -0000 Subject: [Varnish] #310: WS_Reserve panic + error In-Reply-To: <040.75b83f2a5390779c25adf146cd0b53d5@varnish-cache.org> References: <040.75b83f2a5390779c25adf146cd0b53d5@varnish-cache.org> Message-ID: <049.d5d53f2cdaeec8fb841597a6cb3b5990@varnish-cache.org> #310: WS_Reserve panic + error ----------------------+----------------------------------------------------- Reporter: sky | Owner: kristian Type: defect | Status: assigned Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: 2.1.4 Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Changes (by kristian): * status: new => assigned -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 10 08:49:05 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 10 Jan 2011 08:49:05 -0000 Subject: [Varnish] #536: Suggested regsub example for Plone VirtualHostBase replacement In-Reply-To: <042.7f9af009b98ac47cf8d3c3fd964c4f5b@varnish-cache.org> References: <042.7f9af009b98ac47cf8d3c3fd964c4f5b@varnish-cache.org> Message-ID: <051.786dec638e9c6b6d64168cdf6fc75053@varnish-cache.org> #536: Suggested regsub example for Plone VirtualHostBase replacement ---------------------------+------------------------------------------------ Reporter: ned14 | Owner: kristian Type: documentation | Status: assigned Priority: lowest | Milestone: Varnish 2.1 release Component: documentation | Version: trunk Severity: trivial | Keywords: ---------------------------+------------------------------------------------ Changes (by kristian): * owner: => kristian * status: new => assigned Old description: > Most of the varnish documentation on the web (including this wiki) > suggests something like this for mangling up Zope VirtualHostBase URLs: > > {{{ > if (req.http.host ~ "^(www.)?example.com") { > set req.http.host = "example.com"; > set req.url = regsub(req.url, "^", > 
> "/VirtualHostBase/http/example.com:80/Sites/example.com/VirtualHostRoot");
> } elsif (req.http.host ~ "^(www.)?example.org") {
> set req.http.host = "example.org";
> set req.url = regsub(req.url, "^",
> "/VirtualHostBase/http/example.org:80/Sites/example.org/VirtualHostRoot");
> } else {
> error 404 "Unknown virtual host";
> }
> }}}
>
> The big problem with this is that every single virtual host must be
> laboriously specified, and then doubly specified if you support HTTPS.
> This rapidly becomes a royal PITA.
>
> A far better solution is to use varnish's regsub support and generate it
> entirely dynamically:
>
> {{{
> if (req.http.X-Forwarded-Proto == "https" ) {
> set req.http.X-Forwarded-Port = "443";
> } else {
> set req.http.X-Forwarded-Port = "80";
> }
> if (req.http.host ~ "^(www\.|ipv6\.)?(.+)\.(.+)?$") {
> set req.http.host = regsub(req.http.host,
> "^(www\.|ipv6\.)?(.+)\.(.+)?$", "www.\2.\3");
> set req.url = "/VirtualHostBase/" req.http.X-Forwarded-Proto
> regsub(req.http.host,
> "^(www\.|ipv6\.)?(.+)\.(.+)$", "/www.\2.\3:")
> req.http.X-Forwarded-Port
> regsub(req.http.host,
> "^(www\.|ipv6\.)?(.+)\.(.+)$", "/\2.\3/\2.\3/VirtualHostR$
> req.url;
> }
> }}}
>
> It also does a great job of demonstrating string concatenation in varnish
> as well as a few other useful tricks.
>
> I have an optional ipv6. subdomain in there, but note that it doesn't
> include the ipv6. redirect into Zope, so all the links in the Plone site
> will still point to www. instead. This doesn't bother me, as I only use
> the ipv6. subdomain to test my site's IPv6 connectivity.
>
> I hope that you find this useful. I'd suggest sticking the above anywhere
> a search for VirtualHostBase on this site returns a hit, including the
> example Plone .vcl file.
>
> Cheers,[[BR]]
> Niall Douglas

New description:

 Most of the varnish documentation on the web (including this wiki)
 suggests something like this for mangling up Zope VirtualHostBase URLs:

 {{{
 if (req.http.host ~ "^(www.)?example.com") {
 set req.http.host = "example.com";
 set req.url = regsub(req.url, "^",
 "/VirtualHostBase/http/example.com:80/Sites/example.com/VirtualHostRoot");
 } elsif (req.http.host ~ "^(www.)?example.org") {
 set req.http.host = "example.org";
 set req.url = regsub(req.url, "^",
 "/VirtualHostBase/http/example.org:80/Sites/example.org/VirtualHostRoot");
 } else {
 error 404 "Unknown virtual host";
 }
 }}}

 The big problem with this is that every single virtual host must be
 laboriously specified, and then doubly specified if you support HTTPS.
 This rapidly becomes a royal PITA.

 A far better solution is to use varnish's regsub support and generate it
 entirely dynamically:

 {{{
 if (req.http.X-Forwarded-Proto == "https" ) {
 set req.http.X-Forwarded-Port = "443";
 } else {
 set req.http.X-Forwarded-Port = "80";
 }
 if (req.http.host ~ "^(www\.|ipv6\.)?(.+)\.(.+)?$") {
 set req.http.host = regsub(req.http.host,
 "^(www\.|ipv6\.)?(.+)\.(.+)?$", "www.\2.\3");
 set req.url = "/VirtualHostBase/" req.http.X-Forwarded-Proto
 regsub(req.http.host,
 "^(www\.|ipv6\.)?(.+)\.(.+)$", "/www.\2.\3:")
 req.http.X-Forwarded-Port
 regsub(req.http.host,
 "^(www\.|ipv6\.)?(.+)\.(.+)$", "/\2.\3/\2.\3/VirtualHostR$
 req.url;
 }
 }}}

 It also does a great job of demonstrating string concatenation in varnish
 as well as a few other useful tricks.

 I have an optional ipv6. subdomain in there, but note that it doesn't
 include the ipv6. redirect into Zope, so all the links in the Plone site
 will still point to www. instead. This doesn't bother me, as I only use
 the ipv6. subdomain to test my site's IPv6 connectivity.

 I hope that you find this useful. I'd suggest sticking the above anywhere
 a search for VirtualHostBase on this site returns a hit, including the
 example Plone .vcl file.
Cheers,[[BR]] Niall Douglas -- Comment: If this is mentioned on our wiki, you should be able to edit it yourself (pending edit-bits). If it's mentioned elsewhere, I'd appreciate it if you could supply some links and I'll update the examples. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 10 11:59:09 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 10 Jan 2011 11:59:09 -0000 Subject: [Varnish] #842: assert error in hcb_deref In-Reply-To: <042.af9d139c0a165d44a64eea5d9b695272@varnish-cache.org> References: <042.af9d139c0a165d44a64eea5d9b695272@varnish-cache.org> Message-ID: <051.7c1e1f8efb8800f02bc44ca829d197cd@varnish-cache.org> #842: assert error in hcb_deref --------------------+------------------------------------------------------- Reporter: perbu | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Keywords: --------------------+------------------------------------------------------- Description changed by phk: Old description: > Running trunk as of this morning. 
> > Jan 6 12:22:11 odd varnishd[14082]: Child (14083) died signal=6 > Jan 6 12:22:11 odd varnishd[14082]: Child (14083) Panic message: Assert > error in hcb_deref(), hash_critbit.c line 411:#012 > Condition((oh->waitinglist) == 0 > ) not true.#012thread = (cache-timeout)#012ident = > Linux,2.6.32-27-generic,x86_64,-sfile,-smalloc,-hcritbit,epoll#012Backtrace:#012 > 0x427c48: pan_ic+b8#012 > 0x435e14: hcb_deref+264#012 0x42095f: HSH_Deref+21f#012 0x41d29c: > exp_timer+32c#012 0x42a15b: wrk_bgthread+bb#012 0x7f9abd26c9ca: > _end+7f9abcbfea32#012 > 0x7f9abcfc970d: _end+7f9abc95b775#012 > Jan 6 12:22:11 odd varnishd[14082]: Child cleanup complete > Jan 6 12:22:11 odd varnishd[14082]: child (24347) Started > Jan 6 12:22:11 odd varnishd[14082]: Child (24347) said > Jan 6 12:22:11 odd varnishd[14082]: Child (24347) said Child starts > Jan 6 12:22:11 odd varnishd[14082]: Child (24347) said SMF.s0 mmap'ed > 1073741824 bytes of 1073741824 New description: Running trunk as of this morning. {{{ Jan 6 12:22:11 odd varnishd[14082]: Child (14083) died signal=6 Jan 6 12:22:11 odd varnishd[14082]: Child (14083) Panic message: Assert error in hcb_deref(), hash_critbit.c line 411: Condition((oh->waitinglist) == 0 ) not true. 
thread = (cache-timeout) ident = Linux,2.6.32-27-generic,x86_64,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x427c48: pan_ic+b8 0x435e14: hcb_deref+264 0x42095f: HSH_Deref+21f 0x41d29c: exp_timer+32c 0x42a15b: wrk_bgthread+bb 0x7f9abd26c9ca: _end+7f9abcbfea32 0x7f9abcfc970d: _end+7f9abc95b775 Jan 6 12:22:11 odd varnishd[14082]: Child cleanup complete Jan 6 12:22:11 odd varnishd[14082]: child (24347) Started Jan 6 12:22:11 odd varnishd[14082]: Child (24347) said Jan 6 12:22:11 odd varnishd[14082]: Child (24347) said Child starts Jan 6 12:22:11 odd varnishd[14082]: Child (24347) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824 }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 10 12:09:20 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 10 Jan 2011 12:09:20 -0000 Subject: [Varnish] #832: Error resolving IPv6 literal w/o port In-Reply-To: <039.e921458af12652c361ffca82e19b5360@varnish-cache.org> References: <039.e921458af12652c361ffca82e19b5360@varnish-cache.org> Message-ID: <048.682897d786ea0d2b1abda00de4b78153@varnish-cache.org> #832: Error resolving IPv6 literal w/o port ------------------------------+--------------------------------------------- Reporter: bz | Owner: phk Type: defect | Status: closed Priority: low | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: IPv6 getaddrinfo | ------------------------------+--------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: (In [5705]) Fix a parse bug in IPv6 addresses. Fixes: #832 Patch by: bz (THANKS!) 
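The #832 fix touches a classic parsing pitfall: a bare IPv6 literal such as ::1 contains colons, so a naive "split at the last colon" misreads the final address group as a port. The following is a hypothetical sketch of bracket-aware host/port splitting before the pieces go to getaddrinfo(); the function name and exact behaviour are illustrative, not the code committed in [5705].

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Split "addr" into host and port strings.
 * Accepts "[::1]:80", "[::1]", "127.0.0.1:80" and bare "::1" (no port).
 * Leaves port empty when none is given. Returns 0 on success, -1 on
 * malformed input. Hypothetical helper, not the actual Varnish parser. */
static int
split_hostport(const char *addr, char *host, size_t hlen,
    char *port, size_t plen)
{
	const char *colon;

	host[0] = port[0] = '\0';
	if (addr[0] == '[') {			/* bracketed IPv6 literal */
		const char *close = strchr(addr, ']');
		if (close == NULL || (size_t)(close - addr - 1) >= hlen)
			return (-1);
		memcpy(host, addr + 1, close - addr - 1);
		host[close - addr - 1] = '\0';
		if (close[1] == ':')
			snprintf(port, plen, "%s", close + 2);
		else if (close[1] != '\0')
			return (-1);
		return (0);
	}
	colon = strchr(addr, ':');
	if (colon != NULL && strchr(colon + 1, ':') != NULL) {
		/* Two or more colons: bare IPv6 literal, so no port part. */
		snprintf(host, hlen, "%s", addr);
		return (0);
	}
	if (colon != NULL) {			/* ordinary host:port */
		if ((size_t)(colon - addr) >= hlen)
			return (-1);
		memcpy(host, addr, colon - addr);
		host[colon - addr] = '\0';
		snprintf(port, plen, "%s", colon + 1);
		return (0);
	}
	snprintf(host, hlen, "%s", addr);	/* host only, no port */
	return (0);
}
```

With this split, "[::1]:80" yields host "::1" and port "80", while a bare "::1" yields an empty port that the caller can default before calling getaddrinfo().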
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 10 12:18:28 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 10 Jan 2011 12:18:28 -0000 Subject: [Varnish] #840: Varnish on Cygwin-Windows Platform In-Reply-To: <042.766fbd7234f41043875c033a4974cac8@varnish-cache.org> References: <042.766fbd7234f41043875c033a4974cac8@varnish-cache.org> Message-ID: <051.86eb77bb25cdab4a49e960201a7acc9d@varnish-cache.org> #840: Varnish on Cygwin-Windows Platform -------------------------+-------------------------------------------------- Reporter: jdzst | Type: enhancement Status: closed | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Resolution: worksforme | Keywords: cygwin, windows -------------------------+-------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: I've looked over your patches, but still find they contain a fair bit too much unwanted stuff, for instance all the "connection: close" in the testcases, (what's up with that ?) As I think I said earlier, we are not likely to see a windows port of Varnish make it past Tier-C status (http://www.varnish- cache.org/docs/trunk/phk/platforms.html) so the intrusiveness of the patches we are willing to entertain is very limited. I would suggest you create a wiki-page for the cygwin port instead of using tickets, until such time as we see that the windows port has a relevance and we have a patch-set we can agree on. In general we do not use tickets for feature-requests, but only for bugs. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 10 12:37:49 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 10 Jan 2011 12:37:49 -0000 Subject: [Varnish] #782: expose sp->esis to vcl (patch attached) In-Reply-To: <045.38ab3dedb5c8bc37860e60bac6c20b23@varnish-cache.org> References: <045.38ab3dedb5c8bc37860e60bac6c20b23@varnish-cache.org> Message-ID: <054.61ef28ceeb071539c0f2d1273619efcb@varnish-cache.org> #782: expose sp->esis to vcl (patch attached) -------------------------+-------------------------------------------------- Reporter: askalski | Owner: kristian Type: enhancement | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: esi | -------------------------+-------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: (In [5706]) Rename sp->esis to sp->esi_level and make it available in VCL as req.vcl_level. Fixes: #782 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 10 13:24:45 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 10 Jan 2011 13:24:45 -0000 Subject: [Varnish] #839: Varnish fails to start on boot for some FreeBSD machines In-Reply-To: <054.631b323b9c05fe09e2b12eed8c556e4e@varnish-cache.org> References: <054.631b323b9c05fe09e2b12eed8c556e4e@varnish-cache.org> Message-ID: <063.27095f0b4867512682e864da2895fac5@varnish-cache.org> #839: Varnish fails to start on boot for some FreeBSD machines -------------------------------+-------------------------------------------- Reporter: james.m.henderson | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: -------------------------------+-------------------------------------------- Comment(by kristian): Please paste the error message. 
 Varnish doesn't really care about whether a backend is reachable or not on
 startup. The only issue I see is if -a is used with a non-wildcard IP
 which isn't available.

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From jdzstz at gmail.com Mon Jan 10 14:34:10 2011
From: jdzstz at gmail.com (=?ISO-8859-1?Q?Jorge_D=EDaz?=)
Date: Mon, 10 Jan 2011 15:34:10 +0100
Subject: [Varnish] #840: Varnish on Cygwin-Windows Platform
In-Reply-To: <051.86eb77bb25cdab4a49e960201a7acc9d@varnish-cache.org>
References: <042.766fbd7234f41043875c033a4974cac8@varnish-cache.org>
 <051.86eb77bb25cdab4a49e960201a7acc9d@varnish-cache.org>
Message-ID:

In my opinion, the intrusiveness of the patch I have posted is very
limited. Please tell me which of the changes you consider too intrusive
and which you think are OK. Here is a short explanation of my changes:

* configure.ac and Makefile.am: The big changes are in "configure.ac", and
  all of them are conditional: they are only applied if the platform is
  "cygwin". All other platforms ignore the new variables and modifications.
  Most of the Makefile modifications are needed because the Windows EXE and
  DLL link process works differently from the Linux ELF format and needs
  some parameters that are set in configure.ac.

* bin/varnishd/mgt_vcc.c: There is a Cygwin-only modification to test
  whether we can use "/bin/sh" or the Windows "cmd.exe"; this allows us to
  use the Windows shell instead of the Cygwin bash.

* lib/libvarnish/time.c: If CLOCK_MONOTONIC is not defined, we undefine
  HAVE_CLOCK_GETTIME, because it makes no sense to use the
  HAVE_CLOCK_GETTIME code path if we do not have a monotonic clock, as is
  the case on Cygwin.

* lib/libvarnish/vin.c: Allow Windows paths (c:/path, c:\path or
  \\machine\path) in the varnish dir parameter.

* lib/libvmod_std/vmod_std.c: An ugly fix to drand48 for Cygwin. I think
  it can be removed, because it will work correctly in the future Cygwin
  1.7.8.

* bin/varnishtest/vtc_server.c: On process shutdown, the server thread can
  be blocked in the "accept" function.
  - Linux has no problem with this: if the thread receives pthread_cancel /
    pthread_join, the "accept" is unblocked and the thread ends OK.
  - On Windows, "accept" is only unblocked if a signal is received, so the
    solution is to send a signal to the thread with pthread_kill before
    doing the pthread_join. The signal itself does nothing.

* bin/varnishtest/tests/b00015.vtc and v00009.vtc: About the "connection:
  close" that I have added in two tests, the reason is that they behave
  problematically on Windows:
  1) the varnishd process makes a TCP connection to the mock-up web server
     process.
  2) the mock-up web server returns some contents, as configured in the
     "vtc" file.
  3) if varnishd does not receive "connection: close", the TCP connection
     is maintained forever, until the varnishd process stops.
  4) inside the test, the web server (server s1) is stopped and started
     again, and tries to bind the same port at start.
  5) on Linux there is no problem with port reuse, but on Windows there is
     a problem with "Bind: Address Already in Use", because the Windows
     Winsock implementation does not allow reusing the port if the client
     process (varnishd) keeps the port open on the same machine. (If the
     client is on a different machine, there is no problem.)
  If we tell varnishd not to keep the connection open, the tests run fine.

2011/1/10 Varnish :
> #840: Varnish on Cygwin-Windows Platform
> -------------------------+--------------------------------------------------
>  Reporter:  jdzst       |        Type:  enhancement
>    Status:  closed      |    Priority:  normal
> Milestone:              |   Component:  build
>   Version:  trunk       |    Severity:  normal
> Resolution:  worksforme |    Keywords:  cygwin, windows
> -------------------------+--------------------------------------------------
> Changes (by phk):
>
>  * status:  new => closed
>  * resolution:  => worksforme
>
> Comment:
>
>  I've looked over your patches, but still find they contain a fair bit too
>  much unwanted stuff, for instance all the "connection: close" in the
>  testcases, (what's up with that ?)
>
>  As I think I said earlier, we are not likely to see a windows port of
>  Varnish make it past Tier-C status (http://www.varnish-
>  cache.org/docs/trunk/phk/platforms.html) so the intrusiveness of the
>  patches we are willing to entertain is very limited.
>
>  I would suggest you create a wiki-page for the cygwin port instead of
>  using tickets, until such time as we see that the windows port has a
>  relevance and we have a patch-set we can agree on.
>
>  In general we do not use tickets for feature-requests, but only for bugs.
>
> --
> Ticket URL:
> Varnish
> The Varnish HTTP Accelerator
>

From phk at phk.freebsd.dk Mon Jan 10 14:37:30 2011
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Mon, 10 Jan 2011 14:37:30 +0000
Subject: [Varnish] #840: Varnish on Cygwin-Windows Platform
In-Reply-To: Your message of "Mon, 10 Jan 2011 15:34:10 +0100."
Message-ID: <57913.1294670250@critter.freebsd.dk>

In message , Jorge Díaz writes:

I really don't have anything to add.

Until the CygWin port shows that it is worth keeping, I'm not touching
patches that *I* deem too intrusive.

As I said, feel free to make a wiki page for this project (if you don't
have a wiki-edit bit yet, just let me know; we only limit them to keep
spammers away)

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG      | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
From jdzstz at gmail.com Mon Jan 10 14:56:02 2011
From: jdzstz at gmail.com (=?ISO-8859-1?Q?Jorge_D=EDaz?=)
Date: Mon, 10 Jan 2011 15:56:02 +0100
Subject: [Varnish] #840: Varnish on Cygwin-Windows Platform
In-Reply-To: <57913.1294670250@critter.freebsd.dk>
References: <57913.1294670250@critter.freebsd.dk>
Message-ID:

I don't have permission to edit your Trac wiki pages. If you grant edit
permission, I can copy all the information to a new page.

In any case, I think at least the modification to time.c should be
necessary:

65 #ifndef CLOCK_MONOTONIC
66 #undef HAVE_CLOCK_GETTIME
67 #endif

If any system has the clock_gettime function defined but does not have
"CLOCK_MONOTONIC", your code does not work:

76 assert(clock_gettime(CLOCK_MONOTONIC, &ts) == 0);

2011/1/10 Poul-Henning Kamp :
> In message , Jorge Díaz writes:
>
> I really don't have anything to add.
>
> Until the CygWin port shows that it is worth keeping, I'm not touching
> patches that *I* deem too intrusive.
>
> As I said, feel free to make a wiki page for this project (if you don't
> have a wiki-edit bit yet, just let me know; we only limit them to keep
> spammers away)
>
> --
> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> phk at FreeBSD.ORG      | TCP/IP since RFC 956
> FreeBSD committer       | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.
>

From phk at phk.freebsd.dk Mon Jan 10 16:05:51 2011
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Mon, 10 Jan 2011 16:05:51 +0000
Subject: [Varnish] #840: Varnish on Cygwin-Windows Platform
In-Reply-To: Your message of "Mon, 10 Jan 2011 15:56:02 +0100."
Message-ID: <58152.1294675551@critter.freebsd.dk>

In message , Jorge Díaz writes:

>I don't have permission to edit your Trac wiki pages. If you grant
>edit permission, I can copy all the information to a new page.

You should have wiki-bit now.
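The guard Jorge quotes from time.c (undefine HAVE_CLOCK_GETTIME when CLOCK_MONOTONIC is missing) implies a fallback timing path. A hedged sketch of how such a wrapper might look follows; the function name mono_seconds() is an assumption for illustration, not the actual lib/libvarnish/time.c API:

```c
#include <assert.h>
#include <sys/time.h>
#include <time.h>

/* On platforms where clock_gettime() exists but CLOCK_MONOTONIC does not
 * (as Jorge reports for older Cygwin), disable the clock_gettime path so
 * the gettimeofday() fallback is used instead. */
#if defined(HAVE_CLOCK_GETTIME) && !defined(CLOCK_MONOTONIC)
#undef HAVE_CLOCK_GETTIME
#endif

/* Return a timestamp in seconds: monotonic where the platform supports
 * it, wall-clock (and thus not jump-proof) otherwise. */
static double
mono_seconds(void)
{
#ifdef HAVE_CLOCK_GETTIME
	struct timespec ts;

	assert(clock_gettime(CLOCK_MONOTONIC, &ts) == 0);
	return (ts.tv_sec + 1e-9 * ts.tv_nsec);
#else
	struct timeval tv;

	assert(gettimeofday(&tv, NULL) == 0);
	return (tv.tv_sec + 1e-6 * tv.tv_usec);
#endif
}
```

The point of the #undef is exactly Jorge's: the feature test "clock_gettime exists" is not the same as "a monotonic clock exists", so the guard must check both before line 76's assert can be safe.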
-- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From varnish-bugs at varnish-cache.org Tue Jan 11 12:04:39 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 11 Jan 2011 12:04:39 -0000 Subject: [Varnish] #842: assert error in hcb_deref In-Reply-To: <042.af9d139c0a165d44a64eea5d9b695272@varnish-cache.org> References: <042.af9d139c0a165d44a64eea5d9b695272@varnish-cache.org> Message-ID: <051.b8d442bcccf1ac96412c85b11fc91c04@varnish-cache.org> #842: assert error in hcb_deref --------------------+------------------------------------------------------- Reporter: perbu | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Keywords: --------------------+------------------------------------------------------- Comment(by perbu): Buildt with autogen.des. Same error (below). Let me know if you want a core or anything (it's running without coredumping enabled). Jan 10 22:12:00 odd varnishd[6866]: Child (6867) Panic message: Assert error in hcb_deref(), hash_critbit.c line 411: Condition((oh->waitinglist) == 0) not true. 
thread = (cache-timeout)
ident = Linux,2.6.32-27-generic,x86_64,-sfile,-smalloc,-hcritbit,epoll
Backtrace:
  0x42f0e6: pan_backtrace+19
  0x42f380: pan_ic+172
  0x441844: hcb_deref+1e8
  0x4280f5: HSH_Deref+566
  0x422a64: exp_timer+3ec
  0x431e9b: wrk_bgthread+184
  0x7f29cd0a09ca: _end+7f29cca1a9b2
  0x7f29ccdfd70d: _end+7f29cc7776f5

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Wed Jan 12 09:25:10 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 12 Jan 2011 09:25:10 -0000
Subject: [Varnish] #843: Sometime ESI are not interpreted
Message-ID: <045.c142446fef74c9da246ccdcdb72ed611@varnish-cache.org>

#843: Sometime ESI are not interpreted
----------------------+-----------------------------------------------------
 Reporter: nidosaur   |      Type: defect
   Status: new        |  Priority: normal
Milestone:            | Component: build
  Version: 2.1.3      |  Severity: normal
 Keywords:            |
----------------------+-----------------------------------------------------
Hi, we have a problem on an EZPublish site with ESI enabled. Sometimes the ESI includes are interpreted and sometimes they are not. We are currently on Varnish 2.1.3.

{{{
varnishstat -1
client_conn 880512 15.06 Client connections accepted
client_drop 0 0.00 Connection dropped, no sess/wrk
client_req 4806872 82.20 Client requests received
cache_hit 7969831 136.29 Cache hits
cache_hitpass 0 0.00 Cache hits for pass
cache_miss 124448 2.13 Cache misses
backend_conn 162815 2.78 Backend conn. success
backend_unhealthy 0 0.00 Backend conn. not attempted
backend_busy 0 0.00 Backend conn. too many
backend_fail 219 0.00 Backend conn. failures
backend_reuse 72684 1.24 Backend conn. reuses
backend_toolate 51704 0.88 Backend conn. was closed
backend_recycle 124388 2.13 Backend conn. recycles
backend_unused 0 0.00 Backend conn. unused
fetch_head 0 0.00 Fetch head
fetch_length 73635 1.26 Fetch with Length
fetch_chunked 51665 0.88 Fetch chunked
fetch_eof 0 0.00 Fetch EOF
fetch_bad 0 0.00 Fetch had bad headers
fetch_close 69 0.00 Fetch wanted close
fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed
fetch_zero 49 0.00 Fetch zero len
fetch_failed 80 0.00 Fetch failed
n_sess_mem 674 . N struct sess_mem
n_sess 253 . N struct sess
n_object 19650 . N struct object
n_vampireobject 0 . N unresurrected objects
n_objectcore 19977 . N struct objectcore
n_objecthead 11795 . N struct objecthead
n_smf 45541 . N struct smf
n_smf_frag 1411 . N small free smf
n_smf_large 79 . N large free smf
n_vbe_conn 46 . N struct vbe_conn
n_wrk 400 . N worker threads
n_wrk_create 400 0.01 N worker threads created
n_wrk_failed 0 0.00 N worker threads not created
n_wrk_max 420377 7.19 N worker threads limited
n_wrk_queue 0 0.00 N queued work requests
n_wrk_overflow 163 0.00 N overflowed work requests
n_wrk_drop 0 0.00 N dropped work requests
n_backend 1 . N backends
n_expired 9690 . N expired objects
n_lru_nuked 94461 . N LRU nuked objects
n_lru_saved 0 . N LRU saved objects
n_lru_moved 3335607 . N LRU moved objects
n_deathrow 0 . N objects on deathrow
losthdr 0 0.00 HTTP header overflows
n_objsendfile 0 0.00 Objects sent with sendfile
n_objwrite 6028714 103.10 Objects sent with write
n_objoverflow 0 0.00 Objects overflowing workspace
s_sess 880450 15.06 Total Sessions
s_req 4806872 82.20 Total Requests
s_pipe 109695 1.88 Total pipe
s_pass 1524 0.03 Total pass
s_fetch 125338 2.14 Total fetch
s_hdrbytes 1831475536 31320.12 Total header bytes
s_bodybytes 58393321969 998586.12 Total body bytes
sess_closed 273287 4.67 Session Closed
sess_pipeline 5524 0.09 Session Pipeline
sess_readahead 5007 0.09 Session Read Ahead
sess_linger 4615604 78.93 Session Linger
sess_herd 4240874 72.52 Session herd
shm_records 240285980 4109.14 SHM records
shm_writes 15386982 263.13 SHM writes
shm_flushes 1116 0.02 SHM flushes due to overflow
shm_cont 1120 0.02 SHM MTX contention
shm_cycles 102 0.00 SHM cycles through buffer
sm_nreq 387119 6.62 allocator requests
sm_nobj 44051 . outstanding allocations
sm_balloc 934379520 . bytes allocated
sm_bfree 139362304 . bytes free
sma_nreq 0 0.00 SMA allocator requests
sma_nobj 0 . SMA outstanding allocations
sma_nbytes 0 . SMA outstanding bytes
sma_balloc 0 . SMA bytes allocated
sma_bfree 0 . SMA bytes free
sms_nreq 634 0.01 SMS allocator requests
sms_nobj 0 . SMS outstanding allocations
sms_nbytes 0 . SMS outstanding bytes
sms_balloc 265646 . SMS bytes allocated
sms_bfree 265646 . SMS bytes freed
backend_req 125805 2.15 Backend requests made
n_vcl 1 0.00 N vcl total
n_vcl_avail 1 0.00 N vcl available
n_vcl_discard 0 0.00 N vcl discarded
n_purge 1 .
N total active purges
n_purge_add 1 0.00 N new purges added
n_purge_retire 0 0.00 N old purges deleted
n_purge_obj_test 0 0.00 N objects tested
n_purge_re_test 0 0.00 N regexps tested against
n_purge_dups 0 0.00 N duplicate purges removed
hcb_nolock 8094568 138.43 HCB Lookups without lock
hcb_lock 60306 1.03 HCB Lookups with lock
hcb_insert 60306 1.03 HCB Inserts
esi_parse 43401 0.74 Objects ESI parsed (unlock)
esi_errors 0 0.00 ESI parse errors (unlock)
accept_fail 0 0.00 Accept failures
client_drop_late 0 0.00 Connection dropped late
uptime 58476 1.00 Client uptime
}}}

{{{
default.vcl

backend www46 { .host = "192.168.1.137"; .port = "80"; }

sub vcl_recv {
    set req.backend = www46;

    # Add a unique header containing the client address
    unset req.http.X-Forwarded-For;
    set req.http.X-Forwarded-For = client.ip;

    # if (req.http.Cache-Control ~ "no-cache") {
    #     purge_url(req.url);
    # }

    ##### always cache these items:
    ## images
    if (req.request == "GET" && req.url ~ "\.(gif|jpg|jpeg|bmp|png|tiff|tif|ico|img|tga|wmf)$") { return (lookup); }
    ## various other content pages
    if (req.request == "GET" && req.url ~ "\.(css|js)$") { return (lookup); }
    ## multimedia
    if (req.request == "GET" && req.url ~ "\.(svg|swf|mov|avi|wmv)$") { return (lookup); }

    #### do not cache these rules:
    # pass mode can't handle POST (yet)
    if (req.request == "POST") { return (pipe); }
    if (req.http.host != "www.site.fr") { return (pipe); }
    if (req.request != "GET" && req.request != "HEAD") { return (pipe); }
    if (req.http.Expect) { return (pipe); }
    if (req.http.Authenticate || req.http.Authorization) { return (pass); }

    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # unknown algorithm
            unset req.http.Accept-Encoding;
        }
    }
    unset req.http.Accept-Encoding;
    unset req.http.Vary;

    #### don't cache authenticated sessions
    if (req.http.Cookie && req.http.Cookie ~ "is_logged_in=" && req.url !~ "^/layout/set/esi") { return (pass); }
    ## Varnish doesn't do INM requests so pass it through if no If-Modified-Since was sent
    if (req.http.If-None-Match && !req.http.If-Modified-Since) { return (pass); }
    ## don't cache these urls
    if (req.url ~ "_pages/wpwidgets/wp-admin/") { return (pipe); }
    #### if it passes all these tests, do a lookup anyway;
    return (lookup);
}

sub vcl_pipe {
    # # Note that only the first request to the backend will have
    # # X-Forwarded-For set. If you use X-Forwarded-For and want to
    # # have it set for all requests, make sure to have:
    # # set bereq.http.connection = "close";
    # # here. It is not set by default as it might break some broken web
    # # applications, like IIS with NTLM authentication.
    set bereq.http.PipeCache = "Pipeline";
    return (pipe);
}
#
sub vcl_pass { return (pass); }
#
sub vcl_hash {
    set req.hash += req.url;
    if (req.http.host) {
        set req.hash += req.http.host;
    } else {
        set req.hash += server.ip;
    }
    return (hash);
}
#
sub vcl_hit {
    if (!obj.cacheable) { return (pass); }
    return (deliver);
}

sub vcl_miss { return (fetch); }
#
sub vcl_fetch {
    set beresp.ttl = 600s;
    if (beresp.http.X-ttl ~ "s$") { # seconds
        C{
            char *ttl;
            ttl = VRT_GetHdr(sp, HDR_BERESP, "\06X-ttl:"); // 6 == 6 chars
            VRT_l_beresp_ttl(sp, atoi(ttl));
        }C
    } elseif (beresp.http.X-ttl ~ "m$") { # minutes
        C{
            char *ttl;
            ttl = VRT_GetHdr(sp, HDR_BERESP, "\06X-ttl:"); // 6 == 6 chars
            VRT_l_beresp_ttl(sp, atoi(ttl) * 60);
        }C
    } elseif (beresp.http.X-ttl ~ "h$") { # hours
        C{
            char *ttl;
            ttl = VRT_GetHdr(sp, HDR_BERESP, "\06X-ttl:"); // 6 == 6 chars
            VRT_l_beresp_ttl(sp, atoi(ttl) * 60 * 60);
        }C
    }
    esi;

    # Varnish determined the object was not cacheable
    if (!beresp.cacheable) {
        set beresp.http.X-Cacheable = "NO:Not Cacheable";
    # You don't wish to cache content for logged in users
    } elsif (req.http.Cookie ~ "(is_logged_in)") {
        set beresp.http.X-Cacheable = "NO:Got Session";
    # You are respecting the Cache-Control=private header from the backend
    } elsif (beresp.http.Cache-Control ~ "private") {
        set beresp.http.X-Cacheable = "NO:Cache-Control=private";
        return (pass);
    # Varnish determined the object was cacheable
    } else {
        set beresp.http.X-Cacheable = "YES";
    }

    if (!beresp.cacheable) { return (pass); }
    if (beresp.http.Set-Cookie ~ "is_logged_in=deleted(.*)") { return (deliver); }
    # if (beresp.http.Set-Cookie) {
    #     return (pass);
    # }

    if (req.request == "GET" && req.url ~ "\.(gif|jpg|jpeg|bmp|png|tiff|tif|ico|img|tga|wmf)$") {
        set beresp.ttl = 24h;
        return (deliver);
    }
    ## various other content pages
    if (req.request == "GET" && req.url ~ "\.(css|js)$") {
        set beresp.ttl = 24h;
        return (deliver);
    }
    ## multimedia
    if (req.request == "GET" && req.url ~ "\.(svg|swf|ico|mp3|mp4|m4a|ogg|mov|avi|wmv)$") {
        set beresp.ttl = 24h;
        return (deliver);
    }

    # grace
    if (beresp.status == 500) {
        set beresp.grace = 1h;
        restart;
    }
    set beresp.grace = 1h;
    return (deliver);
}
#
sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
        set resp.http.X-Cache-Hits = obj.hits;
    } else {
        set resp.http.X-Cache = "MISS";
    }
    set resp.http.WhoisCache = "TopProxy";
    return (deliver);
}
#
}}}

When ESI is not interpreted, we can see the ESI include in the page source:

{{{
}}}

I have tried to modify different parameters, but the result is always the same: sometimes all ESI includes are interpreted and sometimes none.

{{{
param.show 200 2375
acceptor_sleep_decay 0.900000 []
acceptor_sleep_incr 0.001000 [s]
acceptor_sleep_max 0.050000 [s]
auto_restart on [bool]
ban_lurker_sleep 0.000000 [s]
between_bytes_timeout 300.000000 [s]
cache_vbe_conns off [bool]
cc_command "exec cc -fpic -shared -Wl,-x -o %o %s"
cli_buffer 8192 [bytes]
cli_timeout 10 [seconds]
clock_skew 10 [s]
connect_timeout 300.000000 [s]
critbit_cooloff 180.000000 [s]
default_grace 10 [seconds]
default_ttl 600 [seconds]
diag_bitmap 0x0 [bitmap]
err_ttl 0 [seconds]
esi_syntax 1 [bitmap]
fetch_chunksize 128 [kilobytes]
first_byte_timeout 300.000000 [s]
group nogroup (65534)
http_headers 64 [header lines]
http_range_support off [bool]
listen_address :80
listen_depth 1024 [connections]
log_hashstring off [bool]
log_local_address off [bool]
lru_interval 2 [seconds]
max_esi_includes 10 [includes]
max_restarts 4 [restarts]
overflow_max 100 [%]
ping_interval 3 [seconds]
pipe_timeout 60 [seconds]
prefer_ipv6 off [bool]
purge_dups on [bool]
rush_exponent 3 [requests per request]
saintmode_threshold 10 [objects]
send_timeout 600 [seconds]
sess_timeout 5 [seconds]
sess_workspace 32768 [bytes]
session_linger 50 [ms]
session_max 100000 [sessions]
shm_reclen 255 [bytes]
shm_workspace 8192 [bytes]
syslog_cli_traffic on [bool]
thread_pool_add_delay 20 [milliseconds]
thread_pool_add_threshold 2 [requests]
thread_pool_fail_delay 200 [milliseconds]
thread_pool_max 2000 [threads]
thread_pool_min 200 [threads]
thread_pool_purge_delay 1000 [milliseconds]
thread_pool_stack unlimited [bytes]
thread_pool_timeout 100 [seconds]
thread_pools 2 [pools]
thread_stats_rate 10 [requests]
user nobody (65534)
vcl_trace off [bool]
waiter default (epoll, poll)
}}}

I don't know if it's the same case as http://www.varnish-cache.org/trac/ticket/805 but I also get negative ReqEnd values:

{{{
Varnish:~# varnishlog | grep nan
83
ReqEnd - 1460013818 1294824278.541809559 1294824281.333508492 -2.791691780 nan nan
97 ReqEnd - 1460013630 1294824277.736756325 1294824281.742346287 -3.734101057 nan nan
334 ReqEnd - 1460014066 1294824279.950573683 1294824282.971493483 -2.936873913 nan nan
101 ReqEnd c 1460014725 1294824283.651019812 1294824283.726319790 -0.075284481 nan nan
82 ReqEnd c 1460014601 1294824282.742613554 1294824283.744442940 -1.001822472 nan nan
214 ReqEnd c 1460014687 1294824283.311659813 1294824285.048683643 -1.737019062 nan nan
32 ReqEnd c 1460014895 1294824284.836895704 1294824285.345507860 -0.381927729 nan nan
298 ReqEnd - 1460014196 1294824280.686220884 1294824285.820836782 -4.761623859 nan nan
117 ReqEnd c 1460014540 1294824282.388427496 1294824286.035535336 -3.647100687 nan nan
245 ReqEnd - 1460014122 1294824280.186545134 1294824286.089832306 -5.662771463 nan nan
117 ReqEnd c 1460015132 1294824286.105287313 1294824286.161284924 -0.055979967 nan nan
331 ReqEnd c 1460015096 1294824285.907188416 1294824286.372141600 -0.464935541 nan nan
122 ReqEnd - 1460013821 1294824278.552110434 1294824286.479575396 -3.372597456 nan nan
240 ReqEnd c 1460014974 1294824285.243790865 1294824286.903476715 -1.659677982 nan nan
164 ReqEnd - 1460014148 1294824280.382161617 1294824287.273322344 -6.556917667 nan nan
20 ReqEnd c 1460014891 1294824284.788542747 1294824287.581802368 -1.827905655 nan nan
79 ReqEnd c 1460015391 1294824287.458260298 1294824288.009994507 -0.551726580 nan nan
}}}

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Wed Jan 12 11:26:31 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 12 Jan 2011 11:26:31 -0000
Subject: [Varnish] #843: Sometime ESI are not interpreted
In-Reply-To: <045.c142446fef74c9da246ccdcdb72ed611@varnish-cache.org>
References: <045.c142446fef74c9da246ccdcdb72ed611@varnish-cache.org>
Message-ID: <054.96134558d245e7cba0a296344bfa1f5c@varnish-cache.org>

#843: Sometime ESI are not interpreted
----------------------+-----------------------------------------------------
 Reporter: nidosaur   |      Type: defect
   Status: new        |  Priority: normal
Milestone:            | Component: build
  Version: 2.1.3      |  Severity: normal
 Keywords:            |
----------------------+-----------------------------------------------------
Comment(by nidosaur):

It seems I forgot set bereq.http.Connection = "Close"; in vcl_pipe.

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Thu Jan 13 13:53:54 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Thu, 13 Jan 2011 13:53:54 -0000
Subject: [Varnish] #844: Build error during test c00003.vtc
Message-ID: <044.a3953edd6e744d182a68e3255c139275@varnish-cache.org>

#844: Build error during test c00003.vtc
----------------------------------------------------+-----------------------
 Reporter: johnnyh                                  |      Type: defect
   Status: new                                      |  Priority: low
Milestone:                                          | Component: build
  Version: 2.1.4                                    |  Severity: minor
 Keywords: build fail ip_nonlocal_bind c00003.vtc   |
----------------------------------------------------+-----------------------
A small bug report: I had a bit of a hard time building a Varnish 2.1.4 RPM on a Linux RHEL6 64-bit machine. The build kept failing during the selftest part, specifically the c00003.vtc test:

------8<------ snip --------8<-----------------
### v1 debug| child (6300) Started\n
### v1 CLI RX 200
## v1 CLI 200
---- v1 FAIL CLI response 200 expected 300
### v1 debug| Child (6300) said \n
### v1 debug| Child (6300) said Child starts\n
### v1 debug| Child (6300) said managed to mmap 10485760 bytes of 10485760\n
# top Test timed out
# top TEST ././tests/c00003.vtc FAILED
FAIL: ./tests/c00003.vtc

Eventually I found the cause to be the following: if /proc/sys/net/ipv4/ip_nonlocal_bind is set to '1', then the test error will occur.
The solution is to do the following:

echo 0 > /proc/sys/net/ipv4/ip_nonlocal_bind

The default setting for ip_nonlocal_bind on most Linux systems will be '0', but in my case, the build machine was also a test machine for a fail-over setup, which required ip_nonlocal_bind to be set to '1'.

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Thu Jan 13 15:17:52 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Thu, 13 Jan 2011 15:17:52 -0000
Subject: [Varnish] #845: Health checks get duplicated when loading a new config
Message-ID: <044.528c4d3e8335b9cb7d2ad6cb76204380@varnish-cache.org>

#845: Health checks get duplicated when loading a new config
------------------------------------+---------------------------------------
 Reporter: johnnyh                  |      Type: defect
   Status: new                      |  Priority: normal
Milestone:                          | Component: varnishd
  Version: trunk                    |  Severity: normal
 Keywords: health check duplicate   |
------------------------------------+---------------------------------------
Summary: When you reload a config, and then do vcl.discard on the old config, the health checks sometimes get broken. It is related to ticket 834 ( http://www.varnish-cache.org/trac/ticket/834 ), but the final word on 834 was that doing a vcl.discard fixes the problem. Apparently it does not do so all the time. A workaround for this issue is to restart varnish, but that is a really nasty solution because the cache gets flushed, and it is also not as 'safe' as just doing a 'reload' of Varnish.

Now first some system details:

Varnish v2.1.3
Intel(R) Xeon(R) CPU X5670 @ 2.93GHz (in a VMware virtual machine)
64-bit
4G RAM
Linux kernel 2.6.18-194.26.1.el5
RHEL 5.5 completely up-to-date
Custom VCL (described below)

Here is how to try and reproduce it:

- Start with a working config, with a single backend, with a health check that returns 'healthy' all the time.
- Now change the IP address of the backend to something that is certainly not a healthy backend, like 1.2.3.4.
- Load the new config and start using it:

# DATE=`date +%s` ; varnishadm -T 127.0.0.1:6082 vcl.load reload${DATE} /etc/varnish/test-config.vcl; varnishadm -T 127.0.0.1:6082 vcl.use reload${DATE}

- We now have a situation where varnishlog shows our backend as healthy and sick at the same time:

# varnishlog
0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.012475 0.012311 HTTP/1.1 200 OK
0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.008161 0.011274 HTTP/1.1 200 OK
0 Backend_health - test_site Still sick ------- 0 4 5 0.000000 0.000000
0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.011735 0.011389 HTTP/1.1 200 OK

- Let's check what configs varnish thinks it knows about:

# varnishadm -T 127.0.0.1:6082 vcl.list
available 105 boot
active 1 reload1294928381

- According to ticket 834 we must now discard the old configurations, of which there is only one in this case:

# varnishadm -T 127.0.0.1:6081 vcl.discard boot

- The problem now exists here: sometimes, the discarded configuration does not 'disappear' from the list of available configurations, but remains there in the state 'discarded':

# varnishadm -T 127.0.0.1:6082 vcl.list
discarded 105 boot
active 1 reload1294928381

- The real problem lies here: the backend checks are now kaput. varnishlog shows the backend as healthy and sick at the same time.

# varnishlog
0 Backend_health - test_site Still sick ------- 0 4 5 0.000000 0.000000
0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.007989 0.010539 HTTP/1.1 200 OK
0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.014861 0.011620 HTTP/1.1 200 OK
0 Backend_health - test_site Still sick ------- 0 4 5 0.000000 0.000000
0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.011745 0.011651 HTTP/1.1 200 OK

- There appears to be no way to fix this situation other than restarting varnishd.
- I have been able to reproduce this problem a few times, but not consistently.
It seems this problem shows up when you use vcl.load / vcl.use / vcl.discard in rapid succession. If you work really slowly while doing the reload/discard cycle, you will probably not find this bug.

The way I reload and discard my configs is by having the following functions in my init.d script, so that all I have to do is call "/etc/init.d/varnish reload ; /etc/init.d/varnish discard". Here is the code I use in the init script:

vcl_reload() {
    echo "Reloading Varnish VCL..."
    DATE=`date +%s`
    varnishadm -T $HOSTPORT vcl.load reload${DATE} $VARNISH_VCL_CONF || vcl_exit 1 "Error compiling config $VARNISH_VCL_CONF"
    varnishadm -T $HOSTPORT vcl.use reload${DATE} || vcl_exit 1 "Error loading config $VARNISH_VCL_CONF"
    vcl_exit 0 "VCL reloaded successfully."
}

vcl_discard_all() {
    echo "Discarding old configurations..."
    COUNT=`varnishadm -T $HOSTPORT vcl.list | grep -v ^$ | grep active -B1 | wc -l`
    if [ $COUNT -le 1 ] ; then vcl_exit 1 "Error: There are no old configurations to discard." ; fi
    varnishadm -T $HOSTPORT vcl.list | grep -v ^$ | while read CONFIG ; do
        if [ `echo "$CONFIG" | awk '{print $1}'` == "available" ] ; then
            varnishadm -T $HOSTPORT vcl.discard `echo "$CONFIG" | awk '{print $3}'`
        else
            break
        fi
    done
    vcl_exit 0 "Old configurations were successfully discarded."
}

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Thu Jan 13 15:23:52 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Thu, 13 Jan 2011 15:23:52 -0000
Subject: [Varnish] #845: Health checks get duplicated when loading a new config
In-Reply-To: <044.528c4d3e8335b9cb7d2ad6cb76204380@varnish-cache.org>
References: <044.528c4d3e8335b9cb7d2ad6cb76204380@varnish-cache.org>
Message-ID: <053.8f92eace4d4d9786d3115bb7ec7f8501@varnish-cache.org>

#845: Health checks get duplicated when loading a new config
------------------------------------+---------------------------------------
 Reporter: johnnyh                  |      Type: defect
   Status: new                      |  Priority: normal
Milestone:                          | Component: varnishd
  Version: trunk                    |  Severity: normal
 Keywords: health check duplicate   |
------------------------------------+---------------------------------------
Comment(by johnnyh):

NOTE: In the examples I used both admin port 6081 and port 6082. This is merely a copy/paste error on my part. On my test machine I used 6081, but forgot to change that to 6082 in some of the above examples. It is certainly not the cause of the bug I am experiencing in this ticket.
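[Editorial note] The discard loop in the init script above hinges on picking out the configurations that `vcl.list` reports as 'available'. A minimal, self-contained sketch of that selection step, run against the sample listing from the ticket (host/port omitted; in real use you would pipe in `varnishadm -T 127.0.0.1:6082 vcl.list` instead):

```shell
# Select the names of VCL configurations in the 'available' state;
# these are the candidates for `varnishadm vcl.discard`.
# The sample listing below is copied from the ticket above.
vcl_list_sample='available 105 boot
active 1 reload1294928381'

discardable=$(printf '%s\n' "$vcl_list_sample" | awk '$1 == "available" { print $3 }')
echo "$discardable"
# prints: boot
```

The active configuration is deliberately skipped, since discarding it would fail; only 'available' entries are safe to discard.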
-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Fri Jan 14 09:56:28 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 14 Jan 2011 09:56:28 -0000
Subject: [Varnish] #846: Let varnishd continue running when encountering unused backends in a configuration
Message-ID: <046.9edf7c9c3b91413c38ba35f8d2efbcd8@varnish-cache.org>

#846: Let varnishd continue running when encountering unused backends in a configuration
-------------------------------------+--------------------------------------
 Reporter: jhalfmoon                 |      Type: enhancement
   Status: new                       |  Priority: normal
Milestone:                           | Component: varnishd
  Version: 2.1.4                     |  Severity: normal
 Keywords: unused backend exit vcl   |
-------------------------------------+--------------------------------------
Varnishd (we're currently using 2.1.4) exits when encountering unused backends in a configuration. It throws an error and exits like so:

Message from VCC-compiler:
Warning: Unused backend www_testsite_com, defined:
(/etc/varnish/sites/testsite_com Line 3 Pos 9)
backend www_testsite_com { .host = "1.2.3.4"; }
--------####################---------------------------------------------------------------------
Running VCC-compiler failed, exit 1
VCL compilation failed

It would be nice if Varnishd had an option like '--continue-on-warning' that makes Varnish continue running when encountering a non-fatal situation like the one above.

The reason this feature is desirable is that we are running a VCL with lots of inline C code, and among other things, the backends get checked and set in this C code. So in fact there are no unused backends in our VCL, but Varnish just does not see us using them. We currently use the following code to work around this problem by creating a dummy reference to the backends, but it makes the VCLs really messy, and we have a lot of backends.
Here's the workaround:

sub vcl_error {
    # the code within the if-block never gets executed
    if (req.http.host ~ "^kludge$") {
        set req.backend = www_testsite_com ;
        set req.backend = etc..etc...etc...
    }

As far as I can see, a fix for this issue is very unintrusive, but then again I don't have the overview of the full-time developers. I can do a quick and dirty fix of the problem by preventing Varnish from exiting on an 'Unused object' error, using the following patch:

diff --git a/lib/libvcl/vcc_xref.c b/lib/libvcl/vcc_xref.c
index c9b0418..9c5547b 100644
--- a/lib/libvcl/vcc_xref.c
+++ b/lib/libvcl/vcc_xref.c
@@ -168,9 +168,11 @@ vcc_CheckReferences(struct tokenlist *tl)
 			continue;
 		}
-		vsb_printf(tl->sb, "Unused %s %.*s, defined:\n",
+		vsb_printf(tl->sb, "Warning: Unused %s %.*s, defined:\n",
 		    type, PF(r->name));
 		vcc_ErrWhere(tl, r->name);
+		tl->err=0; // Patch: This unsets the errorcode that was set by vcc_ErrWhere
+		nerr=0; // Patch: This unsets the errorcode set by the above 'nerr++'
 	}
 	return (nerr);
 }

But I'd rather be using vanilla Varnishd source code for our production machines. Thanks for your time, and I hope this helps.
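[Editorial note] With many backends, the dummy-reference block above can also be generated rather than hand-maintained. A small sketch; the backend names here are placeholders, not taken from the ticket:

```shell
# Emit a vcl_error stub that references every backend once, so the
# VCC compiler's unused-object check is satisfied. As in the workaround
# above, the host match "^kludge$" is never expected to fire.
backends='www_testsite_com
www_othersite_com'

stub=$(
    printf 'sub vcl_error {\n    if (req.http.host ~ "^kludge$") {\n'
    printf '%s\n' "$backends" | while read -r b; do
        printf '        set req.backend = %s;\n' "$b"
    done
    printf '    }\n}\n'
)
printf '%s\n' "$stub"
```

The generated stub can then be appended to the deployed VCL by whatever templating step builds the configuration.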
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jan 14 09:59:42 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 14 Jan 2011 09:59:42 -0000 Subject: [Varnish] #846: Let varnishd continue running when encountering unused backends in a configuration In-Reply-To: <046.9edf7c9c3b91413c38ba35f8d2efbcd8@varnish-cache.org> References: <046.9edf7c9c3b91413c38ba35f8d2efbcd8@varnish-cache.org> Message-ID: <055.4b6ae60d8f63f305a67781f97d5991e1@varnish-cache.org> #846: Let varnishd continue running when encountering unused backends in a configuration -------------------------------------+-------------------------------------- Reporter: jhalfmoon | Type: enhancement Status: new | Priority: normal Milestone: | Component: varnishd Version: 2.1.4 | Severity: normal Keywords: unused backend exit vcl | -------------------------------------+-------------------------------------- Comment(by jhalfmoon): I added the patch as an attachment because I see that linefeeds somehow get messed up in my Trac reports. My apologies for that. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jan 14 14:25:35 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 14 Jan 2011 14:25:35 -0000 Subject: [Varnish] #847: vcl_pipe() closes connection on POST requests on timeout 60s Message-ID: <043.7ca131263f9cf6243ae46b42836def7f@varnish-cache.org> #847: vcl_pipe() closes connection on POST requests on timeout 60s ---------------------------------+------------------------------------------ Reporter: werdan | Type: defect Status: new | Priority: normal Milestone: Varnish 2.1 release | Component: build Version: trunk | Severity: normal Keywords: | ---------------------------------+------------------------------------------ Using popular configuration for vcl_recv(): # do not cache POST requests if (req.request == "POST") { return (pass); } and initiating POST request to backend - I got connection closed in 60seconds. Varnish backend is configured like that: backend default { .host = "127.0.0.1"; .port = "8088"; .connect_timeout = 600s; .first_byte_timeout = 600s; .between_bytes_timeout = 100s; } I use Apache 2.2.3 as backend. When requesting directly Apache without Varnish - no problem arises. The only villain left is Varnish. If return(pipe); is changed to return(pass) -> requests are proceeded correctly. 
We use Ubuntu on Amazon cloud: #uname -a Linux ip-10-204-51-237 2.6.35-24-virtual #42-Ubuntu SMP Thu Dec 2 05:15:26 UTC 2010 x86_64 GNU/Linux -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Jan 15 17:38:29 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 15 Jan 2011 17:38:29 -0000 Subject: [Varnish] #805: Negative ReqEnd accept time (ESI enabled) and hanging request In-Reply-To: <043.a1d082f96df5a220dbf6111b921708bb@varnish-cache.org> References: <043.a1d082f96df5a220dbf6111b921708bb@varnish-cache.org> Message-ID: <052.59a03029d6841ab74130da9af80f77c1@varnish-cache.org> #805: Negative ReqEnd accept time (ESI enabled) and hanging request ----------------------+----------------------------------------------------- Reporter: tesdal | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.1.3 Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by kriller): I'm experiencing the same issue - both on 2.1.3 and latest 2.1.x branch from subversion, except that I do not have problems with hanging requests, which indicates to me that this could be 2 separate issues. -- Ticket URL: Varnish The Varnish HTTP Accelerator From advertising at freegsm.ro Tue Jan 11 07:57:29 2011 From: advertising at freegsm.ro (Mihai Valentin) Date: Tue, 11 Jan 2011 09:57:29 +0200 Subject: [Varnish] #835: Varnish stops receiving incoming connections, but the process is still up In-Reply-To: <052.3e1bb0fc826c2be19030b89a5ea835d6@varnish-cache.org> References: <043.0b306324d23af01d5f961c07ada4c811@varnish-cache.org> <052.3e1bb0fc826c2be19030b89a5ea835d6@varnish-cache.org> Message-ID: Hi. First of all, it wasn't a varnish bug. I use Varnish + Monit, and Monit periodically makes HTTP requests to see if the site works, and if not, restart varnish. 
Unfortunately, the SQL server behind it was slow, and it took more than monit timeout to load, so monit decided to restart varnish. Once again, I confirm this was not a varnish issue. Thank you very much. On Mon, Jan 10, 2011 at 10:38 AM, Varnish wrote: > #835: Varnish stops receiving incoming connections, but the process is > still up > > ----------------------+----------------------------------------------------- > Reporter: blamer | Owner: kristian > Type: defect | Status: assigned > Priority: normal | Milestone: Varnish 2.1 release > Component: varnishd | Version: 2.1.4 > Severity: major | Keywords: broken pipe, freeze, crash > > ----------------------+----------------------------------------------------- > Changes (by kristian): > > * status: new => assigned > > > Comment: > > Have you had a chance to look further at this? I'm unable to replicate the > issue, thus rely on your feedback if there is to be any chance of fixing > it beyond random chance... > > -- > Ticket URL: > Varnish > The Varnish HTTP Accelerator > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From varnish-bugs at varnish-cache.org Mon Jan 17 07:38:36 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Jan 2011 07:38:36 -0000 Subject: [Varnish] #843: Sometime ESI are not interpreted In-Reply-To: <045.c142446fef74c9da246ccdcdb72ed611@varnish-cache.org> References: <045.c142446fef74c9da246ccdcdb72ed611@varnish-cache.org> Message-ID: <054.10308795879cd8d370c691b69ed775fe@varnish-cache.org> #843: Sometime ESI are not interpreted -----------------------+---------------------------------------------------- Reporter: nidosaur | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 2.1.3 | Severity: normal Resolution: invalid | Keywords: -----------------------+---------------------------------------------------- Changes (by tfheen): * status: new => closed * resolution: => invalid Comment: Closing, as the submitter discovered it was a configuration error. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 17 07:39:11 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Jan 2011 07:39:11 -0000 Subject: [Varnish] #846: Let varnishd continue running when encountering unused backends in a configuration In-Reply-To: <046.9edf7c9c3b91413c38ba35f8d2efbcd8@varnish-cache.org> References: <046.9edf7c9c3b91413c38ba35f8d2efbcd8@varnish-cache.org> Message-ID: <055.1babce7edd5ae5a09eac812c754f7cc6@varnish-cache.org> #846: Let varnishd continue running when encountering unused backends in a configuration ------------------------+--------------------------------------------------- Reporter: jhalfmoon | Type: enhancement Status: closed | Priority: normal Milestone: | Component: varnishd Version: 2.1.4 | Severity: normal Resolution: fixed | Keywords: unused backend exit vcl ------------------------+--------------------------------------------------- Changes (by tfheen): * status: new => closed * resolution: => fixed Comment: This has already been fixed in trunk 
where you can set a parameter to turn these errors into warnings. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 17 09:39:37 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Jan 2011 09:39:37 -0000 Subject: [Varnish] #835: Varnish stops receiving incoming connections, but the process is still up In-Reply-To: <043.0b306324d23af01d5f961c07ada4c811@varnish-cache.org> References: <043.0b306324d23af01d5f961c07ada4c811@varnish-cache.org> Message-ID: <052.b6b698f07db83ed5653e2555b8135a2a@varnish-cache.org> #835: Varnish stops receiving incoming connections, but the process is still up ----------------------------------------+----------------------------------- Reporter: blamer | Owner: kristian Type: defect | Status: closed Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: 2.1.4 Severity: major | Resolution: invalid Keywords: broken pipe, freeze, crash | ----------------------------------------+----------------------------------- Changes (by kristian): * status: assigned => closed * resolution: => invalid Comment: Closing this, based on the following mail to -bugs: {{{ Date: Tue, 11 Jan 2011 09:57:29 +0200 From: Mihai Valentin Hi. First of all, it wasn't a varnish bug. I use Varnish + Monit, and Monit periodically makes HTTP requests to see if the site works, and if not, restart varnish. Unfortunately, the SQL server behind it was slow, and it took more than monit timeout to load, so monit decided to restart varnish. Once again, I confirm this was not a varnish issue. }}} (For future reference: Mail to -bugs often get lost, as it's not currently visible in trac. 
-bugs is meant mainly as a read-only list) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 17 09:52:25 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Jan 2011 09:52:25 -0000 Subject: [Varnish] #847: vcl_pipe() closes connection on POST requests on timeout 60s In-Reply-To: <043.7ca131263f9cf6243ae46b42836def7f@varnish-cache.org> References: <043.7ca131263f9cf6243ae46b42836def7f@varnish-cache.org> Message-ID: <052.e551826e12d0a5b889987dda506b6d0d@varnish-cache.org> #847: vcl_pipe() closes connection on POST requests on timeout 60s ----------------------+----------------------------------------------------- Reporter: werdan | Owner: kristian Type: defect | Status: new Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Changes (by kristian): * owner: => kristian * component: build => varnishd Comment: You seem to be mixing pipe and pass in the report. You first state that pass in vcl_recv times out, then go on to say that switching from pipe to pass solves the problem - which is it? Also: Do you use Connection = close in vcl_pipe? Please attach varnishlog output. How long does it normally take to execute the request? And a fun fact: unless your server is on Venus, a 600s connect timeout is way over the top. I did the math - most of the time, 600s is enough to reach Venus, but it'll take a bit more to reach Mars. The sun is 8 minutes one way. Technically, we do support interplanetary servers in Varnish, but we don't do regular tests with it and thus can't guarantee the quality of service. 
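For reference, the Connection = close idiom the comment above asks about is usually written roughly like this in 2.1-era VCL; this is a sketch of the common pattern, not the reporter's actual configuration:

```vcl
sub vcl_pipe {
    # Ask the backend to close its connection when the piped request
    # is done, so later requests arriving on the same client
    # connection go through VCL again instead of being piped blindly.
    set bereq.http.Connection = "close";
}
```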
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 17 10:38:04 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Jan 2011 10:38:04 -0000 Subject: [Varnish] #845: Health checks get duplicated when loading a new config In-Reply-To: <044.528c4d3e8335b9cb7d2ad6cb76204380@varnish-cache.org> References: <044.528c4d3e8335b9cb7d2ad6cb76204380@varnish-cache.org> Message-ID: <053.0e2064bdcf87130cd5e60da4d5b6af90@varnish-cache.org> #845: Health checks get duplicated when loading a new config ----------------------+----------------------------------------------------- Reporter: johnnyh | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: health check duplicate ----------------------+----------------------------------------------------- Changes (by martin): * owner: => martin Old description: > Summary: When you reload a config, and the do vcl.discard on the old > config, the health checks sometimes get broken. It is related to report > ticket 834 ( http://www.varnish-cache.org/trac/ticket/834 ) , but the > final word on 834 was that doing a vcl.discard fixes the problem. > Apparently it does not do so all the time. A workaround for this issue is > to restart varnish, but that is a really nasty solution because the cache > gets flushed and it is also not as 'safe' as just doing a 'reload' of > Varnish. > > Now first some system details: > > Varnish v2.1.3 > Intel(R) Xeon(R) CPUX5670 @ 2.93GHz (In a VMware virtual machine) > 64-bit > 4G RAM > Linux kernel 2.6.18-194.26.1.el5 > RHEL 5.5 completely up-to-date > Custom VCL (described below) > > Here is how to try and reproduce it: > > - Start with a working config, with a single backend, with a health check > that returns 'healthy' all the time. > - Now change the IP address of the backend to something that is certainly > not a healthy backend, like 1.2.3.4. 
> - Load the new config and start using it: > > # DATE=`date +%s` ; varnishadm -T 127.0.0.1:6082 vcl.load reload${DATE} > /etc/varnish/test-config.vcl; varnishadm -T 127.0.0.1:6082 vcl.use > reload${DATE} > > - We now have a situation where Varnishlog shows our backend as healthy > and sick at the same time: > > # varnishlog > 0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.012475 > 0.012311 HTTP/1.1 200 OK > 0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.008161 > 0.011274 HTTP/1.1 200 OK > 0 Backend_health - test_site Still sick ------- 0 4 5 0.000000 > 0.000000 > 0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.011735 > 0.011389 HTTP/1.1 200 OK > > - Let's check what configs varnish thinks it knows about: > > # varnishadm -T 127.0.0.1:6082 vcl.list > available 105 boot > active 1 reload1294928381 > > - According to ticket 834 we must now discard the old configurations, > which is only one in this case: > > # varnishadm -T 127.0.0.1:6081 vcl.discard boot > > - The problem now exists here: Sometimes, the discarded configuration > does not 'disappear' from the list of available configurations, but it > remains there in the state 'discarded' > > # varnishadm -T 127.0.0.1:6082 vcl.list > discarded 105 boot > active 1 reload1294928381 > > - The real problem lies here: The backend checks are now kaput. > Varnishlogs shows the backend as healthy and sick at the same time. > > # varnishlog > 0 Backend_health - test_site Still sick ------- 0 4 5 0.000000 > 0.000000 > 0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.007989 > 0.010539 HTTP/1.1 200 OK > 0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.014861 > 0.011620 HTTP/1.1 200 OK > 0 Backend_health - test_site Still sick ------- 0 4 5 0.000000 > 0.000000 > 0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.011745 > 0.011651 HTTP/1.1 200 OK > > - There appears to be no way to fix this situation other than restarting > Varnishd. 
> > - I have been able to reproduce this problem a few times, but not > consistently. It seems this problem shows up when you use vcl.load-vcl > .use-vcl.discard in rapid succession. If you work really slowly while > doing the reload/discard cycle, you will probably not find this bug. The > way I reload and discard my configs, is by having the following script in > my init.d script, so that all I have to do is call "/etc/init.d/varnish > reload ; /etc/init.d/varnish discard". Here is the code I use in the init > script: > > vcl_reload() { > echo "Reloading Varnish VCL..." > DATE=`date +%s` > varnishadm -T $HOSTPORT vcl.load reload${DATE} $VARNISH_VCL_CONF || > vcl_exit 1 "Error compiling config $VARNISH_VCL_CONF" > varnishadm -T $HOSTPORT vcl.use reload${DATE} || vcl_exit 1 "Error > loading config $VARNISH_VCL_CONF" > vcl_exit 0 "VCL reloaded succesfuly." > } > > vcl_discard_all() { > echo "Discarding old configurations..." > COUNT=`varnishadm -T $HOSTPORT vcl.list | grep -v ^$ | grep active > -B1 | wc -l` > if [ $COUNT -le 1 ] ; then vcl_exit 1 "Error: There are no old > configurations to discard." ; fi > varnishadm -T $HOSTPORT vcl.list | grep -v ^$ | while read CONFIG ; > do > if [ `echo "$CONFIG" | awk '{print $1}'` == "available" ] ; then > varnishadm -T $HOSTPORT vcl.discard `echo "$CONFIG" | awk > '{print $3}'` > else > break > fi > done > vcl_exit 0 "Old configurations were succesfully discarded." > } New description: Summary: When you reload a config, and the do vcl.discard on the old config, the health checks sometimes get broken. It is related to report ticket 834 ( http://www.varnish-cache.org/trac/ticket/834 ) , but the final word on 834 was that doing a vcl.discard fixes the problem. Apparently it does not do so all the time. A workaround for this issue is to restart varnish, but that is a really nasty solution because the cache gets flushed and it is also not as 'safe' as just doing a 'reload' of Varnish. 
Now first some system details: Varnish v2.1.3 Intel(R) Xeon(R) CPUX5670 @ 2.93GHz (In a VMware virtual machine) 64-bit 4G RAM Linux kernel 2.6.18-194.26.1.el5 RHEL 5.5 completely up-to-date Custom VCL (described below) Here is how to try and reproduce it: - Start with a working config, with a single backend, with a health check that returns 'healthy' all the time. - Now change the IP address of the backend to something that is certainly not a healthy backend, like 1.2.3.4. - Load the new config and start using it: # DATE=`date +%s` ; varnishadm -T 127.0.0.1:6082 vcl.load reload${DATE} /etc/varnish/test-config.vcl; varnishadm -T 127.0.0.1:6082 vcl.use reload${DATE} - We now have a situation where Varnishlog shows our backend as healthy and sick at the same time: # varnishlog 0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.012475 0.012311 HTTP/1.1 200 OK 0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.008161 0.011274 HTTP/1.1 200 OK 0 Backend_health - test_site Still sick ------- 0 4 5 0.000000 0.000000 0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.011735 0.011389 HTTP/1.1 200 OK - Let's check what configs varnish thinks it knows about: # varnishadm -T 127.0.0.1:6082 vcl.list available 105 boot active 1 reload1294928381 - According to ticket 834 we must now discard the old configurations, which is only one in this case: # varnishadm -T 127.0.0.1:6081 vcl.discard boot - The problem now exists here: Sometimes, the discarded configuration does not 'disappear' from the list of available configurations, but it remains there in the state 'discarded' # varnishadm -T 127.0.0.1:6082 vcl.list discarded 105 boot active 1 reload1294928381 - The real problem lies here: The backend checks are now kaput. Varnishlogs shows the backend as healthy and sick at the same time. 
# varnishlog 0 Backend_health - test_site Still sick ------- 0 4 5 0.000000 0.000000 0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.007989 0.010539 HTTP/1.1 200 OK 0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.014861 0.011620 HTTP/1.1 200 OK 0 Backend_health - test_site Still sick ------- 0 4 5 0.000000 0.000000 0 Backend_health - test_site Still healthy 4--X-RH 5 4 5 0.011745 0.011651 HTTP/1.1 200 OK - There appears to be no way to fix this situation other than restarting Varnishd. - I have been able to reproduce this problem a few times, but not consistently. It seems this problem shows up when you use vcl.load-vcl .use-vcl.discard in rapid succession. If you work really slowly while doing the reload/discard cycle, you will probably not find this bug. The way I reload and discard my configs, is by having the following script in my init.d script, so that all I have to do is call "/etc/init.d/varnish reload ; /etc/init.d/varnish discard". Here is the code I use in the init script: vcl_reload() { echo "Reloading Varnish VCL..." DATE=`date +%s` varnishadm -T $HOSTPORT vcl.load reload${DATE} $VARNISH_VCL_CONF || vcl_exit 1 "Error compiling config $VARNISH_VCL_CONF" varnishadm -T $HOSTPORT vcl.use reload${DATE} || vcl_exit 1 "Error loading config $VARNISH_VCL_CONF" vcl_exit 0 "VCL reloaded succesfuly." } vcl_discard_all() { echo "Discarding old configurations..." COUNT=`varnishadm -T $HOSTPORT vcl.list | grep -v ^$ | grep active -B1 | wc -l` if [ $COUNT -le 1 ] ; then vcl_exit 1 "Error: There are no old configurations to discard." ; fi varnishadm -T $HOSTPORT vcl.list | grep -v ^$ | while read CONFIG ; do if [ `echo "$CONFIG" | awk '{print $1}'` == "available" ] ; then varnishadm -T $HOSTPORT vcl.discard `echo "$CONFIG" | awk '{print $3}'` else break fi done vcl_exit 0 "Old configurations were succesfully discarded." 
} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 17 10:40:14 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Jan 2011 10:40:14 -0000 Subject: [Varnish] #845: Health checks get duplicated when loading a new config In-Reply-To: <044.528c4d3e8335b9cb7d2ad6cb76204380@varnish-cache.org> References: <044.528c4d3e8335b9cb7d2ad6cb76204380@varnish-cache.org> Message-ID: <053.c84e9efb31de413d412701d028779012@varnish-cache.org> #845: Health checks get duplicated when loading a new config ----------------------+----------------------------------------------------- Reporter: johnnyh | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: health check duplicate ----------------------+----------------------------------------------------- Comment(by martin): Hello johnnyh, Have you tried this on trunk as well, and does it happen there too? These code paths have seen a lot of rework in trunk so it would be interesting to know if this problem is there too. I will have a look and see if I can reproduce what you are experiencing on 2.1. 
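The vcl.list parsing that the reporter's vcl_discard_all function relies on can be exercised without a running varnishd; a minimal sketch, where the sample output is hypothetical but follows the 2.1-era format shown in the report:

```shell
# Hypothetical vcl.list output in the format shown in the report:
#   <status> <refcount> <name>
LIST='available 105 boot
active 1 reload1294928381'

# Pick out the names of all "available" configs - the same selection
# vcl_discard_all feeds to vcl.discard.
OLD=$(printf '%s\n' "$LIST" | awk '$1 == "available" { print $3 }')
echo "$OLD"
```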
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 17 10:49:26 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Jan 2011 10:49:26 -0000 Subject: [Varnish] #845: Health checks get duplicated when loading a new config In-Reply-To: <044.528c4d3e8335b9cb7d2ad6cb76204380@varnish-cache.org> References: <044.528c4d3e8335b9cb7d2ad6cb76204380@varnish-cache.org> Message-ID: <053.a6505c4df2136b44bf0107abc71527d6@varnish-cache.org> #845: Health checks get duplicated when loading a new config ----------------------+----------------------------------------------------- Reporter: johnnyh | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: health check duplicate ----------------------+----------------------------------------------------- Comment(by jhalfmoon): Hi Martin, I've not tried reproducing this issue on trunk yet. I'll try to make some time this week to test that. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 17 21:49:21 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Jan 2011 21:49:21 -0000 Subject: [Varnish] #536: Suggested regsub example for Plone VirtualHostBase replacement In-Reply-To: <042.7f9af009b98ac47cf8d3c3fd964c4f5b@varnish-cache.org> References: <042.7f9af009b98ac47cf8d3c3fd964c4f5b@varnish-cache.org> Message-ID: <051.429a8dc3a9b1b1f6db09dbf33bc7d6b6@varnish-cache.org> #536: Suggested regsub example for Plone VirtualHostBase replacement ---------------------------+------------------------------------------------ Reporter: ned14 | Owner: kristian Type: documentation | Status: assigned Priority: lowest | Milestone: Varnish 2.1 release Component: documentation | Version: trunk Severity: trivial | Keywords: ---------------------------+------------------------------------------------ Comment(by ned14): Replying to [comment:5 kristian]: > If this is mentioned on our wiki, you should be able to edit it yourself (pending edit-bits). If it's mentioned elsewhere, I'd appreciate it if you could supply some links and I'll update the examples. I've activated my account and can edit existing pages but cannot see how to create a new VCL example page. Suggestions? 
Niall -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jan 18 08:01:44 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 18 Jan 2011 08:01:44 -0000 Subject: [Varnish] #844: Build error during test c00003.vtc In-Reply-To: <044.a3953edd6e744d182a68e3255c139275@varnish-cache.org> References: <044.a3953edd6e744d182a68e3255c139275@varnish-cache.org> Message-ID: <053.ae77df5f75904e33d4bf2b4894a6f7f1@varnish-cache.org> #844: Build error during test c00003.vtc ----------------------+----------------------------------------------------- Reporter: johnnyh | Type: defect Status: closed | Priority: low Milestone: | Component: build Version: 2.1.4 | Severity: minor Resolution: fixed | Keywords: build fail ip_nonlocal_bind c00003.vtc ----------------------+----------------------------------------------------- Changes (by tfheen): * status: new => closed * resolution: => fixed Comment: I just added a comment stating that you need to disable non-local binds in c00003.vtc. If non-local binds are enabled, there's no invalid IP we can try to bind to, so this is the best we can do. 
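For anyone hitting the c00003.vtc failure described above, the relevant knob is the standard Linux non-local-bind sysctl; a sketch (the write requires root):

```shell
# 1 means non-local binds are allowed, which makes the test's
# "bind to an invalid IP must fail" check impossible; it needs 0.
cat /proc/sys/net/ipv4/ip_nonlocal_bind

# To turn it off for the running kernel (root required):
#   sysctl -w net.ipv4.ip_nonlocal_bind=0
```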
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 19 15:11:43 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 19 Jan 2011 15:11:43 -0000 Subject: [Varnish] #536: Suggested regsub example for Plone VirtualHostBase replacement In-Reply-To: <042.7f9af009b98ac47cf8d3c3fd964c4f5b@varnish-cache.org> References: <042.7f9af009b98ac47cf8d3c3fd964c4f5b@varnish-cache.org> Message-ID: <051.5d0355be2f6d4ab221ce312cb132de7e@varnish-cache.org> #536: Suggested regsub example for Plone VirtualHostBase replacement ---------------------------+------------------------------------------------ Reporter: ned14 | Owner: kristian Type: documentation | Status: assigned Priority: lowest | Milestone: Varnish 2.1 release Component: documentation | Version: trunk Severity: trivial | Keywords: ---------------------------+------------------------------------------------ Comment(by kristian): You should be able to edit the page with the list of examples, then just make a link to the (non-existing) example you want to make, save it and follow the link. That should give you a "this page doesn't exist, do you want to make it?" page. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jan 21 12:23:21 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 21 Jan 2011 12:23:21 -0000 Subject: [Varnish] #536: Suggested regsub example for Plone VirtualHostBase replacement In-Reply-To: <042.7f9af009b98ac47cf8d3c3fd964c4f5b@varnish-cache.org> References: <042.7f9af009b98ac47cf8d3c3fd964c4f5b@varnish-cache.org> Message-ID: <051.659adee84bc5cc69424d9917ea510ca8@varnish-cache.org> #536: Suggested regsub example for Plone VirtualHostBase replacement ---------------------------+------------------------------------------------ Reporter: ned14 | Owner: kristian Type: documentation | Status: assigned Priority: lowest | Milestone: Varnish 2.1 release Component: documentation | Version: trunk Severity: trivial | Keywords: ---------------------------+------------------------------------------------ Comment(by ned14): What a weird page creation system. Done at http://www.varnish- cache.org/trac/wiki/VCLExampleRegExStringMunging. You can probably close this issue now thanks. 
Niall -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Jan 23 14:59:57 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 23 Jan 2011 14:59:57 -0000 Subject: [Varnish] #840: Varnish on Cygwin-Windows Platform In-Reply-To: <042.766fbd7234f41043875c033a4974cac8@varnish-cache.org> References: <042.766fbd7234f41043875c033a4974cac8@varnish-cache.org> Message-ID: <051.3c3edda7200e3e0b3d2ef19558d211df@varnish-cache.org> #840: Varnish on Cygwin-Windows Platform -------------------------+-------------------------------------------------- Reporter: jdzst | Type: enhancement Status: closed | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Resolution: worksforme | Keywords: cygwin, windows -------------------------+-------------------------------------------------- Comment(by jdzst): I have created a wiki page with [wiki:VarnishOnCygwinWindows Varnish over Windows (with Cygwin DLL)] information -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Jan 23 15:00:11 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 23 Jan 2011 15:00:11 -0000 Subject: [Varnish] #736: Migration to Cygwin plataform In-Reply-To: <042.6ef2981e8cf7eb068adf1551551220f4@varnish-cache.org> References: <042.6ef2981e8cf7eb068adf1551551220f4@varnish-cache.org> Message-ID: <051.f4af1b678d7065b5f8f7566565a20b41@varnish-cache.org> #736: Migration to Cygwin plataform ----------------------+----------------------------------------------------- Reporter: jdzst | Type: enhancement Status: closed | Priority: normal Milestone: | Component: build Version: | Severity: normal Resolution: invalid | Keywords: cygwin, windows ----------------------+----------------------------------------------------- Comment(by jdzst): I have created a wiki page with [wiki:VarnishOnCygwinWindows Varnish over Windows (with Cygwin DLL)] information -- Ticket URL: Varnish The Varnish HTTP 
Accelerator From varnish-bugs at varnish-cache.org Mon Jan 24 13:13:02 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Jan 2011 13:13:02 -0000 Subject: [Varnish] #536: Suggested regsub example for Plone VirtualHostBase replacement In-Reply-To: <042.7f9af009b98ac47cf8d3c3fd964c4f5b@varnish-cache.org> References: <042.7f9af009b98ac47cf8d3c3fd964c4f5b@varnish-cache.org> Message-ID: <051.861670748afb3637816d15fdf8f434ce@varnish-cache.org> #536: Suggested regsub example for Plone VirtualHostBase replacement ---------------------------+------------------------------------------------ Reporter: ned14 | Owner: kristian Type: documentation | Status: closed Priority: lowest | Milestone: Varnish 2.1 release Component: documentation | Version: trunk Severity: trivial | Resolution: fixed Keywords: | ---------------------------+------------------------------------------------ Changes (by kristian): * status: assigned => closed * resolution: => fixed Comment: Ok, thank you for your patience and thanks for fixing the issue :) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 26 10:58:24 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 26 Jan 2011 10:58:24 -0000 Subject: [Varnish] #848: varnishlog -r seems broken Message-ID: <042.f4d125bf7db6e317d56c6e53ae887681@varnish-cache.org> #848: varnishlog -r seems broken --------------------+------------------------------------------------------- Reporter: perbu | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 3.0 dev Component: build | Version: trunk Severity: normal | Keywords: varnishlog --------------------+------------------------------------------------------- Created a log file like this: ./varnishlog -w /tmp/vlog Waiting a few seconds and breaking (^C). Then I try to read the file like this: ./varnishlog -r /tmp/vlog varnishlog then gives no output and returns with an exit code of 0. 
Strace gives some hints that something is wrong: (..) open("/tmp/vlog", O_RDONLY) = 3 uname({sys="Linux", node="odd", ...}) = 0 What the hell is this: open("/opt/varnish/var/varnish/odd/_.vsm", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=84934656, ...}) = 0 read(4, "6\332y\371\200\0\1\0#\330?M\0\0\0\0\277\23\0\0\300\23\0\0\0\0\20\5\0\0\0\0"..., 65664) = 65664 mmap(NULL, 84934656, PROT_READ, MAP_SHARED, 4, 0) = 0x7fc57f055000 read(3, "\35\0\0)\f\0\0\0", 8) = 8 read(3, "66.249.72.243 40136 773308670\0\0\0", 32) = 32 exit_group(0) = ? I have a log file that I can send to someone. It does however, contain session data so I won't upload it. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 26 10:59:44 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 26 Jan 2011 10:59:44 -0000 Subject: [Varnish] #848: varnishlog -r seems broken In-Reply-To: <042.f4d125bf7db6e317d56c6e53ae887681@varnish-cache.org> References: <042.f4d125bf7db6e317d56c6e53ae887681@varnish-cache.org> Message-ID: <051.820167fd58101836a5b873d87ccf428b@varnish-cache.org> #848: varnishlog -r seems broken --------------------+------------------------------------------------------- Reporter: perbu | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 3.0 dev Component: build | Version: trunk Severity: normal | Keywords: varnishlog --------------------+------------------------------------------------------- Description changed by perbu: Old description: > Created a log file like this: ./varnishlog -w /tmp/vlog > > Waiting a few seconds and breaking (^C). > > Then I try to read the file like this: ./varnishlog -r /tmp/vlog > > varnishlog then gives no output and returns with a exit code of 0. > > Strace gives some hints that something is wrong: > > (..) 
> open("/tmp/vlog", O_RDONLY) = 3 > uname({sys="Linux", node="odd", ...}) = 0 > > What the hell is this: > open("/opt/varnish/var/varnish/odd/_.vsm", O_RDONLY) = 4 > fstat(4, {st_mode=S_IFREG|0644, st_size=84934656, ...}) = 0 > read(4, > "6\332y\371\200\0\1\0#\330?M\0\0\0\0\277\23\0\0\300\23\0\0\0\0\20\5\0\0\0\0"..., > 65664) = 65664 > mmap(NULL, 84934656, PROT_READ, MAP_SHARED, 4, 0) = 0x7fc57f055000 > read(3, "\35\0\0)\f\0\0\0", 8) = 8 > read(3, "66.249.72.243 40136 773308670\0\0\0", 32) = 32 > exit_group(0) = ? > > I have a log file that I can send to someone. It does however, contain > session data so I won't upload it. New description: Created a log file like this: ./varnishlog -w /tmp/vlog Waiting a few seconds and breaking (^C). Then I try to read the file like this: ./varnishlog -r /tmp/vlog varnishlog then gives no output and returns with a exit code of 0. Strace gives some hints that something is wrong: {{{ (..) open("/tmp/vlog", O_RDONLY) = 3 uname({sys="Linux", node="odd", ...}) = 0 }}} What the hell is this: {{{ open("/opt/varnish/var/varnish/odd/_.vsm", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=84934656, ...}) = 0 read(4, "6\332y\371\200\0\1\0#\330?M\0\0\0\0\277\23\0\0\300\23\0\0\0\0\20\5\0\0\0\0"..., 65664) = 65664 mmap(NULL, 84934656, PROT_READ, MAP_SHARED, 4, 0) = 0x7fc57f055000 read(3, "\35\0\0)\f\0\0\0", 8) = 8 read(3, "66.249.72.243 40136 773308670\0\0\0", 32) = 32 exit_group(0) = ? }}} I have a log file that I can send to someone. It does however, contain session data so I won't upload it. 
-- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 26 17:43:14 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 26 Jan 2011 17:43:14 -0000 Subject: [Varnish] #849: Session timeout while receiving POST data from client causes multiple broken backend requests Message-ID: <040.5f9f459ac69310e4298e91826395f877@varnish-cache.org> #849: Session timeout while receiving POST data from client causes multiple broken backend requests -----------------------------------------------------------------------------------+ Reporter: lew | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 2.1.4 | Severity: normal Keywords: 503, post, backend write error: 11 (Resource temporarily unavailable) | -----------------------------------------------------------------------------------+ The default session timeout of 5s was causing a lot of broken POSTs on our backends, and 503 errors in our varnish logs (showing up as "backend write error: 11 (Resource temporarily unavailable)") - particularly with one of our sites that serves mainly mobile clients. It appears that during a POST, if the session timeout is exceeded while varnish is waiting for data from the client, an incomplete/invalid POST request is made to the backend, and then tried again after another 5 seconds. I'm not sure what the 'right' way to handle this would be, but this doesn't seem like good behaviour currently. For the moment I'm working around the issue by piping POSTs. 
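The workaround the reporter mentions, piping POSTs, looks roughly like this in 2.1-era VCL; a sketch of the stated workaround rather than a recommended fix, since piped requests bypass caching logic on the response side:

```vcl
sub vcl_recv {
    # Hand request bodies straight through to the backend, so the
    # session timeout cannot fire while varnishd waits on a slow
    # client mid-POST.
    if (req.request == "POST") {
        return (pipe);
    }
}
```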
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jan 26 22:53:30 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 26 Jan 2011 22:53:30 -0000 Subject: [Varnish] #784: Varnish crash at HSH_Lookup() In-Reply-To: <045.78a0c9b5aafa8025a5bddb048c9c85eb@varnish-cache.org> References: <045.78a0c9b5aafa8025a5bddb048c9c85eb@varnish-cache.org> Message-ID: <054.173612d867b2d7c4ebeba86682b834d2@varnish-cache.org> #784: Varnish crash at HSH_Lookup() ----------------------+----------------------------------------------------- Reporter: censored | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.1.3 Severity: normal | Keywords: varnishd HSH_Lookup died ----------------------+----------------------------------------------------- Comment(by rklahn): I was seeing the "Error in munmap()" on Ubuntu 10.04.1 LTS/2.6.32-27-generic. With some help from the #varnish IRC channel, I was able to track this down to varnishd hitting the number of memory maps per process limit. This is tunable in /proc/sys/vm/max_map_count. If you add this to your /etc/sysctl.conf, then do a sysctl -p, you should be able to make this problem go away. 
{{{ # Increase the number of memory maps a process can have vm.max_map_count = 262144 }}} You can monitor the number of memory maps a process has by doing a {{{ # ex: cat /proc/15130/maps | wc -l cat /proc/<pid>/maps | wc -l }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 27 09:04:19 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 27 Jan 2011 09:04:19 -0000 Subject: [Varnish] #842: assert error in hcb_deref In-Reply-To: <042.af9d139c0a165d44a64eea5d9b695272@varnish-cache.org> References: <042.af9d139c0a165d44a64eea5d9b695272@varnish-cache.org> Message-ID: <051.e498ab9d33793f3f3651ee9eccd9b1f9@varnish-cache.org> #842: assert error in hcb_deref --------------------+------------------------------------------------------- Reporter: perbu | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Keywords: --------------------+------------------------------------------------------- Comment(by perbu): This still happens running 54ddaec. {{{ Jan 27 00:27:14 odd varnishd[27895]: Child (15908) died signal=6 Jan 27 00:27:14 odd varnishd[27895]: Child (15908) Panic message: Assert error in hcb_deref(), hash_critbit.c line 411: #012 Condition((oh->waitinglist) == 0) not true. 
#012thread = (cache-timeout) #012ident = Linux,2.6.32-27-generic,x86_64,-sfile,-smalloc,-hcritbit,epoll #012Backtrace: #012 0x42af88: pan_ic+b8#012 0x439614: hcb_deref+264 #012 0x423c8c: HSH_Deref+22c #012 0x41f60f: exp_timer+32f #012 0x42d4db: wrk_bgthread+bb #012 0x7fad99aa09ca: _end+7fad994288b2 #012 0x7fad997fd70d: _end+7fad991855f5#012 Jan 27 00:27:14 odd varnishd[27895]: Child cleanup complete Jan 27 00:27:14 odd varnishd[27895]: child (22932) Started }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 27 10:05:17 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 27 Jan 2011 10:05:17 -0000 Subject: [Varnish] #850: jemalloc fails to init on Ubuntu 8.04 Message-ID: <045.ff30f3e5367faa879e2904892fb3b45f@varnish-cache.org> #850: jemalloc fails to init on Ubuntu 8.04 ----------------------+----------------------------------------------------- Reporter: miohtama | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 2.1.4 | Severity: minor Keywords: | ----------------------+----------------------------------------------------- Hi, Long-term support release of Ubuntu 8.04 does no longer work with Varnish. I suspect this is change in libc or other system library. jemalloc fails to initialize and deadlocks itself. Related bug: https://bugs.launchpad.net/firefox/+bug/333624 By looking the jemalloc source code shipped with Varnish there is a comment: {{{ /* * sysconf(3) would be the preferred method for determining the number * of CPUs, but it uses malloc internally, which causes untennable * recursion during malloc initialization. */ fd = open("/proc/cpuinfo", O_RDONLY); if (fd == -1) return (1); /* Error. 
*/ /* }}} But apparently open() also cause a call to malloc which leads to the situation where jemalloc waits its own mutex: {{{ strace /usr/sbin/varnishd execve("/usr/sbin/varnishd", ["/usr/sbin/varnishd"], [/* 21 vars */]) = 0 brk(0) = 0x675000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb3d030c000 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb3d030a000 access("/etc/ld.so.preload", R_OK) = 0 open("/etc/ld.so.preload", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=18, ...}) = 0 mmap(NULL, 18, PROT_READ|PROT_WRITE, MAP_PRIVATE, 3, 0) = 0x7fb3d0309000 close(3) = 0 open("/lib/libaux.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\20\16\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=8248, ...}) = 0 mmap(NULL, 2103304, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb3cfeee000 mprotect(0x7fb3cfef0000, 2093056, PROT_NONE) = 0 mmap(0x7fb3d00ef000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1000) = 0x7fb3d00ef000 close(3) = 0 munmap(0x7fb3d0309000, 18) = 0 open("/etc/ld.so.cache", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=16743, ...}) = 0 mmap(NULL, 16743, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb3d0305000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/libvarnish.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\3402\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=69592, ...}) = 0 mmap(NULL, 2164896, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb3cfcdd000 mprotect(0x7fb3cfcee000, 2093056, PROT_NONE) = 0 mmap(0x7fb3cfeed000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x10000) = 0x7fb3cfeed000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) 
open("/usr/lib/libvarnishcompat.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0@\7\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=5384, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb3d0304000 mmap(NULL, 2100752, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb3cfadc000 mprotect(0x7fb3cfadd000, 2093056, PROT_NONE) = 0 mmap(0x7fb3cfcdc000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0) = 0x7fb3cfcdc000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/libvcl.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\00002\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=98240, ...}) = 0 mmap(NULL, 2193544, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb3cf8c4000 mprotect(0x7fb3cf8da000, 2097152, PROT_NONE) = 0 mmap(0x7fb3cfada000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x16000) = 0x7fb3cfada000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/libdl.so.2", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0 \16\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=14624, ...}) = 0 mmap(NULL, 2109728, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb3cf6c0000 mprotect(0x7fb3cf6c2000, 2097152, PROT_NONE) = 0 mmap(0x7fb3cf8c2000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7fb3cf8c2000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/libpthread.so.0", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260W\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=130224, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb3d0303000 mmap(NULL, 2208624, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 
0) = 0x7fb3cf4a4000 mprotect(0x7fb3cf4ba000, 2097152, PROT_NONE) = 0 mmap(0x7fb3cf6ba000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x16000) = 0x7fb3cf6ba000 mmap(0x7fb3cf6bc000, 13168, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fb3cf6bc000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/libnsl.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240@\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=93080, ...}) = 0 mmap(NULL, 2198224, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb3cf28b000 mprotect(0x7fb3cf2a1000, 2093056, PROT_NONE) = 0 mmap(0x7fb3cf4a0000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x15000) = 0x7fb3cf4a0000 mmap(0x7fb3cf4a2000, 6864, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fb3cf4a2000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/libm.so.6", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260>\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=526560, ...}) = 0 mmap(NULL, 2621672, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb3cf00a000 mprotect(0x7fb3cf08a000, 2093056, PROT_NONE) = 0 mmap(0x7fb3cf289000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7f000) = 0x7fb3cf289000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/libc.so.6", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\340\342"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=1436976, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb3d0302000 mmap(NULL, 3543672, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb3ceca8000 mprotect(0x7fb3cee00000, 2097152, PROT_NONE) = 0 mmap(0x7fb3cf000000, 20480, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x158000) = 0x7fb3cf000000 mmap(0x7fb3cf005000, 17016, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fb3cf005000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/librt.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240#\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=35784, ...}) = 0 mmap(NULL, 2132968, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb3cea9f000 mprotect(0x7fb3ceaa7000, 2093056, PROT_NONE) = 0 mmap(0x7fb3ceca6000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x7fb3ceca6000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/libpcre.so.3", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240\23\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=154200, ...}) = 0 mmap(NULL, 2249472, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb3ce879000 mprotect(0x7fb3ce89e000, 2097152, PROT_NONE) = 0 mmap(0x7fb3cea9e000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x25000) = 0x7fb3cea9e000 close(3) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb3d0301000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb3d0300000 arch_prctl(ARCH_SET_FS, 0x7fb3d03006e0) = 0 mprotect(0x7fb3cf000000, 12288, PROT_READ) = 0 munmap(0x7fb3d0305000, 16743) = 0 set_tid_address(0x7fb3d0300770) = 8167 set_robust_list(0x7fb3d0300780, 0x18) = 0 futex(0x7fffd830b9ec, 0x81 /* FUTEX_??? 
*/, 1) = 0 rt_sigaction(SIGRTMIN, {0x7fb3cf4a92d0, [], SA_RESTORER|SA_SIGINFO, 0x7fb3cf4b27d0}, NULL, 8) = 0 rt_sigaction(SIGRT_1, {0x7fb3cf4a9350, [], SA_RESTORER|SA_RESTART|SA_SIGINFO, 0x7fb3cf4b27d0}, NULL, 8) = 0 rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0 getrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM_INFINITY}) = 0 getrlimit(RLIMIT_NOFILE, {rlim_cur=1024, rlim_max=1024}) = 0 close(1024) = -1 EBADF (Bad file descriptor) ... open("/proc/cpuinfo", O_RDONLY) = 3 futex(0x7fb3cf8c310c, 0x81 /* FUTEX_??? */, 2147483647) = 0 futex(0x663640, 0x80 /* FUTEX_??? */, 2 }}} I am not sure whether this is an issue with libc, system libraries, jemalloc, Varnish or something else, so I would appreciate a little guidance to narrow down this bug. I have tested both a Varnish source build and the Varnish shipped in the Ubuntu repositories, and they both exhibit this issue. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 27 10:15:21 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 27 Jan 2011 10:15:21 -0000 Subject: [Varnish] #850: jemalloc fails to init on Ubuntu 8.04 In-Reply-To: <045.ff30f3e5367faa879e2904892fb3b45f@varnish-cache.org> References: <045.ff30f3e5367faa879e2904892fb3b45f@varnish-cache.org> Message-ID: <054.01920e444a070aeae5625f4b97730857@varnish-cache.org> #850: jemalloc fails to init on Ubuntu 8.04 ----------------------+----------------------------------------------------- Reporter: miohtama | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 2.1.4 | Severity: minor Keywords: | ----------------------+----------------------------------------------------- Comment(by miohtama): Some more info: This *used* to work perfectly, and there haven't been many changes on the server besides regular system security updates. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 27 10:16:33 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 27 Jan 2011 10:16:33 -0000 Subject: [Varnish] #850: jemalloc fails to init on Ubuntu 8.04 In-Reply-To: <045.ff30f3e5367faa879e2904892fb3b45f@varnish-cache.org> References: <045.ff30f3e5367faa879e2904892fb3b45f@varnish-cache.org> Message-ID: <054.82ec38b6018610d0c458e84c7710a947@varnish-cache.org> #850: jemalloc fails to init on Ubuntu 8.04 ----------------------+----------------------------------------------------- Reporter: miohtama | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 2.1.4 | Severity: minor Keywords: | ----------------------+----------------------------------------------------- Comment(by miohtama): Also, I managed to compile and run Varnish with the --disable-jemalloc flag. However, in this case Varnish suffers all sorts of performance problems and is nowhere near what it should be or what it was before. So I suspect working jemalloc support is critical for smooth Varnish operation. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 27 11:18:15 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 27 Jan 2011 11:18:15 -0000 Subject: [Varnish] #850: jemalloc fails to init on Ubuntu 8.04 In-Reply-To: <045.ff30f3e5367faa879e2904892fb3b45f@varnish-cache.org> References: <045.ff30f3e5367faa879e2904892fb3b45f@varnish-cache.org> Message-ID: <054.cf8a4768d54584cb8df6060891e7f993@varnish-cache.org> #850: jemalloc fails to init on Ubuntu 8.04 ----------------------+----------------------------------------------------- Reporter: miohtama | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 2.1.4 | Severity: minor Keywords: | ----------------------+----------------------------------------------------- Comment(by miohtama): gdb backtrace - looks like libdl is the culprit {{{ #0 0x00007f097c932174 in __lll_lock_wait () from /lib/libpthread.so.0 #1 0x00007f097c92db74 in _L_lock_1007 () from /lib/libpthread.so.0 #2 0x00007f097c92d99e in pthread_mutex_lock () from /lib/libpthread.so.0 #3 0x0000000000445c4e in malloc_init_hard () at jemalloc_linux.c:1367 #4 0x0000000000446bbd in calloc (num=1, size=32) at jemalloc_linux.c:4875 #5 0x00007f097cb4254b in ?? 
() from /lib/libdl.so.2 #6 0x00007f097cb420aa in dlsym () from /lib/libdl.so.2 #7 0x00007f097d37321b in read () from /lib/libaux.so.1 #8 0x0000000000445c95 in malloc_init_hard () at jemalloc_linux.c:4842 #9 0x0000000000447202 in malloc (size=40) at jemalloc_linux.c:4875 #10 0x00007f097d16a3b4 in vsb_new (s=0x0, buf=0x0, length=0, flags=1) at vsb.c:166 #11 0x0000000000440edb in main (argc=1, argv=0x7fff85791f38) at varnishd.c:93 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 27 11:45:14 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 27 Jan 2011 11:45:14 -0000 Subject: [Varnish] #850: jemalloc fails to init on Ubuntu 8.04 In-Reply-To: <045.ff30f3e5367faa879e2904892fb3b45f@varnish-cache.org> References: <045.ff30f3e5367faa879e2904892fb3b45f@varnish-cache.org> Message-ID: <054.3a1683bd5553ac595849121a9fb80711@varnish-cache.org> #850: jemalloc fails to init on Ubuntu 8.04 ----------------------+----------------------------------------------------- Reporter: miohtama | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 2.1.4 | Severity: minor Keywords: | ----------------------+----------------------------------------------------- Comment(by miohtama): Some more info with LD_LIBRARY_PATH=/usr/lib/debug {{{ Starting program: /home/moo/varnish-2.1.4/bin/varnishd/.libs/lt-varnishd [Thread debugging using libthread_db enabled] ^C[New Thread 0x7ff3e4d036e0 (LWP 20494)] Program received signal SIGINT, Interrupt. 
[Switching to Thread 0x7ff3e4d036e0 (LWP 20494)] 0x00007ff3e3eb2174 in __lll_lock_wait () from /usr/lib/debug/libpthread.so.0 Current language: auto; currently asm (gdb) bt #0 0x00007ff3e3eb2174 in __lll_lock_wait () from /usr/lib/debug/libpthread.so.0 #1 0x00007ff3e3eadb74 in _L_lock_1007 () from /usr/lib/debug/libpthread.so.0 #2 0x00007ff3e3ead99e in __pthread_mutex_lock (mutex=0x6663a0) at pthread_mutex_lock.c:103 #3 0x0000000000445c4e in malloc_init_hard () at jemalloc_linux.c:1367 #4 0x0000000000446bbd in calloc (num=1, size=32) at jemalloc_linux.c:4875 #5 0x00007ff3e40c254b in _dlerror_run (operate=0x7ff3e40c20e0 , args=0x7fffecd0cbe0) at dlerror.c:142 #6 0x00007ff3e40c20aa in __dlsym (handle=, name=) at dlsym.c:71 #7 0x00007ff3e48f321b in read () from /lib/libaux.so.1 #8 0x0000000000445c95 in malloc_init_hard () at jemalloc_linux.c:4842 #9 0x0000000000447202 in malloc (size=40) at jemalloc_linux.c:4875 #10 0x00007ff3e46ea3b4 in vsb_new (s=0x0, buf=0x0, length=0, flags=1) at vsb.c:166 #11 0x0000000000440edb in main (argc=1, argv=0x7fffecd10498) at varnishd.c:93 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 27 11:57:21 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 27 Jan 2011 11:57:21 -0000 Subject: [Varnish] #850: jemalloc fails to init on Ubuntu 8.04 In-Reply-To: <045.ff30f3e5367faa879e2904892fb3b45f@varnish-cache.org> References: <045.ff30f3e5367faa879e2904892fb3b45f@varnish-cache.org> Message-ID: <054.2ca177aeb0984f1bed7d39984091ecaa@varnish-cache.org> #850: jemalloc fails to init on Ubuntu 8.04 ----------------------+----------------------------------------------------- Reporter: miohtama | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 2.1.4 | Severity: minor Keywords: | ----------------------+----------------------------------------------------- Comment(by miohtama): Ok. Looks like the issue is that there is dynamic DLL loading error. 
__dlsym tries to give an error message and allocates memory for it {{{ internal_function _dlerror_run (void (*operate) (void *), void *args) { struct dl_action_result *result; /* If we have not yet initialized the buffer do it now. */ __libc_once (once, init); /* Get error string and number. */ if (static_buf != NULL) result = static_buf; else { /* We don't use the static buffer and so we have a key. Use it to get the thread-specific buffer. */ result = __libc_getspecific (key); if (result == NULL) { result = (struct dl_action_result *) calloc (1, sizeof (*result)); if (result == NULL) /* We are out of memory. Since this is no really critical situation we carry on by using the global variable. This might lead to conflicts between the threads but they soon all will have memory problems. */ result = &last_result; else /* Set the tsd. */ __libc_setspecific (key, result); } } }}} No idea yet which DLL triggers this or how to figure it out. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 27 12:31:42 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 27 Jan 2011 12:31:42 -0000 Subject: [Varnish] #850: jemalloc fails to init on Ubuntu 8.04 In-Reply-To: <045.ff30f3e5367faa879e2904892fb3b45f@varnish-cache.org> References: <045.ff30f3e5367faa879e2904892fb3b45f@varnish-cache.org> Message-ID: <054.764ded86943d6ffee40271a20e333bf4@varnish-cache.org> #850: jemalloc fails to init on Ubuntu 8.04 ----------------------+----------------------------------------------------- Reporter: miohtama | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 2.1.4 | Severity: minor Keywords: | ----------------------+----------------------------------------------------- Comment(by miohtama): Ok. I have no idea how to further debug this. Looks like read() triggers a dynamic DLL call which causes the malloc. Did libdl previously operate without a malloc call? 
Looks like it is wrapping all calls to Is it an error condition causing this malloc call? Is it possible to statically link against libc to avoid the dynamic DLL load for read? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jan 27 16:12:16 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 27 Jan 2011 16:12:16 -0000 Subject: [Varnish] #842: assert error in hcb_deref In-Reply-To: <042.af9d139c0a165d44a64eea5d9b695272@varnish-cache.org> References: <042.af9d139c0a165d44a64eea5d9b695272@varnish-cache.org> Message-ID: <051.56edc588e0f05f02f8e2e8cc25a3d4f4@varnish-cache.org> #842: assert error in hcb_deref --------------------+------------------------------------------------------- Reporter: perbu | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | --------------------+------------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: I believe this is fixed in rev 176190e67641de430bbf0ddffe31d5b49cf670d6. Please test and report. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jan 28 00:34:03 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 28 Jan 2011 00:34:03 -0000 Subject: [Varnish] #811: No recent binary packages for 32-bit RHEL/CentOS In-Reply-To: <050.1325b4e88b8e050530222133dcf1869c@varnish-cache.org> References: <050.1325b4e88b8e050530222133dcf1869c@varnish-cache.org> Message-ID: <059.8e648e813403423ed3235f9a7e2a3de5@varnish-cache.org> #811: No recent binary packages for 32-bit RHEL/CentOS ---------------------------+------------------------------------------------ Reporter: chadwackerman | Owner: tfheen Type: enhancement | Status: new Priority: normal | Milestone: Component: build | Version: 2.1.4 Severity: normal | Keywords: ---------------------------+------------------------------------------------ Comment(by chadwackerman): Just installed 2.1.5 from the repo today. Thank you for the build fixes! You can mark this closed. -- Ticket URL: Varnish The Varnish HTTP Accelerator From kbrownfield at google.com Fri Jan 28 04:28:05 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Thu, 27 Jan 2011 20:28:05 -0800 Subject: [Varnish] #850: jemalloc fails to init on Ubuntu 8.04 In-Reply-To: <054.764ded86943d6ffee40271a20e333bf4@varnish-cache.org> References: <045.ff30f3e5367faa879e2904892fb3b45f@varnish-cache.org> <054.764ded86943d6ffee40271a20e333bf4@varnish-cache.org> Message-ID: We've used 2.1.4 heavily on 8.04 LTS with jemalloc with no issues, so I would suggest looking at your environment for a possible cause (mixed library versions, changes to paths (PATH/LD_LIBRARY_PATH/LD_PRELOAD), etc). Also, I'm not sure I would come to the same conclusion you have seeing that strace... but the "..." that was omitted from the trace would probably be the most useful. 
-- kb On Thu, Jan 27, 2011 at 04:31, Varnish wrote: > #850: jemalloc fails to init on Ubuntu 8.04 > > ----------------------+----------------------------------------------------- > Reporter: miohtama | Type: defect > Status: new | Priority: normal > Milestone: | Component: varnishd > Version: 2.1.4 | Severity: minor > Keywords: | > > ----------------------+----------------------------------------------------- > > Comment(by miohtama): > > Ok. I have no idea how to further debug this. Looks like read() triggers > dynamic DLL call which causes the malloc. > > Was libdl before operating without a malloc call? Looks like it is > wrapping all calls to > > Is it an error condition causing this malloc call? > > Is it possible to statically link against libc to avoid dynamic DLL load > for read. > > -- > Ticket URL: > Varnish > The Varnish HTTP Accelerator > > _______________________________________________ > varnish-bugs mailing list > varnish-bugs at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-bugs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From varnish-bugs at varnish-cache.org Fri Jan 28 08:25:20 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 28 Jan 2011 08:25:20 -0000 Subject: [Varnish] #851: bad file descriptor Message-ID: <043.c18a6c81c330acd0839d3e63154e1559@varnish-cache.org> #851: bad file descriptor ----------------------+----------------------------------------------------- Reporter: tfheen | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- varnishd was killed by the oom killer on odd, then got into a state where I could not restart the child using varnishadm: {{{ Jan 28 09:15:35 odd varnishd[22658]: CLI telnet ::1 34163 ::1 6082 Rd auth 028cd075468b1ec7faa47c40e147182acbdf118bb13f810a778ce6690929dca6 Jan 28 09:15:35 odd varnishd[22658]: CLI telnet ::1 34163 ::1 6082 Wr 200 -----------------------------#012Varnish Cache CLI 1.0#012-----------------------------#012Linux,2.6.32-27-generic,x86_64,-sfile,-smalloc,-hcritbit#012#012Type 'help' for command list.#012Type 'quit' to close CLI session.#012Type 'start' to launch worker process. Jan 28 09:15:35 odd varnishd[22658]: CLI telnet ::1 34163 ::1 6082 Rd ping Jan 28 09:15:35 odd varnishd[22658]: CLI telnet ::1 34163 ::1 6082 Wr 200 PONG 1296202535 1.0 Jan 28 09:15:35 odd varnishd[22658]: CLI telnet ::1 34163 ::1 6082 Rd banner Jan 28 09:15:35 odd varnishd[22658]: CLI telnet ::1 34163 ::1 6082 Wr 200 -----------------------------#012Varnish Cache CLI 1.0#012-----------------------------#012Linux,2.6.32-27-generic,x86_64,-sfile,-smalloc,-hcritbit#012#012Type 'help' for command list.#012Type 'quit' to close CLI session.#012Type 'start' to launch worker process. 
Jan 28 09:15:37 odd varnishd[22658]: CLI telnet ::1 34163 ::1 6082 Rd start Jan 28 09:15:37 odd varnishd[22658]: child (29416) Started Jan 28 09:15:37 odd varnishd[22658]: Pushing vcls failed: CLI communication error (hdr) Jan 28 09:15:37 odd varnishd[22658]: Stopping Child Jan 28 09:15:37 odd varnishd[22658]: CLI telnet ::1 34163 ::1 6082 Wr 200 Jan 28 09:15:37 odd varnishd[22658]: Child (29416) died signal=6 Jan 28 09:15:37 odd varnishd[22658]: Child (-1) said Jan 28 09:15:37 odd varnishd[22658]: Child (-1) said Child starts Jan 28 09:15:37 odd varnishd[22658]: Child (-1) said Assert error in vsm_iter_n(), vsm.c line 95: Jan 28 09:15:37 odd varnishd[22658]: Child (-1) said Condition((*pp)->magic == 0x43907b6e) not true. Jan 28 09:15:37 odd varnishd[22658]: Child (-1) said errno = 9 (Bad file descriptor) Jan 28 09:15:37 odd varnishd[22658]: Child cleanup complete Jan 28 09:15:40 odd varnishd[22658]: CLI telnet ::1 34163 ::1 6082 Rd stop Jan 28 09:15:40 odd varnishd[22658]: CLI telnet ::1 34163 ::1 6082 Wr 300 Child in state stopped Jan 28 09:15:42 odd varnishd[22658]: CLI telnet ::1 34163 ::1 6082 Rd start Jan 28 09:15:42 odd varnishd[22658]: child (29417) Started Jan 28 09:15:42 odd varnishd[22658]: Pushing vcls failed: CLI communication error (hdr) Jan 28 09:15:42 odd varnishd[22658]: Stopping Child Jan 28 09:15:42 odd varnishd[22658]: CLI telnet ::1 34163 ::1 6082 Wr 200 Jan 28 09:15:42 odd varnishd[22658]: Child (29417) died signal=6 Jan 28 09:15:42 odd varnishd[22658]: Child (-1) said Jan 28 09:15:42 odd varnishd[22658]: Child (-1) said Child starts Jan 28 09:15:42 odd varnishd[22658]: Child (-1) said Assert error in vsm_iter_n(), vsm.c line 95: Jan 28 09:15:42 odd varnishd[22658]: Child (-1) said Condition((*pp)->magic == 0x43907b6e) not true. 
Jan 28 09:15:42 odd varnishd[22658]: Child (-1) said errno = 9 (Bad file descriptor) Jan 28 09:15:42 odd varnishd[22658]: Child cleanup complete Jan 28 09:15:46 odd varnishd[22658]: CLI telnet ::1 34163 ::1 6082 Rd quit Jan 28 09:15:46 odd varnishd[22658]: CLI telnet ::1 34163 ::1 6082 Wr 500 Closing CLI connection Jan 28 09:16:03 odd varnishd[22658]: CLI telnet ::1 34164 ::1 6082 Rd auth 5fdc28540cb3919917a24d3183bbba15b739a6c370099442822e4cc80fcee701 Jan 28 09:16:03 odd varnishd[22658]: CLI telnet ::1 34164 ::1 6082 Wr 200 -----------------------------#012Varnish Cache CLI 1.0#012-----------------------------#012Linux,2.6.32-27-generic,x86_64,-sfile,-smalloc,-hcritbit#012#012Type 'help' for command list.#012Type 'quit' to close CLI session.#012Type 'start' to launch worker process. Jan 28 09:16:03 odd varnishd[22658]: CLI telnet ::1 34164 ::1 6082 Rd ping Jan 28 09:16:03 odd varnishd[22658]: CLI telnet ::1 34164 ::1 6082 Wr 200 PONG 1296202563 1.0 Jan 28 09:16:03 odd varnishd[22658]: CLI telnet ::1 34164 ::1 6082 Rd banner Jan 28 09:16:03 odd varnishd[22658]: CLI telnet ::1 34164 ::1 6082 Wr 200 -----------------------------#012Varnish Cache CLI 1.0#012-----------------------------#012Linux,2.6.32-27-generic,x86_64,-sfile,-smalloc,-hcritbit#012#012Type 'help' for command list.#012Type 'quit' to close CLI session.#012Type 'start' to launch worker process. 
Jan 28 09:16:49 odd varnishd[22658]: Manager got SIGINT }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jan 28 11:51:42 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 28 Jan 2011 11:51:42 -0000 Subject: [Varnish] #852: need to define NO_VIZ for platforms without visibility attribute support Message-ID: <042.9673caf760561460a7d33f2664041d5f@varnish-cache.org> #852: need to define NO_VIZ for platforms without visibility attribute support --------------------+------------------------------------------------------- Reporter: slink | Owner: slink Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Keywords: --------------------+------------------------------------------------------- (initially noted by geoff at uplex dot de) On Solaris, compiling with gcc results in the following warnings: {{{ zutil.c: In function `vgz_zcalloc': zutil.c:308: warning: visibility attribute not supported in this configuration; ignored zutil.c: In function `vgz_zcfree': zutil.c:316: warning: visibility attribute not supported in this configuration; ignored }}} and linking fails with: {{{ ld: fatal: relocation error: R_386_GOTOFF: file .libs/deflate.o: symbol vgz_zcalloc: a GOT relative relocation must reference a local symbol ld: fatal: relocation error: R_386_GOTOFF: file .libs/deflate.o: symbol vgz_zcfree: a GOT relative relocation must reference a local symbol ld: fatal: relocation error: R_386_GOTOFF: file .libs/deflate.o: symbol vgz__length_code: a GOT relative relocation must reference a local symbol ld: fatal: relocation error: R_386_GOTOFF: file .libs/deflate.o: symbol vgz__dist_code: a GOT relative relocation must reference a local symbol ld: fatal: relocation error: R_386_GOTOFF: file .libs/deflate.o: symbol vgz__dist_code: a GOT relative relocation must reference a local symbol ld: fatal: relocation error: R_386_GOTOFF: file .libs/deflate.o: symbol 
vgz__length_code: a GOT relative relocation must reference a local symbol ld: fatal: relocation error: R_386_GOTOFF: file .libs/deflate.o: symbol vgz__dist_code: a GOT relative relocation must reference a local symbol ld: fatal: relocation error: R_386_GOTOFF: file .libs/deflate.o: symbol vgz__dist_code: a GOT relative relocation must reference a local symbol ld: fatal: relocation error: R_386_GOTOFF: file .libs/deflate.o: symbol vgz__length_code: a GOT relative relocation must reference a local symbol ld: fatal: relocation error: R_386_GOTOFF: file .libs/deflate.o: symbol vgz__dist_code: a GOT relative relocation must reference a local symbol ld: fatal: relocation error: R_386_GOTOFF: file .libs/deflate.o: symbol vgz__dist_code: a GOT relative relocation must reference a local symbol }}} The zlib configure script contains a test for the visibility attribute and sets NO_VIZ if that fails. I will prepare a patch. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jan 28 13:08:50 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 28 Jan 2011 13:08:50 -0000 Subject: [Varnish] #852: need to define NO_VIZ for platforms without visibility attribute support In-Reply-To: <042.9673caf760561460a7d33f2664041d5f@varnish-cache.org> References: <042.9673caf760561460a7d33f2664041d5f@varnish-cache.org> Message-ID: <051.5bd9da442102addff4423d33957d66e5@varnish-cache.org> #852: need to define NO_VIZ for platforms without visibility attribute support --------------------+------------------------------------------------------- Reporter: slink | Owner: thfeen Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Keywords: --------------------+------------------------------------------------------- Changes (by slink): * owner: slink => thfeen -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs 
at varnish-cache.org (Varnish) Date: Fri, 28 Jan 2011 13:52:42 -0000 Subject: [Varnish] #852: need to define NO_VIZ for platforms without visibility attribute support In-Reply-To: <042.9673caf760561460a7d33f2664041d5f@varnish-cache.org> References: <042.9673caf760561460a7d33f2664041d5f@varnish-cache.org> Message-ID: <051.8d999ed5d2ea10eacb6d933d5f76a9bc@varnish-cache.org> #852: need to define NO_VIZ for platforms without visibility attribute support --------------------+------------------------------------------------------- Reporter: slink | Owner: thfeen Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | --------------------+------------------------------------------------------- Changes (by Tollef Fog Heen ): * status: new => closed * resolution: => fixed Comment: (In [a776169645967061d22ed87bf26236ca6d6f9488]) Check for __attribute__((visibility)) support The solaris compiler does not support the GNU C extension __attribute__((visibility("hidden"))), leading to build failures. This commit adds a check for that support and defines NO_VIZ if the visibility attribute is not supported. 
Fixes #852 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Jan 30 14:54:09 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 30 Jan 2011 14:54:09 -0000 Subject: [Varnish] #853: Improvement of SO_RCVTIMEO / SO_SNDTIMEO test in configure.ac Message-ID: <042.9773b80b7c13e9c38342f4031a121616@varnish-cache.org> #853: Improvement of SO_RCVTIMEO / SO_SNDTIMEO test in configure.ac -------------------+-------------------------------------------------------- Reporter: jdzst | Type: enhancement Status: new | Priority: low Milestone: | Component: build Version: trunk | Severity: minor Keywords: | -------------------+-------------------------------------------------------- I have made a small improvement to SO_RCVTIMEO / SO_SNDTIMEO test in configure.ac Trunk version only tests if setsockopt works ok. With my patch, it tests both setsockopt and getsockopt. The checks are copied from "varnishd/cache_acceptor.c": {{{ #ifdef SO_RCVTIMEO_WORKS l = sizeof tv; i = getsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, &l); if (i) { TCP_Assert(i); return; } assert(l == sizeof tv); if (memcmp(&tv, &tv_rcvtimeo, l)) need_rcvtimeo = 1; #else }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 31 08:10:34 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 31 Jan 2011 08:10:34 -0000 Subject: [Varnish] #853: Improvement of SO_RCVTIMEO / SO_SNDTIMEO test in configure.ac In-Reply-To: <042.9773b80b7c13e9c38342f4031a121616@varnish-cache.org> References: <042.9773b80b7c13e9c38342f4031a121616@varnish-cache.org> Message-ID: <051.20257ed393f6ac9b1048e60222c59f3b@varnish-cache.org> #853: Improvement of SO_RCVTIMEO / SO_SNDTIMEO test in configure.ac -------------------+-------------------------------------------------------- Reporter: jdzst | Type: enhancement Status: new | Priority: low Milestone: | Component: build Version: trunk | Severity: minor Keywords: | 
-------------------+-------------------------------------------------------- Comment(by tfheen): Do you have any platforms where setsockopt works, but getsockopt doesn't? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jan 31 10:51:44 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 31 Jan 2011 10:51:44 -0000 Subject: [Varnish] #853: Improvement of SO_RCVTIMEO / SO_SNDTIMEO test in configure.ac In-Reply-To: <042.9773b80b7c13e9c38342f4031a121616@varnish-cache.org> References: <042.9773b80b7c13e9c38342f4031a121616@varnish-cache.org> Message-ID: <051.4455b72c0254fe3d5a8cc80e21078992@varnish-cache.org> #853: Improvement of SO_RCVTIMEO / SO_SNDTIMEO test in configure.ac -------------------+-------------------------------------------------------- Reporter: jdzst | Type: enhancement Status: new | Priority: low Milestone: | Component: build Version: trunk | Severity: minor Keywords: | -------------------+-------------------------------------------------------- Comment(by jdzst): Yes, under Cygwin I had problems, so I prepared this code instead of hardcoding the platform name. The code works fine on Solaris and Linux. -- Ticket URL: Varnish The Varnish HTTP Accelerator