From varnish-bugs at varnish-cache.org Tue May 8 06:56:21 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 08 May 2012 06:56:21 -0000 Subject: [Varnish] #849: Session timeout while receiving POST data from client causes multiple broken backend requests In-Reply-To: <041.0e6f1a912317111f89bfe1014049adca@varnish-cache.org> References: <041.0e6f1a912317111f89bfe1014049adca@varnish-cache.org> Message-ID: <050.4c19a934456f2b999bca318ec72417f4@varnish-cache.org> #849: Session timeout while receiving POST data from client causes multiple broken backend requests ----------------------+----------------------------------------------------- Reporter: lew | Owner: tfheen Type: defect | Status: new Priority: normal | Milestone: Varnish 3.0 dev Component: varnishd | Version: 2.1.4 Severity: normal | Keywords: 503, post, backend write error: 11 (Resource temporarily unavailable) ----------------------+----------------------------------------------------- Comment(by Emil): We had this issue too, resulting in image uploads failing. It gave us a bunch of these in the backend logs. {{{ [Wed Apr 25 12:21:56 2012] [warn] [client x.x.x.x] (70014)End of file found: mod_fcgid: can't get data from http client, referer: http://www.example.com/the-url }}} In Varnish we got the same error as reported above. {{{ 12 FetchError c backend write error: 11 }}} I think the severity of this issue should be raised. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 9 12:59:58 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 May 2012 12:59:58 -0000 Subject: [Varnish] #1035: Port numbers are not sanitized, e.g: 1234124124 In-Reply-To: <046.a11ee676e6c734d3daeec6b064574295@varnish-cache.org> References: <046.a11ee676e6c734d3daeec6b064574295@varnish-cache.org> Message-ID: <055.9cdf6bbd896cfbcc55f6f83b24f373df@varnish-cache.org> #1035: Port numbers are not sanitized, e.g: 1234124124 ----------------------+----------------------------------------------------- Reporter: kristian | Owner: kristian Type: defect | Status: closed Priority: lowest | Milestone: Component: varnishd | Version: trunk Severity: trivial | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Kristian Lyngstol ): * status: assigned => closed * resolution: => fixed Comment: (In [48054086d85a912723b59b44d686c4e4d104284e]) Verify range of port numbers before using them Fixes #1035 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 9 15:17:17 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 May 2012 15:17:17 -0000 Subject: [Varnish] #1131: Strange bug with if/lookup - else/pass in vcl_recv Message-ID: <049.eecbfc45a96523cf173c9f229a4e2cac@varnish-cache.org> #1131: Strange bug with if/lookup - else/pass in vcl_recv -------------------------+-------------------------------------------------- Reporter: Guillaume.S | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: | -------------------------+-------------------------------------------------- Hello, I have a strange bug in my vcl_recv:

{{{
if (req.url ~ "\.(png|gif|jpg|jpeg|css|js|PNG|GIF|JPG|JPEG|CSS|JS)$") {
    std.syslog(6, "++ RECV_IMG OK : " + req.http.host + req.url + " -- IP CLIENT : " + req.http.remote-ip);
    return (lookup);
} else {
    std.syslog(6, "++ !RECV_NO-IMG : " + req.http.host + req.url + " -- IP CLIENT : " + req.http.remote-ip);
    return (pass);
}
}}}

When I request a .php URL, I sometimes lose the cookie in the response: the cookie disappears, so our load balancer can't route the request to the right backend and I get a new session on another backend. In the logs I see the request take the "else/pass" path. But if I change "lookup" to "pass" in the "if" branch, I don't lose the cookie anymore:

{{{
if (req.url ~ "\.(png|gif|jpg|jpeg|css|js|PNG|GIF|JPG|JPEG|CSS|JS)$") {
    std.syslog(6, "++ RECV_IMG OK : " + req.http.host + req.url + " -- IP CLIENT : " + req.http.remote-ip);
    return (pass);
} else {
    std.syslog(6, "++ !RECV_NO-IMG : " + req.http.host + req.url + " -- IP CLIENT : " + req.http.remote-ip);
    return (pass);
}
}}}

I still always see the request take the "else/pass" path in the logs. Why was my request affected by the "if/lookup" branch? I only see this bug when the server receives 20-30 requests/sec; with 2-3 requests/sec there is no bug. The server isn't loaded (2 x 8 cores, load ~0.07) and there is free memory (44GB malloc'ed for Varnish out of 48GB on the server; we don't use more than 1.5GB today). -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 9 20:24:14 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 May 2012 20:24:14 -0000 Subject: [Varnish] #1132: Varnish randomly restarting with esi requests Message-ID: <043.8feb616fcbbc7d1edc54a12ee4274951@varnish-cache.org> #1132: Varnish randomly restarting with esi requests ------------------------+--------------------------------------------------- Reporter: eitch | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: esi, panic | ------------------------+--------------------------------------------------- Hi, Varnish is randomly dying and restarting itself. I can't reproduce this, but it's happening with some frequency (3-5 times a day). 
Varnish generates a syslog message and its error is always on pages that use ESI. I'm running version 3.0.2 on CentOS 5.7 x86_64. There are 3 instances of varnish running on 3 physical machines, but the error is happening only on two of them (very strange, since all requests are balanced through all of them, and all the machines and configurations are equal). Here is my last sample of error: varnishd[3600]: Child (3376) Panic message: Missing errorhandling code in vfp_esi_end(), cache_esi_fetch.c line 388:#012 Condition((vef->error) == 0) not true.thread = (cache-worker)#012ident = Linux,2.6.18-274.17.1.el5,x86_64,-sfile,-smalloc,-hclassic,epoll#012Backtrace:#012 0x42c7a6: /usr/sbin/varnishd [0x42c7a6]#012 0x41b2e8: /usr/sbin/varnishd [0x41b2e8]#012 0x4215fd: /usr/sbin/varnishd(FetchBody+0x3fd) [0x4215fd]#012 0x4153e8: /usr/sbin/varnishd [0x4153e8]#012 0x417ab6: /usr/sbin/varnishd(CNT_Session+0x9f6) [0x417ab6]#012 0x42efb8: /usr/sbin/varnishd [0x42efb8]#012 0x42e19b: /usr/sbin/varnishd [0x42e19b]#012 0x2b79fd54673d: /lib64/libpthread.so.0 [0x2b79fd54673d]#012 0x2b79fd8304bd: /lib64/libc.so.6(clone+0x6d) [0x2b79fd8304bd]#012sp = 0x2aabd5802008 {#012 fd = 721, id = 721, xid = 1611183207,#012 client = 50.16.221.66 33336,#012 step = STP_FETCHBODY,#012 handling = deliver,#012 err_code = 200, err_reason = (null),#012 restarts = 0, esi_level = 0#012 flags = do_esi is_gzip#012 bodystatus = 4#012 ws = 0x2aabd5802080 { #012 id = "sess",#012 {s,f,r,e} = {0x2aabd5802c90,+1216,(nil),+524288},#012 },#012 http[req] = {#012 ws = 0x2aabd5802080[sess]#012 "GET",#012 "/gadgets/latestNews/content.html?canal=24&numShowNews=6&corMenu=123d01&corTitulos=0D2F00",#012 "HTTP/1.1",#012 "host: www.gazetaesportiva.net",#012 "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",#012 "Accept-Language: pt-br,pt;q=0.8,en-us;q=0.5,en;q=0.3",#012 "Cookie: no_cache=undefined; data_news=Quarta%2C%2009/05/2012; origem_r7=1; 
__utma=156012717.1271081437.1336590036.1336590036.1336592230.2; __utmc=156012717; __utmz=156012717.1336592230.2.2.utmcsr=r7.com|utmccn=(referral)|utmcmd=referral|utmcct=/; __gads=ID=c2d9aeb092f3c1e8:T=1336590031:S=ALNI_Mb- sZo2lFKMHIA52_A2XSviPcI7XQ; __jid=1336590040370789922; __utmb=156012717.1.10.1336592230",#012 "Referer: http://www.gazetaesportiva.net/noticia/2012/05/palmeiras/para-evitar- oitavo-fracasso-f ESI is activated only if the user has a cookie. If this happens, all pages for this domain will have a simple ESI that adds a bar to the top of the page. Thanks, Hugo -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 10 00:09:10 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 May 2012 00:09:10 -0000 Subject: [Varnish] #1133: Varnishstat uses too narrow columns causing keys/values to bleed together Message-ID: <042.ae3feeb8be9fef444412ebceefa7d0b7@varnish-cache.org> #1133: Varnishstat uses too narrow columns causing keys/values to bleed together -------------------+-------------------------------------------------------- Reporter: kane | Type: defect Status: new | Priority: normal Milestone: | Component: varnishstat Version: 3.0.2 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Here's an example from one of our nodes when running varnishstat -1: {{{ VBE.consumer_9030(127.0.0.1,,9030).vcls 2 . VCL references VBE.consumer_9030(127.0.0.1,,9030).happy18446744073709551615 . Happy health probes VBE.controltag_9020(127.0.0.1,,9020).vcls 2 . VCL references VBE.controltag_9020(127.0.0.1,,9020).happy18446744073709551615 . Happy health probes }}} Over time, as lots of probes fire and report healthy, the number grows large and the key and value bleed together in the output. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 10 00:09:30 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 May 2012 00:09:30 -0000 Subject: [Varnish] #1132: Varnish randomly restarting with esi requests In-Reply-To: <043.8feb616fcbbc7d1edc54a12ee4274951@varnish-cache.org> References: <043.8feb616fcbbc7d1edc54a12ee4274951@varnish-cache.org> Message-ID: <052.c3f94be38b644a5c75f80269af2348d2@varnish-cache.org> #1132: Varnish randomly restarting with esi requests ------------------------+--------------------------------------------------- Reporter: eitch | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: esi, panic | ------------------------+--------------------------------------------------- Comment(by egeland): As per attached, we are seeing very similar issues on three Ubuntu 10.04 servers, running varnish version 3.0.1-1~lucid1. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 10 08:59:20 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 May 2012 08:59:20 -0000 Subject: [Varnish] #1073: req.hash_always_miss should imply req.hash_ignore_busy In-Reply-To: <046.67b8d58456e1a0db3764369f86329e68@varnish-cache.org> References: <046.67b8d58456e1a0db3764369f86329e68@varnish-cache.org> Message-ID: <055.059573bfd1a0675325abe3a2eba50316@varnish-cache.org> #1073: req.hash_always_miss should imply req.hash_ignore_busy ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Dag Haavi Finstad ): * status: new => closed * resolution: => fixed Comment: (In 
[4020b13d0eab47df865d3b055404ed84b8ffd459]) Req.hash_always_miss now implies req.hash_ignore_busy. Fixes a case where we might get a cache hit even though hash_always_miss is set. Fixes: #1073 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 10 09:17:07 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 May 2012 09:17:07 -0000 Subject: [Varnish] #1134: VCC crash when string concatenating the result of a constant integer converted to a string Message-ID: <044.4a3d22394366758c00ef726907bda64f@varnish-cache.org> #1134: VCC crash when string concatenating the result of a constant integer converted to a string ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Keywords: ----------------------+----------------------------------------------------- Gives vcc errors from constructs like "std.log("Test: " + 1);" -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 10 09:49:56 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 May 2012 09:49:56 -0000 Subject: [Varnish] #1135: vmod_log function in the std vmod is not working in trunk Message-ID: <044.213a56b60fb40386523d12108ff21af1@varnish-cache.org> #1135: vmod_log function in the std vmod is not working in trunk ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- A bug was introduced in this function with the stack usage limiting work, causing the function to not log anything. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 10 09:50:47 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 May 2012 09:50:47 -0000 Subject: [Varnish] #1135: vmod_log function in the std vmod is not working in trunk In-Reply-To: <044.213a56b60fb40386523d12108ff21af1@varnish-cache.org> References: <044.213a56b60fb40386523d12108ff21af1@varnish-cache.org> Message-ID: <053.a129218e96711b1c148804ccaf8d50d6@varnish-cache.org> #1135: vmod_log function in the std vmod is not working in trunk ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Martin Blix Grydeland ): * status: new => closed * resolution: => fixed Comment: (In [5b3e41b4958ea33012ef290c706e6c45d540d361]) Fix vmod_log (VRT_StringList returns end of string, not beginning) Fixes: #1135 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 10 09:50:48 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 May 2012 09:50:48 -0000 Subject: [Varnish] #1134: VCC crash when string concatenating the result of a constant integer converted to a string In-Reply-To: <044.4a3d22394366758c00ef726907bda64f@varnish-cache.org> References: <044.4a3d22394366758c00ef726907bda64f@varnish-cache.org> Message-ID: <053.229f873348cf69465af73173607cb787@varnish-cache.org> #1134: VCC crash when string concatenating the result of a constant integer converted to a string ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: fixed Keywords: | 
----------------------+----------------------------------------------------- Changes (by Martin Blix Grydeland ): * status: new => closed * resolution: => fixed Comment: (In [9d0e486fe21199e6c059eb829f5d666611806012]) Don't consider a vcc expr to be constant after vcc_expr_tostring. Avoids vcc errors from constructs like "std.log("Test: " + 1);" Fixes: #1134 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 14 08:50:56 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 May 2012 08:50:56 -0000 Subject: [Varnish] #1054: Child not responding to CLI, killing it In-Reply-To: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> References: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> Message-ID: <055.a9ba8e1eb3832070cc0d0091929ab576@varnish-cache.org> #1054: Child not responding to CLI, killing it -----------------------+---------------------------------------------------- Reporter: scorillo | Type: defect Status: reopened | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Resolution: | Keywords: -----------------------+---------------------------------------------------- Changes (by etherael): * status: closed => reopened * resolution: worksforme => Comment: I am seeing the exact same issue on a varnish cache here, dies every 3-4 days. Details follow; {{{ Messagelog content; incident 1; May 9 15:13:57 elk2 varnishd[25859]: Child (2551) not responding to CLI, killing it. May 9 15:14:07 elk2 varnishd[25859]: Child (2551) not responding to CLI, killing it. May 9 15:14:17 elk2 varnishd[25859]: Child (2551) not responding to CLI, killing it. May 9 15:14:27 elk2 varnishd[25859]: Child (2551) not responding to CLI, killing it. May 9 15:14:37 elk2 varnishd[25859]: Child (2551) not responding to CLI, killing it. May 9 15:14:47 elk2 varnishd[25859]: Child (2551) not responding to CLI, killing it. 
May 9 15:14:52 elk2 varnishd[25859]: Child (2551) not responding to CLI, killing it. May 9 15:14:52 elk2 varnishd[25859]: Child (2551) not responding to CLI, killing it. May 9 15:14:52 elk2 varnishd[25859]: Child (2551) died signal=3 (core dumped) May 9 15:14:52 elk2 varnishd[25859]: child (15127) Started May 9 15:14:52 elk2 varnishd[25859]: Child (15127) said Child starts May 9 15:14:52 elk2 varnishd[25859]: Child (15127) said SMF.s0 mmap'ed 311385128960 bytes of 311385128960 incident 2; May 14 00:53:33 elk2 varnishd[25859]: Child (15127) not responding to CLI, killing it. May 14 00:53:43 elk2 varnishd[25859]: Child (15127) not responding to CLI, killing it. May 14 00:53:53 elk2 varnishd[25859]: Child (15127) not responding to CLI, killing it. May 14 00:54:03 elk2 varnishd[25859]: Child (15127) not responding to CLI, killing it. May 14 00:54:13 elk2 varnishd[25859]: Child (15127) not responding to CLI, killing it. May 14 00:54:20 elk2 abrt[30696]: file /usr/sbin/varnishd seems to be deleted May 14 00:54:21 elk2 varnishd[25859]: Child (15127) not responding to CLI, killing it. May 14 00:54:21 elk2 varnishd[25859]: Child (15127) not responding to CLI, killing it. May 14 00:54:21 elk2 varnishd[25859]: Child (15127) died signal=3 (core dumped) May 14 00:54:21 elk2 varnishd[25859]: child (30697) Started May 14 00:54:21 elk2 varnishd[25859]: Child (30697) said Child starts May 14 00:54:21 elk2 varnishd[25859]: Child (30697) said SMF.s0 mmap'ed 311385128960 bytes of 311385128960 version info; root at parrot:~$ rpm -qa |grep varnish varnish-libs-3.0.2-1.el5.x86_64 varnish-3.0.2-1.el5.x86_64 varnish-release-3.0-1.noarch root at parrot:~$ uname -a Linux parrot 2.6.32-220.7.1.el6.x86_64 #1 SMP Wed Mar 7 00:52:02 GMT 2012 x86_64 x86_64 x86_64 GNU/Linux root at parrot:~$ cat /etc/redhat-release CentOS release 6.2 (Final) startup cmd; varnish 30697 25859 10 00:54 ? 
00:17:52 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -u varnish -g varnish -S /etc/varnish/secret -s file,/var/lib/varnish/varnish_storage.bin,290G varnishstat; root at parrot:~$ varnishstat -1 client_conn 253164 25.32 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 338951 33.91 Client requests received cache_hit 156382 15.64 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 153794 15.38 Cache misses backend_conn 69798 6.98 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 2 0.00 Backend conn. failures backend_reuse 112847 11.29 Backend conn. reuses backend_toolate 1116 0.11 Backend conn. was closed backend_recycle 113971 11.40 Backend conn. recycles backend_retry 38 0.00 Backend conn. retry fetch_head 6 0.00 Fetch head fetch_length 178831 17.89 Fetch with Length fetch_chunked 3509 0.35 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 204 0.02 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 1 0.00 Fetch no body (304) n_sess_mem 565 . N struct sess_mem n_sess 145 . N struct sess n_object 153780 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 153835 . N struct objectcore n_objecthead 153835 . N struct objecthead n_waitinglist 1139 . N struct waitinglist n_vbc 58 . N struct vbc n_wrk 90 . N worker threads n_wrk_create 360 0.04 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_lqueue 0 0.00 work request queue length n_wrk_queued 4119 0.41 N queued work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 3 . N backends n_expired 0 . N expired objects n_lru_nuked 0 . 
N LRU nuked objects n_lru_moved 142671 . N LRU moved objects losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 329530 32.96 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 253107 25.32 Total Sessions s_req 338951 33.91 Total Requests s_pipe 0 0.00 Total pipe s_pass 28775 2.88 Total pass s_fetch 182551 18.26 Total fetch s_hdrbytes 138408349 13844.99 Total header bytes s_bodybytes 6200495767 620235.65 Total body bytes sess_closed 106492 10.65 Session Closed sess_pipeline 289 0.03 Session Pipeline sess_readahead 72 0.01 Session Read Ahead sess_linger 239202 23.93 Session Linger sess_herd 246213 24.63 Session herd shm_records 23151895 2315.88 SHM records shm_writes 1718951 171.95 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 6496 0.65 SHM MTX contention shm_cycles 9 0.00 SHM cycles through buffer sms_nreq 18 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 7524 . SMS bytes allocated sms_bfree 7524 . SMS bytes freed backend_req 182657 18.27 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_ban 1 . 
N total active bans n_ban_add 1 0.00 N new bans added n_ban_retire 0 0.00 N old bans deleted n_ban_obj_test 0 0.00 N objects tested n_ban_re_test 0 0.00 N regexps tested against n_ban_dups 0 0.00 N duplicate bans removed hcb_nolock 310164 31.03 HCB Lookups without lock hcb_lock 153842 15.39 HCB Lookups with lock hcb_insert 153842 15.39 HCB Inserts esi_errors 0 0.00 ESI parse errors (unlock) esi_warnings 0 0.00 ESI parse warnings (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 9997 1.00 Client uptime dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache vmods 0 . Loaded VMODs n_gzip 0 0.00 Gzip operations n_gunzip 235860 23.59 Gunzip operations LCK.sms.creat 2 0.00 Created locks LCK.sms.destroy 0 0.00 Destroyed locks LCK.sms.locks 141948 14.20 Lock Operations LCK.sms.colls 0 0.00 Collisions LCK.smp.creat 0 0.00 Created locks LCK.smp.destroy 0 0.00 Destroyed locks LCK.smp.locks 0 0.00 Lock Operations LCK.smp.colls 0 0.00 Collisions LCK.sma.creat 2 0.00 Created locks LCK.sma.destroy 0 0.00 Destroyed locks LCK.sma.locks 4334608 433.59 Lock Operations LCK.sma.colls 0 0.00 Collisions LCK.smf.creat 2 0.00 Created locks LCK.smf.destroy 0 0.00 Destroyed locks LCK.smf.locks 20270014 2027.61 Lock Operations LCK.smf.colls 0 0.00 Collisions LCK.hsl.creat 0 0.00 Created locks LCK.hsl.destroy 0 0.00 Destroyed locks LCK.hsl.locks 0 0.00 Lock Operations LCK.hsl.colls 0 0.00 Collisions LCK.hcb.creat 2 0.00 Created locks LCK.hcb.destroy 0 0.00 Destroyed locks LCK.hcb.locks 6813196 681.52 Lock Operations LCK.hcb.colls 0 0.00 Collisions LCK.hcl.creat 0 0.00 Created locks LCK.hcl.destroy 0 0.00 Destroyed locks LCK.hcl.locks 0 0.00 Lock Operations LCK.hcl.colls 0 0.00 Collisions LCK.vcl.creat 2 0.00 Created locks LCK.vcl.destroy 0 0.00 Destroyed locks LCK.vcl.locks 27017 2.70 Lock Operations 
LCK.vcl.colls 0 0.00 Collisions LCK.stat.creat 2 0.00 Created locks LCK.stat.destroy 0 0.00 Destroyed locks LCK.stat.locks 3353 0.34 Lock Operations LCK.stat.colls 0 0.00 Collisions LCK.sessmem.creat 2 0.00 Created locks LCK.sessmem.destroy 0 0.00 Destroyed locks LCK.sessmem.locks 10434926 1043.81 Lock Operations LCK.sessmem.colls 0 0.00 Collisions LCK.wstat.creat 2 0.00 Created locks LCK.wstat.destroy 0 0.00 Destroyed locks LCK.wstat.locks 1062489 106.28 Lock Operations LCK.wstat.colls 0 0.00 Collisions LCK.herder.creat 2 0.00 Created locks LCK.herder.destroy 0 0.00 Destroyed locks LCK.herder.locks 28077 2.81 Lock Operations LCK.herder.colls 0 0.00 Collisions LCK.wq.creat 4 0.00 Created locks LCK.wq.destroy 0 0.00 Destroyed locks LCK.wq.locks 32085426 3209.51 Lock Operations LCK.wq.colls 0 0.00 Collisions LCK.objhdr.creat 6777786 677.98 Created locks LCK.objhdr.destroy 38400 3.84 Destroyed locks LCK.objhdr.locks 58265283 5828.28 Lock Operations LCK.objhdr.colls 0 0.00 Collisions LCK.exp.creat 2 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 7078276 708.04 Lock Operations LCK.exp.colls 0 0.00 Collisions LCK.lru.creat 4 0.00 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 6740732 674.28 Lock Operations LCK.lru.colls 0 0.00 Collisions LCK.cli.creat 2 0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 112482 11.25 Lock Operations LCK.cli.colls 0 0.00 Collisions LCK.ban.creat 2 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 7078306 708.04 Lock Operations LCK.ban.colls 0 0.00 Collisions LCK.vbp.creat 2 0.00 Created locks LCK.vbp.destroy 0 0.00 Destroyed locks LCK.vbp.locks 0 0.00 Lock Operations LCK.vbp.colls 0 0.00 Collisions LCK.vbe.creat 2 0.00 Created locks LCK.vbe.destroy 0 0.00 Destroyed locks LCK.vbe.locks 7031988 703.41 Lock Operations LCK.vbe.colls 0 0.00 Collisions LCK.backend.creat 6 0.00 Created locks LCK.backend.destroy 0 0.00 Destroyed locks LCK.backend.locks 21975254 
2198.18 Lock Operations LCK.backend.colls 0 0.00 Collisions SMF.s0.c_req 13531967 1353.60 Allocator requests SMF.s0.c_fail 0 0.00 Allocator failures SMF.s0.c_bytes 907854680064 90812711.82 Bytes allocated SMF.s0.c_freed 795545841664 79578457.70 Bytes freed SMF.s0.g_alloc 13484647 . Allocations outstanding SMF.s0.g_bytes 112308838400 . Bytes outstanding SMF.s0.g_space 510461419520 . Bytes available SMF.s0.g_smf 14758490 . N struct smf SMF.s0.g_smf_frag 1273251 . N small free smf SMF.s0.g_smf_large 592 . N large free smf SMA.Transient.c_req 57850 5.79 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 3809803264 381094.65 Bytes allocated SMA.Transient.c_freed 3809803264 381094.65 Bytes freed SMA.Transient.g_alloc 0 . Allocations outstanding SMA.Transient.g_bytes 0 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available VBE.default(10.52.5.194,,8080).vcls 2 . VCL references VBE.default(10.52.5.194,,8080).happy 0 . Happy health probes VBE.thumbs(127.0.0.1,,8080).vcls 2 . VCL references VBE.thumbs(127.0.0.1,,8080).happy 0 . Happy health probes VBE.nm_elk(50.23.148.202,,80).vcls 2 . VCL references VBE.nm_elk(50.23.148.202,,80).happy 0 . Happy health probes }}} Other strangeness; the directory that contains the actual varnish storage bin behaves erratically, the file storage.bin is a sparse file of the size requested on the startup (290gb) but sparse file listing shows that it never climbs above the 205gb mark in terms of actual space used. When varnish died on the 9th and auto restarted, this figure (actual size used in sparse file) was registered as 153gb despite the cache being completely empty. client_req average hovers around 30 but peaks at 70, nothing at all huge compared to what I've seen previously. 
The only thing that is out of the ordinary, I suppose, is the size of the storage file (a very long tail of infrequently accessed objects on backends with very high load that we want to ease off on, hence the immense cache size). The average I/O load is about 200 KB of reads per second with negligible writes, and the load average rarely exceeds 3 (16-core Intel Xeon E5620 @ 2.40GHz, 12GB memory). That's all I can think of in terms of detail. I note the version of the cache is up to date with upstream despite being a binary distributed by the vendor; could it help to compile from source and see if the issue persists? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 14 10:10:06 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 May 2012 10:10:06 -0000 Subject: [Varnish] #1054: Child not responding to CLI, killing it In-Reply-To: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> References: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> Message-ID: <055.50f9f32a4708c56c0b2884bd835e309f@varnish-cache.org> #1054: Child not responding to CLI, killing it -----------------------+---------------------------------------------------- Reporter: scorillo | Type: defect Status: reopened | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Resolution: | Keywords: -----------------------+---------------------------------------------------- Comment(by tfheen): Hi, thanks for the data. I was wondering if you could run iostat -x 1 and see if it increases massively when Varnish restarts? Also, could you please check if there's a panic, by doing varnishadm panic.show? Can you also please run gdb on the core dump and give us the backtrace? 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 14 10:14:24 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 May 2012 10:14:24 -0000 Subject: [Varnish] #1132: Varnish randomly restarting with esi requests In-Reply-To: <043.8feb616fcbbc7d1edc54a12ee4274951@varnish-cache.org> References: <043.8feb616fcbbc7d1edc54a12ee4274951@varnish-cache.org> Message-ID: <052.82bbee51a402f29e76a5b0e377ed4460@varnish-cache.org> #1132: Varnish randomly restarting with esi requests ------------------------+--------------------------------------------------- Reporter: eitch | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: esi, panic | ------------------------+--------------------------------------------------- Description changed by phk: Old description: > Hi, > > Varnish is randomly dying and restarting itself. I can't reproduce this, > but it's happening with some frequency (3-5 times a day). Varnish > generates a syslog message and its error is always on pages that use ESI. > > I'm running version 3.0.2 on CentOS 5.7 x86_64. There are 3 instances of > varnish running on 3 physical machines, but the error is happening only > on two of them (very strange, since all requests are balanced through all > of them, and all the machines and configurations are equal). 
> > Here is my last sample of error: > > varnishd[3600]: Child (3376) Panic message: Missing errorhandling code in > vfp_esi_end(), cache_esi_fetch.c line 388:#012 Condition((vef->error) == > 0) not true.thread = (cache-worker)#012ident = > Linux,2.6.18-274.17.1.el5,x86_64,-sfile,-smalloc,-hclassic,epoll#012Backtrace:#012 > 0x42c7a6: /usr/sbin/varnishd [0x42c7a6]#012 0x41b2e8: /usr/sbin/varnishd > [0x41b2e8]#012 0x4215fd: /usr/sbin/varnishd(FetchBody+0x3fd) > [0x4215fd]#012 0x4153e8: /usr/sbin/varnishd [0x4153e8]#012 0x417ab6: > /usr/sbin/varnishd(CNT_Session+0x9f6) [0x417ab6]#012 0x42efb8: > /usr/sbin/varnishd [0x42efb8]#012 0x42e19b: /usr/sbin/varnishd > [0x42e19b]#012 0x2b79fd54673d: /lib64/libpthread.so.0 > [0x2b79fd54673d]#012 0x2b79fd8304bd: /lib64/libc.so.6(clone+0x6d) > [0x2b79fd8304bd]#012sp = 0x2aabd5802008 {#012 fd = 721, id = 721, xid = > 1611183207,#012 client = 50.16.221.66 33336,#012 step = > STP_FETCHBODY,#012 handling = deliver,#012 err_code = 200, err_reason = > (null),#012 restarts = 0, esi_level = 0#012 flags = do_esi is_gzip#012 > bodystatus = 4#012 ws = 0x2aabd5802080 { #012 id = "sess",#012 > {s,f,r,e} = {0x2aabd5802c90,+1216,(nil),+524288},#012 },#012 http[req] > = {#012 ws = 0x2aabd5802080[sess]#012 "GET",#012 > "/gadgets/latestNews/content.html?canal=24&numShowNews=6&corMenu=123d01&corTitulos=0D2F00",#012 > "HTTP/1.1",#012 "host: www.gazetaesportiva.net",#012 "Accept: > text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",#012 > "Accept-Language: pt-br,pt;q=0.8,en-us;q=0.5,en;q=0.3",#012 "Cookie: > no_cache=undefined; data_news=Quarta%2C%2009/05/2012; origem_r7=1; > __utma=156012717.1271081437.1336590036.1336590036.1336592230.2; > __utmc=156012717; > __utmz=156012717.1336592230.2.2.utmcsr=r7.com|utmccn=(referral)|utmcmd=referral|utmcct=/; > __gads=ID=c2d9aeb092f3c1e8:T=1336590031:S=ALNI_Mb- > sZo2lFKMHIA52_A2XSviPcI7XQ; __jid=1336590040370789922; > __utmb=156012717.1.10.1336592230",#012 "Referer: > 
http://www.gazetaesportiva.net/noticia/2012/05/palmeiras/para-evitar- > oitavo-fracasso-f > > ESI is activated only if the user has a cookie. If this happens, all > pages for this domain will have a simple ESI that adds a bar to the top > of the page. > > Thanks, > Hugo New description: Hi, Varnish is randomly dying and restarting itself. I can't reproduce this, but it's happening with some frequency (3-5 times a day). Varnish generates a syslog message and its error is always on pages that use ESI. I'm running version 3.0.2 on CentOS 5.7 x86_64. There are 3 instances of varnish running on 3 physical machines, but the error is happening only on two of them (very strange, since all requests are balanced through all of them, and all the machines and configurations are equal). Here is my last sample of error: {{{ varnishd[3600]: Child (3376) Panic message: Missing errorhandling code in vfp_esi_end(), cache_esi_fetch.c line 388: Condition((vef->error) == 0) not true.thread = (cache-worker) ident = Linux,2.6.18-274.17.1.el5,x86_64,-sfile,-smalloc,-hclassic,epoll Backtrace: 0x42c7a6: /usr/sbin/varnishd [0x42c7a6] 0x41b2e8: /usr/sbin/varnishd [0x41b2e8] 0x4215fd: /usr/sbin/varnishd(FetchBody+0x3fd) [0x4215fd] 0x4153e8: /usr/sbin/varnishd [0x4153e8] 0x417ab6: /usr/sbin/varnishd(CNT_Session+0x9f6) [0x417ab6] 0x42efb8: /usr/sbin/varnishd [0x42efb8] 0x42e19b: /usr/sbin/varnishd [0x42e19b] 0x2b79fd54673d: /lib64/libpthread.so.0 [0x2b79fd54673d] 0x2b79fd8304bd: /lib64/libc.so.6(clone+0x6d) [0x2b79fd8304bd] sp = 0x2aabd5802008 { fd = 721, id = 721, xid = 1611183207, client = 50.16.221.66 33336, step = STP_FETCHBODY, handling = deliver, err_code = 200, err_reason = (null), restarts = 0, esi_level = 0 flags = do_esi is_gzip bodystatus = 4 ws = 0x2aabd5802080 { id = "sess", {s,f,r,e} = {0x2aabd5802c90,+1216,(nil),+524288}, }, http[req] = { ws = 0x2aabd5802080[sess] "GET", "/gadgets/latestNews/content.html?canal=24&numShowNews=6&corMenu=123d01&corTitulos=0D2F00", "HTTP/1.1", "host: 
www.gazetaesportiva.net", "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language: pt-br,pt;q=0.8,en-us;q=0.5,en;q=0.3", "Cookie: no_cache=undefined; data_news=Quarta%2C%2009/05/2012; origem_r7=1; __utma=156012717.1271081437.1336590036.1336590036.1336592230.2; __utmc=156012717; __utmz=156012717.1336592230.2.2.utmcsr=r7.com|utmccn=(referral)|utmcmd=referral|utmcct=/; __gads=ID=c2d9aeb092f3c1e8:T=1336590031:S=ALNI_Mb- sZo2lFKMHIA52_A2XSviPcI7XQ; __jid=1336590040370789922; __utmb=156012717.1.10.1336592230", "Referer: http://www.gazetaesportiva.net/noticia/2012/05/palmeiras /para-evitar-oitavo-fracasso-f }}} ESI is activated only if the user has a cookie. If this happens, all pages for this domain will have a simple ESI that adds a bar to the top of the page. Thanks, Hugo -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 14 10:21:26 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 May 2012 10:21:26 -0000 Subject: [Varnish] #1132: Varnish randomly restarting with esi requests In-Reply-To: <043.8feb616fcbbc7d1edc54a12ee4274951@varnish-cache.org> References: <043.8feb616fcbbc7d1edc54a12ee4274951@varnish-cache.org> Message-ID: <052.3f291afdc68e74d5574b7cd4e4ee2462@varnish-cache.org> #1132: Varnish randomly restarting with esi requests ------------------------+--------------------------------------------------- Reporter: eitch | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Resolution: duplicate | Keywords: esi, panic ------------------------+--------------------------------------------------- Changes (by martin): * status: new => closed * resolution: => duplicate Comment: Hi, This looks like a duplicate of ticket #1044, which has been fixed and will be part of the upcoming 3.0.3 Varnish release. If you want you can try the latest 3.0 git branch of varnishd to see if that resolves your issue. 
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 14 10:23:56 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 May 2012 10:23:56 -0000 Subject: [Varnish] #1133: Varnishstat uses too narrow columns causing keys/values to bleed together In-Reply-To: <042.ae3feeb8be9fef444412ebceefa7d0b7@varnish-cache.org> References: <042.ae3feeb8be9fef444412ebceefa7d0b7@varnish-cache.org> Message-ID: <051.747aa5bbce5c518388c7faeaa970f495@varnish-cache.org> #1133: Varnishstat uses too narrow columns causing keys/values to bleed together -------------------------+-------------------------------------------------- Reporter: kane | Owner: daghf Type: defect | Status: new Priority: normal | Milestone: Component: varnishstat | Version: 3.0.2 Severity: normal | Keywords: -------------------------+-------------------------------------------------- Changes (by tfheen): * owner: => daghf -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 14 10:32:17 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 May 2012 10:32:17 -0000 Subject: [Varnish] #1131: Strange bug with if/lookup - else/pass in vcl_recv In-Reply-To: <049.eecbfc45a96523cf173c9f229a4e2cac@varnish-cache.org> References: <049.eecbfc45a96523cf173c9f229a4e2cac@varnish-cache.org> Message-ID: <058.48ea25737c2edaf20193690111fe3332@varnish-cache.org> #1131: Strange bug with if/lookup - else/pass in vcl_recv -------------------------+-------------------------------------------------- Reporter: Guillaume.S | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: | -------------------------+-------------------------------------------------- Comment(by kristian): It's a bit unclear what, precisely, the problem is. Can you attach varnishlog output of the problem, please? 
Also: Are you missing Cookie headers, Set-Cookie headers or something else? Which precise cookies are you missing, and when? (In relation to the varnishlog output). -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 14 10:43:15 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 May 2012 10:43:15 -0000 Subject: [Varnish] #1054: Child not responding to CLI, killing it In-Reply-To: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> References: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> Message-ID: <055.b7a5626a06fd0548a7d22493df90b33d@varnish-cache.org> #1054: Child not responding to CLI, killing it -----------------------+---------------------------------------------------- Reporter: scorillo | Type: defect Status: reopened | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Resolution: | Keywords: -----------------------+---------------------------------------------------- Comment(by etherael): The iostat output is quite huge but I couldn't see much difference between the read / write rates during or after the varnishd restart. I have it saved in case you want to analyse it directly. {{{ root at parrot:~$ varnishadm panic.show Child has not panicked or panic has been cleared Command failed with error code 300 }}} I was unable to find a varnishd.core anywhere, following the instructions at https://www.varnish-cache.org/trac/wiki/DebuggingVarnish I note that the varnishd bin is supposed to be not stripped but the one on this system is listed as stripped. 
{{{ root at parrot:~$ file /usr/sbin/varnishd /usr/sbin/varnishd: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9, stripped }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 14 14:02:40 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 May 2012 14:02:40 -0000 Subject: [Varnish] #1136: multi varnish instance setup dies with signal 6 Message-ID: <056.83e733e1eff8d86d60c6a4351771437e@varnish-cache.org> #1136: multi varnish instance setup dies with signal 6 --------------------------------+------------------------------------------- Reporter: christian.albrecht | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.2 | Severity: normal Keywords: | --------------------------------+------------------------------------------- Hi, I try to run a multi varnish instance setup on one of my hosts. The first instance starts as expected. The second instance could not drop the privileges while forking. Here is the error message: {{{ Child (26002) died signal=6 Child (-1) said Missing errorhandling code in mgt_sandbox(), mgt_sandbox.c line 71: Child (-1) said Condition((setuid(params->uid)) == 0) not true. Child (-1) said errno = 11 (Resource temporarily unavailable) Child cleanup complete }}} More details: * varnish 3.0.2 * cpu type: Intel(R) Xeon(R) CPU X5670 @ 2.93GHz (4 cores) * 32 GB ram * OS: RHEL6 * Linux webcache01 2.6.32-220.4.2.el6.x86_64 #1 SMP Mon Feb 6 16:39:28 EST 2012 x86_64 x86_64 x86_64 GNU/Linux I will attach my config -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 16 07:58:24 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 16 May 2012 07:58:24 -0000 Subject: [Varnish] #1137: rollback resets the restart counter. potential infinite loop. 
Message-ID: <042.97bc0180653ec52aac9a3473636180b2@varnish-cache.org> #1137: rollback resets the restart counter. potential infinite loop. --------------------+------------------------------------------------------- Reporter: yves | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.2 Severity: normal | Keywords: --------------------+------------------------------------------------------- we have a case where a customer has commented out the rollback to avoid resetting the restart counter that they currently use in their VCL. perhaps an internal global restart counter should be used so that max_restart param is adhered to. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 16 12:02:05 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 16 May 2012 12:02:05 -0000 Subject: [Varnish] #1138: Current trunk leaks objcore references on fetch failures, and fails to inform the exp timer of ttl changes on failures Message-ID: <044.7847ba73d52821db9e1c3d65e54c1a69@varnish-cache.org> #1138: Current trunk leaks objcore references on fetch failures, and fails to inform the exp timer of ttl changes on failures ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- The current trunk code will leak objcore references on delivery failures, causing memory leaks. Also when it changes the expiry of the object on fetch failure, it doesn't inform the exp_timer of this. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed May 16 13:30:37 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 16 May 2012 13:30:37 -0000 Subject: [Varnish] #1139: default_keep makes objects stay around too long Message-ID: <044.8c1a9c748ce289914d1adcd0203efe6a@varnish-cache.org> #1139: default_keep makes objects stay around too long ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- In some "forced expiry" situations (e.g. purge from vcl), the default_keep parameter will make the objects stay in cache longer than expected. One possible solution is to use EXP_Clr() always (the ban mechanism uses this), which resets also the entered time, causing the ttl to be very negative. Another solution is to not take these parameters into account at ttl calculation time, but set the keep and grace of an object to the values of these parameters at the time the object is created (instead of -1). See attached varnishtest script. 
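Until the semantics are settled, the keep behaviour can be inspected or effectively neutralized from the management CLI (a workaround sketch, assuming the trunk default_keep parameter described above; setting it to 0 disables the keep period for newly created objects):

{{{
varnishadm param.show default_keep
varnishadm param.set default_keep 0
}}}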
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 17 02:13:18 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 17 May 2012 02:13:18 -0000 Subject: [Varnish] #1054: Child not responding to CLI, killing it In-Reply-To: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> References: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> Message-ID: <055.d60f076896dad98bb65e38539f44f355@varnish-cache.org> #1054: Child not responding to CLI, killing it -----------------------+---------------------------------------------------- Reporter: scorillo | Type: defect Status: reopened | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Resolution: | Keywords: -----------------------+---------------------------------------------------- Comment(by bstillwell): I'm also seeing this problem with varnish 3.0.2 on CentOS 6.2. There isn't any panic output from varnishadm: {{{ [root at den2tpv16 ~]# varnishadm panic.show Child has not panicked or panic has been cleared Command failed with error code 300 }}} However /var/log/messages has some details: {{{ May 16 19:18:45 den2tpv16 varnishd[32205]: Child (32206) not responding to CLI, killing it. May 16 19:18:55 den2tpv16 varnishd[32205]: Child (32206) not responding to CLI, killing it. May 16 19:19:05 den2tpv16 varnishd[32205]: Child (32206) not responding to CLI, killing it. May 16 19:19:15 den2tpv16 varnishd[32205]: Child (32206) not responding to CLI, killing it. May 16 19:19:25 den2tpv16 varnishd[32205]: Child (32206) not responding to CLI, killing it. May 16 19:19:35 den2tpv16 varnishd[32205]: Child (32206) not responding to CLI, killing it. May 16 19:19:37 den2tpv16 varnishd[32205]: Child (32206) not responding to CLI, killing it. 
May 16 19:19:38 den2tpv16 varnishd[32205]: Child (32206) died signal=3 (core dumped) May 16 19:19:38 den2tpv16 varnishd[32205]: child (4447) Started May 16 19:19:38 den2tpv16 varnishd[32205]: Child (4447) said Child starts May 16 19:19:38 den2tpv16 varnishd[32205]: Child (4447) said SMF.s0 mmap'ed 85899345920 bytes of 85899345920 }}} The machine has 32GiB of memory with an SSD for storage (120GB OCZ Vertex2). The storage option I'm using is: -s file,/data/varnish_storage.bin,80G,32K Using -s malloc,23G on this same box is quite stable, but I'm hoping to start using SSDs to improve our hitrates (we have a very large set size). 'dstat -tlcmgndr 60' output: {{{ ----system---- ---load-avg--- ----total-cpu-usage---- ------memory- usage----- ---paging-- -net/total- -dsk/total- --io/total- date/time | 1m 5m 15m |usr sys idl wai hiq siq| used buff cach free| in out | recv send| read writ| read writ 16-05 19:08:13|0.10 11.6 15.1| 3 2 93 2 0 1|2719M 52.7M 28.0G 328M| 0 0 |3675k 13M|1878k 136k| 469 11.3 16-05 19:09:13|0.12 9.52 14.2| 3 2 93 2 0 1|2721M 52.9M 28.0G 320M| 0 0 |3425k 13M|1753k 133k| 438 11.1 16-05 19:10:13|0.16 7.82 13.3| 3 2 93 2 0 1|2719M 53.1M 28.0G 298M| 0 0 |3069k 13M|1529k 139k| 382 12.1 16-05 19:11:13|0.47 6.50 12.5| 3 2 93 2 0 1|2721M 53.3M 28.0G 308M| 0 0 |3763k 13M|1925k 135k| 481 12.3 16-05 19:12:13|0.35 5.36 11.7| 3 2 93 2 0 1|2724M 53.5M 28.0G 310M| 0 0 |3652k 12M|1852k 140k| 463 12.5 16-05 19:13:13|0.17 4.40 11.0| 3 2 92 2 0 1|2725M 53.6M 28.0G 304M| 0 0 |4319k 14M|2498k 140k| 625 12.6 16-05 19:14:13|0.31 3.68 10.3| 3 2 93 2 0 1|2723M 51.2M 28.0G 316M| 0 0 |3437k 12M|2006k 135k| 501 12.0 16-05 19:15:13|0.26 3.05 9.71| 3 2 93 2 0 1|2722M 51.4M 28.0G 316M| 0 0 |4249k 14M|2313k 135k| 578 12.2 16-05 19:16:13|0.14 2.51 9.10| 3 2 93 2 0 1|2723M 51.6M 28.0G 308M| 0 0 |3741k 12M|1774k 132k| 443 11.6 16-05 19:17:13|17.6 6.52 10.0| 3 3 53 41 0 1|2725M 51.7M 28.0G 297M| 0 0 |3780k 12M|1755k 18M| 439 193 16-05 19:18:13| 302 78.2 33.7| 2 2 26 68 0 1|2778M 51.8M 
27.9G 321M| 0 0 |2401k 9803k| 681k 18M| 170 463 16-05 19:19:13| 561 194 76.3| 1 1 35 63 0 1|2765M 51.8M 27.9G 350M| 0 0 | 438k 1137k| 148k 14M|37.1 840 16-05 19:20:13| 373 222 94.2| 4 5 27 62 0 2|2011M 52.1M 28.0G 1015M| 0 0 | 15M 13M|1474k 15M| 366 577 16-05 19:21:13| 142 183 88.9| 3 3 52 41 0 1|2012M 52.2M 28.1G 930M| 0 0 | 10M 13M|1371k 15M| 343 360 16-05 19:22:13|52.7 150 83.4| 3 3 73 20 0 1|2012M 52.4M 28.2G 781M| 0 0 |8901k 12M|2472k 8992k| 618 266 16-05 19:23:13|20.4 123 78.3| 3 2 73 21 0 1|2012M 52.6M 28.3G 630M| 0 0 |7799k 12M|2502k 8782k| 626 270 16-05 19:24:13|8.86 101 73.5| 3 3 76 18 0 1|2022M 52.8M 28.5G 453M| 0 0 |7689k 18M|2791k 7731k| 698 245 16-05 19:25:13|3.74 82.7 69.0| 3 3 74 20 0 1|2027M 52.9M 28.6G 315M| 0 0 |7754k 14M|2786k 8175k| 697 255 16-05 19:26:13|2.70 68.0 64.8| 3 2 80 15 0 1|2029M 53.1M 28.6G 315M| 0 0 |6680k 13M|2162k 6918k| 540 240 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri May 18 14:20:14 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 18 May 2012 14:20:14 -0000 Subject: [Varnish] #1140: Objects put on transient storage to salvage a request doesn't have their TTL lowered as intended Message-ID: <044.8328c0dc120860b55348b388d232b76d@varnish-cache.org> #1140: Objects put on transient storage to salvage a request doesn't have their TTL lowered as intended ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Keywords: ----------------------+----------------------------------------------------- When an object is put on transient to salvage a request, it's TTL isn't set to 'shortlived' as intended. See attached test case. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 21 07:24:14 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 May 2012 07:24:14 -0000 Subject: [Varnish] #1140: Objects put on transient storage to salvage a request doesn't have their TTL lowered as intended In-Reply-To: <044.8328c0dc120860b55348b388d232b76d@varnish-cache.org> References: <044.8328c0dc120860b55348b388d232b76d@varnish-cache.org> Message-ID: <053.b82fe952b8d910cb0ba4f62e907d578a@varnish-cache.org> #1140: Objects put on transient storage to salvage a request doesn't have their TTL lowered as intended ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [3f56d4ebb4297f383cba2d05c2eca179eb171fe6]) Fix ttl when backend fetches are salvaged into transient storage. 
Submitted by: Martin Fixes #1140 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 21 08:12:45 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 May 2012 08:12:45 -0000 Subject: [Varnish] #1138: Current trunk leaks objcore references on fetch failures, and fails to inform the exp timer of ttl changes on failures In-Reply-To: <044.7847ba73d52821db9e1c3d65e54c1a69@varnish-cache.org> References: <044.7847ba73d52821db9e1c3d65e54c1a69@varnish-cache.org> Message-ID: <053.802d6ec90ec63ff5d4dc857d9734453a@varnish-cache.org> #1138: Current trunk leaks objcore references on fetch failures, and fails to inform the exp timer of ttl changes on failures ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [304683cc17f4bbf39fb6eb35443e40d2755867cd]) Adopt Martins fixed for #1138, which is mostly an artifact of me being interrupted half-way through committing a bunch of stuff. I'm passing on the test-case for two reasons: The code is in flux and it will soon be obsolete, and second "delay 0.2" testcases are notoriously flakey. Submitted by: Martin Fixes #1138 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 21 08:21:41 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 May 2012 08:21:41 -0000 Subject: [Varnish] #1137: rollback resets the restart counter. potential infinite loop. 
In-Reply-To: <042.97bc0180653ec52aac9a3473636180b2@varnish-cache.org> References: <042.97bc0180653ec52aac9a3473636180b2@varnish-cache.org> Message-ID: <051.e8e60e81dced054e038daa718de7ae25@varnish-cache.org> #1137: rollback resets the restart counter. potential infinite loop. --------------------+------------------------------------------------------- Reporter: yves | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.2 Severity: normal | Resolution: worksforme Keywords: | --------------------+------------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: I am pretty sure that no version of varnish has ever reset the req.restarts counter on rollback. Please double check your diagnosis, and reopen ticket if you can reproduce. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 21 08:40:02 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 May 2012 08:40:02 -0000 Subject: [Varnish] #1136: multi varnish instance setup dies with signal 6 In-Reply-To: <056.83e733e1eff8d86d60c6a4351771437e@varnish-cache.org> References: <056.83e733e1eff8d86d60c6a4351771437e@varnish-cache.org> Message-ID: <065.c7fbdd99b7484640d9ff9e2774661fad@varnish-cache.org> #1136: multi varnish instance setup dies with signal 6 ---------------------------------+------------------------------------------ Reporter: christian.albrecht | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.2 | Severity: normal Resolution: worksforme | Keywords: ---------------------------------+------------------------------------------ Changes (by phk): * status: new => closed * resolution: => worksforme Comment: To me this looks like a resource starvation issue, please check your ulimits etc. 
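For reference, the EAGAIN from setuid() in the #1136 panic points at a per-user resource limit; a quick, generic way to inspect the limits an init script inherits (not specific to Varnish):

```shell
# A setuid() failure with errno 11 (EAGAIN), as in the panic above, usually
# means the target uid is already at its max-processes limit (RLIMIT_NPROC),
# e.g. because the first varnishd instance's threads count against it.
# Show the limits in effect for the current shell and its children:
ulimit -a
```

On RHEL6 the per-user limits are typically raised in /etc/security/limits.conf or in the instance's init script before the second varnishd is started.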
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 21 10:29:29 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 May 2012 10:29:29 -0000 Subject: [Varnish] #1112: Varnish delivers truncated files when serving from Tomcat7 Backend and ESI enabled In-Reply-To: <045.a494717a9dcd83966ef7bb1fb1473262@varnish-cache.org> References: <045.a494717a9dcd83966ef7bb1fb1473262@varnish-cache.org> Message-ID: <054.4f9bdc25e50dba41cd149c7200b29103@varnish-cache.org> #1112: Varnish delivers truncated files when serving from Tomcat7 Backend and ESI enabled ---------------------+------------------------------------------------------ Reporter: derjohn | Owner: scoof Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.2 Severity: normal | Resolution: worksforme Keywords: | ---------------------+------------------------------------------------------ Changes (by phk): * status: new => closed * resolution: => worksforme Comment: We're closing this ticket as timed out. There is no indication of an actual bug. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 21 10:30:26 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 May 2012 10:30:26 -0000 Subject: [Varnish] #1095: Assert error in SMS_Finish(), storage_synth.c line 114: In-Reply-To: <052.583a0880dd82471b8c0861a3310d6cc8@varnish-cache.org> References: <052.583a0880dd82471b8c0861a3310d6cc8@varnish-cache.org> Message-ID: <061.8736ce273ec67e505809e1c64a158c6b@varnish-cache.org> #1095: Assert error in SMS_Finish(), storage_synth.c line 114: -----------------------------+---------------------------------------------- Reporter: alex.goncharov | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: critical Resolution: worksforme | Keywords: varnishd SMS_Finish -----------------------------+---------------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: I'm timing this ticket out: There is no indication that this is anything but out of memory. Feel free to reopen if evidence to the contrary appears. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 21 10:35:12 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 May 2012 10:35:12 -0000 Subject: [Varnish] #1119: Varnish 3.0.2 freezed, Pushing vcls failed:#012CLI communication error (hdr) In-Reply-To: <047.bcc0e94d7c2ef8aaaa3912631b629b9e@varnish-cache.org> References: <047.bcc0e94d7c2ef8aaaa3912631b629b9e@varnish-cache.org> Message-ID: <056.37f90d72454655a6151eb103c473f6d0@varnish-cache.org> #1119: Varnish 3.0.2 freezed, Pushing vcls failed:#012CLI communication error (hdr) -------------------------+-------------------------------------------------- Reporter: campisano | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.2 | Severity: blocker Resolution: worksforme | Keywords: child died pushing vcls failed #012CLI communication error (hdr) -------------------------+-------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: The connect_timeout has nothing to do with starting the child process. As I said, you can try to increase cli_timeout, if the problem is disk-i/o pileups. 
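For anyone following this ticket: raising cli_timeout as suggested does not require a restart; both forms below use the standard management interface (the 60 second value is only an example, not a recommendation from this thread):

{{{
# on a running instance, via the management CLI:
varnishadm param.set cli_timeout 60

# or at startup, on the varnishd command line:
varnishd ... -p cli_timeout=60
}}}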
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 21 10:35:26 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 May 2012 10:35:26 -0000 Subject: [Varnish] #897: sess_mem "leak" on hyper-threaded cpu In-Reply-To: <046.214e041eae3c0d463d92b586ac7bbd29@varnish-cache.org> References: <046.214e041eae3c0d463d92b586ac7bbd29@varnish-cache.org> Message-ID: <055.5848c046ab0bec502c8854d7613156e5@varnish-cache.org> #897: sess_mem "leak" on hyper-threaded cpu -------------------------------------------------+-------------------------- Reporter: askalski | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: fixed Keywords: sess_mem leak n_sess race condition | -------------------------------------------------+-------------------------- Changes (by martin): * status: new => closed * resolution: => fixed Comment: The relevant areas of this have been redesigned in trunk and will be part of the next major release of Varnish. For 3.0 this should not cause any major issues (potentially reaching sess_max a tiny bit faster than strictly necessary), so I am closing this bug. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 21 10:40:50 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 May 2012 10:40:50 -0000 Subject: [Varnish] #1053: Persistent storage: space leakage In-Reply-To: <046.15b4aca0756757e34fe8cc21c10552fc@varnish-cache.org> References: <046.15b4aca0756757e34fe8cc21c10552fc@varnish-cache.org> Message-ID: <055.56864acb6ea41e8551473c09f1ea0c20@varnish-cache.org> #1053: Persistent storage: space leakage ----------------------+----------------------------------------------------- Reporter: dumbbell | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | Keywords: ----------------------+----------------------------------------------------- Changes (by martin): * owner: => martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 21 14:10:13 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 May 2012 14:10:13 -0000 Subject: [Varnish] #1119: Varnish 3.0.2 freezed, Pushing vcls failed:#012CLI communication error (hdr) In-Reply-To: <047.bcc0e94d7c2ef8aaaa3912631b629b9e@varnish-cache.org> References: <047.bcc0e94d7c2ef8aaaa3912631b629b9e@varnish-cache.org> Message-ID: <056.bee8d7ffa40c2a6b9a9012e6ae1dc594@varnish-cache.org> #1119: Varnish 3.0.2 freezed, Pushing vcls failed:#012CLI communication error (hdr) -------------------------+-------------------------------------------------- Reporter: campisano | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.2 | Severity: blocker Resolution: worksforme | Keywords: child died pushing vcls failed #012CLI communication error (hdr) -------------------------+-------------------------------------------------- Comment(by campisano): I had the same problem again. Varnish froze and a simple restart solved it. 
I'll test the cli_timeout increase to 60 secs, and report whether it solves the problem. {{{ May 20 01:59:39 macleod varnishd[22546]: Child (19229) not responding to CLI, killing it. May 20 01:59:40 macleod varnishd[22546]: Child (19229) not responding to CLI, killing it. May 20 01:59:40 macleod varnishd[22546]: Child (19229) died signal=3 May 20 01:59:40 macleod varnishd[22546]: Child cleanup complete May 20 01:59:40 macleod varnishd[22546]: child (16809) Started May 20 01:59:40 macleod varnishd[22546]: Child (16809) said Child starts May 20 01:59:40 macleod varnishd[22546]: Child (16809) said SMF.s0 mmap'ed 536870912 bytes of 536870912 May 20 05:17:58 macleod varnishd[22546]: Child (16809) not responding to CLI, killing it. May 20 05:18:08 macleod varnishd[22546]: Child (16809) not responding to CLI, killing it. May 20 05:18:18 macleod varnishd[22546]: Child (16809) not responding to CLI, killing it. May 20 05:18:28 macleod varnishd[22546]: Child (16809) not responding to CLI, killing it. May 20 05:18:38 macleod varnishd[22546]: Child (16809) not responding to CLI, killing it. May 20 05:18:48 macleod varnishd[22546]: Child (16809) not responding to CLI, killing it. May 20 05:18:58 macleod varnishd[22546]: Child (16809) not responding to CLI, killing it. May 20 05:19:08 macleod varnishd[22546]: Child (16809) not responding to CLI, killing it. May 20 05:19:11 macleod varnishd[22546]: Child (16809) not responding to CLI, killing it. 
May 20 05:19:11 macleod varnishd[22546]: Child (16809) died signal=3 May 20 05:19:11 macleod varnishd[22546]: Child cleanup complete May 20 05:19:11 macleod varnishd[22546]: child (11945) Started May 20 05:19:21 macleod varnishd[22546]: Pushing vcls failed:#012CLI communication error (hdr) May 20 05:19:21 macleod varnishd[22546]: Stopping Child May 20 05:19:21 macleod varnishd[22546]: Child (11945) said Child starts May 20 05:20:14 macleod varnishd[22546]: Child (11945) said SMF.s0 mmap'ed 536870912 bytes of 536870912 May 20 05:20:14 macleod varnishd[22546]: Child (11945) said Child dies May 20 05:20:16 macleod varnishd[22546]: Child (11945) died status=1 May 20 05:20:16 macleod varnishd[22546]: Child cleanup complete }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 21 16:41:31 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 May 2012 16:41:31 -0000 Subject: [Varnish] #1119: Varnish 3.0.2 freezed, Pushing vcls failed:#012CLI communication error (hdr) In-Reply-To: <047.bcc0e94d7c2ef8aaaa3912631b629b9e@varnish-cache.org> References: <047.bcc0e94d7c2ef8aaaa3912631b629b9e@varnish-cache.org> Message-ID: <056.464ceefa7b45a4a83598cf6e563f351c@varnish-cache.org> #1119: Varnish 3.0.2 freezed, Pushing vcls failed:#012CLI communication error (hdr) -------------------------+-------------------------------------------------- Reporter: campisano | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.2 | Severity: blocker Resolution: worksforme | Keywords: child died pushing vcls failed #012CLI communication error (hdr) -------------------------+-------------------------------------------------- Comment(by campisano): Some links: the default value is 10s https://www.varnish- cache.org/docs/3.0/reference/varnishd.html?highlight=cli_timeout someone used this parameter to resolve issues that may be similars: - https://www.varnish- 
cache.org/docs/3.0/reference/varnishd.html?highlight=cli_timeout If your varnish is heavily loaded, it might not answer the management thread in a timely fashion, which in turn will kill it off. To avoid that, set cli_timeout to 20 seconds or more. - http://tech.sybreon.com/2010/06/17/vanishing-varnish/ - http://www.gossamer-threads.com/lists/varnish/misc/15191 After switching to malloc and increasing the cli_timeout, Varnish does stay up for longer periods of time, but it's still dying at least once a day. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 24 08:37:13 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 24 May 2012 08:37:13 -0000 Subject: [Varnish] #1141: VCL's should stop polling backends on being discarded Message-ID: <046.0e49316e8491aac00b3a31baf2952d4d@varnish-cache.org> #1141: VCL's should stop polling backends on being discarded ----------------------+----------------------------------------------------- Reporter: nicholas | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.2 | Severity: normal Keywords: | ----------------------+----------------------------------------------------- After removing a backend by vcl reload and discard, we still see health checks of the backend. A typical vcl.list looks like this 30 minutes after the discard:
discarded 203 boot
active 54 1337845014
The backend doesn't get used after the discard, so this only affects backend health checks. Our monitoring of changes and application health is based around "varnishlog -u -i Backend_health". This hampers our debugging so much that we restart varnish whenever we deploy new servers. Steps to reproduce: drop a backend in the vcl, reload the vcl, discard the old vcl. Might depend on the varnish server being used actively, so that there are threads which have been used but are not currently in use.
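The reproduction steps, spelled out as management CLI commands (a sketch; the VCL name "newvcl", the file path, and the -T address are illustrative, not from this ticket):

{{{
varnishadm -T localhost:6082 vcl.load newvcl /etc/varnish/default.vcl
varnishadm -T localhost:6082 vcl.use newvcl
varnishadm -T localhost:6082 vcl.discard boot
varnishadm -T localhost:6082 vcl.list
varnishlog -u -i Backend_health
}}}

After the discard, varnishlog still shows Backend_health lines for the backends that only existed in the discarded VCL.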
Greetings Nicholas -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 24 12:47:51 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 24 May 2012 12:47:51 -0000 Subject: [Varnish] #1140: Objects put on transient storage to salvage a request doesn't have their TTL lowered as intended In-Reply-To: <044.8328c0dc120860b55348b388d232b76d@varnish-cache.org> References: <044.8328c0dc120860b55348b388d232b76d@varnish-cache.org> Message-ID: <053.90aff52553a977b669ab4c22d87b36cc@varnish-cache.org> #1140: Objects put on transient storage to salvage a request doesn't have their TTL lowered as intended ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Comment(by Tollef Fog Heen ): (In [d6bc813c8e08d7aaa20e27d52cf330d0548c7e9b]) Fix ttl when backend fetches are salvaged into transient storage. 
Submitted by: Martin Fixes #1140 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 24 12:48:00 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 24 May 2012 12:48:00 -0000 Subject: [Varnish] #1035: Port numbers are not sanitized, e.g: 1234124124 In-Reply-To: <046.a11ee676e6c734d3daeec6b064574295@varnish-cache.org> References: <046.a11ee676e6c734d3daeec6b064574295@varnish-cache.org> Message-ID: <055.aa55a3a59dec60166852177f8a7f7721@varnish-cache.org> #1035: Port numbers are not sanitized, e.g: 1234124124 ----------------------+----------------------------------------------------- Reporter: kristian | Owner: kristian Type: defect | Status: closed Priority: lowest | Milestone: Component: varnishd | Version: trunk Severity: trivial | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Comment(by Tollef Fog Heen ): (In [e7b91c0ad49132cffd449f7926027ee2c1e5524e]) Verify range of port numbers before using them Fixes #1035 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 24 12:51:23 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 24 May 2012 12:51:23 -0000 Subject: [Varnish] #1035: Port numbers are not sanitized, e.g: 1234124124 In-Reply-To: <046.a11ee676e6c734d3daeec6b064574295@varnish-cache.org> References: <046.a11ee676e6c734d3daeec6b064574295@varnish-cache.org> Message-ID: <055.d540a179ab7b3059876a0d8493d2fd89@varnish-cache.org> #1035: Port numbers are not sanitized, e.g: 1234124124 ----------------------+----------------------------------------------------- Reporter: kristian | Owner: kristian Type: defect | Status: closed Priority: lowest | Milestone: Component: varnishd | Version: trunk Severity: trivial | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Comment(by Tollef Fog Heen ): (In 
[e7b91c0ad49132cffd449f7926027ee2c1e5524e]) Verify range of port numbers before using them Fixes #1035 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 24 12:51:17 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 24 May 2012 12:51:17 -0000 Subject: [Varnish] #1140: Objects put on transient storage to salvage a request doesn't have their TTL lowered as intended In-Reply-To: <044.8328c0dc120860b55348b388d232b76d@varnish-cache.org> References: <044.8328c0dc120860b55348b388d232b76d@varnish-cache.org> Message-ID: <053.b7dd1fe934588cb9e7d311ff9e950f73@varnish-cache.org> #1140: Objects put on transient storage to salvage a request doesn't have their TTL lowered as intended ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Comment(by Tollef Fog Heen ): (In [d6bc813c8e08d7aaa20e27d52cf330d0548c7e9b]) Fix ttl when backend fetches are salvaged into transient storage. 
Submitted by: Martin Fixes #1140 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 24 12:51:20 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 24 May 2012 12:51:20 -0000 Subject: [Varnish] #1073: req.hash_always_miss should imply req.hash_ignore_busy In-Reply-To: <046.67b8d58456e1a0db3764369f86329e68@varnish-cache.org> References: <046.67b8d58456e1a0db3764369f86329e68@varnish-cache.org> Message-ID: <055.be84353df6d08ae051568d808f043fc8@varnish-cache.org> #1073: req.hash_always_miss should imply req.hash_ignore_busy ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Comment(by Tollef Fog Heen ): (In [48ab7aa14ee48facf8fb0ccd74058c2e69ec0cb4]) Req.hash_always_miss now implies req.hash_ignore_busy. Fixes a case where we might get a cache hit even though hash_always_miss is set. 
Fixes: #1073 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 24 12:47:55 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 24 May 2012 12:47:55 -0000 Subject: [Varnish] #1073: req.hash_always_miss should imply req.hash_ignore_busy In-Reply-To: <046.67b8d58456e1a0db3764369f86329e68@varnish-cache.org> References: <046.67b8d58456e1a0db3764369f86329e68@varnish-cache.org> Message-ID: <055.cbc802bdd83c8b74121af96a9e95a66a@varnish-cache.org> #1073: req.hash_always_miss should imply req.hash_ignore_busy ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Comment(by Tollef Fog Heen ): (In [48ab7aa14ee48facf8fb0ccd74058c2e69ec0cb4]) Req.hash_always_miss now implies req.hash_ignore_busy. Fixes a case where we might get a cache hit even though hash_always_miss is set. Fixes: #1073 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon May 28 21:19:07 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 28 May 2012 21:19:07 -0000 Subject: [Varnish] #1142: nuke_limit documentation needs expanding to explain 'random' 503's Message-ID: <046.29494e99aa56b9276a3039740969a02d@varnish-cache.org> #1142: nuke_limit documentation needs expanding to explain 'random' 503's ----------------------+----------------------------------------------------- Reporter: timbunce | Type: documentation Status: new | Priority: normal Milestone: | Component: documentation Version: 3.0.0 | Severity: normal Keywords: | ----------------------+----------------------------------------------------- See https://www.varnish-cache.org/trac/ticket/1012 for some background. 
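For reference, the nuke_limit parameter can be inspected and changed at runtime through the management interface (a sketch, assuming varnishd 3.0 parameter names; the value 200 is only an example):

{{{
varnishadm param.show nuke_limit
varnishadm param.set nuke_limit 200
varnishstat -1 | grep n_lru_nuked
}}}

A steadily rising n_lru_nuked counter indicates objects are being evicted to make room for new ones, which is when the limit can be hit.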
The current behavior is unexpected, counter-intuitive, (seemingly) undocumented, hard to predict and harmful to users. Ideally it should be possible for varnish to 'pass-through' the backend response even if it can't find space in the cache. If that's not feasible/practical then the docs need updating to remove the element of surprise for users. Currently nuke_limit and the related behaviour are under-documented. I suggest that the docs should:
* explain what happens when nuke_limit is reached (i.e. the 503 response)
* explain that it's more likely if the site serves a mix of large and small objects
* give guidance on the effects of setting it too high or too low
* explain how to tell from the stats whether this is happening
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue May 29 07:37:38 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 29 May 2012 07:37:38 -0000 Subject: [Varnish] #1141: VCL's should stop polling backends on being discarded In-Reply-To: <046.0e49316e8491aac00b3a31baf2952d4d@varnish-cache.org> References: <046.0e49316e8491aac00b3a31baf2952d4d@varnish-cache.org> Message-ID: <055.6ba8df72991f16d479da529ed03ccb13@varnish-cache.org> #1141: VCL's should stop polling backends on being discarded -----------------------+---------------------------------------------------- Reporter: nicholas | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.2 | Severity: normal Resolution: fixed | Keywords: -----------------------+---------------------------------------------------- Changes (by Poul-Henning Kamp ):
* status: new => closed
* resolution: => fixed
Comment: (In [9243e4a4ba009a2626d33cd3378a73d3b8f91411]) Stop VCL's health-polling of backend already when we discard the VCL.
Fixes #1141 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu May 31 17:46:31 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 31 May 2012 17:46:31 -0000 Subject: [Varnish] #1143: Slackware64-Current segfault Message-ID: <045.bdeacc2343bf5d909edbf2b7a1de20e2@varnish-cache.org> #1143: Slackware64-Current segfault ---------------------+------------------------------------------------------ Reporter: nanashi | Type: defect Status: new | Priority: highest Milestone: | Component: varnishd Version: 3.0.2 | Severity: critical Keywords: | ---------------------+------------------------------------------------------ I'm using slackware-current; although on 32-bit Slackware varnish runs normally, in my 64-bit build I get a segmentation fault when running varnish. {{{
varnishd[3930]: Platform: Linux,3.2.14-smp,x86_64,-sfile,-smalloc,-hcritbit
varnishd[3930]: child (3932) Started
kernel: [ 6274.568734] varnishd[3932]: segfault at 0 ip 0000000000414637 sp 00007fffb9d09d30 error 4 in varnishd[400000+79000]
varnishd[3930]: Child (3932) died signal=11
varnishd[3930]: Child (-1) said Child starts
}}} regards -- Ticket URL: Varnish The Varnish HTTP Accelerator From guillaume.szkopinski at ovh.net Mon May 14 11:41:47 2012 From: guillaume.szkopinski at ovh.net (Guile SZK) Date: Mon, 14 May 2012 11:41:47 -0000 Subject: [Varnish] #1131: Strange bug with if/lookup - else/pass in vcl_recv In-Reply-To: <058.48ea25737c2edaf20193690111fe3332@varnish-cache.org> References: <049.eecbfc45a96523cf173c9f229a4e2cac@varnish-cache.org> <058.48ea25737c2edaf20193690111fe3332@varnish-cache.org> Message-ID: <4FB0EF78.3080700@ovh.net> Hi ! It's done, we found how to fix it.
Our infrastructure:
VH: Varnish
ACE: Cisco ACE
WEBS: thousands of web servers (Apache)
WEBX: a single web server
Step by step:
1) VH <-> ACE <-> WEBS
2) VH --> ACE (Cookie? Yes: Set-Cookie=$cookie, No: Set-Cookie=newcookie()) --> WEBX
3) VH --- ACE Transparent mode --> WEBX
4)
VH <-- ACE Transparent mode --- WEBX
Some responses came back from the backend to VH without the "Set-Cookie" field. The web servers had keep-alive enabled, so when VH reused connections, step 2 was skipped. We disabled keep-alive on the web servers, and all is fine :)
Guile.
On 05/14/12 12:32, Varnish wrote:
> #1131: Strange bug with if/lookup - else/pass in vcl_recv
> -------------------------+--------------------------------------------------
> Reporter: Guillaume.S | Type: defect
> Status: new | Priority: normal
> Milestone: | Component: varnishd
> Version: 3.0.2 | Severity: normal
> Keywords: |
> -------------------------+--------------------------------------------------
>
> Comment(by kristian):
>
> It's a bit unclear what, precisely, the problem is.
>
> Can you attach varnishlog output of the problem, please?
>
> Also: Are you missing Cookie headers, Set-Cookie headers or something
> else? Which precise cookies are you missing, and when? (In relation to the
> varnishlog output).
>
>