From varnish-bugs at varnish-cache.org Tue Nov 1 08:43:56 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Tue, 01 Nov 2011 08:43:56 -0000
Subject: [Varnish] #1047: regsuball, infinite loops and confusing behaviours
In-Reply-To: <043.7e26ef971a60cee89742c13c637f7c1e@varnish-cache.org>
References: <043.7e26ef971a60cee89742c13c637f7c1e@varnish-cache.org>
Message-ID: <052.3f0c83f17b9fc572ba562ade7924c6a9@varnish-cache.org>

#1047: regsuball, infinite loops and confusing behaviours
---------------------+------------------------------------------------------
  Reporter:  ctrix   |       Type:  defect
    Status:  closed  |   Priority:  normal
 Milestone:          |  Component:  varnishd
   Version:  3.0.2   |   Severity:  normal
Resolution:  fixed   |   Keywords:  regsuball freeze
---------------------+------------------------------------------------------
Changes (by Tollef Fog Heen):

 * status:  new => closed
 * resolution:  => fixed

Comment:

 (In [fe425ce63a03b0de1a9c257c4e27cf1defe6a862]) Avoid infinite loop in
 regsuball.

 Also optimise out a few strlen calls in favour of just tracking the
 lengths.
Fixes: #1047

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Tue Nov 1 09:41:51 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Tue, 01 Nov 2011 09:41:51 -0000
Subject: [Varnish] #1046: exp2 portability
In-Reply-To: <048.e2a3240d9558155d1a81fc80c16a9cc2@varnish-cache.org>
References: <048.e2a3240d9558155d1a81fc80c16a9cc2@varnish-cache.org>
Message-ID: <057.cde278c5b63b1502fe4d0581950db6aa@varnish-cache.org>

#1046: exp2 portability
-------------------------+--------------------------------------------------
  Reporter:  msporleder  |       Type:  defect
    Status:  closed      |   Priority:  normal
 Milestone:              |  Component:  build
   Version:  3.0.2       |   Severity:  normal
Resolution:  fixed       |   Keywords:
-------------------------+--------------------------------------------------
Changes (by Poul-Henning Kamp):

 * status:  new => closed
 * resolution:  => fixed

Comment:

 (In [f6536dac9a6c9fdda4236e0ad92cfc4cdd6dd34a]) Use scalbn(3) rather
 than exp2(3), it should be faster and more portable.
Fixes #1046

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Tue Nov 1 10:49:44 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Tue, 01 Nov 2011 10:49:44 -0000
Subject: [Varnish] #1029: ESI mixes compressed and noncompressed output
In-Reply-To: <044.230cf0a51ac93a0bf51dedbebb3d7c54@varnish-cache.org>
References: <044.230cf0a51ac93a0bf51dedbebb3d7c54@varnish-cache.org>
Message-ID: <053.de1ff5924e712342c9ed3f4eec2bfdd8@varnish-cache.org>

#1029: ESI mixes compressed and noncompressed output
----------------------+-----------------------------------------------------
 Reporter:  sthing    |       Owner:  lkarsten
     Type:  defect    |      Status:  closed
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  trunk
 Severity:  normal    |  Resolution:  fixed
 Keywords:            |
----------------------+-----------------------------------------------------
Changes (by Poul-Henning Kamp):

 * status:  new => closed
 * resolution:  => fixed

Comment:

 (In [374c3a235fec63ee4f213696fc43731fd329ac2e]) Add a missing case: ESI
 parent document gunzip'ed but included document gzip'ed.
Fixes #1029

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Tue Nov 1 10:52:00 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Tue, 01 Nov 2011 10:52:00 -0000
Subject: [Varnish] #1029: ESI mixes compressed and noncompressed output
In-Reply-To: <044.230cf0a51ac93a0bf51dedbebb3d7c54@varnish-cache.org>
References: <044.230cf0a51ac93a0bf51dedbebb3d7c54@varnish-cache.org>
Message-ID: <053.af4175e786d5ed29159280710fec915e@varnish-cache.org>

#1029: ESI mixes compressed and noncompressed output
----------------------+-----------------------------------------------------
 Reporter:  sthing    |       Owner:  lkarsten
     Type:  defect    |      Status:  closed
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  trunk
 Severity:  normal    |  Resolution:  fixed
 Keywords:            |
----------------------+-----------------------------------------------------
Comment(by phk):

 My apologies for misunderstanding what this ticket was about.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Tue Nov 1 11:03:31 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Tue, 01 Nov 2011 11:03:31 -0000
Subject: [Varnish] #1027: signal 6 on calling error in vcl_deliver
In-Reply-To: <041.c65e75d5bdfe801f5ce705c67f514e8b@varnish-cache.org>
References: <041.c65e75d5bdfe801f5ce705c67f514e8b@varnish-cache.org>
Message-ID: <050.3c07f82a1954d15e169ef9f7ca7327da@varnish-cache.org>

#1027: signal 6 on calling error in vcl_deliver
-----------------------+----------------------------------------------------
  Reporter:  kwy       |       Type:  defect
    Status:  reopened  |   Priority:  normal
 Milestone:            |  Component:  varnishd
   Version:  trunk     |   Severity:  normal
Resolution:            |   Keywords:
-----------------------+----------------------------------------------------
Comment(by kwy):

 My take is that it should be possible to error out of anywhere. It seems
 to me that the problem is there is no response object to work on, which
 triggers the panic in deliver.
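To make the ticket concrete, here is a minimal, hypothetical VCL sketch of the construct under discussion (the header test and message are invented): the VCL compiler accepted `error` in vcl_deliver, while varnishd's assert on legal return values then aborted the child with signal 6.

```vcl
sub vcl_deliver {
    # Hypothetical condition -- any reason to bail out late would do.
    # VCC compiled this, but varnishd tripped an assert on the return
    # value, panicking the child process (ticket #1027).
    if (resp.http.X-Broken) {
        error 503 "Service unavailable";
    }
}
```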
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Tue Nov 1 12:02:40 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Tue, 01 Nov 2011 12:02:40 -0000
Subject: [Varnish] #1029: ESI mixes compressed and noncompressed output
In-Reply-To: <044.230cf0a51ac93a0bf51dedbebb3d7c54@varnish-cache.org>
References: <044.230cf0a51ac93a0bf51dedbebb3d7c54@varnish-cache.org>
Message-ID: <053.5dcd178f365ae7b6d64df9fa053eab88@varnish-cache.org>

#1029: ESI mixes compressed and noncompressed output
----------------------+-----------------------------------------------------
 Reporter:  sthing    |       Owner:  lkarsten
     Type:  defect    |      Status:  closed
 Priority:  normal    |   Milestone:
Component:  varnishd  |     Version:  trunk
 Severity:  normal    |  Resolution:  fixed
 Keywords:            |
----------------------+-----------------------------------------------------
Comment(by sthing):

 Thanks!

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Tue Nov 1 14:36:22 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Tue, 01 Nov 2011 14:36:22 -0000
Subject: [Varnish] #1027: signal 6 on calling error in vcl_deliver
In-Reply-To: <041.c65e75d5bdfe801f5ce705c67f514e8b@varnish-cache.org>
References: <041.c65e75d5bdfe801f5ce705c67f514e8b@varnish-cache.org>
Message-ID: <050.b336790b71d911094c0dc93ef3aae1a4@varnish-cache.org>

#1027: signal 6 on calling error in vcl_deliver
-----------------------+----------------------------------------------------
  Reporter:  kwy       |       Type:  defect
    Status:  reopened  |   Priority:  normal
 Milestone:            |  Component:  varnishd
   Version:  trunk     |   Severity:  normal
Resolution:            |   Keywords:
-----------------------+----------------------------------------------------
Comment(by kristian):

 The reason it panics in deliver is that the VCL compiler thought you
 could do error in vcl_deliver, and varnishd disagreed. It panicked on an
 assert that checks for legal return values.
error; was one of the few states that weren't automatically synchronized
 between VCC and varnishd.

 The reason it's not possible at the moment is that vcl_error goes
 through vcl_deliver now. There's some work going on here which will
 simplify this whole situation, but since that's the type of change best
 suited for a major release (think removal of vcl_error, more or less),
 the question of what to do in the short term remains.

 I do believe the conclusion from a bugwash was: it should be fixed
 short-term too. Just a matter of sitting down and doing it.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Tue Nov 1 22:27:38 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Tue, 01 Nov 2011 22:27:38 -0000
Subject: [Varnish] #1048: vcl_pass caching in memory
Message-ID: <041.f9055c27fe27b05b24285a449c12e892@varnish-cache.org>

#1048: vcl_pass caching in memory
-----------------------------+----------------------------------------------
 Reporter:  d4f              |       Type:  defect
   Status:  new              |   Priority:  normal
Milestone:  Varnish 3.0 dev  |  Component:  varnishd
  Version:  3.0.2            |   Severity:  major
 Keywords:                   |
-----------------------------+----------------------------------------------
 For vcl_pass: According to the documentation, vcl_pass does not cache
 the response, nor should the response stay in memory; it should be
 passed, as in pipe, to the client.

 {{{
 Called upon entering pass mode. In this mode, the request is passed on
 to the backend, and the backend's response is passed on to the client,
 but is not entered into the cache.
 }}}

 However, when vcl_pass handles a large file, the file is still loaded
 into RAM - without checking whether it even fits - and this can cause
 the kernel to OOM-kill random tasks.
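For large pass'ed objects, Varnish 3.x offers a basic streaming facility that forwards the body to the client while it is being fetched, instead of buffering the whole object first. A minimal sketch, assuming a hypothetical `/big/` URL prefix for the large files:

```vcl
sub vcl_fetch {
    # "/big/" is a made-up prefix; match whatever serves large bodies.
    # With do_stream set, the response body streams to the client as it
    # arrives from the backend rather than being read fully into RAM.
    if (req.url ~ "^/big/") {
        set beresp.do_stream = true;
    }
}
```

Streaming in 3.0 comes with documented restrictions, so treat this as a sketch rather than a drop-in fix.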
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Wed Nov 2 07:57:33 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 02 Nov 2011 07:57:33 -0000
Subject: [Varnish] #1048: vcl_pass caching in memory
In-Reply-To: <041.f9055c27fe27b05b24285a449c12e892@varnish-cache.org>
References: <041.f9055c27fe27b05b24285a449c12e892@varnish-cache.org>
Message-ID: <050.64c4076faefd2eb3c7d1bfffe5e8278b@varnish-cache.org>

#1048: vcl_pass caching in memory
------------------------------+---------------------------------------------
  Reporter:  d4f              |       Type:  defect
    Status:  closed           |   Priority:  normal
 Milestone:  Varnish 3.0 dev  |  Component:  varnishd
   Version:  3.0.2            |   Severity:  major
Resolution:  worksforme       |   Keywords:
------------------------------+---------------------------------------------
Changes (by phk):

 * status:  new => closed
 * resolution:  => worksforme

Comment:

 Yes, that is how it works. Varnish 3.x has a basic streaming facility
 that allows you to avoid this.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Wed Nov 2 08:50:17 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 02 Nov 2011 08:50:17 -0000
Subject: [Varnish] #1049: Unbalanced {} in varnishncsa init script
Message-ID: <043.b2723c63d08ceda97a3718abd8394083@varnish-cache.org>

#1049: Unbalanced {} in varnishncsa init script
-----------------------+----------------------------------------------------
 Reporter:  scoof      |      Owner:
     Type:  defect     |     Status:  new
 Priority:  low        |  Milestone:
Component:  packaging  |    Version:  3.0.2
 Severity:  normal     |   Keywords:
-----------------------+----------------------------------------------------
 Debian bug #645688 needs to be merged for the packages in
 repo.varnish-cache.org.
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=645688 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 4 12:24:38 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 04 Nov 2011 12:24:38 -0000 Subject: [Varnish] #1050: Condition((o)->magic == 0x32851d42) not true. Message-ID: <044.e48e339840d9d0f9a8cfbea9ad3ac0a2@varnish-cache.org> #1050: Condition((o)->magic == 0x32851d42) not true. --------------------+------------------------------------------------------- Reporter: mgkula | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: | --------------------+------------------------------------------------------- Version: varnish-3.0.2-1.el5 # cat /etc/sysconfig/varnish NFILES=131072 MEMLOCK=82000 DAEMON_OPTS="-a :80 \ -T localhost:6082 \ -f /etc/varnish/default.vcl \ -u varnish -g varnish \ -S /etc/varnish/secret \ -p lru_interval=20 \ -p cli_timeout=60 \ -h classic,500009 \ -s malloc,30G \ -s persistent,/var/lib/varnish/store/silo,30G \ -p thread_pool_min=500 \ -p thread_pool_max=4000 \ -p thread_pools=4 \ -p thread_pool_add_delay=2 \ -p session_linger=100 \ -p send_timeout=120 \ -p sess_timeout=10 \ -p listen_depth=4096 \ -p nuke_limit=50" --- default.vcl --- #This is a basic VCL configuration file for varnish. See the vcl(7) #man page for details on VCL syntax and semantics. # cleanup url include "/etc/varnish/url.vcl"; #Default backend definition. Set this to point to your content #server. # # backed is at localhost:8000 backend default { .host = "127.0.0.1"; .port = "8000"; .max_connections = 500; .connect_timeout = 5s; .first_byte_timeout = 15s; .between_bytes_timeout = 2s; } ## ACL acl fw_purge { "zzz.zzz.zzz.0"/24; } # #Below is a commented-out copy of the default VCL logic. If you #redefine any of these subroutines, the built-in logic will be #appended to your code. 
# sub vcl_recv { # goto apache set req.backend = default; #set req.backend = apaches; # bust some torrent sites if (req.http.referer ~ "^http(s)?://(www\.)?(peb\.pl|dvhk\.pl|fuzja\.com|polishbytes\.net|oslonet\.net|emulek\.com\.pl|ufs\.pl|kinomaniak\.info |crackers- team\.net|darkwarez\.pl|dvdseed\.org|emulek\.info|.*\.dvhk\.pl|ukwarez\.org |emule- island\.com|emulek\.com|emulek\.info|exsite\.pl|darmowefilmy\.eu|download24\.li|megawarez\.eu|ajo\.pl|btgigs\.info|chomikuj\.pl|dvdseed\.org|marekchrzanow\.beq\.pl|torrenty\.org)/") { set req.http.host = "gfx"; set req.url = "/busted.jpg"; return(lookup); } ## Serve expired objs from cache if backend is irresponsible set req.grace = 5m; ## Remove any query strings, this is all about static content, no cgi and apps here set req.url = regsub(req.url, "\?.*$", ""); ## Normalize encoding (http://varnish- cache.org/docs/tutorial/increasing_your_hitrate/#tutorial-increasing-your- hitrate) if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") { # No point in compressing these remove req.http.Accept-Encoding; } elsif (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } elsif (req.http.Accept-Encoding ~ "deflate") { set req.http.Accept-Encoding = "deflate"; } else { # unkown algorithm remove req.http.Accept-Encoding; } } ## Purging interface - verify ACL if (req.request == "PURGE" && req.http.host ~ "gfx") { if(client.ip ~ fw_purge) { #set req.http.x = req.url "#" req.http.host "#"; #purge_hash(req.http.x); set req.http.host = "gfx"; ban("req.url ~ ^" + req.url + "$"); error 100 "Continue."; } else { error 405 "Not allowed."; } } # Normalize the Host: header if (req.http.host ~ "^[123].cdn$") { set req.http.host = "gfx"; } # Normalize the Host: header if (req.http.host == "ad" || req.http.host ~ "^[0-9a-z]\.ad$") { set req.http.host = "ad"; } # all GET to Host: (*.ad|ad|gfx) should be looked up in cache if (req.request == "GET") { if (req.http.host == "gfx" || 
req.http.host == "ad") { unset req.http.cookie; unset req.http.authorization; return(lookup); } } # DON'T lookup if non-RFC2616 or CONNECT which is weird if (req.request != "GET" && req.request != "HEAD") { error 405 "Not allowed."; } # DON'T lookup if HTTP-AUTH if (req.http.Authorization) { return(pass); } return(lookup); } sub vcl_fetch { ## Serve expired objs from cache if backend is irresponsible set req.grace = 5m; # CACHE all responses for GET requests to Host: gfx if (req.request == "GET" && req.http.host == "gfx") { unset beresp.http.Set-Cookie; return(deliver); } # CACHE 200's only! if (beresp.status >= 300) { return(hit_for_pass); } ## if varnish thinks it's not cachable, don't cache it. if (beresp.ttl == 0s && beresp.grace == 0s) { return(hit_for_pass); } ## if the object was trying to set a cookie, ## it probably shouldn't be cached. if (beresp.http.Set-Cookie) { return(hit_for_pass); } ## if the object is specifically saying 'don't cache me' - ## obey it. if(beresp.http.Pragma ~ "no-cache" || beresp.http.Cache-Control ~ "no-cache" || beresp.http.Cache-Control ~ "private") { return(hit_for_pass); } ## if the object is saying how long to cache it, you ## can rely on the fact that it is cachable. if (beresp.http.Cache-Control ~ "max-age") { unset beresp.http.Set-Cookie; return(deliver); } return(hit_for_pass); } #sub vcl_hit { #set obj.http.X-Cache = "TCP_HIT from " server.hostname; #} sub vcl_deliver { remove resp.http.Via; remove resp.http.Age; remove resp.http.X-Varnish; if (obj.hits > 0) { set resp.http.X-Cache = "HIT from " + server.hostname; set resp.http.X-Cache-Hits = obj.hits; } else { set resp.http.X-Cache = "MISS from " + server.hostname; } return(deliver); } -- EOF --- Nov 3 17:12:52 varnish1 Condition((o)->magic == 0x32851d42) not true. 
Nov 3 17:12:52 varnish1 thread = (cache-worker) Nov 3 17:12:52 varnish1 ident = Linux,2.6.18-238.el5,x86_64,-smalloc,-spersistent,-smalloc,-hclassic,epoll Nov 3 17:12:52 varnish1 Backtrace: Nov 3 17:12:52 varnish1 0x42c7a6: /usr/sbin/varnishd [0x42c7a6] Nov 3 17:12:52 varnish1 0x448d87: /usr/sbin/varnishd [0x448d87] Nov 3 17:12:52 varnish1 0x425873: /usr/sbin/varnishd(HSH_Lookup+0x3a3) [0x425873] Nov 3 17:12:52 varnish1 0x41423e: /usr/sbin/varnishd [0x41423e] Nov 3 17:12:52 varnish1 0x417a92: /usr/sbin/varnishd(CNT_Session+0x9d2) [0x417a92] Nov 3 17:12:52 varnish1 0x42efb8: /usr/sbin/varnishd [0x42efb8] Nov 3 17:12:52 varnish1 0x42e19b: /usr/sbin/varnishd [0x42e19b] Nov 3 17:12:52 varnish1 0x391600673d: /lib64/libpthread.so.0 [0x391600673d] Nov 3 17:12:52 varnish1 0x39158d3f6d: /lib64/libc.so.6(clone+0x6d) [0x39158d3f6d] Nov 3 17:12:52 varnish1 sp = 0x2aaf8a89b008 { Nov 3 17:12:52 varnish1 fd = 117, id = 117, xid = 217771487, Nov 3 17:12:52 varnish1 client = yyy.yyy.yyy.yyy 51609, Nov 3 17:12:52 varnish1 step = STP_LOOKUP, Nov 3 17:12:52 varnish1 handling = hash, Nov 3 17:12:52 varnish1 restarts = 0, esi_level = 0 Nov 3 17:12:52 varnish1 flags = Nov 3 17:12:52 varnish1 bodystatus = 4 Nov 3 17:12:52 varnish1 ws = 0x2aaf8a89b080 { Nov 3 17:12:52 varnish1 id = "sess", Nov 3 17:12:52 varnish1 {s,f,r,e} = {0x2aaf8a89bc90,+488,+65536,+65536}, Nov 3 17:12:52 varnish1 }, Nov 3 17:12:52 varnish1 http[req] = { Nov 3 17:12:52 varnish1 ws = 0x2aaf8a89b080[sess] Nov 3 17:12:52 varnish1 "GET", Nov 3 17:12:52 varnish1 "/wv/62/39/26239/wyscig.26239.0.jpg", Nov 3 17:12:52 varnish1 "HTTP/1.1", Nov 3 17:12:52 varnish1 "Accept: image/png, image/svg+xml, image/*;q=0.8, */*;q=0.5", Nov 3 17:12:52 varnish1 "Referer: http://www.filmweb.pl/video/trailer/nr+1-25484", Nov 3 17:12:52 varnish1 "Accept-Language: pl-PL", Nov 3 17:12:52 varnish1 "User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)", Nov 3 17:12:52 varnish1 "Connection: Keep-Alive", Nov 3 17:12:52 
varnish1 "host: gfx", Nov 3 17:12:52 varnish1 }, Nov 3 17:12:52 varnish1 worker = 0x2aaf70e02cf0 { Nov 3 17:12:52 varnish1 ws = 0x2aaf70e02f30 { Nov 3 17:12:52 varnish1 id = "wrk", Nov 3 17:12:52 varnish1 {s,f,r,e} = {0x2aaf70df0ca0,0x2aaf70df0ca0,(nil),+65536}, Nov 3 17:12:52 varnish1 }, Nov 3 17:12:52 varnish1 }, Nov 3 17:12:52 varnish1 vcl = { Nov 3 17:12:52 varnish1 srcname = { Nov 3 17:12:52 varnish1 "input", Nov 3 17:12:52 varnish1 "Default", Nov 3 17:12:52 varnish1 "/etc/varnish/url.vcl", Nov 3 17:12:52 varnish1 }, Nov 3 17:12:52 varnish1 }, Nov 3 17:12:52 varnish1 }, Nov 3 17:52:31 varnish1 Condition((o)->magic == 0x32851d42) not true. Nov 3 17:52:31 varnish1 thread = (cache-worker) Nov 3 17:52:31 varnish1 ident = Linux,2.6.18-238.el5,x86_64,-smalloc,-spersistent,-smalloc,-hclassic,epoll Nov 3 17:52:31 varnish1 Backtrace: Nov 3 17:52:31 varnish1 0x42c7a6: /usr/sbin/varnishd [0x42c7a6] Nov 3 17:52:31 varnish1 0x448d87: /usr/sbin/varnishd [0x448d87] Nov 3 17:52:31 varnish1 0x425873: /usr/sbin/varnishd(HSH_Lookup+0x3a3) [0x425873] Nov 3 17:52:31 varnish1 0x41423e: /usr/sbin/varnishd [0x41423e] Nov 3 17:52:31 varnish1 0x417a92: /usr/sbin/varnishd(CNT_Session+0x9d2) [0x417a92] Nov 3 17:52:31 varnish1 0x42efb8: /usr/sbin/varnishd [0x42efb8] Nov 3 17:52:31 varnish1 0x42e19b: /usr/sbin/varnishd [0x42e19b] Nov 3 17:52:31 varnish1 0x391600673d: /lib64/libpthread.so.0 [0x391600673d] Nov 3 17:52:31 varnish1 0x39158d3f6d: /lib64/libc.so.6(clone+0x6d) [0x39158d3f6d] Nov 3 17:52:31 varnish1 sp = 0x2aaf836bd008 { Nov 3 17:52:31 varnish1 fd = 2272, id = 2272, xid = 1537218534, Nov 3 17:52:31 varnish1 client = yyy.yyy.yyy.yyy 21213, Nov 3 17:52:31 varnish1 step = STP_LOOKUP, Nov 3 17:52:31 varnish1 handling = hash, Nov 3 17:52:31 varnish1 restarts = 0, esi_level = 0 Nov 3 17:52:31 varnish1 flags = Nov 3 17:52:31 varnish1 bodystatus = 4 Nov 3 17:52:31 varnish1 ws = 0x2aaf836bd080 { Nov 3 17:52:31 varnish1 id = "sess", Nov 3 17:52:31 varnish1 {s,f,r,e} = 
{0x2aaf836bdc90,+680,+65536,+65536}, Nov 3 17:52:31 varnish1 }, Nov 3 17:52:31 varnish1 http[req] = { Nov 3 17:52:31 varnish1 ws = 0x2aaf836bd080[sess] Nov 3 17:52:31 varnish1 "GET", Nov 3 17:52:31 varnish1 "/ph/17/60/121760/99950.1.jpg", Nov 3 17:52:31 varnish1 "HTTP/1.1", Nov 3 17:52:31 varnish1 "User-Agent: Opera/9.80 (Windows NT 6.1; U; pl) Presto/2.7.62 Version/11.00", Nov 3 17:52:31 varnish1 "Accept: text/html, application/xml;q=0.9, application/xhtml+xml, image/png, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1", Nov 3 17:52:31 varnish1 "Accept-Language: pl-PL,pl;q=0.9,en;q=0.8", Nov 3 17:52:31 varnish1 "Accept-Charset: iso-8859-1, utf-8, utf-16, *;q=0.1", Nov 3 17:52:31 varnish1 "Referer: http://www.filmweb.pl/person/Naomie+Harris-84518", Nov 3 17:52:31 varnish1 "Connection: Keep-Alive, TE", Nov 3 17:52:31 varnish1 "TE: deflate, gzip, chunked, identity, trailers", Nov 3 17:52:31 varnish1 "host: gfx", Nov 3 17:52:31 varnish1 }, Nov 3 17:52:31 varnish1 worker = 0x2aabc5600cf0 { Nov 3 17:52:31 varnish1 ws = 0x2aabc5600f30 { Nov 3 17:52:31 varnish1 id = "wrk", Nov 3 17:52:31 varnish1 {s,f,r,e} = {0x2aabc55eeca0,0x2aabc55eeca0,(nil),+65536}, Nov 3 17:52:31 varnish1 }, Nov 3 17:52:31 varnish1 }, Nov 3 17:52:31 varnish1 vcl = { Nov 3 17:52:31 varnish1 srcname = { Nov 3 17:52:31 varnish1 "input", Nov 3 17:52:31 varnish1 "Default", Nov 3 17:52:31 varnish1 "/etc/varnish/url.vcl", Nov 3 17:52:31 varnish1 }, Nov 3 17:52:31 varnish1 }, Nov 3 17:52:31 varnish1 }, Nov 3 18:32:47 varnish1 Condition((o)->magic == 0x32851d42) not true. 
Nov 3 18:32:47 varnish1 thread = (cache-worker) Nov 3 18:32:47 varnish1 ident = Linux,2.6.18-238.el5,x86_64,-smalloc,-spersistent,-smalloc,-hclassic,epoll Nov 3 18:32:47 varnish1 Backtrace: Nov 3 18:32:47 varnish1 0x42c7a6: /usr/sbin/varnishd [0x42c7a6] Nov 3 18:32:47 varnish1 0x448d87: /usr/sbin/varnishd [0x448d87] Nov 3 18:32:47 varnish1 0x425873: /usr/sbin/varnishd(HSH_Lookup+0x3a3) [0x425873] Nov 3 18:32:47 varnish1 0x41423e: /usr/sbin/varnishd [0x41423e] Nov 3 18:32:47 varnish1 0x417a92: /usr/sbin/varnishd(CNT_Session+0x9d2) [0x417a92] Nov 3 18:32:47 varnish1 0x42efb8: /usr/sbin/varnishd [0x42efb8] Nov 3 18:32:47 varnish1 0x42e19b: /usr/sbin/varnishd [0x42e19b] Nov 3 18:32:47 varnish1 0x391600673d: /lib64/libpthread.so.0 [0x391600673d] Nov 3 18:32:47 varnish1 0x39158d3f6d: /lib64/libc.so.6(clone+0x6d) [0x39158d3f6d] Nov 3 18:32:47 varnish1 sp = 0x2aaf88dbd008 { Nov 3 18:32:47 varnish1 fd = 2366, id = 2366, xid = 17518841, Nov 3 18:32:47 varnish1 client = yyy.yyy.yyy.yyy 37724, Nov 3 18:32:47 varnish1 step = STP_LOOKUP, Nov 3 18:32:47 varnish1 handling = hash, Nov 3 18:32:47 varnish1 restarts = 0, esi_level = 0 Nov 3 18:32:47 varnish1 flags = Nov 3 18:32:47 varnish1 bodystatus = 4 Nov 3 18:32:47 varnish1 ws = 0x2aaf88dbd080 { Nov 3 18:32:47 varnish1 id = "sess", Nov 3 18:32:47 varnish1 {s,f,r,e} = {0x2aaf88dbdc90,+528,+65536,+65536}, Nov 3 18:32:47 varnish1 }, Nov 3 18:32:47 varnish1 http[req] = { Nov 3 18:32:47 varnish1 ws = 0x2aaf88dbd080[sess] Nov 3 18:32:47 varnish1 "GET", Nov 3 18:32:47 varnish1 "/po/59/57/105957/6986464.0.jpg", Nov 3 18:32:47 varnish1 "HTTP/1.1", Nov 3 18:32:47 varnish1 "User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:7.0.1) Gecko/20100101 Firefox/7.0.1", Nov 3 18:32:47 varnish1 "Accept: image/png,image/*;q=0.8,*/*;q=0.5", Nov 3 18:32:47 varnish1 "Accept-Language: pl,en-us;q=0.7,en;q=0.3", Nov 3 18:32:47 varnish1 "Accept-Charset: ISO-8859-2,utf-8;q=0.7,*;q=0.7", Nov 3 18:32:47 varnish1 "Connection: keep-alive", Nov 3 18:32:47 varnish1 
"Referer: http://www.filmweb.pl/film/Desperaci-2000-11561", Nov 3 18:32:47 varnish1 "host: gfx", Nov 3 18:32:47 varnish1 }, Nov 3 18:32:47 varnish1 worker = 0x2aab46bffcf0 { Nov 3 18:32:47 varnish1 ws = 0x2aab46bfff30 { Nov 3 18:32:47 varnish1 id = "wrk", Nov 3 18:32:47 varnish1 {s,f,r,e} = {0x2aab46bedca0,0x2aab46bedca0,(nil),+65536}, Nov 3 18:32:47 varnish1 }, Nov 3 18:32:47 varnish1 }, Nov 3 18:32:47 varnish1 vcl = { Nov 3 18:32:47 varnish1 srcname = { Nov 3 18:32:47 varnish1 "input", Nov 3 18:32:47 varnish1 "Default", Nov 3 18:32:47 varnish1 "/etc/varnish/url.vcl", Nov 3 18:32:47 varnish1 }, Nov 3 18:32:47 varnish1 }, Nov 3 18:32:47 varnish1 }, Nov 3 18:32:47 varnish1 CLI communication error (hdr) Nov 3 18:32:47 varnish1 Condition(sg1->p.offset != sg->p.offset) not true. Nov 3 18:32:47 varnish1 thread = (cache-main) Nov 3 18:32:47 varnish1 ident = Linux,2.6.18-238.el5,x86_64,-smalloc,-spersistent,-smalloc,-hclassic,no_waiter Nov 3 18:32:47 varnish1 Backtrace: Nov 3 18:32:47 varnish1 0x42c7a6: /usr/sbin/varnishd [0x42c7a6] Nov 3 18:32:47 varnish1 0x4472f5: /usr/sbin/varnishd [0x4472f5] Nov 3 18:32:47 varnish1 0x4474bb: /usr/sbin/varnishd [0x4474bb] Nov 3 18:32:47 varnish1 0x444d57: /usr/sbin/varnishd(STV_open+0x27) [0x444d57] Nov 3 18:32:47 varnish1 0x42b525: /usr/sbin/varnishd(child_main+0xc5) [0x42b525] Nov 3 18:32:47 varnish1 0x43d5ec: /usr/sbin/varnishd [0x43d5ec] Nov 3 18:32:47 varnish1 0x43de7c: /usr/sbin/varnishd [0x43de7c] Nov 3 18:32:47 varnish1 0x35f2a094c7: /usr/lib64/varnish/libvarnish.so [0x35f2a094c7] Nov 3 18:32:47 varnish1 0x35f2a09b58: /usr/lib64/varnish/libvarnish.so(vev_schedule+0x88) [0x35f2a09b58] Nov 3 18:32:47 varnish1 0x43d7c2: /usr/sbin/varnishd(MGT_Run+0x132) [0x43d7c2] -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 4 12:33:40 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 04 Nov 2011 12:33:40 -0000 Subject: [Varnish] #1050: Condition((o)->magic == 0x32851d42) not 
true.
In-Reply-To: <044.e48e339840d9d0f9a8cfbea9ad3ac0a2@varnish-cache.org>
References: <044.e48e339840d9d0f9a8cfbea9ad3ac0a2@varnish-cache.org>
Message-ID: <053.f941df00aad2ea47ffa734dd4f0b30db@varnish-cache.org>

#1050: Condition((o)->magic == 0x32851d42) not true.
--------------------+-------------------------------------------------------
 Reporter:  mgkula  |       Type:  defect
   Status:  new     |   Priority:  normal
Milestone:          |  Component:  varnishd
  Version:  3.0.2   |   Severity:  normal
 Keywords:          |
--------------------+-------------------------------------------------------
Comment(by mgkula):

 Sorry, please remove this ticket. Thank you.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Fri Nov 4 13:02:26 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 04 Nov 2011 13:02:26 -0000
Subject: [Varnish] #1050: Condition((o)->magic == 0x32851d42) not true.
In-Reply-To: <044.e48e339840d9d0f9a8cfbea9ad3ac0a2@varnish-cache.org>
References: <044.e48e339840d9d0f9a8cfbea9ad3ac0a2@varnish-cache.org>
Message-ID: <053.2189356e433ca34460dee1ebfc64f057@varnish-cache.org>

#1050: Condition((o)->magic == 0x32851d42) not true.
----------------------+-----------------------------------------------------
  Reporter:  mgkula   |       Type:  defect
    Status:  closed   |   Priority:  normal
 Milestone:           |  Component:  varnishd
   Version:  3.0.2    |   Severity:  normal
Resolution:  invalid  |   Keywords:
----------------------+-----------------------------------------------------
Changes (by tfheen):

 * status:  new => closed
 * resolution:  => invalid

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Sat Nov 5 13:28:43 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Sat, 05 Nov 2011 13:28:43 -0000
Subject: [Varnish] #1051: child process died
Message-ID: <045.f22d848aeef4ec1e301fd6d2603e1e88@varnish-cache.org>

#1051: child process died
---------------------+------------------------------------------------------
 Reporter:  sreniaw  |       Type:  defect
   Status:  new      |   Priority:  normal
Milestone:           |  Component:  varnishd
  Version:  3.0.2    |   Severity:  normal
 Keywords:           |
---------------------+------------------------------------------------------
 I am having a problem with 64-bit varnish on CentOS 5.5. The child
 process died, and a new process is not respawning. Can you help?

 {{{
 Nov 5 13:37:41 blade414 Condition((const void*)(o) >= (const void*)((sg->sc)->base) && (const void*)(o) < (const void *)((sg->sc)->base + (sg->sc)->mediasize)) not true.
Nov 5 13:37:41 blade414 thread = (cache-worker) Nov 5 13:37:41 blade414 ident = Linux,2.6.18-238.el5,x86_64,-smalloc,-spersistent,-smalloc,-hclassic,epoll Nov 5 13:37:41 blade414 Backtrace: Nov 5 13:37:41 blade414 0x42c7a6: /usr/sbin/varnishd [0x42c7a6] Nov 5 13:37:41 blade414 0x448edc: /usr/sbin/varnishd [0x448edc] Nov 5 13:37:41 blade414 0x425873: /usr/sbin/varnishd(HSH_Lookup+0x3a3) [0x425873] Nov 5 13:37:41 blade414 0x41423e: /usr/sbin/varnishd [0x41423e] Nov 5 13:37:41 blade414 0x417a92: /usr/sbin/varnishd(CNT_Session+0x9d2) [0x417a92] Nov 5 13:37:41 blade414 0x42efb8: /usr/sbin/varnishd [0x42efb8] Nov 5 13:37:41 blade414 0x42e19b: /usr/sbin/varnishd [0x42e19b] Nov 5 13:37:41 blade414 0x38be20673d: /lib64/libpthread.so.0 [0x38be20673d] Nov 5 13:37:41 blade414 0x38bdad3f6d: /lib64/libc.so.6(clone+0x6d) [0x38bdad3f6d] Nov 5 13:37:41 blade414 sp = 0x2aaf85cbd008 { Nov 5 13:37:41 blade414 fd = 1692, id = 1692, xid = 518917304, Nov 5 13:37:41 blade414 client = 1.1.1.1 32755, Nov 5 13:37:41 blade414 step = STP_LOOKUP, Nov 5 13:37:41 blade414 handling = hash, Nov 5 13:37:41 blade414 restarts = 0, esi_level = 0 Nov 5 13:37:41 blade414 flags = Nov 5 13:37:41 blade414 bodystatus = 4 Nov 5 13:37:41 blade414 ws = 0x2aaf85cbd080 { Nov 5 13:37:41 blade414 id = "sess", Nov 5 13:37:41 blade414 {s,f,r,e} = {0x2aaf85cbdc90,+504,+65536,+65536}, Nov 5 13:37:41 blade414 }, Nov 5 13:37:41 blade414 http[req] = { Nov 5 13:37:41 blade414 ws = 0x2aaf85cbd080[sess] Nov 5 13:37:41 blade414 "GET", Nov 5 13:37:41 blade414 "/ph/az/22/22/az222289.jpg", Nov 5 13:37:41 blade414 "HTTP/1.1", Nov 5 13:37:41 blade414 "User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:7.0.1) Gecko/20100101 Firefox/7.0.1", Nov 5 13:37:41 blade414 "Accept: image/png,image/*;q=0.8,*/*;q=0.5", Nov 5 13:37:41 blade414 "Accept-Language: pl,en-us;q=0.7,en;q=0.3", Nov 5 13:37:41 blade414 "Accept-Charset: ISO-8859-2,utf-8;q=0.7,*;q=0.7", Nov 5 13:37:41 blade414 "Connection: keep-alive", Nov 5 13:37:41 blade414 "host: static", Nov 
5 13:37:41 blade414 }, Nov 5 13:37:41 blade414 worker = 0x2aaf4ce0ccf0 { Nov 5 13:37:41 blade414 ws = 0x2aaf4ce0cf30 { Nov 5 13:37:41 blade414 id = "wrk", Nov 5 13:37:41 blade414 {s,f,r,e} = {0x2aaf4cdfaca0,0x2aaf4cdfaca0,(nil),+65536}, Nov 5 13:37:41 blade414 }, Nov 5 13:37:41 blade414 }, Nov 5 13:37:41 blade414 vcl = { Nov 5 13:37:41 blade414 srcname = { Nov 5 13:37:41 blade414 "input", Nov 5 13:37:41 blade414 "Default", Nov 5 13:37:41 blade414 "/etc/varnish/url.vcl", Nov 5 13:37:41 blade414 }, Nov 5 13:37:41 blade414 }, Nov 5 13:37:41 blade414 }, Nov 5 13:37:41 blade414 CLI communication error (hdr) Nov 5 13:37:41 blade414 Condition(sg1->p.offset != sg->p.offset) not true. Nov 5 13:37:41 blade414 thread = (cache-main) Nov 5 13:37:41 blade414 ident = Linux,2.6.18-238.el5,x86_64,-smalloc,-spersistent,-smalloc,-hclassic,no_waiter Nov 5 13:37:41 blade414 Backtrace: Nov 5 13:37:41 blade414 0x42c7a6: /usr/sbin/varnishd [0x42c7a6] Nov 5 13:37:41 blade414 0x4472f5: /usr/sbin/varnishd [0x4472f5] Nov 5 13:37:41 blade414 0x4474bb: /usr/sbin/varnishd [0x4474bb] Nov 5 13:37:41 blade414 0x444d57: /usr/sbin/varnishd(STV_open+0x27) [0x444d57] Nov 5 13:37:41 blade414 0x42b525: /usr/sbin/varnishd(child_main+0xc5) [0x42b525] Nov 5 13:37:41 blade414 0x43d5ec: /usr/sbin/varnishd [0x43d5ec] Nov 5 13:37:41 blade414 0x43de7c: /usr/sbin/varnishd [0x43de7c] Nov 5 13:37:41 blade414 0x3f88c094c7: /usr/lib64/varnish/libvarnish.so [0x3f88c094c7] Nov 5 13:37:41 blade414 0x3f88c09b58: /usr/lib64/varnish/libvarnish.so(vev_schedule+0x88) [0x3f88c09b58] Nov 5 13:37:41 blade414 0x43d7c2: /usr/sbin/varnishd(MGT_Run+0x132) [0x43d7c2] }}} {{{ USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 28818 0.0 0.0 31570672 1296 ? 
Ss Oct31 0:00 /usr/sbin/varnishd }}} {{{ -P /var/run/varnish.pid -f /etc/varnish/default.vcl -p lru_interval 20 -p cli_timeout 60 -h classic,500009 -s malloc,30G -s persistent,/var/lib/varnish/store/silo,30G -p thread_pool_min 500 -p thread_pool_max 4000 -p thread_pools 4 -p thread_pool_add_delay 2 -p session_linger 100 -p send_timeout 120 -p sess_timeout 10 -p listen_depth 4096 -p nuke_limit 50 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Nov 5 13:58:51 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 05 Nov 2011 13:58:51 -0000 Subject: [Varnish] #1051: child process died In-Reply-To: <045.f22d848aeef4ec1e301fd6d2603e1e88@varnish-cache.org> References: <045.f22d848aeef4ec1e301fd6d2603e1e88@varnish-cache.org> Message-ID: <054.cdb9056a6efbf2885bd809714308e3e4@varnish-cache.org> #1051: child process died ---------------------+------------------------------------------------------ Reporter: sreniaw | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: | ---------------------+------------------------------------------------------ Comment(by sreniaw): I am not able to start varnish with persistent cache. {{{ # df -h /var/lib/varnish/store/ Filesystem Size Used Avail Use% Mounted on /dev/mapper/Varnish 40G 28G 9.7G 75% /var/lib/varnish/store # ps root 26146 0.0 0.0 31569644 1208 ? Ss 14:52 0:00 /usr/sbin/varnishd }}} After disabling the option "-s" varnish is starting. {{{ root 23326 0.0 0.0 112368 1240 ? Ss 14:50 0:00 /usr/sbin/varnishd varnish 23327 0.2 0.3 20746912 106348 ? Sl 14:50 0:00 \_ /usr/sbin/varnishd }}} {{{ Nov 5 14:52:31 blade414 CLI communication error (hdr) Nov 5 14:52:31 blade414 Condition(sg1->p.offset != sg->p.offset) not true. 
Nov 5 14:52:31 blade414 thread = (cache-main) Nov 5 14:52:31 blade414 ident = Linux,2.6.18-238.el5,x86_64,-smalloc,-spersistent,-smalloc,-hclassic,no_waiter Nov 5 14:52:31 blade414 Backtrace: Nov 5 14:52:31 blade414 0x42c7a6: /usr/sbin/varnishd [0x42c7a6] Nov 5 14:52:31 blade414 0x4472f5: /usr/sbin/varnishd [0x4472f5] Nov 5 14:52:31 blade414 0x4474bb: /usr/sbin/varnishd [0x4474bb] Nov 5 14:52:31 blade414 0x444d57: /usr/sbin/varnishd(STV_open+0x27) [0x444d57] Nov 5 14:52:31 blade414 0x42b525: /usr/sbin/varnishd(child_main+0xc5) [0x42b525] Nov 5 14:52:31 blade414 0x43d5ec: /usr/sbin/varnishd [0x43d5ec] Nov 5 14:52:31 blade414 0x43d852: /usr/sbin/varnishd(MGT_Run+0x1c2) [0x43d852] Nov 5 14:52:31 blade414 0x44cacb: /usr/sbin/varnishd(main+0xd1b) [0x44cacb] Nov 5 14:52:31 blade414 0x38bda1d994: /lib64/libc.so.6(__libc_start_main+0xf4) [0x38bda1d994] Nov 5 14:52:31 blade414 0x40ba79: /usr/sbin/varnishd(VCLS_func_help+0x81) [0x40ba79] Nov 5 14:52:33 blade414 CLI communication error (hdr) Nov 5 14:52:33 blade414 Condition(sg1->p.offset != sg->p.offset) not true. 
Nov 5 14:52:33 blade414 thread = (cache-main) Nov 5 14:52:33 blade414 ident = Linux,2.6.18-238.el5,x86_64,-smalloc,-spersistent,-smalloc,-hclassic,no_waiter Nov 5 14:52:33 blade414 Backtrace: Nov 5 14:52:33 blade414 0x42c7a6: /usr/sbin/varnishd [0x42c7a6] Nov 5 14:52:33 blade414 0x4472f5: /usr/sbin/varnishd [0x4472f5] Nov 5 14:52:33 blade414 0x4474bb: /usr/sbin/varnishd [0x4474bb] Nov 5 14:52:33 blade414 0x444d57: /usr/sbin/varnishd(STV_open+0x27) [0x444d57] Nov 5 14:52:33 blade414 0x42b525: /usr/sbin/varnishd(child_main+0xc5) [0x42b525] Nov 5 14:52:33 blade414 0x43d5ec: /usr/sbin/varnishd [0x43d5ec] Nov 5 14:52:33 blade414 0x43d852: /usr/sbin/varnishd(MGT_Run+0x1c2) [0x43d852] Nov 5 14:52:33 blade414 0x44cacb: /usr/sbin/varnishd(main+0xd1b) [0x44cacb] Nov 5 14:52:33 blade414 0x38bda1d994: /lib64/libc.so.6(__libc_start_main+0xf4) [0x38bda1d994] Nov 5 14:52:33 blade414 0x40ba79: /usr/sbin/varnishd(VCLS_func_help+0x81) [0x40ba79] }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 7 11:13:00 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 07 Nov 2011 11:13:00 -0000 Subject: [Varnish] #1051: child process died In-Reply-To: <045.f22d848aeef4ec1e301fd6d2603e1e88@varnish-cache.org> References: <045.f22d848aeef4ec1e301fd6d2603e1e88@varnish-cache.org> Message-ID: <054.6c777dd1b8a0a4ea0fcf25ac22acb8b3@varnish-cache.org> #1051: child process died ----------------------+----------------------------------------------------- Reporter: sreniaw | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Resolution: fixed | Keywords: ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: Your -spersistent silo has been corrupted, and you should remove the file entirely. Please tell me if it happens again. 
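For anyone hitting the same assert, the recovery described above might look like the following. This is a sketch only, assuming the silo path from the reporter's startup arguments earlier in this thread (`/var/lib/varnish/store/silo`); adapt paths and arguments to your installation.

```shell
# Sketch, assuming the silo path from the reporter's startup arguments.
# Stop varnishd first so the silo file is no longer in use.
pkill varnishd

# Remove the corrupted silo entirely; varnishd creates a fresh, empty
# silo at this path the next time it starts with -s persistent.
rm -f /var/lib/varnish/store/silo

# Restart with the original storage arguments (abbreviated here).
/usr/sbin/varnishd -f /etc/varnish/default.vcl \
    -s malloc,30G -s persistent,/var/lib/varnish/store/silo,30G
```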
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Nov 8 15:09:29 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 08 Nov 2011 15:09:29 -0000 Subject: [Varnish] #1052: Persistent storage: less scary message when a silo is freshly created Message-ID: <046.a299a393f221b68cf1b4dd30204576e2@varnish-cache.org> #1052: Persistent storage: less scary message when a silo is freshly created ----------------------+----------------------------------------------------- Reporter: dumbbell | Type: enhancement Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: trivial Keywords: | ----------------------+----------------------------------------------------- When varnishd initializes persistent storage and a new silo must be created, it displays a warning saying that the silo couldn't be reloaded. The attached patch simply checks the value returned by STV_GetFile() and displays a more informative message when the silo was created. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Nov 8 15:21:18 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 08 Nov 2011 15:21:18 -0000 Subject: [Varnish] #1053: Persistent storage: space leakage Message-ID: <046.15b4aca0756757e34fe8cc21c10552fc@varnish-cache.org> #1053: Persistent storage: space leakage ----------------------+----------------------------------------------------- Reporter: dumbbell | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: major Keywords: | ----------------------+----------------------------------------------------- With persistent storage, at some point in time, the silo will have the following layout: {{{ |xxxxxxxESxxxx_| }}} where: * `S`: start of the segments list * `E`: end of the segments list * `x`: segments in between * `_`: unused space In such a situation, the `smp_open_segs` function (source:bin/varnishd/storage/storage_persistent.c) is responsible for finding free space. To do this, it drops the first elements of the list (starting from S) until it has "`free_reserve`" bytes of free space between `E` and the new `S`: {{{ |xxxxxxxE___Sx_| ^^^ free_reserve }}} When the segments at the tail of the silo are all cleared and there's still not enough space, the function starts to reclaim space at the front of the silo, until it reaches the `free_reserve`: {{{ |__SxxxxE______| ^^ free_reserve }}} It doesn't take into account the space freed at the tail. Unfortunately, when working on the tail of the silo, this function only considers the distance between `E` and the segment closest to the end of the silo. Therefore, it may find there's not enough space to satisfy `free_reserve` between those two points but there is between `E` and the end of the silo: {{{ |xxxxxxxE____S_| ^^^^^^ free_reserve (but not between E and S) }}} In this special case, it wraps the list too early. 
And later, when the same situation occurs, it won't try to reclaim the space between the segment where the list wraps and the end of the silo: {{{ |xxxxESxx______| ^^^^^^ leaked space }}} The bugfix (patch attached) consists of checking for this situation before reclaiming space at the front of the silo. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Nov 8 15:39:30 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 08 Nov 2011 15:39:30 -0000 Subject: [Varnish] #1053: Persistent storage: space leakage In-Reply-To: <046.15b4aca0756757e34fe8cc21c10552fc@varnish-cache.org> References: <046.15b4aca0756757e34fe8cc21c10552fc@varnish-cache.org> Message-ID: <055.9b273485c2fd391dcf1355635b55b33a@varnish-cache.org> #1053: Persistent storage: space leakage ----------------------+----------------------------------------------------- Reporter: dumbbell | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: major Keywords: | ----------------------+----------------------------------------------------- Comment(by dumbbell): I forgot to mention how I tested the patch... In smp_open_segs(), I added the following printf, to know where new segments will be stored after the silo is reloaded: {{{ printf("Free offset: %lu\n", sc->free_offset); }}} (after line 180 in source:bin/varnishd/storage/storage_persistent.c). I then started with a freshly created silo of 5Mb and used curl(1) to fill the cache. * With an unpatched Varnish, the maximum free offset decreases after each complete "rewrite" of the silo. This continues until this maximum free offset is too low and the process calls pan_ic() (caused by "`assert (l >= sc->free_reserve);`"): there aren't `free_reserve` bytes between "`se`" and the element closest to the end of the silo AND there aren't `free_reserve` bytes between the first offset (sc->ident->stuff[SMP_SPC_STUFF]) and "se". 
* With a patched Varnish, the maximum free offset stays around `sc->mediasize`. I confirmed this with a bigger silo (1Gb) and more clients/requests. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 9 06:54:34 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 Nov 2011 06:54:34 -0000 Subject: [Varnish] #1054: Child not responding to CLI, killing it Message-ID: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> #1054: Child not responding to CLI, killing it ----------------------+----------------------------------------------------- Reporter: scorillo | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: | ----------------------+----------------------------------------------------- Every few days, sometimes several times per day, our cache gets emptied because varnishd restarts its children. {{{ [root at varnish ~]# cat /var/log/messages Nov 6 03:46:02 varnish kernel: imklog 4.6.2, log source = /proc/kmsg started. Nov 6 03:46:02 varnish rsyslogd: [origin software="rsyslogd" swVersion="4.6.2" x-pid="1173" x-info="http://www.rsyslog.com"] (re)start Nov 8 20:44:21 varnish varnishd[6719]: Child (31124) not responding to CLI, killing it. Nov 8 20:44:23 varnish abrt[16392]: file /usr/sbin/varnishd seems to be deleted Nov 8 20:44:23 varnish varnishd[6719]: Child (31124) not responding to CLI, killing it. Nov 8 20:44:23 varnish varnishd[6719]: Child (31124) died signal=3 (core dumped) Nov 8 20:44:23 varnish varnishd[6719]: child (16393) Started Nov 8 20:44:24 varnish varnishd[6719]: Child (16393) said Child starts Nov 8 20:45:28 varnish varnishd[6719]: Child (16393) not responding to CLI, killing it. Nov 8 20:45:29 varnish abrt[16861]: file /usr/sbin/varnishd seems to be deleted Nov 8 20:45:29 varnish varnishd[6719]: Child (16393) not responding to CLI, killing it. 
Nov 8 20:45:29 varnish varnishd[6719]: Child (16393) died signal=3 (core dumped) Nov 8 20:45:29 varnish varnishd[6719]: child (16862) Started Nov 8 20:45:29 varnish varnishd[6719]: Child (16862) said Child starts Nov 8 22:53:15 varnish varnishd[6719]: Child (16862) not responding to CLI, killing it. Nov 8 22:53:18 varnish abrt[28668]: file /usr/sbin/varnishd seems to be deleted Nov 8 22:53:18 varnish varnishd[6719]: Child (16862) not responding to CLI, killing it. Nov 8 22:53:18 varnish varnishd[6719]: Child (16862) died signal=3 (core dumped) Nov 8 22:53:18 varnish varnishd[6719]: child (28669) Started Nov 8 22:53:18 varnish varnishd[6719]: Child (28669) said Child starts Nov 9 01:22:21 varnish varnishd[6719]: Child (28669) not responding to CLI, killing it. Nov 9 01:22:25 varnish abrt[10394]: file /usr/sbin/varnishd seems to be deleted Nov 9 01:22:25 varnish varnishd[6719]: Child (28669) not responding to CLI, killing it. Nov 9 01:22:25 varnish varnishd[6719]: Child (28669) not responding to CLI, killing it. 
Nov 9 01:22:25 varnish varnishd[6719]: Child (28669) died signal=3 (core dumped) Nov 9 01:22:25 varnish varnishd[6719]: child (10395) Started Nov 9 01:22:25 varnish varnishd[6719]: Child (10395) said Child starts [root at varnish ~]# ls -al /usr/sbin/varnishd -rwxr-xr-x 1 root root 489880 Oct 24 10:52 /usr/sbin/varnishd [root at varnish ~]# rpm -qa | grep varnish varnish-libs-3.0.2-1.el5.x86_64 varnish-release-3.0-1.noarch varnish-3.0.2-1.el5.x86_64 [root at varnish ~]# uname -a Linux varnish 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux [root at varnish ~]# cat /etc/redhat-release CentOS Linux release 6.0 (Final) }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 9 07:55:03 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 Nov 2011 07:55:03 -0000 Subject: [Varnish] #1045: Ban lurker doesn't work anymore In-Reply-To: <042.4946905ce0b79e243e705a144d3ed1c5@varnish-cache.org> References: <042.4946905ce0b79e243e705a144d3ed1c5@varnish-cache.org> Message-ID: <051.20a1c4735e02cb0419dff813bb71d72e@varnish-cache.org> #1045: Ban lurker doesn't work anymore ---------------------+------------------------------------------------------ Reporter: Yvan | Type: defect Status: closed | Priority: high Milestone: | Component: varnishd Version: 3.0.2 | Severity: major Resolution: fixed | Keywords: ban lurker ---------------------+------------------------------------------------------ Changes (by phk): * status: new => closed * resolution: => fixed Comment: I just spent some quality time with the ban-lurker, it should work better now. Please report if not. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 9 11:58:04 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 Nov 2011 11:58:04 -0000 Subject: [Varnish] #1055: Long values of shm_reclen is unsafe Message-ID: <046.5d807a3a4de8e140197e920ca285cb4b@varnish-cache.org> #1055: Long values of shm_reclen is unsafe ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Setting and using long values of shm_reclen causes problems as we run into other limits which are not dealt with properly, most notably the worker workspace. See: {{{ varnishtest "Long shm_reclen" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { import std; sub vcl_recv { std.log("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"); } } -start -cliok "param.set shm_reclen 65535" client c1 { txreq rxresp } -run }}} Output: {{{ kristian at freud:~$ varnishtest overload.vtc # top TEST overload.vtc passed (0.480) kristian at freud:~$ varnishtest overload.vtc **** top 0.0 macro def varnishd=varnishd **** top 0.0 macro def pwd=/home/kristian **** top 0.0 macro def topbuild=/home/kristian/../.. 
**** top 0.0 macro def bad_ip=10.255.255.255 **** top 0.0 macro def tmpdir=/tmp/vtc.23549.1ee7ed79 * top 0.0 TEST overload.vtc starting *** top 0.0 varnishtest * top 0.0 TEST Long shm_reclen *** top 0.0 server ** s1 0.0 Starting server **** s1 0.0 macro def s1_addr=127.0.0.1 **** s1 0.0 macro def s1_port=60755 **** s1 0.0 macro def s1_sock=127.0.0.1 60755 * s1 0.0 Listen on 127.0.0.1 60755 *** top 0.0 varnish ** s1 0.0 Started on 127.0.0.1 60755 ** v1 0.0 Launch *** v1 0.0 CMD: cd ${pwd} && ${varnishd} -d -d -n /tmp/vtc.23549.1ee7ed79/v1 -l 10m,1m,- -p auto_restart=off -p syslog_cli_traffic=off -a '127.0.0.1:0' -S /tmp/vtc.23549.1ee7ed79/v1/_S -M '127.0.0.1 47106' -P /tmp/vtc.23549.1ee7ed79/v1/varnishd.pid -sfile,/tmp/vtc.23549.1ee7ed79/v1,10M *** v1 0.0 CMD: cd /home/kristian && varnishd -d -d -n /tmp/vtc.23549.1ee7ed79/v1 -l 10m,1m,- -p auto_restart=off -p syslog_cli_traffic=off -a '127.0.0.1:0' -S /tmp/vtc.23549.1ee7ed79/v1/_S -M '127.0.0.1 47106' -P /tmp/vtc.23549.1ee7ed79/v1/varnishd.pid -sfile,/tmp/vtc.23549.1ee7ed79/v1,10M *** v1 0.0 PID: 23555 *** v1 0.0 debug| Platform: Linux,2.6.38-12-generic- pae,i686,-sfile,-smalloc,-hcritbit\n *** v1 0.0 debug| 200 245 \n *** v1 0.0 debug| -----------------------------\n *** v1 0.0 debug| Varnish Cache CLI 1.0\n *** v1 0.0 debug| -----------------------------\n *** v1 0.0 debug| Linux,2.6.38-12-generic- pae,i686,-sfile,-smalloc,-hcritbit\n *** v1 0.0 debug| \n *** v1 0.0 debug| Type 'help' for command list.\n *** v1 0.0 debug| Type 'quit' to close CLI session.\n *** v1 0.0 debug| Type 'start' to launch worker process.\n *** v1 0.0 debug| \n **** v1 0.1 CLIPOLL 1 0x1 0x0 *** v1 0.1 CLI connection fd = 9 *** v1 0.1 CLI RX 107 **** v1 0.1 CLI RX| durjbesuecbyckgwozrzhzytnfqyucly\n **** v1 0.1 CLI RX| \n **** v1 0.1 CLI RX| Authentication required.\n **** v1 0.1 CLI TX| auth 2c03d88f4efe5c174cd115f35d4aa8e311707ce289d00a2f5a532007214ac023\n *** v1 0.1 CLI RX 200 **** v1 0.1 CLI RX| -----------------------------\n **** v1 
0.1 CLI RX| Varnish Cache CLI 1.0\n **** v1 0.1 CLI RX| -----------------------------\n **** v1 0.1 CLI RX| Linux,2.6.38-12-generic- pae,i686,-sfile,-smalloc,-hcritbit\n **** v1 0.1 CLI RX| \n **** v1 0.1 CLI RX| Type 'help' for command list.\n **** v1 0.1 CLI RX| Type 'quit' to close CLI session.\n **** v1 0.1 CLI RX| Type 'start' to launch worker process.\n **** v1 0.1 CLI TX| vcl.inline vcl1 << %XJEIFLH|)Xspa8P\n **** v1 0.1 CLI TX| backend s1 { .host = "127.0.0.1"; .port = "60755"; }\n **** v1 0.1 CLI TX| \n **** v1 0.1 CLI TX| \n **** v1 0.1 CLI TX| \timport std;\n **** v1 0.1 CLI TX| \n **** v1 0.1 CLI TX| \tsub vcl_recv {\n **** v1 0.1 CLI TX| \t\tstd.log("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa... *** v1 0.2 CLI RX 200 **** v1 0.2 CLI RX| VCL compiled. 
**** v1 0.2 CLI TX| vcl.use vcl1 *** v1 0.2 CLI RX 200 ** v1 0.2 Start **** v1 0.2 CLI TX| start *** v1 0.3 debug| child (23568) Started\n **** v1 0.3 vsl| 0 WorkThread - 0xb50d200c start **** v1 0.3 vsl| 0 CLI - Rd vcl.load "vcl1" ./vcl.5W0vwA9C.so **** v1 0.3 vsl| 0 CLI - Wr 200 36 Loaded "./vcl.5W0vwA9C.so" as "vcl1" **** v1 0.3 vsl| 0 CLI - Rd vcl.use "vcl1" **** v1 0.3 vsl| 0 CLI - Wr 200 0 **** v1 0.3 vsl| 0 CLI - Rd start **** v1 0.3 vsl| 0 Debug - Acceptor is epoll **** v1 0.3 vsl| 0 CLI - Wr 200 0 *** v1 0.3 CLI RX 200 **** v1 0.3 CLI TX| debug.xid 1000 *** v1 0.3 debug| Child (23568) said Not running as root, no priv- sep\n *** v1 0.3 debug| Child (23568) said Child starts\n *** v1 0.3 debug| Child (23568) said SMF.s0 mmap'ed 10485760 bytes of 10485760\n **** v1 0.3 vsl| 0 WorkThread - 0xb73ff00c start **** v1 0.3 vsl| 0 WorkThread - 0xb50c100c start **** v1 0.3 vsl| 0 WorkThread - 0xb50b000c start **** v1 0.3 vsl| 0 WorkThread - 0xb509f00c start **** v1 0.3 vsl| 0 WorkThread - 0xb508e00c start **** v1 0.3 vsl| 0 WorkThread - 0xb507d00c start **** v1 0.3 vsl| 0 WorkThread - 0xb506c00c start **** v1 0.3 vsl| 0 WorkThread - 0xb505b00c start **** v1 0.3 vsl| 0 WorkThread - 0xb504a00c start *** v1 0.3 CLI RX 200 **** v1 0.3 CLI RX| XID is 1000 **** v1 0.3 CLI TX| debug.listen_address **** v1 0.3 vsl| 0 CLI - Rd debug.xid 1000 **** v1 0.3 vsl| 0 CLI - Wr 200 11 XID is 1000 *** v1 0.3 CLI RX 200 **** v1 0.3 CLI RX| 127.0.0.1 45836\n ** v1 0.3 Listen on 127.0.0.1 45836 **** v1 0.3 macro def v1_addr=127.0.0.1 **** v1 0.3 macro def v1_port=45836 **** v1 0.3 macro def v1_sock=127.0.0.1 45836 **** v1 0.3 CLI TX| param.set shm_reclen 65535 **** v1 0.4 vsl| 0 CLI - Rd debug.listen_address **** v1 0.4 vsl| 0 CLI - Wr 200 16 127.0.0.1 45836 *** v1 0.4 CLI RX 200 ** v1 0.4 CLI 200 *** top 0.4 client ** c1 0.4 Starting client ** c1 0.4 Waiting for client *** c1 0.4 Connect to 127.0.0.1 45836 *** c1 0.4 connected fd 10 from 127.0.0.1 36171 to 127.0.0.1 45836 *** c1 0.4 
txreq **** c1 0.4 txreq| GET / HTTP/1.1\r\n **** c1 0.4 txreq| \r\n *** c1 0.4 rxresp ---- c1 0.4 HTTP rx failed (fd:10 read: Connection reset by peer) *** v1 0.4 debug| Child (23568) died signal=11\n *** v1 0.4 debug| Child cleanup complete\n * top 0.4 RESETTING after overload.vtc ** s1 0.4 Waiting for server **** s1 0.4 macro undef s1_addr **** s1 0.4 macro undef s1_port **** s1 0.4 macro undef s1_sock ** v1 1.4 Wait ** v1 1.4 R 23555 Status: 0000 * top 1.4 TEST overload.vtc FAILED # top TEST overload.vtc FAILED (1.423) exit=1 }}} Note particularly that this did NOT segfault consistently. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 9 14:32:21 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 Nov 2011 14:32:21 -0000 Subject: [Varnish] #1055: Long values of shm_reclen is unsafe In-Reply-To: <046.5d807a3a4de8e140197e920ca285cb4b@varnish-cache.org> References: <046.5d807a3a4de8e140197e920ca285cb4b@varnish-cache.org> Message-ID: <055.f6f7465f86229eaf65d8c2a3307a5e5f@varnish-cache.org> #1055: Long values of shm_reclen is unsafe ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by kristian): Ah.... Part of the problem is that shm_reclen is evaluated directly, even though the shm_workspace is only adjusted when the thread starts. In addition, the shm_workspace seems largely unrelated to shm_reclen. There's nothing stopping you from setting shm_reclen=65k with shm_workspace=4k, and that will of course break. All in all, it's VERY inadvisable to use a shm_reclen larger than, say, 16kB right now. The code is simply too fragile. And you must increase the shm_workspace too, and do it /before/ starting Varnish - or at least before starting the threads. 
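To illustrate the advice above: both parameters can be set with -p on the varnishd command line, so they take effect before the worker threads are created. This is a sketch with illustrative values, not recommended settings.

```shell
# Sketch: raise shm_workspace together with shm_reclen, and do it at
# startup (before the worker threads exist) rather than later via the
# CLI. Illustrative values: shm_workspace must be large enough to hold
# a record of shm_reclen bytes plus overhead.
varnishd -f /etc/varnish/default.vcl \
    -p shm_workspace=65536 \
    -p shm_reclen=16384
```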
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 9 20:43:08 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 Nov 2011 20:43:08 -0000 Subject: [Varnish] #1056: obj.* should be available in vcl_deliver Message-ID: <043.fdc9eba85b012a610536d23ce7035134@varnish-cache.org> #1056: obj.* should be available in vcl_deliver ----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: new Priority: low | Milestone: Component: varnishd | Version: trunk Severity: minor | Keywords: ----------------------+----------------------------------------------------- obj.ttl would be really useful, but all of obj.* should probably be readable in deliver. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 9 22:19:32 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 09 Nov 2011 22:19:32 -0000 Subject: [Varnish] #1057: Varnish 3.0.2-streaming stops forwarding requests Message-ID: <042.a16518451d4e16cc249260f7a63d144b@varnish-cache.org> #1057: Varnish 3.0.2-streaming stops forwarding requests -----------------------+---------------------------------------------------- Reporter: hidi | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: major Keywords: streaming | -----------------------+---------------------------------------------------- I've tested the 3.0.2 streaming release today. I've upgraded from the normal 3.0.2 release. The streaming release stops forwarding answers from the backend. I've tested the following simple VCL: backend default { .host = "127.0.0.1"; .port = "80"; } sub vcl_fetch { set beresp.do_stream = true; } The normal release works OK, but the streaming release simply "forgets" to answer the client. The backend has sent the whole file to varnish, but somewhere it stops processing it. 
(Connection to the backend is closed) 0 CLI - Wr 200 19 PONG 1320874083 1.0 13 BackendOpen b default 127.0.0.1 45181 127.0.0.1 80 13 TxRequest b GET 13 TxURL b /something 13 TxProtocol b HTTP/1.1 13 TxHeader b Host: example.com:81 13 TxHeader b User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:7.0.1) Gecko/20 100101 Firefox/7.0.1 13 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0 .9,*/*;q=0.8 13 TxHeader b Accept-Language: hu-hu,hu;q=0.8,en-us;q=0.5,en;q=0.3 13 TxHeader b Accept-Charset: ISO-8859-2,utf-8;q=0.7,*;q=0.7 13 TxHeader b X-Forwarded-For: 192.168.44.5 13 TxHeader b X-Varnish: 520133398 13 TxHeader b Accept-Encoding: gzip 13 RxProtocol b HTTP/1.1 13 RxStatus b 404 13 RxResponse b Not Found 13 RxHeader b Date: Wed, 09 Nov 2011 21:28:04 GMT 13 RxHeader b Server: Apache 13 RxHeader b X-Cache-Control-Origin: php-script 13 RxHeader b Transfer-Encoding: chunked 13 RxHeader b Content-Type: text/html 13 Fetch_Body b 3(chunked) cls 0 mklen 1 13 Length b 30 13 BackendReuse b default 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1320874086 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1320874089 1.0 0 CLI - Rd ping Yes, this is a 404, but it's the output of a script. ErrorDocument 404 /404.php 404.php: Streaming seems to be broken at the moment. Is it possible to lock an object as soon as varnish has received the headers of the response? I think it makes no sense to read a whole "big file" if we are sure we will not be able to, or do not want to, cache it! I think the simplest and best solution would be for locking to occur just after we have received the headers! If we will not be able to cache the answer, just switch to pipe mode! Is it possible to switch to pipe mode from vcl_fetch?? (Plus: we should switch to pipe mode if the received answer exceeds a certain length in bytes!) 
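For a hang like this, a full per-request log is usually the most useful artifact to attach. As a sketch (the `-o` flag, as in varnishlog from the 3.x series, groups log entries by request so the whole transaction stays together):

```shell
# Sketch: capture grouped log records while reproducing the failing
# request, so the full transaction can be inspected afterwards.
varnishlog -o > /tmp/streaming-hang.log
```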
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Nov 10 10:22:02 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Nov 2011 10:22:02 -0000 Subject: [Varnish] #1056: obj.* should be available in vcl_deliver In-Reply-To: <043.fdc9eba85b012a610536d23ce7035134@varnish-cache.org> References: <043.fdc9eba85b012a610536d23ce7035134@varnish-cache.org> Message-ID: <052.6b46a8df7b8c6637ae3999f877c05e2c@varnish-cache.org> #1056: obj.* should be available in vcl_deliver ----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: new Priority: low | Milestone: Component: varnishd | Version: trunk Severity: minor | Keywords: ----------------------+----------------------------------------------------- Comment(by kristian): Why? I can see obj.ttl as /somewhat/ useful as a read-only variable, but nothing in obj.http.* is really needed or wanted. It is more likely to add confusion than anything. And this would be a feature request, would it not? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Nov 10 10:25:06 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Nov 2011 10:25:06 -0000 Subject: [Varnish] #1054: Child not responding to CLI, killing it In-Reply-To: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> References: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> Message-ID: <055.12355ee1dbe64f901ac0ef13833eed5f@varnish-cache.org> #1054: Child not responding to CLI, killing it ----------------------+----------------------------------------------------- Reporter: scorillo | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: | ----------------------+----------------------------------------------------- Comment(by kristian): This is, as the log says, because the child is not responding to the manager. 
Please post the parameters and/or startup arguments for Varnish, and describe the storage you use. The output of varnishstat -1 would also be useful, as would any information on load (both request-wise and CPU/IO-wise). -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Nov 10 11:47:40 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Nov 2011 11:47:40 -0000 Subject: [Varnish] #1057: Varnish 3.0.2-streaming stops forwarding requests In-Reply-To: <042.a16518451d4e16cc249260f7a63d144b@varnish-cache.org> References: <042.a16518451d4e16cc249260f7a63d144b@varnish-cache.org> Message-ID: <051.54ed910777aa2676864f2fa91b591b5b@varnish-cache.org> #1057: Varnish 3.0.2-streaming stops forwarding requests -----------------------+---------------------------------------------------- Reporter: hidi | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: major Keywords: streaming | -----------------------+---------------------------------------------------- Comment(by martin): Hi, I have tried to recreate what you are experiencing, but without success. Could you tell us a little more about the circumstances under which you experience this? Is the server under high load at the time? Is the cache full? Does it happen every time or just occasionally? Also, the full varnishlog of a failing request, including the client connection part, would be most helpful, as would the output of 'varnishstat -1'.
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Nov 10 12:32:28 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Nov 2011 12:32:28 -0000 Subject: [Varnish] #1054: Child not responding to CLI, killing it In-Reply-To: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> References: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> Message-ID: <055.7ab791bf3c7fbd8caee3982e2d9c65f9@varnish-cache.org> #1054: Child not responding to CLI, killing it ----------------------+----------------------------------------------------- Reporter: scorillo | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: | ----------------------+----------------------------------------------------- Comment(by scorillo): This is the only varnish3 instance that we have and the only instance of varnish running on Centos6. We have 9 other instances of varnish2 (2.1.2) running on Centos5.4, using the same parameters. These varnish2 instances are working fine (uptime 36824261 1.00 Client uptime). May the 'n_wrk_max' be a problem? n_wrk_max 301076 108.22 N worker threads limited On varnish2 instances, this counter is zero. The requested information follows: /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 200,4000,120 -u varnish -g varnish -S /etc/varnish/secret -s malloc,2G -h classic,500009 -p lru_interval 20, -p listen_depth 4096, -p sess_workspace 262144 client_conn 3618 1.30 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 3618 1.30 Client requests received cache_hit 1774 0.64 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 1844 0.66 Cache misses backend_conn 1840 0.66 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. 
too many backend_fail 4 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_retry 0 0.00 Backend conn. retry fetch_head 0 0.00 Fetch head fetch_length 1840 0.66 Fetch with Length fetch_chunked 0 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) n_sess_mem 25 . N struct sess_mem n_sess 0 . N struct sess n_object 1823 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 1838 . N struct objectcore n_objecthead 1838 . N struct objecthead n_waitinglist 22 . N struct waitinglist n_vbc 0 . N struct vbc n_wrk 60 . N worker threads n_wrk_create 71 0.03 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 301076 108.22 N worker threads limited n_wrk_lqueue 0 0.00 work request queue length n_wrk_queued 0 0.00 N queued work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 2 . N backends n_expired 17 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_moved 1571 . 
N LRU moved objects losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 3152 1.13 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 3618 1.30 Total Sessions s_req 3618 1.30 Total Requests s_pipe 0 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 1840 0.66 Total fetch s_hdrbytes 1271298 456.97 Total header bytes s_bodybytes 17115442 6152.21 Total body bytes sess_closed 3599 1.29 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 34 0.01 Session Linger sess_herd 19 0.01 Session herd shm_records 237082 85.22 SHM records shm_writes 20696 7.44 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 56 0.02 SHM MTX contention shm_cycles 0 0.00 SHM cycles through buffer sms_nreq 4 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 1672 . SMS bytes allocated sms_bfree 1672 . SMS bytes freed backend_req 1840 0.66 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_ban 1 . N total active bans n_ban_add 1 0.00 N new bans added n_ban_retire 0 0.00 N old bans deleted n_ban_obj_test 0 0.00 N objects tested n_ban_re_test 0 0.00 N regexps tested against n_ban_dups 0 0.00 N duplicate bans removed hcb_nolock 0 0.00 HCB Lookups without lock hcb_lock 0 0.00 HCB Lookups with lock hcb_insert 0 0.00 HCB Inserts esi_errors 0 0.00 ESI parse errors (unlock) esi_warnings 0 0.00 ESI parse warnings (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 2782 1.00 Client uptime dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache vmods 0 . 
Loaded VMODs n_gzip 0 0.00 Gzip operations n_gunzip 0 0.00 Gunzip operations LCK.sms.creat 1 0.00 Created locks LCK.sms.destroy 0 0.00 Destroyed locks LCK.sms.locks 12 0.00 Lock Operations LCK.sms.colls 0 0.00 Collisions LCK.smp.creat 0 0.00 Created locks LCK.smp.destroy 0 0.00 Destroyed locks LCK.smp.locks 0 0.00 Lock Operations LCK.smp.colls 0 0.00 Collisions LCK.sma.creat 2 0.00 Created locks LCK.sma.destroy 0 0.00 Destroyed locks LCK.sma.locks 3722 1.34 Lock Operations LCK.sma.colls 0 0.00 Collisions LCK.smf.creat 0 0.00 Created locks LCK.smf.destroy 0 0.00 Destroyed locks LCK.smf.locks 0 0.00 Lock Operations LCK.smf.colls 0 0.00 Collisions LCK.hsl.creat 0 0.00 Created locks LCK.hsl.destroy 0 0.00 Destroyed locks LCK.hsl.locks 0 0.00 Lock Operations LCK.hsl.colls 0 0.00 Collisions LCK.hcb.creat 0 0.00 Created locks LCK.hcb.destroy 0 0.00 Destroyed locks LCK.hcb.locks 0 0.00 Lock Operations LCK.hcb.colls 0 0.00 Collisions LCK.hcl.creat 500009 179.73 Created locks LCK.hcl.destroy 0 0.00 Destroyed locks LCK.hcl.locks 5413 1.95 Lock Operations LCK.hcl.colls 0 0.00 Collisions LCK.vcl.creat 1 0.00 Created locks LCK.vcl.destroy 0 0.00 Destroyed locks LCK.vcl.locks 24 0.01 Lock Operations LCK.vcl.colls 0 0.00 Collisions LCK.stat.creat 1 0.00 Created locks LCK.stat.destroy 0 0.00 Destroyed locks LCK.stat.locks 25 0.01 Lock Operations LCK.stat.colls 0 0.00 Collisions LCK.sessmem.creat 1 0.00 Created locks LCK.sessmem.destroy 0 0.00 Destroyed locks LCK.sessmem.locks 3883 1.40 Lock Operations LCK.sessmem.colls 0 0.00 Collisions LCK.wstat.creat 1 0.00 Created locks LCK.wstat.destroy 0 0.00 Destroyed locks LCK.wstat.locks 8353 3.00 Lock Operations LCK.wstat.colls 0 0.00 Collisions LCK.herder.creat 1 0.00 Created locks LCK.herder.destroy 0 0.00 Destroyed locks LCK.herder.locks 1 0.00 Lock Operations LCK.herder.colls 0 0.00 Collisions LCK.wq.creat 12 0.00 Created locks LCK.wq.destroy 0 0.00 Destroyed locks LCK.wq.locks 40571 14.58 Lock Operations LCK.wq.colls 0 0.00 Collisions 
LCK.objhdr.creat 1859 0.67 Created locks LCK.objhdr.destroy 21 0.01 Destroyed locks LCK.objhdr.locks 10933 3.93 Lock Operations LCK.objhdr.colls 0 0.00 Collisions LCK.exp.creat 1 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 5135 1.85 Lock Operations LCK.exp.colls 0 0.00 Collisions LCK.lru.creat 2 0.00 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 2341 0.84 Lock Operations LCK.lru.colls 0 0.00 Collisions LCK.cli.creat 1 0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 940 0.34 Lock Operations LCK.cli.colls 0 0.00 Collisions LCK.ban.creat 1 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 4640 1.67 Lock Operations LCK.ban.colls 0 0.00 Collisions LCK.vbp.creat 1 0.00 Created locks LCK.vbp.destroy 0 0.00 Destroyed locks LCK.vbp.locks 561 0.20 Lock Operations LCK.vbp.colls 0 0.00 Collisions LCK.vbe.creat 1 0.00 Created locks LCK.vbe.destroy 0 0.00 Destroyed locks LCK.vbe.locks 3688 1.33 Lock Operations LCK.vbe.colls 0 0.00 Collisions LCK.backend.creat 2 0.00 Created locks LCK.backend.destroy 0 0.00 Destroyed locks LCK.backend.locks 7376 2.65 Lock Operations LCK.backend.colls 0 0.00 Collisions SMA.s0.c_req 3650 1.31 Allocator requests SMA.s0.c_fail 0 0.00 Allocator failures SMA.s0.c_bytes 14001522 5032.90 Bytes allocated SMA.s0.c_freed 136448 49.05 Bytes freed SMA.s0.g_alloc 3646 . Allocations outstanding SMA.s0.g_bytes 13865074 . Bytes outstanding SMA.s0.g_space 2133618574 . Bytes available SMA.Transient.c_req 34 0.01 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 77012 27.68 Bytes allocated SMA.Transient.c_freed 77012 27.68 Bytes freed SMA.Transient.g_alloc 0 . Allocations outstanding SMA.Transient.g_bytes 0 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available VBE.img_bal(172.27.101.150,,80).vcls 1 . VCL references VBE.img_bal(172.27.101.150,,80).happy18446744073707452415 . 
Happy health probes VBE.img_oks9(172.27.101.159,,80).vcls 1 . VCL references VBE.img_oks9(172.27.101.159,,80).happy 18446744073709551615 . Happy health probes -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Nov 10 12:36:20 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Nov 2011 12:36:20 -0000 Subject: [Varnish] #1054: Child not responding to CLI, killing it In-Reply-To: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> References: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> Message-ID: <055.849caa70c537ba0b77a542bee4f7c67c@varnish-cache.org> #1054: Child not responding to CLI, killing it ----------------------+----------------------------------------------------- Reporter: scorillo | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: | ----------------------+----------------------------------------------------- Comment(by scorillo): Sorry about not inserting the information as code block. {{{ client_conn 3618 1.30 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 3618 1.30 Client requests received cache_hit 1774 0.64 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 1844 0.66 Cache misses backend_conn 1840 0.66 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 4 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_retry 0 0.00 Backend conn. 
retry fetch_head 0 0.00 Fetch head fetch_length 1840 0.66 Fetch with Length fetch_chunked 0 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) n_sess_mem 25 . N struct sess_mem n_sess 0 . N struct sess n_object 1823 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 1838 . N struct objectcore n_objecthead 1838 . N struct objecthead n_waitinglist 22 . N struct waitinglist n_vbc 0 . N struct vbc n_wrk 60 . N worker threads n_wrk_create 71 0.03 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 301076 108.22 N worker threads limited n_wrk_lqueue 0 0.00 work request queue length n_wrk_queued 0 0.00 N queued work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 2 . N backends n_expired 17 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_moved 1571 . N LRU moved objects losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 3152 1.13 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 3618 1.30 Total Sessions s_req 3618 1.30 Total Requests s_pipe 0 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 1840 0.66 Total fetch s_hdrbytes 1271298 456.97 Total header bytes s_bodybytes 17115442 6152.21 Total body bytes sess_closed 3599 1.29 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 34 0.01 Session Linger sess_herd 19 0.01 Session herd shm_records 237082 85.22 SHM records shm_writes 20696 7.44 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 56 0.02 SHM MTX contention shm_cycles 0 0.00 SHM cycles through buffer sms_nreq 4 0.00 SMS allocator requests sms_nobj 0 . 
SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 1672 . SMS bytes allocated sms_bfree 1672 . SMS bytes freed backend_req 1840 0.66 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_ban 1 . N total active bans n_ban_add 1 0.00 N new bans added n_ban_retire 0 0.00 N old bans deleted n_ban_obj_test 0 0.00 N objects tested n_ban_re_test 0 0.00 N regexps tested against n_ban_dups 0 0.00 N duplicate bans removed hcb_nolock 0 0.00 HCB Lookups without lock hcb_lock 0 0.00 HCB Lookups with lock hcb_insert 0 0.00 HCB Inserts esi_errors 0 0.00 ESI parse errors (unlock) esi_warnings 0 0.00 ESI parse warnings (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 2782 1.00 Client uptime dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache vmods 0 . 
Loaded VMODs n_gzip 0 0.00 Gzip operations n_gunzip 0 0.00 Gunzip operations LCK.sms.creat 1 0.00 Created locks LCK.sms.destroy 0 0.00 Destroyed locks LCK.sms.locks 12 0.00 Lock Operations LCK.sms.colls 0 0.00 Collisions LCK.smp.creat 0 0.00 Created locks LCK.smp.destroy 0 0.00 Destroyed locks LCK.smp.locks 0 0.00 Lock Operations LCK.smp.colls 0 0.00 Collisions LCK.sma.creat 2 0.00 Created locks LCK.sma.destroy 0 0.00 Destroyed locks LCK.sma.locks 3722 1.34 Lock Operations LCK.sma.colls 0 0.00 Collisions LCK.smf.creat 0 0.00 Created locks LCK.smf.destroy 0 0.00 Destroyed locks LCK.smf.locks 0 0.00 Lock Operations LCK.smf.colls 0 0.00 Collisions LCK.hsl.creat 0 0.00 Created locks LCK.hsl.destroy 0 0.00 Destroyed locks LCK.hsl.locks 0 0.00 Lock Operations LCK.hsl.colls 0 0.00 Collisions LCK.hcb.creat 0 0.00 Created locks LCK.hcb.destroy 0 0.00 Destroyed locks LCK.hcb.locks 0 0.00 Lock Operations LCK.hcb.colls 0 0.00 Collisions LCK.hcl.creat 500009 179.73 Created locks LCK.hcl.destroy 0 0.00 Destroyed locks LCK.hcl.locks 5413 1.95 Lock Operations LCK.hcl.colls 0 0.00 Collisions LCK.vcl.creat 1 0.00 Created locks LCK.vcl.destroy 0 0.00 Destroyed locks LCK.vcl.locks 24 0.01 Lock Operations LCK.vcl.colls 0 0.00 Collisions LCK.stat.creat 1 0.00 Created locks LCK.stat.destroy 0 0.00 Destroyed locks LCK.stat.locks 25 0.01 Lock Operations LCK.stat.colls 0 0.00 Collisions LCK.sessmem.creat 1 0.00 Created locks LCK.sessmem.destroy 0 0.00 Destroyed locks LCK.sessmem.locks 3883 1.40 Lock Operations LCK.sessmem.colls 0 0.00 Collisions LCK.wstat.creat 1 0.00 Created locks LCK.wstat.destroy 0 0.00 Destroyed locks LCK.wstat.locks 8353 3.00 Lock Operations LCK.wstat.colls 0 0.00 Collisions LCK.herder.creat 1 0.00 Created locks LCK.herder.destroy 0 0.00 Destroyed locks LCK.herder.locks 1 0.00 Lock Operations LCK.herder.colls 0 0.00 Collisions LCK.wq.creat 12 0.00 Created locks LCK.wq.destroy 0 0.00 Destroyed locks LCK.wq.locks 40571 14.58 Lock Operations LCK.wq.colls 0 0.00 Collisions 
LCK.objhdr.creat 1859 0.67 Created locks LCK.objhdr.destroy 21 0.01 Destroyed locks LCK.objhdr.locks 10933 3.93 Lock Operations LCK.objhdr.colls 0 0.00 Collisions LCK.exp.creat 1 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 5135 1.85 Lock Operations LCK.exp.colls 0 0.00 Collisions LCK.lru.creat 2 0.00 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 2341 0.84 Lock Operations LCK.lru.colls 0 0.00 Collisions LCK.cli.creat 1 0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 940 0.34 Lock Operations LCK.cli.colls 0 0.00 Collisions LCK.ban.creat 1 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 4640 1.67 Lock Operations LCK.ban.colls 0 0.00 Collisions LCK.vbp.creat 1 0.00 Created locks LCK.vbp.destroy 0 0.00 Destroyed locks LCK.vbp.locks 561 0.20 Lock Operations LCK.vbp.colls 0 0.00 Collisions LCK.vbe.creat 1 0.00 Created locks LCK.vbe.destroy 0 0.00 Destroyed locks LCK.vbe.locks 3688 1.33 Lock Operations LCK.vbe.colls 0 0.00 Collisions LCK.backend.creat 2 0.00 Created locks LCK.backend.destroy 0 0.00 Destroyed locks LCK.backend.locks 7376 2.65 Lock Operations LCK.backend.colls 0 0.00 Collisions SMA.s0.c_req 3650 1.31 Allocator requests SMA.s0.c_fail 0 0.00 Allocator failures SMA.s0.c_bytes 14001522 5032.90 Bytes allocated SMA.s0.c_freed 136448 49.05 Bytes freed SMA.s0.g_alloc 3646 . Allocations outstanding SMA.s0.g_bytes 13865074 . Bytes outstanding SMA.s0.g_space 2133618574 . Bytes available SMA.Transient.c_req 34 0.01 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 77012 27.68 Bytes allocated SMA.Transient.c_freed 77012 27.68 Bytes freed SMA.Transient.g_alloc 0 . Allocations outstanding SMA.Transient.g_bytes 0 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available VBE.img_bal(172.27.101.150,,80).vcls 1 . VCL references VBE.img_bal(172.27.101.150,,80).happy18446744073707452415 . 
Happy health probes VBE.img_oks9(172.27.101.159,,80).vcls 1 . VCL references VBE.img_oks9(172.27.101.159,,80).happy 18446744073709551615 . Happy health probes }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Nov 10 13:19:32 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Nov 2011 13:19:32 -0000 Subject: [Varnish] #1057: Varnish 3.0.2-streaming stops forwarding requests In-Reply-To: <042.a16518451d4e16cc249260f7a63d144b@varnish-cache.org> References: <042.a16518451d4e16cc249260f7a63d144b@varnish-cache.org> Message-ID: <051.edb9272cf1d6e784b4697577e562b9d6@varnish-cache.org> #1057: Varnish 3.0.2-streaming stops forwarding requests -----------------------+---------------------------------------------------- Reporter: hidi | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: major Keywords: streaming | -----------------------+---------------------------------------------------- Comment(by hidi): Hi, The server is a development server without any load (just tests). Linux release: Fedora 10 Backend webserver: Apache 2.2.20 Kernel: 2.6.39 Varnish version: 3.0.2-streaming (3.0.2 works fine with this simple test) Please check out the files attached! 
(config, cgi, varnishstat, varnishlog) Regards, Jozsef -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Nov 10 13:24:21 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Nov 2011 13:24:21 -0000 Subject: [Varnish] #1057: Varnish 3.0.2-streaming stops forwarding requests In-Reply-To: <042.a16518451d4e16cc249260f7a63d144b@varnish-cache.org> References: <042.a16518451d4e16cc249260f7a63d144b@varnish-cache.org> Message-ID: <051.484313e98fdb06ce7254b32562f5c6d2@varnish-cache.org> #1057: Varnish 3.0.2-streaming stops forwarding requests -----------------------+---------------------------------------------------- Reporter: hidi | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: major Keywords: streaming | -----------------------+---------------------------------------------------- Comment(by hidi): Hi, To be clear, the problem is that the client doesn't receive an answer at all with the streaming patch! Without the streaming patch, the client receives the response! The connection between the client and Varnish remains open without a single byte being sent!
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Nov 10 14:58:34 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Nov 2011 14:58:34 -0000 Subject: [Varnish] #1057: Varnish 3.0.2-streaming stops forwarding requests In-Reply-To: <042.a16518451d4e16cc249260f7a63d144b@varnish-cache.org> References: <042.a16518451d4e16cc249260f7a63d144b@varnish-cache.org> Message-ID: <051.909b40f275693b171b0940404e1ba1b0@varnish-cache.org> #1057: Varnish 3.0.2-streaming stops forwarding requests ----------------------+----------------------------------------------------- Reporter: hidi | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: major | Keywords: streaming ----------------------+----------------------------------------------------- Changes (by martin): * owner: => martin Comment: I think I've figured out why you are seeing this. I am guessing you have configured the minimum number of threads as 1, either through parameters or the -w option to varnishd? What happens is that, for the streaming code to work, the original thread handling the request is repurposed to do the backend fetch, and the request is rescheduled to another thread. The thread-handling code in Varnish then sees that no idle thread is immediately available, and, since the queue of sessions waiting to be served is shorter than thread_pool_add_threshold (another parameter), no new thread is created at this time to serve the request. You would probably see the request being served if some more traffic were thrown at the system. Until I can make a proper fix, a workaround is to set the thread_pool_add_threshold parameter to 0. Also note that you should probably have a higher minimum number of threads; the CentOS package defaults here are too low.
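The workaround described above can be sketched as follows; the parameter names are the ones given in the comment, while the concrete minimum thread count is only an illustrative value:

{{{
# At runtime, via the management CLI:
varnishadm param.set thread_pool_add_threshold 0
varnishadm param.set thread_pool_min 50

# Or at startup:
varnishd ... -p thread_pool_add_threshold=0 -p thread_pool_min=50
}}}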
-Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Nov 10 16:03:40 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Nov 2011 16:03:40 -0000 Subject: [Varnish] #1058: Multiple Set-Cookie Headers not being merged Message-ID: <045.524ebf32f19b764cc3baefd55503cd8e@varnish-cache.org> #1058: Multiple Set-Cookie Headers not being merged ---------------------+------------------------------------------------------ Reporter: KennyDs | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.0 | Severity: major Keywords: | ---------------------+------------------------------------------------------ When a response with multiple Set-Cookie headers is received (like the one below), only the first "Set-Cookie" header is visible in the vcl_fetch subroutine. In the example pasted below, the "EXTERNAL_NO_CACHE=1" cookie is used to tell Varnish to retrieve the page from the backend instead of from the cache, but this fails...
{{{
root at magento-development:/# curl -I http://XXXXXXXXX
[1] 5332
root at magento-development:/# HTTP/1.1 200 OK
Date: Thu, 10 Nov 2011 14:49:56 GMT
Server: Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny13 with Suhosin-Patch
X-Powered-By: PHP/5.2.6-1+lenny13
Set-Cookie: store=scfr; expires=Fri, 09-Nov-2012 14:49:56 GMT; path=/; domain=XXXXXXXXX; httponly
Set-Cookie: frontend=9ac04aa3912eb78eb79f98dd531f7ba6; expires=Thu, 10 Nov 2011 15:49:57 GMT; path=/; domain=XXXXXXXXX; HttpOnly
Expires: Thu, 10 Nov 2011 16:49:57 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: EXTERNAL_NO_CACHE=1; expires=Thu, 10-Nov-2011 15:49:57 GMT; path=/; domain=XXXXXXXXX; httponly
X-Cache-Debug: 1
Vary: Accept-Encoding,User-Agent
Content-Type: text/html; charset=UTF-8
}}}
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Nov 10 16:52:24 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Nov 2011 16:52:24 -0000 Subject: [Varnish] #1057: Varnish 3.0.2-streaming stops forwarding requests In-Reply-To: <042.a16518451d4e16cc249260f7a63d144b@varnish-cache.org> References: <042.a16518451d4e16cc249260f7a63d144b@varnish-cache.org> Message-ID: <051.5dd9801d4fcd64afdea04ab9972bc1d7@varnish-cache.org> #1057: Varnish 3.0.2-streaming stops forwarding requests ----------------------+----------------------------------------------------- Reporter: hidi | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: major | Keywords: streaming ----------------------+----------------------------------------------------- Comment(by hidi): Dear Martin, Thanks, that solves the problem; the client now receives the answer. However, it only works fine for the first connection: after a few connections to a (slow) stream source, the second or third client receives the data very slowly, with a long delay.
Probably this is because of some chunk buffering, but it's still annoying for some AJAX applications, for example. As many users use SEF URLs, we cannot tell whether we should switch to pipe mode or serve the query as usual at vcl_recv time, but we can tell at vcl_fetch. I've tested that 3.0.2 (single connection) works great if I switch on do_stream. My personal idea and question: is it possible to switch to pipe or stream mode and UNLOCK the object that we will never cache, at vcl_fetch time? A second "hack" would be to simply ignore the lock somehow. We have some pages using AJAX or slow stream sources that will break without a solution! I think if we could release the unnecessary object lock, it would solve this streaming problem without extra fetching threads! -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Nov 10 17:36:57 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Nov 2011 17:36:57 -0000 Subject: [Varnish] #1057: Varnish 3.0.2-streaming stops forwarding requests In-Reply-To: <042.a16518451d4e16cc249260f7a63d144b@varnish-cache.org> References: <042.a16518451d4e16cc249260f7a63d144b@varnish-cache.org> Message-ID: <051.d58fbe8e7c1b39af72f2f79359dc5cad@varnish-cache.org> #1057: Varnish 3.0.2-streaming stops forwarding requests ----------------------+----------------------------------------------------- Reporter: hidi | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: major | Keywords: streaming ----------------------+----------------------------------------------------- Comment(by hidi): Dear Martin, I've found a new problem: if I simply return pipe from vcl_recv, it still locks the object. Why do we lock an object that has no chance of being cached in the future? I've also tried to set req.hash_ignore_busy.
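The two mechanisms mentioned in these comments look like this in VCL; the URL pattern is purely hypothetical:

{{{
sub vcl_recv {
    if (req.url ~ "^/stream/") {
        # Hypothetical path: bypass the cache machinery entirely.
        return (pipe);
    }
    # Do not queue behind a busy (locked) object with the same
    # hash; fetch a fresh copy from the backend instead.
    set req.hash_ignore_busy = true;
}
}}}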
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Nov 10 18:26:23 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Nov 2011 18:26:23 -0000 Subject: [Varnish] #451: No X-Forwaded-For on piped requests In-Reply-To: <042.2c85d6fbb3ace5ad5acff383d05e4d16@varnish-cache.org> References: <042.2c85d6fbb3ace5ad5acff383d05e4d16@varnish-cache.org> Message-ID: <051.e99639678476920a76fcbf070d80d21d@varnish-cache.org> #451: No X-Forwaded-For on piped requests --------------------+------------------------------------------------------- Reporter: joel | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | --------------------+------------------------------------------------------- Comment(by Sporadisk): Hi, I found this thread before finding the equally relevant comment lines in the VCL reference. For some reason, the comments regarding X-Forwarded-For in the example configuration were not present in my own default.vcl file. As a result, I have to admit it took me quite a while to find the solution to my problem via Google; is there any way to make the explanation of this particular quirk a bit more available? Perhaps an FAQ / Common Issues section?
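For readers landing here the same way: the quirk is that on a piped connection only the first request passes through vcl_recv, so headers set there (such as X-Forwarded-For) are missing from any later requests tunnelled over the same connection. The workaround documented in the stock default.vcl is to force the backend connection closed in vcl_pipe; a sketch:

{{{
sub vcl_pipe {
    # Close the backend connection after this request so that
    # subsequent client requests are not tunnelled through the
    # same pipe without passing through vcl_recv (where
    # X-Forwarded-For is normally set).
    set bereq.http.connection = "close";
}
}}}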
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Nov 10 21:50:52 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 10 Nov 2011 21:50:52 -0000 Subject: [Varnish] #1042: Error in "Multiple Subroutines" In-Reply-To: <047.985f1cafc09629e68ce36e9d5b2d51c0@varnish-cache.org> References: <047.985f1cafc09629e68ce36e9d5b2d51c0@varnish-cache.org> Message-ID: <056.1ed6f614e1e7bf7369574f3456dcb58d@varnish-cache.org> #1042: Error in "Multiple Subroutines" ---------------------------+------------------------------------------------ Reporter: sherrmann | Owner: scoof Type: documentation | Status: closed Priority: normal | Milestone: Component: documentation | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | ---------------------------+------------------------------------------------ Changes (by Andreas Plesner Jacobsen ): * status: new => closed * resolution: => fixed Comment: (In [4f9e61d652c42f9a7b8650fd4c2ecbb204db031a]) Explicitly document that concatenation is only supported for the builtins. 
Fixes #1042 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 14 11:17:17 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Nov 2011 11:17:17 -0000 Subject: [Varnish] #1058: Multiple Set-Cookie Headers not being merged In-Reply-To: <045.524ebf32f19b764cc3baefd55503cd8e@varnish-cache.org> References: <045.524ebf32f19b764cc3baefd55503cd8e@varnish-cache.org> Message-ID: <054.ecfff04cf0bc4925d655bab9801a7b63@varnish-cache.org> #1058: Multiple Set-Cookie Headers not being merged ----------------------+----------------------------------------------------- Reporter: KennyDs | Type: defect Status: closed | Priority: high Milestone: | Component: varnishd Version: 3.0.0 | Severity: major Resolution: wontfix | Keywords: ----------------------+----------------------------------------------------- Changes (by kristian): * status: new => closed * resolution: => wontfix Comment: Working as intended. If you wish to modify individual Set-Cookie headers, use the header vmod. If you wish to merge the headers (which your clients won't like), use std.collect(). The fundamental problem is that Set-Cookie headers cannot be merged; merging them pretty much violates the HTTP spec.
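A sketch of the std.collect() option mentioned above; again, merging Set-Cookie will likely confuse clients, so this is shown only for completeness:

{{{
import std;

sub vcl_fetch {
    # Fold all Set-Cookie headers into one comma-separated header.
    # This makes them visible to VCL as a single value, but is
    # almost certainly NOT what browsers expect for Set-Cookie.
    std.collect(beresp.http.Set-Cookie);
}
}}}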
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 14 11:45:11 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Nov 2011 11:45:11 -0000 Subject: [Varnish] #837: varnishadm purge.list fails frequently In-Reply-To: <044.ccf18c20c88287bd9226a20b38ead029@varnish-cache.org> References: <044.ccf18c20c88287bd9226a20b38ead029@varnish-cache.org> Message-ID: <053.c032eca92aa828fd999c4dbb2a3ab2e0@varnish-cache.org> #837: varnishadm purge.list fails frequently ------------------------+--------------------------------------------------- Reporter: jelder | Owner: phk Type: defect | Status: closed Priority: lowest | Milestone: Later Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: purge.list | ------------------------+--------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: I believe this is fixed in -trunk now. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 14 14:41:50 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 14 Nov 2011 14:41:50 -0000 Subject: [Varnish] #1056: obj.* should be available in vcl_deliver In-Reply-To: <043.fdc9eba85b012a610536d23ce7035134@varnish-cache.org> References: <043.fdc9eba85b012a610536d23ce7035134@varnish-cache.org> Message-ID: <052.1084a0a3b06597b3b51e75b3db93c27e@varnish-cache.org> #1056: obj.* should be available in vcl_deliver ----------------------+----------------------------------------------------- Reporter: scoof | Owner: scoof Type: defect | Status: new Priority: low | Milestone: Component: varnishd | Version: trunk Severity: minor | Keywords: ----------------------+----------------------------------------------------- Changes (by scoof): * owner: => scoof Comment: I'll fix docs, and this has been added to Future_VCL. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 16 05:54:23 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 16 Nov 2011 05:54:23 -0000 Subject: [Varnish] #611: Websockets support In-Reply-To: <045.a4246e1ead3c534eddc2ee18642bc5a7@varnish-cache.org> References: <045.a4246e1ead3c534eddc2ee18642bc5a7@varnish-cache.org> Message-ID: <054.97b3dc1e63b9f6f0269dbafb6e22bc97@varnish-cache.org> #611: Websockets support -------------------------+-------------------------------------------------- Reporter: wesnoth | Owner: phk Type: enhancement | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: invalid Keywords: | -------------------------+-------------------------------------------------- Comment(by leedo): I can confirm that varnish seems to be stripping the Upgrade header, even when pipe is being used. The WebSocket parser I am using attempts to validate the request with this header. One also has to be sure to comment out `set bereq.http.connection = "close";`, which was in the default Debian vcl_pipe function. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 16 07:50:14 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 16 Nov 2011 07:50:14 -0000 Subject: [Varnish] #1059: trunk build failed Message-ID: <044.7f983c819b14fad26030ebdc533e4f84@varnish-cache.org> #1059: trunk build failed -----------------------------+---------------------------------------------- Reporter: 191919 | Type: defect Status: new | Priority: high Milestone: Varnish 3.0 dev | Component: build Version: trunk | Severity: blocker Keywords: | -----------------------------+---------------------------------------------- {{{ $ ./autogen.sh ... $ ./configure ... $ make ... Making all in varnishd make[3]: *** No rule to make target `cache_acceptor.c', needed by `varnishd-cache_acceptor.o'. Stop. 
make[2]: *** [all-recursive] Error 1 make[1]: *** [all-recursive] Error 1 make: *** [all] Error 2 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 18 10:44:12 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 18 Nov 2011 10:44:12 -0000 Subject: [Varnish] #1060: DNS director tramples on other backends Message-ID: <045.1c997b97a0d0e0d30803e5b301e7bdd2@varnish-cache.org> #1060: DNS director tramples on other backends ---------------------+------------------------------------------------------ Reporter: drwilco | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: | ---------------------+------------------------------------------------------ When generating the indexes into the directors array of the VCL_conf, the compiler uses the wrong variable and ends up trampling earlier backends. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 18 11:38:28 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 18 Nov 2011 11:38:28 -0000 Subject: [Varnish] #1060: DNS director tramples on other backends In-Reply-To: <045.1c997b97a0d0e0d30803e5b301e7bdd2@varnish-cache.org> References: <045.1c997b97a0d0e0d30803e5b301e7bdd2@varnish-cache.org> Message-ID: <054.fc74bf5eedf3df604b72e547e10566bf@varnish-cache.org> #1060: DNS director tramples on other backends ----------------------+----------------------------------------------------- Reporter: drwilco | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Resolution: fixed | Keywords: ----------------------+----------------------------------------------------- Changes (by Rogier 'DocWilco' Mulhuijzen ): * status: new => closed * resolution: => fixed Comment: (In [bdbb1d59513cba8b268ed1dbe2d948619ef4ae07]) Use the right counter for directors index Fixes #1060 -- Ticket URL: Varnish 
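The workaround described in ticket #611 above can be sketched in VCL (a hedged sketch of the commonly circulated form; adjust it to your distribution's default VCL):

```vcl
sub vcl_pipe {
    # Varnish strips hop-by-hop headers such as Upgrade by default;
    # copy it through so the WebSocket handshake survives the pipe.
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
    }
    # And, as noted in the ticket, do not force
    # `set bereq.http.connection = "close";` here, or the
    # Connection: Upgrade semantics are lost.
}
```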
The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 18 13:32:58 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 18 Nov 2011 13:32:58 -0000 Subject: [Varnish] #1061: Removing a probe and reloading vcl will not make varnish stop probing backend Message-ID: <043.ebcecc8807b65d21d29996f3ddd23398@varnish-cache.org> #1061: Removing a probe and reloading vcl will not make varnish stop probing backend ----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- If you have a backend with a probe and remove the probe and reload vcl, varnish will keep probing the backend until the old vcl is discarded. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 18 14:01:59 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 18 Nov 2011 14:01:59 -0000 Subject: [Varnish] #1059: trunk build failed In-Reply-To: <044.7f983c819b14fad26030ebdc533e4f84@varnish-cache.org> References: <044.7f983c819b14fad26030ebdc533e4f84@varnish-cache.org> Message-ID: <053.9016e8d8739c5cda95b22c7233178e08@varnish-cache.org> #1059: trunk build failed ------------------------------+--------------------------------------------- Reporter: 191919 | Type: defect Status: closed | Priority: high Milestone: Varnish 3.0 dev | Component: build Version: trunk | Severity: blocker Resolution: worksforme | Keywords: ------------------------------+--------------------------------------------- Changes (by tfheen): * status: new => closed * resolution: => worksforme Comment: Seems like you updated from an old snapshot; try running `make distclean` first and it should work better. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 18 16:01:21 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 18 Nov 2011 16:01:21 -0000 Subject: [Varnish] #1056: obj.* should be available in vcl_deliver In-Reply-To: <043.fdc9eba85b012a610536d23ce7035134@varnish-cache.org> References: <043.fdc9eba85b012a610536d23ce7035134@varnish-cache.org> Message-ID: <052.eec127f36e50013ff64992c7ce472ad6@varnish-cache.org> #1056: obj.* should be available in vcl_deliver ----------------------+----------------------------------------------------- Reporter: scoof | Owner: scoof Type: defect | Status: new Priority: low | Milestone: Component: varnishd | Version: trunk Severity: minor | Keywords: ----------------------+----------------------------------------------------- Comment(by drwilco): Kristian: for diagnostics? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 18 16:29:09 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 18 Nov 2011 16:29:09 -0000 Subject: [Varnish] #1062: Cannot test response header emptiness in varnishtest Message-ID: <044.6c937060324d71d31ed19d12b8d15927@varnish-cache.org> #1062: Cannot test response header emptiness in varnishtest -------------------------------------------------+-------------------------- Reporter: ofavre | Type: defect Status: new | Priority: normal Milestone: | Component: varnishtest Version: 3.0.2 | Severity: normal Keywords: expect varnishtest vtc header empty | -------------------------------------------------+-------------------------- Consider the following test: {{{ varnishtest "Varnishtest bug: bad substitution with quoted values with resp.http.*" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_deliver { # Test with, and without the following line set resp.http.X-Test = "a"; } } -start client c1 { txreq rxresp expect resp.http.X-Test == "a" expect resp.http.X-Test != 
"resp.http.X-Test" } -run }}} The expect command does a replacement (using cmd_var_resolve()) on both the LHS and the RHS; the enclosing double quotes are ignored. Therefore it is impossible to test for header emptiness: If the header exists (and is equal to "a"), the two tests are equivalent to: {{{ assert(strcmp("a", "a") == 0); assert(strcmp("a", "a") != 0); // instead of assert(strcmp("a", "resp.http.X-Test") != 0); }}} If the header doesn't exist, the two tests are equivalent to: {{{ assert(strcmp("resp.http.X-Test", "a") == 0); assert(strcmp("resp.http.X-Test", "resp.http.X-Test") != 0); // instead of assert(strcmp(NULL, "resp.http.X-Test") != 0); }}} === Here are a few possible fixes: === - Changing the language in order to make it respect double quoted strings, and maybe honor macro replacements like ${var} ; - One way to permit running the test would be to '''implement the "~" test''' (a third case in varnishtest/vtc_http.c:cmd_http_expect()) using a regex ; - In varnishtest/vtc_http.c:cmd_var_resolve(): if spec matches req/resp.http. and http_find_header() doesn't find the header and returns NULL, return NULL or "". 
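The third suggestion is close to what the eventual fix does: varnishtest resolves a header that was never set to the literal token `<undef>`, so absence becomes directly testable. A sketch of the client block rewritten under that assumption (the header name `X-Missing` is hypothetical):

```vtc
client c1 {
    txreq
    rxresp
    expect resp.http.X-Test == "a"
    # A header that was never set resolves to <undef>,
    # so non-existence can now be asserted directly:
    expect resp.http.X-Missing == <undef>
} -run
```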
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 21 11:12:02 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 Nov 2011 11:12:02 -0000 Subject: [Varnish] #599: WRK_Queue should prefer thread pools with idle threads / improve thread pool loadbalancing In-Reply-To: <043.9cb434ff03e8b76e8fdcc32373ecc37b@varnish-cache.org> References: <043.9cb434ff03e8b76e8fdcc32373ecc37b@varnish-cache.org> Message-ID: <052.17e08c45a2a49963a0ccdf16e36462bc@varnish-cache.org> #599: WRK_Queue should prefer thread pools with idle threads / improve thread pool loadbalancing -------------------------+-------------------------------------------------- Reporter: slink | Owner: phk Type: enhancement | Status: closed Priority: high | Milestone: Later Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | -------------------------+-------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: I think this has been addressed with the new per-pool acceptor code. If not, feel free to reopen. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 21 11:21:20 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 Nov 2011 11:21:20 -0000 Subject: [Varnish] #805: Negative ReqEnd accept time (ESI enabled) and hanging request In-Reply-To: <044.509d9294d033556339c411a50e9af4f6@varnish-cache.org> References: <044.509d9294d033556339c411a50e9af4f6@varnish-cache.org> Message-ID: <053.45232393dfa0d175cb0b66393a276502@varnish-cache.org> #805: Negative ReqEnd accept time (ESI enabled) and hanging request ----------------------+----------------------------------------------------- Reporter: tesdal | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Keywords: ----------------------+----------------------------------------------------- Changes (by kristian): * version: trunk => 3.0.2 * milestone: Varnish 3.0 dev => Comment: Still present in 3.0.2: {{{ 55 ReqEnd c 1584424892 1321874389.214827061 1321874389.216487169 1.302712917 0.001626253 0.000033855 303 ReqEnd c 1584424861 1321874388.833119392 1321874389.219733953 -0.386561871 nan nan 118 ReqEnd c 1584424894 1321874389.222037077 1321874389.222596645 -0.000537395 nan nan }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 21 11:22:17 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 Nov 2011 11:22:17 -0000 Subject: [Varnish] #610: rushing too often may overflow session workspace In-Reply-To: <043.4029caf8729cada3f049287470cbeffa@varnish-cache.org> References: <043.4029caf8729cada3f049287470cbeffa@varnish-cache.org> Message-ID: <052.385fb40e55ed900c0021fdcaf6759431@varnish-cache.org> #610: rushing too often may overflow session workspace ----------------------+----------------------------------------------------- Reporter: slink | Owner: Type: defect | Status: new Priority: high | Milestone: Later Component: varnishd | 
Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by kristian): This is suspected to be fixed. Slink, do you have any update on this for recent versions? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 21 11:25:17 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 Nov 2011 11:25:17 -0000 Subject: [Varnish] #851: bad file descriptor In-Reply-To: <044.45cc74b29a65e7cfe4aa8f024de6dd48@varnish-cache.org> References: <044.45cc74b29a65e7cfe4aa8f024de6dd48@varnish-cache.org> Message-ID: <053.f508a5d1db94b78f0feebe425c5d0486@varnish-cache.org> #851: bad file descriptor ----------------------+----------------------------------------------------- Reporter: tfheen | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Later Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: This is now fixed in -trunk: We instantiate a new VSM when the child starts. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 21 11:25:34 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 Nov 2011 11:25:34 -0000 Subject: [Varnish] #849: Session timeout while receiving POST data from client causes multiple broken backend requests In-Reply-To: <041.0e6f1a912317111f89bfe1014049adca@varnish-cache.org> References: <041.0e6f1a912317111f89bfe1014049adca@varnish-cache.org> Message-ID: <050.832a3ae639f00bf214eb3427aa6a6a51@varnish-cache.org> #849: Session timeout while receiving POST data from client causes multiple broken backend requests ----------------------+----------------------------------------------------- Reporter: lew | Owner: tfheen Type: defect | Status: new Priority: normal | Milestone: Varnish 3.0 dev Component: varnishd | Version: 2.1.4 Severity: normal | Keywords: 503, post, backend write error: 11 (Resource temporarily unavailable) ----------------------+----------------------------------------------------- Changes (by tfheen): * owner: kristian => tfheen -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 21 11:30:27 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 Nov 2011 11:30:27 -0000 Subject: [Varnish] #854: patch: extra features for the varnish initrc script In-Reply-To: <047.d51a54f258d82d1a5002e1089c1982d2@varnish-cache.org> References: <047.d51a54f258d82d1a5002e1089c1982d2@varnish-cache.org> Message-ID: <056.eb76131e0ecdb61e6e760bd475e16e13@varnish-cache.org> #854: patch: extra features for the varnish initrc script -------------------------+-------------------------------------------------- Reporter: jhalfmoon | Owner: tfheen Type: enhancement | Status: new Priority: low | Milestone: Later Component: packaging | Version: trunk Severity: trivial | Keywords: initrc patch extra feature -------------------------+-------------------------------------------------- Changes (by tfheen): * owner: 
=> tfheen -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 21 11:38:29 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 Nov 2011 11:38:29 -0000 Subject: [Varnish] #610: rushing too often may overflow session workspace In-Reply-To: <043.4029caf8729cada3f049287470cbeffa@varnish-cache.org> References: <043.4029caf8729cada3f049287470cbeffa@varnish-cache.org> Message-ID: <052.989d5a191405a7ad41b447b2c221424e@varnish-cache.org> #610: rushing too often may overflow session workspace ----------------------+----------------------------------------------------- Reporter: slink | Owner: kristian Type: defect | Status: new Priority: high | Milestone: Later Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Changes (by kristian): * owner: => kristian -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 21 20:54:15 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 21 Nov 2011 20:54:15 -0000 Subject: [Varnish] #971: Broken DNS director In-Reply-To: <042.7193bdb45708359e86884cde11a7ddc1@varnish-cache.org> References: <042.7193bdb45708359e86884cde11a7ddc1@varnish-cache.org> Message-ID: <051.f2bb9e09942b3ccaf950bed8699fc8eb@varnish-cache.org> #971: Broken DNS director ----------------------+----------------------------------------------------- Reporter: rdvn | Owner: kristian Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Rogier 'DocWilco' Mulhuijzen ): * status: new => closed * resolution: => fixed Comment: (In [aed74d6c93f932abcd111c2496c543d16f2745d5]) Enable regression test for bug 971, since 1060 was a dup of it 1060 was fixed with bdbb1d59513cba8b268ed1dbe2d948619ef4ae07 
Fixes #971 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 23 21:11:28 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 23 Nov 2011 21:11:28 -0000 Subject: [Varnish] #829: Varnishstat does not return fields in specified order In-Reply-To: <042.d6e06843a3ed0e98391efef9d86964fa@varnish-cache.org> References: <042.d6e06843a3ed0e98391efef9d86964fa@varnish-cache.org> Message-ID: <051.bd64dfd2ee640dc0c86b3c90bb1cd8a2@varnish-cache.org> #829: Varnishstat does not return fields in specified order -------------------------+-------------------------------------------------- Reporter: wido | Owner: martin Type: enhancement | Status: closed Priority: low | Milestone: Component: varnishstat | Version: trunk Severity: normal | Resolution: fixed Keywords: | -------------------------+-------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [a8aa0bafdcf6ffbabb4bd5a21fa2fd5d1c612f39]) Complete the VSC filtering, and make everything compile and pass tests. Still not done, in particular: Do not roll any releases until libvarnishapi symbol/version stuff has been polished. 
Fixes #829 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Nov 24 12:05:30 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 24 Nov 2011 12:05:30 -0000 Subject: [Varnish] #1061: Removing a probe and reloading vcl will not make varnish stop probing backend In-Reply-To: <043.ebcecc8807b65d21d29996f3ddd23398@varnish-cache.org> References: <043.ebcecc8807b65d21d29996f3ddd23398@varnish-cache.org> Message-ID: <052.a852c9e9226b024eb15e6d31c103313d@varnish-cache.org> #1061: Removing a probe and reloading vcl will not make varnish stop probing backend ----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by drwilco): On second thought, there's a race condition in there. Hrm. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Nov 24 19:31:55 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 24 Nov 2011 19:31:55 -0000 Subject: [Varnish] #1063: after installing varnish on our drupal sites, our FB comments posting all broke Message-ID: <047.e237ce624c241c9991ebbbda15f87fdd@varnish-cache.org> #1063: after installing varnish on our drupal sites, our FB comments posting all broke ------------------------------+--------------------------------------------- Reporter: drupalama | Type: defect Status: new | Priority: high Milestone: | Component: website Version: 3.0.0 | Severity: major Keywords: facebook, drupal | ------------------------------+--------------------------------------------- In IE9, after logging in to FB, it's not letting us post comments at all (clicking post doesn't do anything). In other browsers (Chrome, Firefox), after logging in to FB and trying to post an FB comment, it keeps asking us to re-login 
again (even though we can see the user is already logged in). In our test sites without varnish, the FB comment post works just fine. Please comment on this and kindly provide the necessary fix. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 25 06:58:09 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 25 Nov 2011 06:58:09 -0000 Subject: [Varnish] #1064: [Bug] Extraneous Assert Message-ID: <047.024fcc30ad1de115e73c05631652c270@varnish-cache.org> #1064: [Bug] Extraneous Assert ------------------------------+--------------------------------------------- Reporter: pprocacci | Type: defect Status: new | Priority: lowest Milestone: Varnish 3.0 dev | Component: varnishd Version: 3.0.1 | Severity: trivial Keywords: bug assert minor | ------------------------------+--------------------------------------------- This assert is already issued a couple of lines above. This just removes the unnecessary one. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 25 09:24:37 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 25 Nov 2011 09:24:37 -0000 Subject: [Varnish] #1065: SEGV at cache/cache_shmlog.c:121 during stress tests Message-ID: <043.80d936b149d78034dd234a8800e0eb0e@varnish-cache.org> #1065: SEGV at cache/cache_shmlog.c:121 during stress tests -------------------+-------------------------------------------------------- Reporter: geoff | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: major Keywords: | -------------------+-------------------------------------------------------- Child processes crash with SEGV consistently during stress tests: {{{ Program terminated with signal 11, Segmentation fault. [...] 
#0 0x000000000044c840 in vsl_get (len=932, records=33, flushes=0) at cache/cache_shmlog.c:121 121 *vsl_ptr = VSL_ENDMARKER; (gdb) bt #0 0x000000000044c840 in vsl_get (len=932, records=33, flushes=0) at cache/cache_shmlog.c:121 #1 0x000000000044ccb0 in WSL_Flush (w=0xfffffd7ffc969b40, overflow=0) at cache/cache_shmlog.c:194 #2 0x00000000004282c7 in cnt_done (sp=0x20e34e0) at cache/cache_center.c:349 #3 0x000000000042cb88 in CNT_Session (sp=0x20e34e0) at ../../include/tbl/steps.h:47 #4 0x0000000000447925 in Pool_Work_Thread (priv=0x5f46e0, w=0xfffffd7ffc969b40) at cache/cache_pool.c:265 #5 0x0000000000458871 in wrk_thread_real (priv=0x5f46e0, shm_workspace=8192, sess_workspace=65536, nhttp=64, http_space=1128, siov=16) at cache/cache_wrk.c:168 #6 0x0000000000458a14 in WRK_thread (priv=0x5f46e0) at cache/cache_wrk.c:192 #7 0xfffffd7fff284ae4 in _thrp_setup () from /lib/64/libc.so.1 #8 0xfffffd7fff284da0 in ?? () from /lib/64/libc.so.1 #9 0x0000000000000000 in ?? () }}} From watching varnishstat during the test, I get the impression that it happens when Varnish is getting its first cache hits. To reproduce, I run the tests with httperf: {{{ $ varnishd -a 0.0.0.0:8080 -T 127.0.0.1:6000 -b 127.0.0.1:80 -s malloc,1G -p default_keep=10 $ httperf --hog --server=127.0.0.1 --port=8080 --wlog=y,paths.lst --num- conns=25000 --num-calls=1000 commit 699607a0bcc7b7aed43cf83ef8746771c8a7d754 Date: Thu Nov 24 10:50:36 2011 +0000 $ varnishd -V varnishd (varnish-trunk revision b0fb05b) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2011 Varnish Software AS $ uname -a SunOS gsimmons 5.11 snv_134 i86pc i386 i86pc Solaris }}} The build on Solaris goes smoothly, regression tests pass -- I don't see the SEGV until I run the stress test. 
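The crash at `*vsl_ptr = VSL_ENDMARKER;` was later traced to a pointer-arithmetic slip ("always remember what your pointers point to"). A self-contained illustration of the likely shape of the bug, with hypothetical function names (the real code is in cache/cache_shmlog.c):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* WRONG: adding a byte count to a uint32_t * advances by
 * len_bytes * sizeof(uint32_t) bytes, overshooting the buffer. */
ptrdiff_t wrong_advance(size_t len_bytes)
{
    uint32_t buf[64];
    return (char *)(buf + len_bytes) - (char *)buf;
}

/* RIGHT: scale the byte count down to whole 32-bit records first
 * (or do the arithmetic on a char * and cast back). */
ptrdiff_t right_advance(size_t len_bytes)
{
    uint32_t buf[64];
    return (char *)(buf + len_bytes / sizeof(uint32_t)) - (char *)buf;
}
```

With an 8-byte length, the wrong form moves the pointer 32 bytes; in a shared-memory log ring that means writing the end marker well past the intended slot.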
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 25 11:41:47 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 25 Nov 2011 11:41:47 -0000 Subject: [Varnish] #1065: SEGV at cache/cache_shmlog.c:121 during stress tests In-Reply-To: <043.80d936b149d78034dd234a8800e0eb0e@varnish-cache.org> References: <043.80d936b149d78034dd234a8800e0eb0e@varnish-cache.org> Message-ID: <052.adb266b2c6c18c7d1f4742d42885d3aa@varnish-cache.org> #1065: SEGV at cache/cache_shmlog.c:121 during stress tests ---------------------+------------------------------------------------------ Reporter: geoff | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: major Resolution: fixed | Keywords: ---------------------+------------------------------------------------------ Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [42d9a31ac0dc63f0ae971974b4a4c36c20f421ef]) Always remember what your pointers point to, when doing pointer addition. 
Fixes #1065 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 25 11:51:17 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 25 Nov 2011 11:51:17 -0000 Subject: [Varnish] #1064: [Bug] Extraneous Assert In-Reply-To: <047.024fcc30ad1de115e73c05631652c270@varnish-cache.org> References: <047.024fcc30ad1de115e73c05631652c270@varnish-cache.org> Message-ID: <056.7e3154f9e86720971b066a162de89810@varnish-cache.org> #1064: [Bug] Extraneous Assert ------------------------------+--------------------------------------------- Reporter: pprocacci | Type: defect Status: closed | Priority: lowest Milestone: Varnish 3.0 dev | Component: varnishd Version: 3.0.1 | Severity: trivial Resolution: fixed | Keywords: bug assert minor ------------------------------+--------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [cd6f804c1567fc9e40a87231135a361d5d092cc8]) Remove a duplicate assert Fixes #1064 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 25 12:42:43 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 25 Nov 2011 12:42:43 -0000 Subject: [Varnish] #1063: after installing varnish on our drupal sites, our FB comments posting all broke In-Reply-To: <047.e237ce624c241c9991ebbbda15f87fdd@varnish-cache.org> References: <047.e237ce624c241c9991ebbbda15f87fdd@varnish-cache.org> Message-ID: <056.987f2b5724d9ad38496de735d68ef663@varnish-cache.org> #1063: after installing varnish on our drupal sites, our FB comments posting all broke -------------------------+-------------------------------------------------- Reporter: drupalama | Type: defect Status: closed | Priority: high Milestone: | Component: website Version: 3.0.0 | Severity: major Resolution: worksforme | Keywords: facebook, drupal -------------------------+-------------------------------------------------- Changes (by perbu): * 
status: new => closed * resolution: => worksforme Comment: Hi. This is probably not a bug in Varnish. It is likely that you do not have the correct configuration for Varnish. What you probably need to do is get someone to write the correct configuration for Varnish. The options you have are: * Read the documentation * Ask in the web forum * Ask on the mailing list There are also commercial support options available. I hope you are able to resolve this issue with the options available. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Nov 27 14:24:14 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 27 Nov 2011 14:24:14 -0000 Subject: [Varnish] #610: rushing too often may overflow session workspace In-Reply-To: <043.4029caf8729cada3f049287470cbeffa@varnish-cache.org> References: <043.4029caf8729cada3f049287470cbeffa@varnish-cache.org> Message-ID: <052.39f105d987bfb2e2d345cea76f6f2ffb@varnish-cache.org> #610: rushing too often may overflow session workspace ----------------------+----------------------------------------------------- Reporter: slink | Owner: kristian Type: defect | Status: new Priority: high | Milestone: Later Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by slink): Hi Kristian, having a quick look at the current code, it looks like the session workspace overflow issue ''should'' be solved - but I would want to write a regression test to check that restarting sessions does not change {{{sp->ws->f}}}. Trouble is that currently the git head seems to have some serious issues with the VSL (similar to but not the same as #1065) and (damn it) I don't have enough time atm to fix these first. On the question of when to restart, having a quick look at the many calls to HSH_Deref in the current code, it is pretty clear that the waitinglist is simply rushed for too many cases. 
As this bug has not received any attention for long, I have not yet spent any brain-cycles on the current code, but I don't see the concerns I have raised in comment #comment:2 reflected. Thanks, Nils -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 28 08:23:03 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 28 Nov 2011 08:23:03 -0000 Subject: [Varnish] #1061: Removing a probe and reloading vcl will not make varnish stop probing backend In-Reply-To: <043.ebcecc8807b65d21d29996f3ddd23398@varnish-cache.org> References: <043.ebcecc8807b65d21d29996f3ddd23398@varnish-cache.org> Message-ID: <052.d954576126997caee5b5f45d82b526b9@varnish-cache.org> #1061: Removing a probe and reloading vcl will not make varnish stop probing backend ----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: worksforme Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: This is actually the way it is designed to work. As long as you have a VCL loaded which polls the backend, the backend should be polled, in case you want to switch to that VCL. 
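Given the #1061 resolution above, the practical way to stop the leftover probes is to discard the superseded configuration. A minimal CLI sketch (the configuration name `old_vcl` is hypothetical):

```
# list loaded configurations; "available" ones still run their probes
varnishadm vcl.list

# discard the old configuration to stop its probes
varnishadm vcl.discard old_vcl
```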
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 28 09:11:55 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 28 Nov 2011 09:11:55 -0000 Subject: [Varnish] #1062: Cannot test response header emptiness in varnishtest In-Reply-To: <044.6c937060324d71d31ed19d12b8d15927@varnish-cache.org> References: <044.6c937060324d71d31ed19d12b8d15927@varnish-cache.org> Message-ID: <053.5ba4e0598b81b6080bcd7cf13a7b5f14@varnish-cache.org> #1062: Cannot test response header emptiness in varnishtest ---------------------+------------------------------------------------------ Reporter: ofavre | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishtest Version: 3.0.2 | Severity: normal Resolution: fixed | Keywords: expect varnishtest vtc header empty ---------------------+------------------------------------------------------ Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [c4629a0fb7731416f5f130d8ccefa55dc58e5191]) Make it possible to test for the non-definition of a http header. 
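As an illustration of the new check, a test case could be sketched roughly as follows (a hypothetical .vtc fragment, assuming the syntax introduced here is the {{{<undef>}}} token; the header name X-Missing is purely illustrative):

{{{
varnishtest "Expect absence of a response header"

server s1 {
    rxreq
    txresp
} -start

varnish v1 -vcl+backend { } -start

client c1 {
    txreq
    rxresp
    # The backend never set X-Missing, so asserting
    # non-definition of the header should pass.
    expect resp.http.X-Missing == <undef>
} -run
}}}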
Fixes #1062 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 28 10:48:57 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 28 Nov 2011 10:48:57 -0000 Subject: [Varnish] #610: rushing too often may overflow session workspace In-Reply-To: <043.4029caf8729cada3f049287470cbeffa@varnish-cache.org> References: <043.4029caf8729cada3f049287470cbeffa@varnish-cache.org> Message-ID: <052.40e340e31fecb7dc5585d50e4a492ee4@varnish-cache.org> #610: rushing too often may overflow session workspace ----------------------+----------------------------------------------------- Reporter: slink | Owner: kristian Type: defect | Status: closed Priority: high | Milestone: Later Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: I agree that there is an optimization issue outstanding, but it will look quite a bit different when streaming is integrated (RSN!) so I would prefer to close this ticket as the bug-part of it is resolved. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 28 12:05:35 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 28 Nov 2011 12:05:35 -0000 Subject: [Varnish] #610: rushing too often may overflow session workspace In-Reply-To: <043.4029caf8729cada3f049287470cbeffa@varnish-cache.org> References: <043.4029caf8729cada3f049287470cbeffa@varnish-cache.org> Message-ID: <052.3ab59b65bf11879481af474d9748d685@varnish-cache.org> #610: rushing too often may overflow session workspace ----------------------+----------------------------------------------------- Reporter: slink | Owner: kristian Type: defect | Status: closed Priority: high | Milestone: Later Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Comment(by slink): I agree with phk's last comment and I will revisit this issue, should it become necessary. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 30 09:55:29 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 30 Nov 2011 09:55:29 -0000 Subject: [Varnish] #1066: sub vcl_error not getting executed Message-ID: <044.1baaae0749567246614d52e8e718f456@varnish-cache.org> #1066: sub vcl_error not getting executed --------------------+------------------------------------------------------- Reporter: manjum | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: major Keywords: | --------------------+------------------------------------------------------- I need to customise the error page for 404 errors and I have changed the VCL for that, just for testing purposes. The problem is that even after restarting Varnish I can't see the changes being picked up. Please advise me on what has gone wrong. 
Many Thanks, Manju -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 30 10:07:45 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 30 Nov 2011 10:07:45 -0000 Subject: [Varnish] #1066: sub vcl_error not getting executed In-Reply-To: <044.1baaae0749567246614d52e8e718f456@varnish-cache.org> References: <044.1baaae0749567246614d52e8e718f456@varnish-cache.org> Message-ID: <053.7bfc08fb2ef8294ddbe0b6952ac07d64@varnish-cache.org> #1066: sub vcl_error not getting executed ----------------------+----------------------------------------------------- Reporter: manjum | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: major Resolution: invalid | Keywords: ----------------------+----------------------------------------------------- Changes (by scoof): * status: new => closed * resolution: => invalid Comment: This looks like a configuration issue. Please use IRC or the mailing list for support. Trac is only for bugs. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 30 23:39:59 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 30 Nov 2011 23:39:59 -0000 Subject: [Varnish] #1067: Assert error in Tcheck() cache.h line 747 Message-ID: <045.8bf60881de9ff60429ade3207f21c659@varnish-cache.org> #1067: Assert error in Tcheck() cache.h line 747 ---------------------+------------------------------------------------------ Reporter: nathanm | Type: defect Status: new | Priority: high Milestone: | Component: build Version: 2.1.5 | Severity: major Keywords: | ---------------------+------------------------------------------------------ Getting this frequently and consistently on one url: {{{ Nov 30 23:11:50 host.nmsrv.com varnishd[1516]: Child (3709) Panic message: Assert error in Tcheck(), cache.h line 747: Condition((t.b) != 0) not true. 
thread = (cache-worker) ident = Linux,2.6.32.27-grsec-gt-r2,x86_64,-smalloc,-hcritbit,epoll Backtrace: 0x427733: /usr/sbin/varnishd() [0x427733] 0x425565: /usr/sbin/varnishd() [0x425565] 0x42a355: /usr/sbin/varnishd(RES_BuildHttp+0x95) [0x42a355] 0x415d5d: /usr/sbin/varnishd() [0x415d5d] 0x416b45: /usr/sbin/varnishd(CNT_Session+0x325) [0x416b45] 0x4296b3: /usr/sbin/varnishd() [0x4296b3] 0x428ebe: /usr/sbin/varnishd() [0x428ebe] 0x7fb7912d14f7: /lib/libpthread.so.0(+0x74f7) [0x7fb7912d14f7] 0x7fb790dbf28d: /lib/libc.so.6(clone+0x6d) [0x7fb790dbf28d] sp = 0x7fb75778c008 { fd = 46, id = 46, xid = 883930026, client = 123.123.123.123 2146, step = STP_DELIVER, handling = deliver, restarts = 0, esis = 0 ws = 0x7fb75778c080 { id = "sess", {s,f,r,e} = {0x7fb75778d558,+4688,(nil),+65536}, }, http[req] = { ws = 0x7fb75778c080[sess] "GET", "http://www.host.com/doubleclick/DARTIframe.html?adParams=creativeIdentifier%3DGlobalTemplate_13226864065811322694705367%26mtfNoFlush%3D%26globalTemplateVersion%3D64_08%26isInterstitial%3Dfalse%26mediaServer%3Dhttp%253A//s0.2mdn.net%26adServer%3Dhttp%253A//ad.doubleclick.net%26adserverUrl%3Dhttp%253A//ad.doubleclick.net/activity%253Bsrc%253D2981993%253Bmet%253D1%253Bv%253D1%253Bpid%253D73670526%253Baid%253D247970272%253Bko%253D0%253Bcid%253D45140923%253Brid%253D45158711%253Brv%253D1%253B%26stringPostingUrl%3Dhttp%253A//ad.doubleclick.net/activity%253Bsrc%253D2981993%253Bstragg%253D1%253Bv%253D1%253Bpid%253D73670526%253Baid%253D247970272%253Bko%253D0%253Bcid%253D45140923%253Brid%253D45158711%253Brv%253D1%253Brn%253D7130235%253B%26swfParams%3Dsrc%253D2981993%2526rv%253D1%2526rid%253D45158711%2526%253D728x90%2526%26renderingId%3D45158711%26previewMode%3Dfalse%26debugEventsMode%3Dfalse%26pubHideObjects%3D%26pubHideApplets%3D%26mtfInline%3Dfalse%26pubTop%3D%26pubLeft%3D%26pubTopFloat%3D%26pubRightFloat%3D%26pubBottomFloat%3D%26pubLeftFloat%3D%26pubDuration%3D%26pubWMode%3D%26pubTopDuration%3D%26pubTopWMode%3D%26pubRightDuration%3D%26pubRightWMode%
3D%26pubBottomDuration%3D%26pubBottomWMode%3D%26pubLeftDuration%3D%26pubLeftWMode%3D%26isRelativeBody%3Dfalse%26debugJSMode%3Dfalse%26adjustOverflow%3Dfalse%26asContext%3D%26clickThroughUrl%3Dhttp%253A//adclick.g.doubleclick.net/aclk%253Fsa%253DL%2526ai%253DBagAYNLjWTuLkMImFgALuuKGiCtvlq_wCq8TR5jiDq_OoaKDDcRABGAEggPvyAjgAUJfpo8MFYMnGqYvApNgPsgEYd3d3LnRoZ>Version=64_08&mediaserver=http%3A//s0.2mdn.net/879366&cid=GlobalTemplate_13226864065811322694705367&plcrjs=http%3A//s0.2mdn.net/2981993/plcr_1820142_0_1322686408519.js&globalTemplateJs=http%3A//s0.2mdn.net/879366/expandingIframeGlobalTemplate_v2_64_08.js&customScriptFile=&needSlaves=true&numberOfSlaves=2", "HTTP/1.1", "Host: static1.host.com", "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.24) Gecko/20111103 Firefox/3.6.24", "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language: en-us", "Accept-Encoding: gzip,deflate", "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7", "Keep-Alive: 115", "Connection: keep-alive", "Referer: http://googleads.g.doubleclick.net/pagead/ads?client=ca- pub-3969332055875549&output=html&h=90&slotname=5865189953&w=728&ea=0&flash=10.3.181&url=http%3A%2F%2Fwww.host.com%2Fshowthread.php%3Ft%3D956980%26page%3D3&dt=1322694704471&bpp=11&shv=r20111110&jsv=r20110914&correlator=1322694704786&frm=8&adk=148212317&ga_vid=384485472.1322694705&ga_sid=1322694705&ga_hid=508550686&ga_fc=0&u_tz=-360&u_his=5&u_java=1&u_h=768&u_w=1024&u_ah=734&u_aw=1024&u_cd=24&u_nplug=23&u_nmime=113&dff=serif&dfs=16&adx=-12245933&ady=-12245933&biw=-12245933&bih=-12245933&ifk=1633644208&fu=0&ifi=1&dtd=329", "X-Forwarded-For: 123.123.123.123", }, worker = 0x7fb6ea6e9bf0 { ws = 0x7fb6ea6e9d70 { id = "wrk", {s,f,r,e} = {0x7fb6ea6d7b80,0x7fb6ea6d7b80,(nil),+65536}, }, }, vcl = { srcname = { "input", "Default", }, }, obj = 0x7fb783f45a00 { xid = 883930026, ws = 0x7fb783f45a20 { overflow id = "obj", {s,f,r,e} = {0x7fb783f463a8,+104,(nil),+1024}, }, http[obj] = { ws = 
0x7fb783f45a20[obj] "HTTP/1.1", "301", "Date: Wed, 30 Nov 2011 23:11:50 GMT", "Server: Varnish", "Retry-After: 0", }, len = 0, store = { }, }, }, }}} I've sanitized some data out of here, can email the full data privately if needed. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 30 23:40:51 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 30 Nov 2011 23:40:51 -0000 Subject: [Varnish] #1067: Assert error in Tcheck() cache.h line 747 In-Reply-To: <045.8bf60881de9ff60429ade3207f21c659@varnish-cache.org> References: <045.8bf60881de9ff60429ade3207f21c659@varnish-cache.org> Message-ID: <054.33f4183331373a0cc79efcf6ad920199@varnish-cache.org> #1067: Assert error in Tcheck() cache.h line 747 ---------------------+------------------------------------------------------ Reporter: nathanm | Type: defect Status: new | Priority: high Milestone: | Component: build Version: 2.1.5 | Severity: major Keywords: | ---------------------+------------------------------------------------------ Comment(by nathanm): VCL is fairly straightforward, we're seeing the same error on two servers that use the same config. Varnishstat: {{{ tsr2 [ha backup] ~ # varnishstat -1 client_conn 9486 91.21 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 20178 194.02 Client requests received cache_hit 11194 107.63 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 4755 45.72 Cache misses backend_conn 8942 85.98 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_unused 0 0.00 Backend conn. 
unused fetch_head 0 0.00 Fetch head fetch_length 8144 78.31 Fetch with Length fetch_chunked 171 1.64 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 207 1.99 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 587 . N struct sess_mem n_sess 355 . N struct sess n_object 4755 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 4884 . N struct objectcore n_objecthead 4828 . N struct objecthead n_smf 0 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 0 . N large free smf n_vbe_conn 10 . N struct vbe_conn n_wrk 800 . N worker threads n_wrk_create 800 7.69 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 186401 1792.32 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 126 1.21 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 6 . N backends n_expired 0 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 4417 . N LRU moved objects n_deathrow 0 . 
N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 18315 176.11 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 9476 91.12 Total Sessions s_req 20178 194.02 Total Requests s_pipe 139 1.34 Total pipe s_pass 4029 38.74 Total pass s_fetch 8784 84.46 Total fetch s_hdrbytes 7488332 72003.19 Total header bytes s_bodybytes 117706802 1131796.17 Total body bytes sess_closed 790 7.60 Session Closed sess_pipeline 81 0.78 Session Pipeline sess_readahead 28 0.27 Session Read Ahead sess_linger 19514 187.63 Session Linger sess_herd 18743 180.22 Session herd shm_records 1217125 11703.12 SHM records shm_writes 85432 821.46 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 281 2.70 SHM MTX contention shm_cycles 0 0.00 SHM cycles through buffer sm_nreq 0 0.00 allocator requests sm_nobj 0 . outstanding allocations sm_balloc 0 . bytes allocated sm_bfree 0 . bytes free sma_nreq 13312 128.00 SMA allocator requests sma_nobj 9517 . SMA outstanding allocations sma_nbytes 16390024 . SMA outstanding bytes sma_balloc 106024006 . SMA bytes allocated sma_bfree 89633982 . SMA bytes free sms_nreq 0 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 0 . SMS bytes allocated sms_bfree 0 . SMS bytes freed backend_req 8803 84.64 Backend requests made n_vcl 1 0.01 N vcl total n_vcl_avail 1 0.01 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 1 . 
N total active purges n_purge_add 1 0.01 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 0 0.00 N objects tested n_purge_re_test 0 0.00 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 15954 153.40 HCB Lookups without lock hcb_lock 4699 45.18 HCB Lookups with lock hcb_insert 4699 45.18 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 104 1.00 Client uptime backend_retry 0 0.00 Backend conn. retry dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 262 2.52 Fetch no body (304) }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator