From varnish-bugs at varnish-cache.org Mon Oct 3 10:11:45 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 03 Oct 2011 10:11:45 -0000 Subject: [Varnish] #1025: varnishncsa gives continuous segfault. In-Reply-To: <054.5fafb649c6e723745a4efa31f34e6774@varnish-cache.org> References: <054.5fafb649c6e723745a4efa31f34e6774@varnish-cache.org> Message-ID: <063.174b398b6ff9f1c36986f1e65f46cdb0@varnish-cache.org> #1025: varnishncsa gives continuous segfault. ------------------------------+--------------------------------------------- Reporter: jonathan.labanca | Type: defect Status: new | Priority: high Milestone: | Component: varnishncsa Version: 3.0.1 | Severity: major Keywords: | ------------------------------+--------------------------------------------- Comment(by tfheen): Can you please get us a backtrace? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 3 12:23:05 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 03 Oct 2011 12:23:05 -0000 Subject: [Varnish] #1025: varnishncsa gives continuous segfault. In-Reply-To: <054.5fafb649c6e723745a4efa31f34e6774@varnish-cache.org> References: <054.5fafb649c6e723745a4efa31f34e6774@varnish-cache.org> Message-ID: <063.59efd2f3af4417774cf8b783fee47643@varnish-cache.org> #1025: varnishncsa gives continuous segfault. 
------------------------------+--------------------------------------------- Reporter: jonathan.labanca | Type: defect Status: new | Priority: high Milestone: | Component: varnishncsa Version: 3.0.1 | Severity: major Keywords: | ------------------------------+--------------------------------------------- Comment(by scoof): And a command line -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 3 12:24:36 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 03 Oct 2011 12:24:36 -0000 Subject: [Varnish] #1024: CLI connected to xxx.xxx.xxx.xxx:xxxx on STDERR In-Reply-To: <044.bb33e878fc0c60fcff326417119751e8@varnish-cache.org> References: <044.bb33e878fc0c60fcff326417119751e8@varnish-cache.org> Message-ID: <053.9c0a809e2c2aaaae141985d9fddef4e6@varnish-cache.org> #1024: CLI connected to xxx.xxx.xxx.xxx:xxxx on STDERR -------------------------+-------------------------------------------------- Reporter: TrafeX | Owner: tfheen Type: enhancement | Status: new Priority: normal | Milestone: Component: varnishadm | Version: 3.0.1 Severity: normal | Keywords: -------------------------+-------------------------------------------------- Changes (by tfheen): * owner: => tfheen -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 3 12:37:57 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 03 Oct 2011 12:37:57 -0000 Subject: [Varnish] #1024: CLI connected to xxx.xxx.xxx.xxx:xxxx on STDERR In-Reply-To: <044.bb33e878fc0c60fcff326417119751e8@varnish-cache.org> References: <044.bb33e878fc0c60fcff326417119751e8@varnish-cache.org> Message-ID: <053.4d4e85aba67516e7787c8f9b9eeb9593@varnish-cache.org> #1024: CLI connected to xxx.xxx.xxx.xxx:xxxx on STDERR -------------------------+-------------------------------------------------- Reporter: TrafeX | Owner: tfheen Type: enhancement | Status: closed Priority: normal | Milestone: Component: varnishadm | Version: 3.0.1 
Severity: normal | Resolution: fixed Keywords: | -------------------------+-------------------------------------------------- Changes (by Tollef Fog Heen ): * status: new => closed * resolution: => fixed Comment: (In [04b017efb938826b06aef08b8c1bd592755806a3]) Drop debugging message about which host/port we have connected to Fixes: #1024 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 3 12:47:09 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 03 Oct 2011 12:47:09 -0000 Subject: [Varnish] #1015: varnishncsa frequently segfaults In-Reply-To: <044.7f34c08db42070969e1cca0520956dd6@varnish-cache.org> References: <044.7f34c08db42070969e1cca0520956dd6@varnish-cache.org> Message-ID: <053.b4d50b452976d3d4c50877a8f8510919@varnish-cache.org> #1015: varnishncsa frequently segfaults -------------------------+-------------------------------------------------- Reporter: russki | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.1 | Severity: normal Resolution: worksforme | Keywords: -------------------------+-------------------------------------------------- Changes (by tfheen): * status: new => closed * resolution: => worksforme Comment: willthames: the bug you're responding to is something else and does not cause a segfault, just an error: With 3.0.1: > ~/varnish/bin/varnishncsa/varnishncsa -n /tmp/_v1 -F '%{Varnish:handling}x' Unknown format starting at: %{Varnish:handling}x Since I don't get a response from the original submitter, I'm closing this bug. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Oct 4 13:27:01 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 04 Oct 2011 13:27:01 -0000 Subject: [Varnish] #1026: varnishd immediate segfault on armv7, seemingly strict-aliasing violations Message-ID: <041.58951bc1bacb408becb240d0e7491fac@varnish-cache.org> #1026: varnishd immediate segfault on armv7, seemingly strict-aliasing violations --------------------------------------+------------------------------------- Reporter: hno | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: normal Keywords: strict-aliasing segfault | --------------------------------------+------------------------------------- varnish 1.2.5 and 3.0.1 both crash on armv7 with gcc 4.6.1 in early initialization. The crash is in bin/varnishd/cache_ban.c:BAN_Insert() when it uses the VTAILQ_LAST macro. 405 be = VTAILQ_LAST(&ban_head, banhead_s); and is seen when the code is compiled with the -fstrict-aliasing -fschedule-insns optimizations enabled on armv7 (enabled by default at -O2). Compiling with -Wstrict-aliasing=1 gives a strict-aliasing warning on the same line and in other places where this macro is used. 
{{{ cache_ban.c: In function 'BAN_Insert': cache_ban.c:330:19: warning: dereferencing type-punned pointer might break strict-aliasing rules [-Wstrict-aliasing] cache_ban.c: In function 'BAN_CheckLast': cache_ban.c:381:18: warning: dereferencing type-punned pointer might break strict-aliasing rules [-Wstrict-aliasing] cache_ban.c: In function 'ban_lurker': cache_ban.c:522:20: warning: dereferencing type-punned pointer might break strict-aliasing rules [-Wstrict-aliasing] cache_ban.c: In function 'BAN_TailRef': cache_ban.c:573:18: warning: dereferencing type-punned pointer might break strict-aliasing rules [-Wstrict-aliasing] cache_ban.c: In function 'ccf_purge_list': cache_ban.c:759:20: warning: dereferencing type-punned pointer might break strict-aliasing rules [-Wstrict-aliasing] }}} VTAILQ_LAST is used on all the above lines. Backtrace of the crash: {{{ #0 0x0001d3b0 in BAN_Insert (b=0x40850448) at cache_ban.c:405 #1 0x0001f8a4 in BAN_Init () at cache_ban.c:970 #2 0x00045238 in child_main () at cache_main.c:122 #3 0x00062a60 in start_child (cli=0x408690a4) at mgt_child.c:345 #4 0x00063e30 in mcf_server_startstop (cli=0x408690a4, av=0x40805180, priv=0x0) at mgt_child.c:620 #5 0x4005e84c in cls_dispatch (ac=1082560648, av=0x40072528, clp=0xa17c0, cli=0x408690a4) at cli_serve.c:228 #6 cls_vlu2 (priv=0x40805180, av=0x40072528) at cli_serve.c:284 #7 0x4005edf8 in cls_vlu (priv=0x40869088, p=0x408aa000 "start") at cli_serve.c:339 #8 0x400635d8 in LineUpProcess (l=0x40816d80) at vlu.c:154 #9 0x4005fc30 in VCLS_PollFd (cs=0x40817448, fd=, timeout=0) at cli_serve.c:489 #10 0x00064e48 in mgt_cli_callback2 (e=0x4084c1f0, what=1) at mgt_cli.c:370 #11 0x40062ac4 in vev_schedule_one (evb=0x40817420) at vev.c:498 #12 0x40062fbc in vev_schedule (evb=0x40817420) at vev.c:363 #13 0x00063d28 in MGT_Run () at mgt_child.c:602 #14 0x0007d19c in main (argc=0, argv=0xbeacbb34) at varnishd.c:650 }}} A full backtrace is attached. 
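[Editor's note] The warnings above point at a real hazard: under -fstrict-aliasing the compiler assumes pointers to different types never refer to the same memory, which is exactly the assumption a TAILQ_LAST-style pointer cast violates. A minimal sketch of the general problem and the portable memcpy idiom — illustrative only, not the Varnish code and not a proposed patch:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Unsafe: reads a float's storage through an incompatible uint32_t
 * pointer. With -O2 (which enables -fstrict-aliasing) the compiler may
 * assume the float and the uint32_t never alias and reorder or discard
 * accesses; gcc flags this with -Wstrict-aliasing, just like the
 * VTAILQ_LAST casts reported in this ticket. */
static uint32_t bits_punned(const float *f)
{
    return *(const uint32_t *)f;
}

/* Portable: memcpy() is the well-defined way to reinterpret the bytes
 * of one object as another type; modern compilers compile it to the
 * same single load. */
static uint32_t bits_memcpy(const float *f)
{
    uint32_t u;
    memcpy(&u, f, sizeof u);
    return u;
}
```

On IEEE-754 machines `bits_memcpy` applied to 1.0f yields 0x3f800000, with no undefined behavior involved.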
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Oct 4 13:32:34 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 04 Oct 2011 13:32:34 -0000 Subject: [Varnish] #1026: varnishd immediate segfault on armv7, seemingly strict-aliasing violations In-Reply-To: <041.58951bc1bacb408becb240d0e7491fac@varnish-cache.org> References: <041.58951bc1bacb408becb240d0e7491fac@varnish-cache.org> Message-ID: <050.af48728cbd65e3d75d077ebc1859f4a1@varnish-cache.org> #1026: varnishd immediate segfault on armv7, seemingly strict-aliasing violations --------------------------------------+------------------------------------- Reporter: hno | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: normal Keywords: strict-aliasing segfault | --------------------------------------+------------------------------------- Comment(by hno): And these are the gcc warnings from 3.0.1: (I accidentally pasted the same warnings from 2.1.5 earlier) {{ cache_ban.c: In function 'ban_CheckLast': cache_ban.c:157:6: warning: dereferencing type-punned pointer might break strict-aliasing rules [-Wstrict-aliasing] cache_ban.c: In function 'BAN_TailRef': cache_ban.c:180:6: warning: dereferencing type-punned pointer might break strict-aliasing rules [-Wstrict-aliasing] cache_ban.c: In function 'BAN_Insert': cache_ban.c:405:7: warning: dereferencing type-punned pointer might break strict-aliasing rules [-Wstrict-aliasing] cache_ban.c: In function 'ban_lurker_work': cache_ban.c:764:6: warning: dereferencing type-punned pointer might break strict-aliasing rules [-Wstrict-aliasing] }} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Oct 4 13:33:29 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 04 Oct 2011 13:33:29 -0000 Subject: [Varnish] #1026: varnishd immediate segfault on armv7, seemingly strict-aliasing violations In-Reply-To: 
<041.58951bc1bacb408becb240d0e7491fac@varnish-cache.org> References: <041.58951bc1bacb408becb240d0e7491fac@varnish-cache.org> Message-ID: <050.273f425b88c7b6a7045bd31ac8c7d5ae@varnish-cache.org> #1026: varnishd immediate segfault on armv7, seemingly strict-aliasing violations --------------------------------------+------------------------------------- Reporter: hno | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.0 | Severity: normal Keywords: strict-aliasing segfault | --------------------------------------+------------------------------------- Comment(by hno): And formatted correctly {{{ cache_ban.c: In function 'ban_CheckLast': cache_ban.c:157:6: warning: dereferencing type-punned pointer might break strict-aliasing rules [-Wstrict-aliasing] cache_ban.c: In function 'BAN_TailRef': cache_ban.c:180:6: warning: dereferencing type-punned pointer might break strict-aliasing rules [-Wstrict-aliasing] cache_ban.c: In function 'BAN_Insert': cache_ban.c:405:7: warning: dereferencing type-punned pointer might break strict-aliasing rules [-Wstrict-aliasing] cache_ban.c: In function 'ban_lurker_work': cache_ban.c:764:6: warning: dereferencing type-punned pointer might break strict-aliasing rules [-Wstrict-aliasing] }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Oct 5 09:09:34 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 05 Oct 2011 09:09:34 -0000 Subject: [Varnish] #611: Websockets support In-Reply-To: <045.a4246e1ead3c534eddc2ee18642bc5a7@varnish-cache.org> References: <045.a4246e1ead3c534eddc2ee18642bc5a7@varnish-cache.org> Message-ID: <054.aeec5acd7dfb15dd1ad2336b64c88e50@varnish-cache.org> #611: Websockets support -------------------------+-------------------------------------------------- Reporter: wesnoth | Owner: phk Type: enhancement | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: 
invalid Keywords: | -------------------------+-------------------------------------------------- Old description: > It would be great if Varnish could support websockets from HTML5. > This might require some changes, though I haven't > tested it. There is support for websockets in > Google chrome, other browsers will get support > soon. > > More information about websockets can be > found here: > http://dev.w3.org/html5/websockets/ New description: It would be great if Varnish could support websockets from HTML5. This might require some changes, though I haven't tested it. There is support for websockets in Google chrome, other browsers will get support soon. More information about websockets can be found here: http://dev.w3.org/html5/websockets/ -- Comment(by yves): Pipe is enough for websocket to work. As jdespatis has pointed out, it is a matter of configuring the VCL. See https://www.varnish-software.com/blog/browsers-html5-websocket- varnish-cook-thief-his-wife-her-lover -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Oct 5 21:11:25 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 05 Oct 2011 21:11:25 -0000 Subject: [Varnish] #1027: signal 6 on calling error in vcl_deliver Message-ID: <041.c65e75d5bdfe801f5ce705c67f514e8b@varnish-cache.org> #1027: signal 6 on calling error in vcl_deliver -------------------+-------------------------------------------------------- Reporter: kwy | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Hi. The following VCL will core varnish on accessing the /fail url, as will any other use of error in deliver. I have specific use cases for doing this, in particular to force a synthetic response on conditions detected in vcl_deliver. 
https://github.com/comotion/security.vcl/blob/master/vcl/modules/cloak.vcl As it is I've worked around the issue by using restarts, but I 'spect you want to avoid a segfault anyways. --- sub vcl_recv { if (req.url ~ "^/fail"){ set req.http.fail = "fail"; return (lookup); } } sub vcl_deliver { if(req.http.fail){ error 200 "ok"; } } Child (23949) died signal=6 (core dumped) Child (23949) Panic message: Assert error in VCL_deliver_method(), ../../include/vcl_returns.h line 62: Condition((1U << sp->handling) & ((1U << 9) | (1U << 0) )) not true. thread = (cache-worker) ident = Linux,2.6.32-33-server,x86_64,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x43665e: pan_backtrace+19 0x436933: pan_ic+1ad 0x43f8b8: VCL_deliver_method+10a 0x416f53: cnt_prepresp+52c 0x41bec2: CNT_Session+7c3 0x437d89: Pool_Work_Thread+8b3 0x44ab46: wrk_thread_real+3e7 0x44ad0f: WRK_thread+b4 0x7ffbcfefd9ca: _end+7ffbcf867752 0x7ffbcfc5a70d: _end+7ffbcf5c4495 sp = 0x7ffbc87b5040 { fd = 14, id = 14, xid = 1363644113, client = 87.238.35.17 59639, step = STP_PREPRESP, handling = error, err_code = 200, err_reason = ok, restarts = 0, esi_level = 0 flags = bodystatus = 0 ws = 0x7ffbc87b51a8 { id = "sess", {s,f,r,e} = {0x7ffbc87b5c48,+456,(nil),+65536}, }, http[req] = { ws = 0x7ffbc87b51a8[sess] "GET", "/fail", "HTTP/1.1", "Host: u.delta9.pl", "User-Agent: Mozilla/5.0 (X11; Linux i686; rv:2.0.1) Gecko/20100101 Firefox/4.0.1", "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language: en-us,en;q=0.5", "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7", "Keep-Alive: 115", "DNT: 1", "Connection: keep-alive", "fail: fail", "Accept-Encoding: gzip", }, worker = 0x7ffb40ce69d0 { ws = 0x7ffb40ce6ca0 { id = "wrk", {s,f,r,e} = {0x7ffb40cd4920,+144,(nil),+65536}, }, http[resp] = { ws = 0x7ffb40ce6ca0[wrk] "HTTP/1.1", "Service Unavailable", "Server: Varnish", "Content-Type: text/html; charset=utf-8", "Retry-After: 5", "Content-Length: 419", "Accept-Ranges: bytes", "Date: Wed, 05 Oct 
2011 20:27:16 GMT", "X-Varnish: 1363644113", "Age: 0", "Via: 1.1 varnish", "Connection: close", }, }, vcl = { srcname = { "input", "Default", }, }, obj = 0x7ffba6eff000 { xid = 1363644113, ws = 0x7ffba6eff018 { id = "obj", {s,f,r,e} = {0x7ffba6eff540,+176,(nil),+1024}, }, http[obj] = { ws = 0x7ffba6eff018[obj] "HTTP/1.1", "Service Unavailable", "Date: Wed, 05 Oct 2011 20:27:16 GMT", "Server: Varnish", "Content-Type: text/html; charset=utf-8", "Retry-After: 5", "Content-Length: 419", }, len = 419, store = { 419 { 0a 3c 3f 78 6d 6c 20 76 65 72 73 69 6f 6e 3d 22 |.. Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Oct 6 13:32:45 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 06 Oct 2011 13:32:45 -0000 Subject: [Varnish] #1028: varnishncsa crashes randomly Message-ID: <043.375dabb3546b520d910b5372263afda0@varnish-cache.org> #1028: varnishncsa crashes randomly -------------------+-------------------------------------------------------- Reporter: ljorg | Type: defect Status: new | Priority: normal Milestone: | Component: varnishncsa Version: 3.0.0 | Severity: major Keywords: | -------------------+-------------------------------------------------------- varnishncsa crashes randomly about 40-50 times a day. Running varnish 3.0.0 on RHEL 6 64bit. 
Varnishncsa runs as /usr/local/bin/varnishncsa -a -w /var/log/httpd/access_log -D -P /var/run/varnishncsa.pid -F %h %l %u %t "%m %U %H" %s %b "%{Referer}i" "%{User-agent}i" I have a coredump (attached) and when trying the suggested backtrace get this information: (gdb) bt #0 0x00000039fc47fa32 in __strlen_sse2 () from /lib64/libc.so.6 #1 0x00000039fc46663e in fputs () from /lib64/libc.so.6 #2 0x000000000040217c in h_ncsa (priv=0x13b4c00, tag=20669768, fd=, len=72, spec=, ptr=, bitmap=0) at varnishncsa.c:567 #3 0x00007f5997de4e43 in VSL_Dispatch (vd=0x13b3010, func=0x401d20 , priv=0x13b4c00) at vsl.c:308 #4 0x00000000004017fc in main (argc=9, argv=) at varnishncsa.c:817 I have tried the trunk+2011-10-04 version, but that also crashes randomly. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Oct 6 13:35:39 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 06 Oct 2011 13:35:39 -0000 Subject: [Varnish] #1028: varnishncsa crashes randomly In-Reply-To: <043.375dabb3546b520d910b5372263afda0@varnish-cache.org> References: <043.375dabb3546b520d910b5372263afda0@varnish-cache.org> Message-ID: <052.3e5ed64ecb58878f3ecd32d8b5d88357@varnish-cache.org> #1028: varnishncsa crashes randomly -------------------+-------------------------------------------------------- Reporter: ljorg | Type: defect Status: new | Priority: normal Milestone: | Component: varnishncsa Version: 3.0.0 | Severity: major Keywords: | -------------------+-------------------------------------------------------- Comment(by ljorg): Okay, that backtrace looked weird. 
Trying again: {{{ (gdb) bt #0 0x00000039fc47fa32 in __strlen_sse2 () from /lib64/libc.so.6 #1 0x00000039fc46663e in fputs () from /lib64/libc.so.6 #2 0x000000000040217c in h_ncsa (priv=0x13b4c00, tag=20669768, fd=, len=72, spec=, ptr=, bitmap=0) at varnishncsa.c:567 #3 0x00007f5997de4e43 in VSL_Dispatch (vd=0x13b3010, func=0x401d20 , priv=0x13b4c00) at vsl.c:308 #4 0x00000000004017fc in main (argc=9, argv=) at varnishncsa.c:817 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Oct 6 15:48:50 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 06 Oct 2011 15:48:50 -0000 Subject: [Varnish] #1028: varnishncsa crashes on %m format with invalid requests (was: varnishncsa crashes randomly) In-Reply-To: <043.375dabb3546b520d910b5372263afda0@varnish-cache.org> References: <043.375dabb3546b520d910b5372263afda0@varnish-cache.org> Message-ID: <052.9a5d7db01533c09eec50f62b08ef4767@varnish-cache.org> #1028: varnishncsa crashes on %m format with invalid requests -------------------------+-------------------------------------------------- Reporter: ljorg | Owner: scoof Type: defect | Status: new Priority: high | Milestone: Component: varnishncsa | Version: 3.0.0 Severity: major | Keywords: -------------------------+-------------------------------------------------- Changes (by scoof): * owner: => scoof * priority: normal => high Comment: %m format does not work for invalid requests. We should probably add a default value for when the request is so broken that the method can't be deduced. Looks like you can use %r as a workaround in your case. It also looks like your " needs escaping to get what you really want. 
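[Editor's note] The crash mode in the backtrace — strlen() called from inside fputs() — is what happens when a NULL string pointer reaches fputs(), e.g. a %m method that could not be parsed from a broken request. A guard of the following shape avoids it; the helper name and the "-" placeholder are illustrative assumptions, not varnishncsa's actual code:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: substitute a placeholder when a log field could
 * not be parsed from the request. Passing NULL to fputs()/printf("%s")
 * is undefined behavior in C, which is consistent with the reported
 * segfault in strlen() via fputs(). */
static const char *field_or_default(const char *field)
{
    return field != NULL ? field : "-";
}
```

A formatter would then write `fputs(field_or_default(method), out);` and emit `-` for unparsable requests instead of crashing.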
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Oct 7 07:40:55 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 07 Oct 2011 07:40:55 -0000 Subject: [Varnish] #1028: varnishncsa crashes on %m format with invalid requests In-Reply-To: <043.375dabb3546b520d910b5372263afda0@varnish-cache.org> References: <043.375dabb3546b520d910b5372263afda0@varnish-cache.org> Message-ID: <052.335b69bbbfb7fa40aa49137f443f8056@varnish-cache.org> #1028: varnishncsa crashes on %m format with invalid requests -------------------------+-------------------------------------------------- Reporter: ljorg | Owner: scoof Type: defect | Status: new Priority: high | Milestone: Component: varnishncsa | Version: 3.0.0 Severity: major | Keywords: -------------------------+-------------------------------------------------- Comment(by ljorg): Thank you for your reply. %r doesn't pan out here, it seems to confuse our log analyzer. I'll hold on to the crashing setup (gives a bunch of 2 minute holes in the logging, but otherwise works) until varnishncsa is fixed for %m My quotes are escaped correctly, I hope. I showed how varnishncsa is _running_ in the original post. 
It's started from an init script with this option string: {{{ DAEMON_OPTS="-a -w $logfile -D -P $pidfile -F '%h %l %u %t \"%m %U %H\" %s %b \"%{Referer}i\" \"%{User-agent}i\"'" }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Oct 7 13:19:59 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 07 Oct 2011 13:19:59 -0000 Subject: [Varnish] #1029: ESI mixes compressed and noncompressed output Message-ID: <044.230cf0a51ac93a0bf51dedbebb3d7c54@varnish-cache.org> #1029: ESI mixes compressed and noncompressed output --------------------+------------------------------------------------------- Reporter: sthing | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.1 | Severity: normal Keywords: | --------------------+------------------------------------------------------- I normally have gzip compression on html served by apache. Today I disabled gzip compression when debugging a client side problem. When I reloaded my page the requested URL was refetched from the server, and the ESI-page was taken from cache. Varnish delivered a page with the outer HTML non-compressed but the ESI- part compressed. Source files, result, lines from varnishlog and tcpdump attached. 
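[Editor's note] One quick way to confirm the mixed output described in #1029 is to scan the delivered body for a gzip member header (the magic bytes 0x1f 0x8b) appearing after plain-text content. A small sketch of such a check — illustrative only, not part of Varnish:

```c
#include <assert.h>
#include <stddef.h>

/* Return the offset of the first gzip magic sequence (0x1f 0x8b) in
 * buf, or -1 if none is found. A gzip member header in the middle of
 * an otherwise plain-text body, with no Content-Encoding header, is a
 * sign of mixed compressed/uncompressed ESI output. */
static long find_gzip_magic(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i + 1 < len; i++)
        if (buf[i] == 0x1f && buf[i + 1] == 0x8b)
            return (long)i;
    return -1;
}
```

Running this over the captured response body (e.g. from the attached tcpdump) shows where the compressed ESI fragment begins.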
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 10 10:10:32 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 10 Oct 2011 10:10:32 -0000 Subject: [Varnish] #1029: ESI mixes compressed and noncompressed output In-Reply-To: <044.230cf0a51ac93a0bf51dedbebb3d7c54@varnish-cache.org> References: <044.230cf0a51ac93a0bf51dedbebb3d7c54@varnish-cache.org> Message-ID: <053.64c9908bd23b7a870009e73917fcc6ce@varnish-cache.org> #1029: ESI mixes compressed and noncompressed output ----------------------+----------------------------------------------------- Reporter: sthing | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.1 Severity: normal | Keywords: ----------------------+----------------------------------------------------- Changes (by lkarsten): * owner: => lkarsten -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 10 10:14:12 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 10 Oct 2011 10:14:12 -0000 Subject: [Varnish] #1027: signal 6 on calling error in vcl_deliver In-Reply-To: <041.c65e75d5bdfe801f5ce705c67f514e8b@varnish-cache.org> References: <041.c65e75d5bdfe801f5ce705c67f514e8b@varnish-cache.org> Message-ID: <050.63d8a544c18c5e10f64874a85eac6874@varnish-cache.org> #1027: signal 6 on calling error in vcl_deliver -------------------+-------------------------------------------------------- Reporter: kwy | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Description changed by kristian: Old description: > Hi. > > The following VCL will core varnish on accessing the /fail url, as will > any other use of error in deliver. 
> > I have specific use cases for doing this, in particular to force a > synthetic response on conditions detected in vcl_deliver. > > https://github.com/comotion/security.vcl/blob/master/vcl/modules/cloak.vcl > > As it is I've worked around the issue by using restarts, but I 'spect you > want to avoid a segfault anyways. > > --- > > sub vcl_recv { > if (req.url ~ "^/fail"){ > set req.http.fail = "fail"; > return (lookup); > } > } > > sub vcl_deliver { > if(req.http.fail){ > error 200 "ok"; > } > } > > > Child (23949) died signal=6 (core dumped) > Child (23949) Panic message: Assert error in VCL_deliver_method(), > ../../include/vcl_returns.h line 62: > Condition((1U << sp->handling) & ((1U << 9) | (1U << 0) )) not true. > thread = (cache-worker) > ident = Linux,2.6.32-33-server,x86_64,-sfile,-smalloc,-hcritbit,epoll > Backtrace: > 0x43665e: pan_backtrace+19 > 0x436933: pan_ic+1ad > 0x43f8b8: VCL_deliver_method+10a > 0x416f53: cnt_prepresp+52c > 0x41bec2: CNT_Session+7c3 > 0x437d89: Pool_Work_Thread+8b3 > 0x44ab46: wrk_thread_real+3e7 > 0x44ad0f: WRK_thread+b4 > 0x7ffbcfefd9ca: _end+7ffbcf867752 > 0x7ffbcfc5a70d: _end+7ffbcf5c4495 > sp = 0x7ffbc87b5040 { > fd = 14, id = 14, xid = 1363644113, > client = 87.238.35.17 59639, > step = STP_PREPRESP, > handling = error, > err_code = 200, err_reason = ok, > restarts = 0, esi_level = 0 > flags = > bodystatus = 0 > ws = 0x7ffbc87b51a8 { > id = "sess", > {s,f,r,e} = {0x7ffbc87b5c48,+456,(nil),+65536}, > }, > http[req] = { > ws = 0x7ffbc87b51a8[sess] > "GET", > "/fail", > "HTTP/1.1", > "Host: u.delta9.pl", > "User-Agent: Mozilla/5.0 (X11; Linux i686; rv:2.0.1) Gecko/20100101 > Firefox/4.0.1", > "Accept: > text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", > "Accept-Language: en-us,en;q=0.5", > "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7", > "Keep-Alive: 115", > "DNT: 1", > "Connection: keep-alive", > "fail: fail", > "Accept-Encoding: gzip", > }, > worker = 0x7ffb40ce69d0 { > ws = 0x7ffb40ce6ca0 { > id = 
"wrk", > {s,f,r,e} = {0x7ffb40cd4920,+144,(nil),+65536}, > }, > http[resp] = { > ws = 0x7ffb40ce6ca0[wrk] > "HTTP/1.1", > "Service Unavailable", > "Server: Varnish", > "Content-Type: text/html; charset=utf-8", > "Retry-After: 5", > "Content-Length: 419", > "Accept-Ranges: bytes", > "Date: Wed, 05 Oct 2011 20:27:16 GMT", > "X-Varnish: 1363644113", > "Age: 0", > "Via: 1.1 varnish", > "Connection: close", > }, > }, > vcl = { > srcname = { > "input", > "Default", > }, > }, > obj = 0x7ffba6eff000 { > xid = 1363644113, > ws = 0x7ffba6eff018 { > id = "obj", > {s,f,r,e} = {0x7ffba6eff540,+176,(nil),+1024}, > }, > http[obj] = { > ws = 0x7ffba6eff018[obj] > "HTTP/1.1", > "Service Unavailable", > "Date: Wed, 05 Oct 2011 20:27:16 GMT", > "Server: Varnish", > "Content-Type: text/html; charset=utf-8", > "Retry-After: 5", > "Content-Length: 419", > }, > len = 419, > store = { > 419 { > 0a 3c 3f 78 6d 6c 20 76 65 72 73 69 6f 6e 3d 22 |. version="| > 31 2e 30 22 20 65 6e 63 6f 64 69 6e 67 3d 22 75 |1.0" > encoding="u| > 74 66 2d 38 22 3f 3e 0a 3c 21 44 4f 43 54 59 50 > |tf-8"?>. 45 20 68 74 6d 6c 20 50 55 42 4c 49 43 20 22 2d |E html PUBLIC > "-| > [355 more] > }, > }, > }, > }, > > Child cleanup complete > child (23985) Started > Child (23985) said Child starts > Child (23985) said SMF.s0 mmap'ed 2147483648 bytes of 2147483648 New description: Hi. The following VCL will core varnish on accessing the /fail url, as will any other use of error in deliver. I have specific use cases for doing this, in particular to force a synthetic response on conditions detected in vcl_deliver. https://github.com/comotion/security.vcl/blob/master/vcl/modules/cloak.vcl As it is I've worked around the issue by using restarts, but I 'spect you want to avoid a segfault anyways. 
--- {{{ sub vcl_recv { if (req.url ~ "^/fail"){ set req.http.fail = "fail"; return (lookup); } } sub vcl_deliver { if(req.http.fail){ error 200 "ok"; } } }}} {{{ Child (23949) died signal=6 (core dumped) Child (23949) Panic message: Assert error in VCL_deliver_method(), ../../include/vcl_returns.h line 62: Condition((1U << sp->handling) & ((1U << 9) | (1U << 0) )) not true. thread = (cache-worker) ident = Linux,2.6.32-33-server,x86_64,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x43665e: pan_backtrace+19 0x436933: pan_ic+1ad 0x43f8b8: VCL_deliver_method+10a 0x416f53: cnt_prepresp+52c 0x41bec2: CNT_Session+7c3 0x437d89: Pool_Work_Thread+8b3 0x44ab46: wrk_thread_real+3e7 0x44ad0f: WRK_thread+b4 0x7ffbcfefd9ca: _end+7ffbcf867752 0x7ffbcfc5a70d: _end+7ffbcf5c4495 sp = 0x7ffbc87b5040 { fd = 14, id = 14, xid = 1363644113, client = 87.238.35.17 59639, step = STP_PREPRESP, handling = error, err_code = 200, err_reason = ok, restarts = 0, esi_level = 0 flags = bodystatus = 0 ws = 0x7ffbc87b51a8 { id = "sess", {s,f,r,e} = {0x7ffbc87b5c48,+456,(nil),+65536}, }, http[req] = { ws = 0x7ffbc87b51a8[sess] "GET", "/fail", "HTTP/1.1", "Host: u.delta9.pl", "User-Agent: Mozilla/5.0 (X11; Linux i686; rv:2.0.1) Gecko/20100101 Firefox/4.0.1", "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language: en-us,en;q=0.5", "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7", "Keep-Alive: 115", "DNT: 1", "Connection: keep-alive", "fail: fail", "Accept-Encoding: gzip", }, worker = 0x7ffb40ce69d0 { ws = 0x7ffb40ce6ca0 { id = "wrk", {s,f,r,e} = {0x7ffb40cd4920,+144,(nil),+65536}, }, http[resp] = { ws = 0x7ffb40ce6ca0[wrk] "HTTP/1.1", "Service Unavailable", "Server: Varnish", "Content-Type: text/html; charset=utf-8", "Retry-After: 5", "Content-Length: 419", "Accept-Ranges: bytes", "Date: Wed, 05 Oct 2011 20:27:16 GMT", "X-Varnish: 1363644113", "Age: 0", "Via: 1.1 varnish", "Connection: close", }, }, vcl = { srcname = { "input", "Default", }, }, obj = 0x7ffba6eff000 { 
xid = 1363644113, ws = 0x7ffba6eff018 { id = "obj", {s,f,r,e} = {0x7ffba6eff540,+176,(nil),+1024}, }, http[obj] = { ws = 0x7ffba6eff018[obj] "HTTP/1.1", "Service Unavailable", "Date: Wed, 05 Oct 2011 20:27:16 GMT", "Server: Varnish", "Content-Type: text/html; charset=utf-8", "Retry-After: 5", "Content-Length: 419", }, len = 419, store = { 419 { 0a 3c 3f 78 6d 6c 20 76 65 72 73 69 6f 6e 3d 22 |.. Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 10 12:57:16 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 10 Oct 2011 12:57:16 -0000 Subject: [Varnish] #1028: varnishncsa crashes on %m format with invalid requests In-Reply-To: <043.375dabb3546b520d910b5372263afda0@varnish-cache.org> References: <043.375dabb3546b520d910b5372263afda0@varnish-cache.org> Message-ID: <052.6eb601b944525c0f50953ed282ece701@varnish-cache.org> #1028: varnishncsa crashes on %m format with invalid requests -------------------------+-------------------------------------------------- Reporter: ljorg | Owner: scoof Type: defect | Status: closed Priority: high | Milestone: Component: varnishncsa | Version: 3.0.0 Severity: major | Resolution: fixed Keywords: | -------------------------+-------------------------------------------------- Changes (by Andreas Plesner Jacobsen ): * status: new => closed * resolution: => fixed Comment: (In [7a0dc3f78987b37edc927c231a42879915e9f2ac]) Add default values for some fields when logging imcomplete records. Allow %r format to log incomplete records too. 
Update docs to reflect new defaults Fixes #1028 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 10 13:09:32 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 10 Oct 2011 13:09:32 -0000 Subject: [Varnish] #1029: ESI mixes compressed and noncompressed output In-Reply-To: <044.230cf0a51ac93a0bf51dedbebb3d7c54@varnish-cache.org> References: <044.230cf0a51ac93a0bf51dedbebb3d7c54@varnish-cache.org> Message-ID: <053.97a70233dd4ef23505d88144f7186edd@varnish-cache.org> #1029: ESI mixes compressed and noncompressed output ----------------------+----------------------------------------------------- Reporter: sthing | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Changes (by lkarsten): * version: 3.0.1 => trunk Comment: I am able to reproduce this bug on (varnish-3.0.0 revision cbf1284) and on trunk (7a0dc3f). root at immer:~# a2enmod deflate; service apache2 restart; lkarsten at immer:~$ GET -Used -H "Accept-Encoding: gzip" http://localhost:81/1029/esi.html root at immer:~# a2dismod deflate; service apache2 restart; # gives part plaintext, part gzipped response, without Content-Encoding header. 
lkarsten at immer:~$ GET -H "Accept-Encoding: gzip" -Use http://localhost:81/1029/main.html -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Oct 11 04:44:24 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 11 Oct 2011 04:44:24 -0000 Subject: [Varnish] #516: vsl_mtx "deadlock"; child stops responding In-Reply-To: <040.47b4df5616659526789f282834055e47@varnish-cache.org> References: <040.47b4df5616659526789f282834055e47@varnish-cache.org> Message-ID: <049.5dde55d4d0abb5a0436c0c0d89641799@varnish-cache.org> #516: vsl_mtx "deadlock"; child stops responding ----------------------+----------------------------------------------------- Reporter: kb | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: worksforme Keywords: | ----------------------+----------------------------------------------------- Comment(by melong001): Replying to [ticket:516 kb]: > I'm seeing children that stop responding reliably every day at roughly the same time: > > {{{ > Jun 4 07:08:32 statcache0 varnishd[5958]: Child (26929) not responding to ping, killing it. > Jun 4 07:08:36 statcache0 last message repeated 2 times > Jun 4 07:08:36 statcache0 varnishd[5958]: Child (26929) died signal=3 (core dumped) > Jun 4 07:08:36 statcache0 varnishd[5958]: Child cleanup complete > }}} > > GDB: > > {{{ > (gdb) where > #0 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 > #1 0x00007f054c4acb08 in _L_lock_104 () from /lib/libpthread.so.0 > #2 0x00007f054c4ac470 in pthread_mutex_lock () from /lib/libpthread.so.0 > #3 0x000000000042de2b in VSL (tag=SLT_CLI, id=0, fmt=0x43b502 "Rd %s") at shmlog.c:154 > #4 0x0000000000411e95 in cli_vlu (priv=0x7fff553115e0, p=0xffffffffffffffff
) at cache_cli.c:105 > #5 0x00007f054ceec472 in LineUpProcess (l=0x7f054bb08370) at vlu.c:156 > #6 0x0000000000411d9c in CLI_Run () at cache_cli.c:165 > #7 0x000000000041a243 in child_main () at cache_main.c:134 > #8 0x0000000000428a0a in start_child (cli=0x0) at mgt_child.c:319 > #9 0x0000000000429212 in mgt_sigchld (e=, what=) at mgt_child.c:472 > #10 0x00007f054ceeb4ea in vev_sched_signal (evb=0x7f054bb0d040) at vev.c:437 > #11 0x00007f054ceebb3d in vev_schedule (evb=0x7f054bb0d040) at vev.c:365 > #12 0x0000000000428cca in mgt_run (dflag=0, T_arg=) at mgt_child.c:560 > #13 0x000000000043228a in main (argc=, argv=0x7fff55311d48) at varnishd.c:655 > }}} > > It's a block on vsl_mtx, and most other threads are blocked too: > > {{{ > (gdb) info thread > 16 process 26930 0x00007f054c4b1e81 in nanosleep () from /lib/libpthread.so.0 > 15 process 26931 0x00007f054c4aeb99 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/libpthread.so.0 > 14 process 26933 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 > 13 process 26942 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 > 12 process 26943 0x00007f054bd76c86 in poll () from /lib/libc.so.6 > 11 process 8811 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 > 10 process 15459 0x00007f054bd788e3 in writev () from /lib/libc.so.6 > 9 process 15676 0x000000000042dadd in WSL_Flush (w=0x44e39be0, overflow=) at shmlog.c:194 > 8 process 17612 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 > 7 process 18638 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 > 6 process 19059 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 > 5 process 20041 0x00007f054bd788e3 in writev () from /lib/libc.so.6 > 4 process 20226 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 > 3 process 22666 HSH_Lookup (sp=0x7f04ff330008) at cache_hash.c:297 > 2 process 1427 0x00007f054bd788e3 in writev () from /lib/libc.so.6 > * 1 process 26929 
0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 > }}} > > The blocks are either the LOCKSHM(&vsl_mtx) in VSL() (line 154 in 2.0.4) or LOCKSHM(&vsl_mtx) in WSL_Flush() (line 187). > > One thread is always at line 194 of WSL_Flush(): > > p[l] = SLT_ENDMARKER; > > p is pretty weird; 2^64^-1 above and in the thread that terminates at line 194: > > {{{ > (gdb) where full > #0 0x000000000042dadd in WSL_Flush (w=0x44e39be0, overflow=) at shmlog.c:194 > p = (unsigned char *) 0x7f0549fc688c
> l = 1958 > __func__ = "WSL_Flush" > #1 0x0000000000410b17 in cnt_done (sp=0x7f04ff32b008) at cache_center.c:235 > dh = > dp = > da = > pfd = {{fd = -13455352, events = 32516, revents = 0}} > i = > __func__ = "cnt_done" > #2 0x0000000000411019 in CNT_Session (sp=0x7f04ff32b008) at steps.h:44 > done = 0 > w = (struct worker *) 0x44e39be0 > __func__ = "CNT_Session" > #3 0x000000000041cb6d in wrk_do_cnt_sess (w=0x44e39be0, priv=) at cache_pool.c:398 > sess = (struct sess *) 0x7f04ff32b008 > __func__ = "wrk_do_cnt_sess" > #4 0x000000000041c21b in wrk_thread (priv=0x7f054bb0c0b0) at cache_pool.c:310 > ww = {magic = 1670491599, nobjhead = 0x0, nobj = 0x0, lastused = 1244124456.0244427, cond = {__data = {__lock = 0, __futex = 700852, __total_seq = 350426, __wakeup_seq = 350426, __woken_seq = 350426, > __mutex = 0x7f0546904228, __nwaiters = 0, __broadcast_seq = 0}, > __size = "\000\000\000\000??\n\000?X\005\000\000\000\000\000?X\005\000\000\000\000\000?X\005\000\000\000\000\000(B\220F\005\177\000\000\000\000\000\000\000\000\000", __align = 3010136419336192}, list = { > vtqe_next = 0x45e3bbe0, vtqe_prev = 0x7f054bb0c0c0}, wrq = 0x7f04ff32b198, wfd = 0x0, werr = 0, iov = {{iov_base = 0x7f052d0a6358, iov_len = 8}, {iov_base = 0x43cfa8, iov_len = 1}, { > iov_base = 0x7f052d0a6361, iov_len = 3}, {iov_base = 0x43cfa8, iov_len = 1}, {iov_base = 0x7f052d0a6365, iov_len = 2}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f052d0a6368, iov_len = 15}, { > iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f052d0a6461, iov_len = 38}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f052d0a6488, iov_len = 34}, {iov_base = 0x43ea4b, iov_len = 2}, { > iov_base = 0x7f052d0a64ab, iov_len = 44}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f052d0a64d8, iov_len = 17}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f052d0a650e, iov_len = 59}, { > iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f052d0a654a, iov_len = 24}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 
0x7f052d0a6563, iov_len = 17}, {iov_base = 0x43ea4b, iov_len = 2}, { > iov_base = 0x7f04ff32bf5e, iov_len = 35}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f04ff32bf82, iov_len = 21}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f04ff32bf98, iov_len = 6}, { > iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x43d49b, iov_len = 16}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f04ff32bf9f, iov_len = 22}, {iov_base = 0x43ea4b, iov_len = 2}, { > iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f051fd74000, iov_len = 185528}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f052c21b000, iov_len = 80}, {iov_base = 0x43ea4b, iov_len = 2}, { > iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f04ff12e31d, iov_len = 22}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x44e35a00, iov_len = 145}, { > iov_base = 0x7f04ff376dd8, iov_len = 33}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f04fed092a0, iov_len = 21}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f04fed092b6, iov_len = 28}, { > iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f04fed0931c, iov_len = 22}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x0, iov_len = 0} }, > niov = 0, liov = 0, vcl = 0x7f053f765328, srcaddr = 0x7f04fe53d2c0, wlb = 0x44e37bb0 "\026", wlp = 0x44e38356 "", wle = 0x44e39bb0 "", wlr = 52, sha256ctx = 0x44e3a0a0} > sha256 = {state = {0, 0, 0, 0, 0, 0, 0, 0}, count = 0, buf = '\0' } > __func__ = "wrk_thread" > #5 0x00007f054c4aa3f7 in start_thread () from /lib/libpthread.so.0 > No symbol table info available. > #6 0x00007f054bd7fb3d in clone () from /lib/libc.so.6 > No symbol table info available. > #7 0x0000000000000000 in ?? () > No symbol table info available. > }}} > > Again p is out of bounds. > > Fascinating, note what ''always'' happens right before the FAIL: > > {{{ > Jun 4 07:08:30 statcache0 syslogd 1.5.0#1ubuntu1: restart. 
> Jun 4 07:08:32 statcache0 varnishd[5958]: Child (26929) not responding to ping, killing it. > Jun 4 07:08:36 statcache0 last message repeated 2 times > Jun 4 07:08:36 statcache0 varnishd[5958]: Child (26929) died signal=3 (core dumped) > Jun 4 07:08:36 statcache0 varnishd[5958]: Child cleanup complete > Jun 4 07:08:36 statcache0 varnishd[5958]: child (2535) Started > Jun 4 07:08:36 statcache0 varnishd[5958]: Child (2535) said Closed fds: 4 5 6 9 10 12 13 > Jun 4 07:08:36 statcache0 varnishd[5958]: Child (2535) said Child starts > Jun 4 07:08:36 statcache0 varnishd[5958]: Child (2535) said managed to mmap 1073741824 bytes of 1073741824 > Jun 4 07:08:36 statcache0 varnishd[5958]: Child (2535) said Ready > }}} > > Nothing else from varnishd shows up in the log except the above, so there's no spurious log flood. I'm also not doing any syslog() C tricks (yet) so what syslog() or log rotation dependency is there within varnishd that could cause this? Manually running the daily scripts doesn't cause this, but I'm trying to find a reproduction case. Though clearly something varnishy is awry. 
> > Thx,[[BR]] > -- [[BR]] > Ken.[[BR]] -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Oct 11 04:48:20 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 11 Oct 2011 04:48:20 -0000 Subject: [Varnish] #516: vsl_mtx "deadlock"; child stops responding In-Reply-To: <040.47b4df5616659526789f282834055e47@varnish-cache.org> References: <040.47b4df5616659526789f282834055e47@varnish-cache.org> Message-ID: <049.52c68b977054005d093539e9f05e68f7@varnish-cache.org> #516: vsl_mtx "deadlock"; child stops responding ----------------------+----------------------------------------------------- Reporter: kb | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: worksforme Keywords: | ----------------------+----------------------------------------------------- Comment(by melong001): Sorry, I just got an error from Varnish. The GDB information is as follows; could you help me? #0 0xffffe424 in __kernel_vsyscall () #1 0x0058f30e in __lll_mutex_lock_wait () from /lib/tls/libpthread.so.0 #2 0x0058bf3b in _L_mutex_lock_35 () from /lib/tls/libpthread.so.0 #3 0xbffb3498 in ?? () #4 0xb77bb2e4 in ?? () from /usr/local/varnishd/lib/libvarnish.so.1 #5 0xb760221c in ?? () #6 0xbffb3f00 in ?? () #7 0xbffb34b8 in ?? 
() #8 0x0808a37a in VSL (tag=134962740, id=37, fmt=0xb77b8e88 "Unknown request.\nType 'help' for more info.\n") at shmlog.c:160 #9 0x0808a37a in VSL (tag=SLT_CLI, id=0, fmt=0x80a2abb "Rd %s") at shmlog.c:160 #10 0x0805ed31 in cli_cb_before (cli=0xb760221c) at cache_cli.c:90 #11 0xb77adbb1 in cls_vlu2 (priv=0xb7602200, av=0x860409c0) at cli_serve.c:263 #12 0xb77ae0c6 in cls_vlu (priv=0xb7602200, p=0xaea13000 "ping") at cli_serve.c:342 #13 0xb77b23d7 in LineUpProcess (l=0xb7601440) at vlu.c:157 #14 0xb77b25b8 in VLU_Fd (fd=7, l=0xb7601440) at vlu.c:182 #15 0xb77af14e in CLS_Poll (cs=0xb7607430, timeout=-1) at cli_serve.c:532 #16 0x0805eeb5 in CLI_Run () at cache_cli.c:116 #17 0x0806fc2b in child_main () at cache_main.c:139 #18 0x08082234 in start_child (cli=0xb760211c) at mgt_child.c:407 #19 0x0808301d in mcf_server_startstop (cli=0xb760211c, av=0xb76091c0, priv=0x0) at mgt_child.c:658 #20 0xb77ad920 in cls_dispatch (cli=0xb760211c, clp=0x80b5500, av=0xb76091c0, ac=1) at cli_serve.c:231 #21 0xb77adcc9 in cls_vlu2 (priv=0xb7602100, av=0xb76091c0) at cli_serve.c:287 #22 0xb77ae0c6 in cls_vlu (priv=0xb7602100, p=0xb7667000 "start") at cli_serve.c:342 #23 0xb77b23d7 in LineUpProcess (l=0xb76012e0) at vlu.c:157 #24 0xb77b25b8 in VLU_Fd (fd=0, l=0xb76012e0) at vlu.c:182 #25 0xb77aee69 in CLS_PollFd (cs=0xb76073d0, fd=0, timeout=0) at cli_serve.c:493 #26 0x08085a0b in mgt_cli_callback2 (e=0xb76260b0, what=1) at mgt_cli.c:386 #27 0xb77b1df8 in vev_schedule_one (evb=0xb7607370) at vev.c:501 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Oct 11 17:08:42 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 11 Oct 2011 17:08:42 -0000 Subject: [Varnish] #1030: ban_lurker doesn't sleep for 1 sec when nothing can be done Message-ID: <043.e139d9ea98433bae1b334f53b588b82b@varnish-cache.org> #1030: ban_lurker doesn't sleep for 1 sec when nothing can be done 
----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: new Priority: low | Milestone: Component: varnishd | Version: trunk Severity: minor | Keywords: ----------------------+----------------------------------------------------- Either the code or the docs are wrong, ban_lurker will only sleep for ban_lurker_sleep, even when there's no work to be done. The only case when it sleeps for one second is if it's disabled entirely. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Oct 12 17:17:48 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 12 Oct 2011 17:17:48 -0000 Subject: [Varnish] #1031: error 200 "long string" overflows obj.ws effectively unsetting obj.response which subsequently causes Bad Things Message-ID: <046.70006b389b787dd13b443bf4e69a6684@varnish-cache.org> #1031: error 200 "long string" overflows obj.ws effectively unsetting obj.response which subsequently causes Bad Things ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- VTC code: {{{ kristian at freud:~$ cat foo.vtc varnishtest "Test overflowing the response through sp->err_reason" varnish v1 -vcl { backend blatti { .host = "127.0.0.1"; } sub vcl_recv { error 200 
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"; } sub vcl_error { return(deliver); } } -start client c1 { txreq -req GET rxresp expect resp.status == 200 } -run }}} Result: {{{ **** top 0.0 macro def varnishd=varnishd **** top 0.0 macro def pwd=/home/kristian **** top 0.0 macro def topbuild=/home/kristian/../.. **** top 0.0 macro def bad_ip=10.255.255.255 **** top 0.0 macro def tmpdir=/tmp/vtc.931.5efb64dd * top 0.0 TEST foo.vtc starting *** top 0.0 varnishtest * top 0.0 TEST Test overflowing the response through sp->err_reason *** top 0.0 varnish ** v1 0.0 Launch *** v1 0.0 CMD: cd ${pwd} && ${varnishd} -d -d -n /tmp/vtc.931.5efb64dd/v1 -l 10m,1m,- -p auto_restart=off -p syslog_cli_traffic=off -a '127.0.0.1:0' -S /tmp/vtc.931.5efb64dd/v1/_S -M '127.0.0.1 46806' -P /tmp/vtc.931.5efb64dd/v1/varnishd.pid -sfile,/tmp/vtc.931.5efb64dd/v1,10M *** v1 0.0 CMD: cd /home/kristian && varnishd -d -d -n /tmp/vtc.931.5efb64dd/v1 -l 10m,1m,- -p auto_restart=off -p syslog_cli_traffic=off -a '127.0.0.1:0' -S /tmp/vtc.931.5efb64dd/v1/_S -M '127.0.0.1 46806' -P /tmp/vtc.931.5efb64dd/v1/varnishd.pid -sfile,/tmp/vtc.931.5efb64dd/v1,10M *** v1 0.0 PID: 936 *** v1 0.0 debug| Platform: Linux,2.6.38-11-generic- pae,i686,-sfile,-smalloc,-hcritbit\n *** v1 0.0 debug| 200 245 \n *** v1 0.0 debug| -----------------------------\n *** v1 0.0 debug| Varnish Cache CLI 1.0\n *** v1 0.0 debug| -----------------------------\n *** v1 0.0 debug| Linux,2.6.38-11-generic- pae,i686,-sfile,-smalloc,-hcritbit\n *** v1 0.0 debug| \n *** v1 0.0 debug| Type 'help' for command list.\n *** v1 0.0 debug| Type 'quit' to close CLI session.\n *** v1 0.0 debug| Type 'start' to launch worker process.\n *** v1 0.0 debug| \n **** v1 0.1 CLIPOLL 1 0x1 0x0 *** v1 0.1 CLI connection fd = 7 *** v1 0.1 CLI RX 107 **** v1 0.1 CLI RX| dfgufafbesqprwjuzishruoicbawioen\n **** v1 0.1 CLI RX| \n **** v1 0.1 CLI RX| Authentication required.\n **** v1 0.1 CLI TX| auth 
1c28913db635bc02fd8bcc0dc64bca844d65a1971c55ea2d86c6279ef09251cd\n *** v1 0.1 CLI RX 200 **** v1 0.1 CLI RX| -----------------------------\n **** v1 0.1 CLI RX| Varnish Cache CLI 1.0\n **** v1 0.1 CLI RX| -----------------------------\n **** v1 0.1 CLI RX| Linux,2.6.38-11-generic- pae,i686,-sfile,-smalloc,-hcritbit\n **** v1 0.1 CLI RX| \n **** v1 0.1 CLI RX| Type 'help' for command list.\n **** v1 0.1 CLI RX| Type 'quit' to close CLI session.\n **** v1 0.1 CLI RX| Type 'start' to launch worker process.\n **** v1 0.1 CLI TX| vcl.inline vcl1 << %XJEIFLH|)Xspa8P\n **** v1 0.1 CLI TX| \n **** v1 0.1 CLI TX| \tbackend blatti {\n **** v1 0.1 CLI TX| \t\t.host = "127.0.0.1";\n **** v1 0.1 CLI TX| \t}\n **** v1 0.1 CLI TX| \n **** v1 0.1 CLI TX| \tsub vcl_recv {\n **** v1 0.1 CLI TX| \t\terror 200 "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa... *** v1 0.2 CLI RX 200 **** v1 0.2 CLI RX| VCL compiled. 
**** v1 0.2 CLI TX| vcl.use vcl1 *** v1 0.2 CLI RX 200 ** v1 0.2 Start **** v1 0.2 CLI TX| start *** v1 0.3 debug| child (949) Started\n **** v1 0.3 vsl| 0 WorkThread - 0xb58fd00c start **** v1 0.3 vsl| 0 CLI - Rd vcl.load "vcl1" ./vcl.ICtxlBZs.so **** v1 0.3 vsl| 0 CLI - Wr 200 36 Loaded "./vcl.ICtxlBZs.so" as "vcl1" **** v1 0.3 vsl| 0 CLI - Rd vcl.use "vcl1" **** v1 0.3 vsl| 0 CLI - Wr 200 0 **** v1 0.3 vsl| 0 CLI - Rd start **** v1 0.3 vsl| 0 Debug - Acceptor is epoll **** v1 0.3 vsl| 0 CLI - Wr 200 0 *** v1 0.3 CLI RX 200 **** v1 0.3 CLI TX| debug.xid 1000 *** v1 0.3 debug| Child (949) said Not running as root, no priv-sep\n *** v1 0.3 debug| Child (949) said Child starts\n *** v1 0.3 debug| Child (949) said SMF.s0 mmap'ed 10485760 bytes of 10485760\n **** v1 0.3 vsl| 0 WorkThread - 0xb752700c start **** v1 0.3 vsl| 0 WorkThread - 0xb751600c start **** v1 0.3 vsl| 0 WorkThread - 0xb50eb00c start **** v1 0.3 vsl| 0 WorkThread - 0xb50da00c start **** v1 0.3 vsl| 0 WorkThread - 0xb50c900c start **** v1 0.3 vsl| 0 WorkThread - 0xb50b800c start **** v1 0.3 vsl| 0 WorkThread - 0xb50a700c start **** v1 0.3 vsl| 0 WorkThread - 0xb509600c start **** v1 0.3 vsl| 0 WorkThread - 0xb508500c start *** v1 0.3 CLI RX 200 **** v1 0.3 CLI RX| XID is 1000 **** v1 0.3 CLI TX| debug.listen_address **** v1 0.3 vsl| 0 CLI - Rd debug.xid 1000 **** v1 0.3 vsl| 0 CLI - Wr 200 11 XID is 1000 *** v1 0.3 CLI RX 200 **** v1 0.3 CLI RX| 127.0.0.1 38730\n ** v1 0.3 Listen on 127.0.0.1 38730 **** v1 0.3 macro def v1_addr=127.0.0.1 **** v1 0.3 macro def v1_port=38730 **** v1 0.3 macro def v1_sock=127.0.0.1 38730 *** top 0.3 client ** c1 0.3 Starting client ** c1 0.3 Waiting for client *** c1 0.3 Connect to 127.0.0.1 38730 *** c1 0.3 connected fd 8 from 127.0.0.1 58396 to 127.0.0.1 38730 *** c1 0.3 txreq **** c1 0.3 txreq| GET / HTTP/1.1\r\n **** c1 0.3 txreq| \r\n *** c1 0.3 rxresp ---- c1 0.3 HTTP rx EOF (fd:8 read: Success) *** v1 0.3 debug| Child (949) died signal=6\n * top 0.3 RESETTING 
after foo.vtc *** v1 0.3 debug| Child (949) Panic message: Assert error in Tcheck(), cache.h line 981:\n *** v1 0.3 debug| Condition((t.b) != 0) not true.\n *** v1 0.3 debug| thread = (cache-worker)\n *** v1 0.3 debug| ident = Linux,2.6.38-11-generic- pae,i686,-sfile,-smalloc,-hcritbit,epoll\n *** v1 0.3 debug| Backtrace:\n *** v1 0.3 debug| 0x80783a2: varnishd() [0x80783a2]\n *** v1 0.3 debug| 0x80724d3: varnishd() [0x80724d3]\n *** v1 0.3 debug| 0x807b807: varnishd(RES_BuildHttp+0x97) [0x807b807]\n *** v1 0.3 debug| 0x805cfd3: varnishd() [0x805cfd3]\n *** v1 0.3 debug| 0x805dc12: varnishd(CNT_Session+0x842) [0x805dc12]\n *** v1 0.3 debug| 0x8079b3f: varnishd() [0x8079b3f]\n *** v1 0.3 debug| 0x807a7d6: varnishd() [0x807a7d6]\n *** v1 0.3 debug| 0x807ae2b: ... *** v1 0.3 debug| Child cleanup complete\n **** v1 0.4 vsl| 0 CLI - Rd debug.listen_address **** v1 0.4 vsl| 0 CLI - Wr 200 16 127.0.0.1 38730 **** v1 0.4 vsl| 13 SessionOpen c 127.0.0.1 58396 127.0.0.1:0 ** v1 1.3 Wait ** v1 1.3 R 936 Status: 0000 * top 1.4 TEST foo.vtc FAILED # top TEST foo.vtc FAILED (1.370) exit=1 }}} Discussion: Note that setting a large obj.response manually in vcl_error will NOT cause problems, even though the workspace is overflowed. There are several different considerations: 1. We're using a rather mysterious workspace (aka: obj workspace) for obj.response in vcl_error. There's no way to tune that. 2. The response message is set in two different ways depending on whether it's set through sp->err_reason (which will cause it to be set in cache_center.c:cnt_error() using http_PutResponse()) or later through VRT_l_obj_response() which will use cache_vrt.c and vrt_do_string()). 3. I wish for an assert in the code that actually reads this value instead of failing in Tcheck(). 4. Perhaps even a test after VCL_error_method() setting the default if the response is blank? 
I'd say "default VCL" but there isn't a valid The biggest difference between the setter-methods is that going through http_PutResponse() will actually /zero/ the existing value if the WS allocation fails, thereby unsetting the response, while vrt_do_string() will just log it as a lost header and return. This was spotted by the APDM-people, pondered by Martin and finally broken apart by Martin and myself. Unfortunately, we're now too hungry and sleepy to fix it properly. The actual use-case that triggered it was HTTP redirects, something along the lines of: {{{ recv: if (req.http.host ~ "^example\.com$") { error 750 "http://www.example.com" + req.url; } error: if (obj.status == 750) { set obj.http.location = obj.response; set obj.status = 301; return (deliver); } }}} Workaround: {{{ recv: ... error 750 "CAKE"; error: if (... == 750) { set obj.http.location = "http://www.example.com/Bunnies/fallback_for_overflow"; set obj.http.location = "http://www.example.com" + req.url; } }}} The above will still fail due to overflowing the object workspace, but not with an assert(). It will just be a losthdr and redirecting to a funny bunny page instead of the real page. In theory. Technically, this was only tested on 3.0.2-rc1, not trunk. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Oct 12 17:24:20 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 12 Oct 2011 17:24:20 -0000 Subject: [Varnish] #1031: error 200 "long string" overflows obj.ws effectively unsetting obj.response which subsequently causes Bad Things In-Reply-To: <046.70006b389b787dd13b443bf4e69a6684@varnish-cache.org> References: <046.70006b389b787dd13b443bf4e69a6684@varnish-cache.org> Message-ID: <055.48f6f1e01513e6347737a21157e8e5c4@varnish-cache.org> #1031: error 200 "long string" overflows obj.ws effectively unsetting obj.response which subsequently causes Bad Things ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Old description: > VTC code: > {{{ > kristian at freud:~$ cat foo.vtc > varnishtest "Test overflowing the response through sp->err_reason" > > varnish v1 -vcl { > backend blatti { > .host = "127.0.0.1"; > } > > sub vcl_recv { > error 200 > 
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"; > } > sub vcl_error { > return(deliver); > } > } -start > > client c1 { > txreq -req GET > rxresp > expect resp.status == 200 > } -run > }}} > > Result: > > {{{ > **** top 0.0 macro def varnishd=varnishd > **** top 0.0 macro def pwd=/home/kristian > **** top 0.0 macro def topbuild=/home/kristian/../.. > **** top 0.0 macro def bad_ip=10.255.255.255 > **** top 0.0 macro def tmpdir=/tmp/vtc.931.5efb64dd > * top 0.0 TEST foo.vtc starting > *** top 0.0 varnishtest > * top 0.0 TEST Test overflowing the response through sp->err_reason > *** top 0.0 varnish > ** v1 0.0 Launch > *** v1 0.0 CMD: cd ${pwd} && ${varnishd} -d -d -n > /tmp/vtc.931.5efb64dd/v1 -l 10m,1m,- -p auto_restart=off -p > syslog_cli_traffic=off -a '127.0.0.1:0' -S /tmp/vtc.931.5efb64dd/v1/_S -M > '127.0.0.1 46806' -P /tmp/vtc.931.5efb64dd/v1/varnishd.pid > -sfile,/tmp/vtc.931.5efb64dd/v1,10M > *** v1 0.0 CMD: cd /home/kristian && varnishd -d -d -n > /tmp/vtc.931.5efb64dd/v1 -l 10m,1m,- -p auto_restart=off -p > syslog_cli_traffic=off -a '127.0.0.1:0' -S /tmp/vtc.931.5efb64dd/v1/_S -M > '127.0.0.1 46806' -P /tmp/vtc.931.5efb64dd/v1/varnishd.pid > -sfile,/tmp/vtc.931.5efb64dd/v1,10M > *** v1 0.0 PID: 936 > *** v1 0.0 debug| Platform: Linux,2.6.38-11-generic- > pae,i686,-sfile,-smalloc,-hcritbit\n > *** v1 0.0 debug| 200 245 \n > *** v1 0.0 debug| -----------------------------\n > *** v1 0.0 debug| Varnish Cache CLI 1.0\n > *** v1 0.0 debug| -----------------------------\n > *** v1 0.0 debug| Linux,2.6.38-11-generic- > pae,i686,-sfile,-smalloc,-hcritbit\n > *** v1 0.0 debug| \n > *** v1 0.0 debug| Type 'help' for command list.\n > *** v1 0.0 debug| Type 'quit' to close CLI session.\n > *** v1 0.0 debug| Type 'start' to launch worker process.\n > *** v1 0.0 debug| \n > **** v1 0.1 CLIPOLL 1 0x1 0x0 > *** v1 0.1 CLI connection fd = 7 > *** v1 0.1 CLI RX 107 > **** v1 0.1 CLI RX| dfgufafbesqprwjuzishruoicbawioen\n > **** v1 0.1 CLI RX| \n > **** v1 0.1 CLI RX| 
Authentication required.\n > **** v1 0.1 CLI TX| auth > 1c28913db635bc02fd8bcc0dc64bca844d65a1971c55ea2d86c6279ef09251cd\n > *** v1 0.1 CLI RX 200 > **** v1 0.1 CLI RX| -----------------------------\n > **** v1 0.1 CLI RX| Varnish Cache CLI 1.0\n > **** v1 0.1 CLI RX| -----------------------------\n > **** v1 0.1 CLI RX| Linux,2.6.38-11-generic- > pae,i686,-sfile,-smalloc,-hcritbit\n > **** v1 0.1 CLI RX| \n > **** v1 0.1 CLI RX| Type 'help' for command list.\n > **** v1 0.1 CLI RX| Type 'quit' to close CLI session.\n > **** v1 0.1 CLI RX| Type 'start' to launch worker process.\n > **** v1 0.1 CLI TX| vcl.inline vcl1 << %XJEIFLH|)Xspa8P\n > **** v1 0.1 CLI TX| \n > **** v1 0.1 CLI TX| \tbackend blatti {\n > **** v1 0.1 CLI TX| \t\t.host = "127.0.0.1";\n > **** v1 0.1 CLI TX| \t}\n > **** v1 0.1 CLI TX| \n > **** v1 0.1 CLI TX| \tsub vcl_recv {\n > **** v1 0.1 CLI TX| \t\terror 200 > "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa... > *** v1 0.2 CLI RX 200 > **** v1 0.2 CLI RX| VCL compiled. 
> **** v1 0.2 CLI TX| vcl.use vcl1 > *** v1 0.2 CLI RX 200 > ** v1 0.2 Start > **** v1 0.2 CLI TX| start > *** v1 0.3 debug| child (949) Started\n > **** v1 0.3 vsl| 0 WorkThread - 0xb58fd00c start > **** v1 0.3 vsl| 0 CLI - Rd vcl.load "vcl1" > ./vcl.ICtxlBZs.so > **** v1 0.3 vsl| 0 CLI - Wr 200 36 Loaded > "./vcl.ICtxlBZs.so" as "vcl1" > **** v1 0.3 vsl| 0 CLI - Rd vcl.use "vcl1" > **** v1 0.3 vsl| 0 CLI - Wr 200 0 > **** v1 0.3 vsl| 0 CLI - Rd start > **** v1 0.3 vsl| 0 Debug - Acceptor is epoll > **** v1 0.3 vsl| 0 CLI - Wr 200 0 > *** v1 0.3 CLI RX 200 > **** v1 0.3 CLI TX| debug.xid 1000 > *** v1 0.3 debug| Child (949) said Not running as root, no priv-sep\n > *** v1 0.3 debug| Child (949) said Child starts\n > *** v1 0.3 debug| Child (949) said SMF.s0 mmap'ed 10485760 bytes of > 10485760\n > **** v1 0.3 vsl| 0 WorkThread - 0xb752700c start > **** v1 0.3 vsl| 0 WorkThread - 0xb751600c start > **** v1 0.3 vsl| 0 WorkThread - 0xb50eb00c start > **** v1 0.3 vsl| 0 WorkThread - 0xb50da00c start > **** v1 0.3 vsl| 0 WorkThread - 0xb50c900c start > **** v1 0.3 vsl| 0 WorkThread - 0xb50b800c start > **** v1 0.3 vsl| 0 WorkThread - 0xb50a700c start > **** v1 0.3 vsl| 0 WorkThread - 0xb509600c start > **** v1 0.3 vsl| 0 WorkThread - 0xb508500c start > *** v1 0.3 CLI RX 200 > **** v1 0.3 CLI RX| XID is 1000 > **** v1 0.3 CLI TX| debug.listen_address > **** v1 0.3 vsl| 0 CLI - Rd debug.xid 1000 > **** v1 0.3 vsl| 0 CLI - Wr 200 11 XID is 1000 > *** v1 0.3 CLI RX 200 > **** v1 0.3 CLI RX| 127.0.0.1 38730\n > ** v1 0.3 Listen on 127.0.0.1 38730 > **** v1 0.3 macro def v1_addr=127.0.0.1 > **** v1 0.3 macro def v1_port=38730 > **** v1 0.3 macro def v1_sock=127.0.0.1 38730 > *** top 0.3 client > ** c1 0.3 Starting client > ** c1 0.3 Waiting for client > *** c1 0.3 Connect to 127.0.0.1 38730 > *** c1 0.3 connected fd 8 from 127.0.0.1 58396 to 127.0.0.1 38730 > *** c1 0.3 txreq > **** c1 0.3 txreq| GET / HTTP/1.1\r\n > **** c1 0.3 txreq| \r\n > *** c1 0.3 rxresp > ---- c1 0.3 
HTTP rx EOF (fd:8 read: Success) > *** v1 0.3 debug| Child (949) died signal=6\n > * top 0.3 RESETTING after foo.vtc > *** v1 0.3 debug| Child (949) Panic message: Assert error in > Tcheck(), cache.h line 981:\n > *** v1 0.3 debug| Condition((t.b) != 0) not true.\n > *** v1 0.3 debug| thread = (cache-worker)\n > *** v1 0.3 debug| ident = Linux,2.6.38-11-generic- > pae,i686,-sfile,-smalloc,-hcritbit,epoll\n > *** v1 0.3 debug| Backtrace:\n > *** v1 0.3 debug| 0x80783a2: varnishd() [0x80783a2]\n > *** v1 0.3 debug| 0x80724d3: varnishd() [0x80724d3]\n > *** v1 0.3 debug| 0x807b807: varnishd(RES_BuildHttp+0x97) > [0x807b807]\n > *** v1 0.3 debug| 0x805cfd3: varnishd() [0x805cfd3]\n > *** v1 0.3 debug| 0x805dc12: varnishd(CNT_Session+0x842) > [0x805dc12]\n > *** v1 0.3 debug| 0x8079b3f: varnishd() [0x8079b3f]\n > *** v1 0.3 debug| 0x807a7d6: varnishd() [0x807a7d6]\n > *** v1 0.3 debug| 0x807ae2b: ... > *** v1 0.3 debug| Child cleanup complete\n > **** v1 0.4 vsl| 0 CLI - Rd debug.listen_address > **** v1 0.4 vsl| 0 CLI - Wr 200 16 127.0.0.1 38730 > > **** v1 0.4 vsl| 13 SessionOpen c 127.0.0.1 58396 127.0.0.1:0 > ** v1 1.3 Wait > ** v1 1.3 R 936 Status: 0000 > * top 1.4 TEST foo.vtc FAILED > > # top TEST foo.vtc FAILED (1.370) exit=1 > }}} > > Discussion: > > Note that setting a large obj.response manually in vcl_error will NOT > cause problems, even though the workspace is overflowed. > > There are several different considerations: > > 1. We're using a rather mysterious workspace (aka: obj workspace) for > obj.response in vcl_error. There's no way to tune that. > > 2. The response message is set in two different ways depending on whether > it's set through sp->err_reason (which will cause it to be set in > cache_center.c:cnt_error() using http_PutResponse()) or later through > VRT_l_obj_response() which will use cache_vrt.c and vrt_do_string()). > > 3. I wish for an assert in the code that actually reads this value > instead of failing in Tcheck(). > > 4. 
Perhaps even a test after VCL_error_method() setting the default if > the response is blank? I'd say "default VCL" but there isn't a valid > > The biggest difference between the setter-methods is that going through > http_PutResponse() will actually /zero/ the existing value if the WS > allocation fails, thereby unsetting the response, while vrt_do_string() > will just log it as a lost header and return. > > This was spotted by the APDM-people, pondered by Martin and finally > broken apart by Martin and myself. Unfortunately, we're now too hungry > and sleepy to fix it properly. The actual use-case that triggered it was > HTTP redirects, something along the lines of: > > {{{ > recv: > if (req.http.host ~ "^example\.com$") { > error 750 "http://www.example.com" + req.url; > } > > error: > if (obj.status == 750) { > set obj.http.location = obj.response; > set obj.status = 301; > return (deliver); > } > }}} > > Workaround: > {{{ > recv: > ... error 750 "CAKE"; > > error: > if (... == 750) { > set obj.http.location = > "http://www.example.com/Bunnies/fallback_for_overflow"; > set obj.http.location = "http://www.example.com" + req.url; > } > }}} > > The above will still fail due to overflowing the object workspace, but > not with an assert(). It will just be a losthdr and redirecting to a > funny bunny page instead of the real page. In theory. > > Technically, this was only tested on 3.0.2-rc1, not trunk. 
New description: VTC code: {{{ kristian at freud:~$ cat foo.vtc varnishtest "Test overflowing the response through sp->err_reason" varnish v1 -vcl { backend blatti { .host = "127.0.0.1"; } sub vcl_recv { error 200 "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"; } sub vcl_error { return(deliver); } } -start client c1 { txreq -req GET rxresp expect resp.status == 200 } -run }}} Result: {{{ **** top 0.0 macro def varnishd=varnishd **** top 0.0 macro def pwd=/home/kristian **** top 0.0 macro def topbuild=/home/kristian/../.. **** top 0.0 macro def bad_ip=10.255.255.255 **** top 0.0 macro def tmpdir=/tmp/vtc.931.5efb64dd * top 0.0 TEST foo.vtc starting *** top 0.0 varnishtest * top 0.0 TEST Test overflowing the response through sp->err_reason *** top 0.0 varnish ** v1 0.0 Launch *** v1 0.0 CMD: cd ${pwd} && ${varnishd} -d -d -n /tmp/vtc.931.5efb64dd/v1 -l 10m,1m,- -p auto_restart=off -p syslog_cli_traffic=off -a '127.0.0.1:0' -S /tmp/vtc.931.5efb64dd/v1/_S -M '127.0.0.1 46806' -P /tmp/vtc.931.5efb64dd/v1/varnishd.pid -sfile,/tmp/vtc.931.5efb64dd/v1,10M *** v1 0.0 CMD: cd /home/kristian && varnishd -d -d -n /tmp/vtc.931.5efb64dd/v1 -l 10m,1m,- -p auto_restart=off -p syslog_cli_traffic=off -a '127.0.0.1:0' -S /tmp/vtc.931.5efb64dd/v1/_S -M '127.0.0.1 46806' -P /tmp/vtc.931.5efb64dd/v1/varnishd.pid -sfile,/tmp/vtc.931.5efb64dd/v1,10M *** v1 0.0 PID: 936 *** v1 0.0 debug| Platform: Linux,2.6.38-11-generic- pae,i686,-sfile,-smalloc,-hcritbit\n *** v1 0.0 debug| 200 245 \n *** v1 0.0 debug| -----------------------------\n *** v1 0.0 debug| Varnish Cache CLI 1.0\n *** v1 0.0 debug| -----------------------------\n *** v1 0.0 debug| Linux,2.6.38-11-generic- pae,i686,-sfile,-smalloc,-hcritbit\n *** v1 0.0 debug| \n *** v1 0.0 debug| Type 'help' for command list.\n *** v1 0.0 debug| Type 'quit' to close CLI session.\n *** v1 0.0 debug| Type 'start' to launch worker process.\n *** v1 0.0 debug| \n **** v1 0.1 CLIPOLL 1 0x1 0x0 *** v1 0.1 CLI connection fd = 7 *** v1 0.1 CLI RX 
107 **** v1 0.1 CLI RX| dfgufafbesqprwjuzishruoicbawioen\n **** v1 0.1 CLI RX| \n **** v1 0.1 CLI RX| Authentication required.\n **** v1 0.1 CLI TX| auth 1c28913db635bc02fd8bcc0dc64bca844d65a1971c55ea2d86c6279ef09251cd\n *** v1 0.1 CLI RX 200 **** v1 0.1 CLI RX| -----------------------------\n **** v1 0.1 CLI RX| Varnish Cache CLI 1.0\n **** v1 0.1 CLI RX| -----------------------------\n **** v1 0.1 CLI RX| Linux,2.6.38-11-generic- pae,i686,-sfile,-smalloc,-hcritbit\n **** v1 0.1 CLI RX| \n **** v1 0.1 CLI RX| Type 'help' for command list.\n **** v1 0.1 CLI RX| Type 'quit' to close CLI session.\n **** v1 0.1 CLI RX| Type 'start' to launch worker process.\n **** v1 0.1 CLI TX| vcl.inline vcl1 << %XJEIFLH|)Xspa8P\n **** v1 0.1 CLI TX| \n **** v1 0.1 CLI TX| \tbackend blatti {\n **** v1 0.1 CLI TX| \t\t.host = "127.0.0.1";\n **** v1 0.1 CLI TX| \t}\n **** v1 0.1 CLI TX| \n **** v1 0.1 CLI TX| \tsub vcl_recv {\n **** v1 0.1 CLI TX| \t\terror 200 "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa... *** v1 0.2 CLI RX 200 **** v1 0.2 CLI RX| VCL compiled. 
**** v1 0.2 CLI TX| vcl.use vcl1 *** v1 0.2 CLI RX 200 ** v1 0.2 Start **** v1 0.2 CLI TX| start *** v1 0.3 debug| child (949) Started\n **** v1 0.3 vsl| 0 WorkThread - 0xb58fd00c start **** v1 0.3 vsl| 0 CLI - Rd vcl.load "vcl1" ./vcl.ICtxlBZs.so **** v1 0.3 vsl| 0 CLI - Wr 200 36 Loaded "./vcl.ICtxlBZs.so" as "vcl1" **** v1 0.3 vsl| 0 CLI - Rd vcl.use "vcl1" **** v1 0.3 vsl| 0 CLI - Wr 200 0 **** v1 0.3 vsl| 0 CLI - Rd start **** v1 0.3 vsl| 0 Debug - Acceptor is epoll **** v1 0.3 vsl| 0 CLI - Wr 200 0 *** v1 0.3 CLI RX 200 **** v1 0.3 CLI TX| debug.xid 1000 *** v1 0.3 debug| Child (949) said Not running as root, no priv-sep\n *** v1 0.3 debug| Child (949) said Child starts\n *** v1 0.3 debug| Child (949) said SMF.s0 mmap'ed 10485760 bytes of 10485760\n **** v1 0.3 vsl| 0 WorkThread - 0xb752700c start **** v1 0.3 vsl| 0 WorkThread - 0xb751600c start **** v1 0.3 vsl| 0 WorkThread - 0xb50eb00c start **** v1 0.3 vsl| 0 WorkThread - 0xb50da00c start **** v1 0.3 vsl| 0 WorkThread - 0xb50c900c start **** v1 0.3 vsl| 0 WorkThread - 0xb50b800c start **** v1 0.3 vsl| 0 WorkThread - 0xb50a700c start **** v1 0.3 vsl| 0 WorkThread - 0xb509600c start **** v1 0.3 vsl| 0 WorkThread - 0xb508500c start *** v1 0.3 CLI RX 200 **** v1 0.3 CLI RX| XID is 1000 **** v1 0.3 CLI TX| debug.listen_address **** v1 0.3 vsl| 0 CLI - Rd debug.xid 1000 **** v1 0.3 vsl| 0 CLI - Wr 200 11 XID is 1000 *** v1 0.3 CLI RX 200 **** v1 0.3 CLI RX| 127.0.0.1 38730\n ** v1 0.3 Listen on 127.0.0.1 38730 **** v1 0.3 macro def v1_addr=127.0.0.1 **** v1 0.3 macro def v1_port=38730 **** v1 0.3 macro def v1_sock=127.0.0.1 38730 *** top 0.3 client ** c1 0.3 Starting client ** c1 0.3 Waiting for client *** c1 0.3 Connect to 127.0.0.1 38730 *** c1 0.3 connected fd 8 from 127.0.0.1 58396 to 127.0.0.1 38730 *** c1 0.3 txreq **** c1 0.3 txreq| GET / HTTP/1.1\r\n **** c1 0.3 txreq| \r\n *** c1 0.3 rxresp ---- c1 0.3 HTTP rx EOF (fd:8 read: Success) *** v1 0.3 debug| Child (949) died signal=6\n * top 0.3 RESETTING 
after foo.vtc *** v1 0.3 debug| Child (949) Panic message: Assert error in Tcheck(), cache.h line 981:\n *** v1 0.3 debug| Condition((t.b) != 0) not true.\n *** v1 0.3 debug| thread = (cache-worker)\n *** v1 0.3 debug| ident = Linux,2.6.38-11-generic- pae,i686,-sfile,-smalloc,-hcritbit,epoll\n *** v1 0.3 debug| Backtrace:\n *** v1 0.3 debug| 0x80783a2: varnishd() [0x80783a2]\n *** v1 0.3 debug| 0x80724d3: varnishd() [0x80724d3]\n *** v1 0.3 debug| 0x807b807: varnishd(RES_BuildHttp+0x97) [0x807b807]\n *** v1 0.3 debug| 0x805cfd3: varnishd() [0x805cfd3]\n *** v1 0.3 debug| 0x805dc12: varnishd(CNT_Session+0x842) [0x805dc12]\n *** v1 0.3 debug| 0x8079b3f: varnishd() [0x8079b3f]\n *** v1 0.3 debug| 0x807a7d6: varnishd() [0x807a7d6]\n *** v1 0.3 debug| 0x807ae2b: ... *** v1 0.3 debug| Child cleanup complete\n **** v1 0.4 vsl| 0 CLI - Rd debug.listen_address **** v1 0.4 vsl| 0 CLI - Wr 200 16 127.0.0.1 38730 **** v1 0.4 vsl| 13 SessionOpen c 127.0.0.1 58396 127.0.0.1:0 ** v1 1.3 Wait ** v1 1.3 R 936 Status: 0000 * top 1.4 TEST foo.vtc FAILED # top TEST foo.vtc FAILED (1.370) exit=1 }}} Discussion: Note that setting a large obj.response manually in vcl_error will NOT cause problems, even though the workspace is overflowed. There are several different considerations: 1. We're using a rather mysterious workspace (aka: obj workspace) for obj.response in vcl_error. There's no way to tune that. 2. The response message is set in two different ways depending on whether it's set through sp->err_reason (which will cause it to be set in cache_center.c:cnt_error() using http_PutResponse()) or later through VRT_l_obj_response() which will use cache_vrt.c and vrt_do_string()). 3. I wish for an assert in the code that actually reads this value instead of failing in Tcheck(). 4. Perhaps even a test after VCL_error_method() setting the default if the response is blank? 
I'd say "default VCL" but there isn't a valid use case for a blank response message (and it would still leave the question of what to do about it).

The biggest difference between the setter methods is that going through http_PutResponse() will actually /zero/ the existing value if the WS allocation fails, thereby unsetting the response, while vrt_do_string() will just log it as a lost header and return.

This was spotted by the APDM-people, pondered by Martin and finally broken apart by Martin and myself. Unfortunately, we're now too hungry and sleepy to fix it properly. The actual use case that triggered it was HTTP redirects, something along the lines of:

{{{
recv:
    if (req.http.host ~ "^example\.com$") {
        error 750 "http://www.example.com" + req.url;
    }

error:
    if (obj.status == 750) {
        set obj.http.location = obj.response;
        set obj.status = 301;
        return (deliver);
    }
}}}

Workaround:

{{{
recv:
    ... error 750 "CAKE";

error:
    if (... == 750) {
        set obj.http.location = "http://www.example.com/Bunnies/fallback_for_overflow";
        set obj.http.location = "http://www.example.com" + req.url;
    }
}}}

The above will still fail due to overflowing the object workspace, but not with an assert(). It will just be a losthdr, redirecting to a funny bunny page instead of the real page. In theory.

Technically, this was only tested on 3.0.2-rc1, not trunk. 
-- Comment(by kristian): (forgot to finish a sentence) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Oct 12 17:26:07 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 12 Oct 2011 17:26:07 -0000 Subject: [Varnish] #1031: error 200 "long string" overflows obj.ws effectively unsetting obj.response which subsequently causes Bad Things In-Reply-To: <046.70006b389b787dd13b443bf4e69a6684@varnish-cache.org> References: <046.70006b389b787dd13b443bf4e69a6684@varnish-cache.org> Message-ID: <055.dd7f5a03cc1070399d594affc17b97d2@varnish-cache.org> #1031: error 200 "long string" overflows obj.ws effectively unsetting obj.response which subsequently causes Bad Things ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Changes (by slink): * cc: nils.goroll@? 
(added) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Oct 13 04:31:09 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 13 Oct 2011 04:31:09 -0000 Subject: [Varnish] #1032: Varnish 3 ESI tutorial incorrectly refers to obj.ttl Message-ID: <047.d54e50b4347c16688ddabe4acefdfc0a@varnish-cache.org> #1032: Varnish 3 ESI tutorial incorrectly refers to obj.ttl -----------------------+---------------------------------------------------- Reporter: chrismsnz | Type: documentation Status: new | Priority: low Milestone: | Component: documentation Version: 3.0.0 | Severity: normal Keywords: | -----------------------+---------------------------------------------------- The error in question is on this page: https://www.varnish-cache.org/docs/3.0/tutorial/esi.html#example-esi- include Trying to compile the code fragment: {{{ sub vcl_fetch { if (req.url == "/test.html") { set beresp.do_esi = true; /* Do ESI processing */ set obj.ttl = 24 h; /* Sets the TTL on the HTML above */ } elseif (req.url == "/cgi-bin/date.cgi") { set obj.ttl = 1m; /* Sets a one minute TTL on */ /* the included object */ } } }}} Causes varnish to throw an error: {{{ Message from VCC-compiler: 'obj.ttl': cannot be set in method 'vcl_fetch'. }}} The correct way to set the TTL in Varnish 3 is to use {{{ beresp.ttl }}} and the documentation should be updated to reflect that. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Oct 13 07:26:34 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 13 Oct 2011 07:26:34 -0000 Subject: [Varnish] #1032: Varnish 3 ESI tutorial incorrectly refers to obj.ttl In-Reply-To: <047.d54e50b4347c16688ddabe4acefdfc0a@varnish-cache.org> References: <047.d54e50b4347c16688ddabe4acefdfc0a@varnish-cache.org> Message-ID: <056.2a3991a774343e0e8c4d417902e388bb@varnish-cache.org> #1032: Varnish 3 ESI tutorial incorrectly refers to obj.ttl ---------------------------+------------------------------------------------ Reporter: chrismsnz | Owner: scoof Type: documentation | Status: new Priority: low | Milestone: Component: documentation | Version: 3.0.0 Severity: normal | Keywords: ---------------------------+------------------------------------------------ Changes (by scoof): * owner: => scoof -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Oct 13 07:31:02 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 13 Oct 2011 07:31:02 -0000 Subject: [Varnish] #1032: Varnish 3 ESI tutorial incorrectly refers to obj.ttl In-Reply-To: <047.d54e50b4347c16688ddabe4acefdfc0a@varnish-cache.org> References: <047.d54e50b4347c16688ddabe4acefdfc0a@varnish-cache.org> Message-ID: <056.ce0f63e822605f00d7638bf894ba7bb5@varnish-cache.org> #1032: Varnish 3 ESI tutorial incorrectly refers to obj.ttl ---------------------------+------------------------------------------------ Reporter: chrismsnz | Owner: scoof Type: documentation | Status: closed Priority: low | Milestone: Component: documentation | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | ---------------------------+------------------------------------------------ Changes (by Andreas Plesner Jacobsen ): * status: new => closed * resolution: => fixed Comment: (In [28e866b59ba0e9c95bdf314965d5895698616136]) Update docs for 3.0 Fixes #1032 -- Ticket URL: 
Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Oct 13 09:33:13 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 13 Oct 2011 09:33:13 -0000 Subject: [Varnish] #1031: error 200 "long string" overflows obj.ws effectively unsetting obj.response which subsequently causes Bad Things In-Reply-To: <046.70006b389b787dd13b443bf4e69a6684@varnish-cache.org> References: <046.70006b389b787dd13b443bf4e69a6684@varnish-cache.org> Message-ID: <055.4e458c0ec63d3657fecfe0e727740727@varnish-cache.org> #1031: error 200 "long string" overflows obj.ws effectively unsetting obj.response which subsequently causes Bad Things ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by martin): I see in cnt_error() (cache_center.c) that for synthetic objects the object workspace is set statically to 1024 in the call to STV_NewObject. This value should perhaps be http_resp_size or some other parameter instead? 1k workspace in vcl_error seems a little tight. -Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Oct 13 13:13:48 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 13 Oct 2011 13:13:48 -0000 Subject: [Varnish] #1033: purge; in vcl_pass Message-ID: <046.5c6ed7d7ed0c2d768626ff3e3184eaef@varnish-cache.org> #1033: purge; in vcl_pass ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- In short: without purge; in vcl_pass, hitpass objects become an issue. 
Demonstration: {{{ varnishtest "Test purge;" # This test first gets object 1, then purges it, then gets object 2, which # is a hit-for-pass, then issues a purge-command, then gets object 3, which # is supposed to be cached, then tries to get object 3 once more. # Lack of purge; in vcl_pass, however, means that the hit-for-pass object # resulting from object 2 is kept, so object 3 doesn't get cached and the # last request instead gets a fourth object. server s1 { rxreq txresp -status 200 -hdr foo:1 rxreq txresp -status 200 -hdr foo:2 rxreq txresp -status 200 -hdr foo:3 rxreq txresp -status 200 -hdr foo:4 } -start varnish v1 -vcl+backend { sub vcl_recv { return (lookup); } sub vcl_hit { if (req.request == "PURGE") { purge; error 200 "PURGED"; } } sub vcl_miss { if (req.request == "PURGE") { purge; error 200 "PURGED404"; } } sub vcl_pass { if (req.request == "PURGE") { // purge; NOT SUPPORTED error 200 "PURGEPASS"; } } sub vcl_fetch { set beresp.ttl = 5s; if (beresp.http.foo ~ "2") { return (hit_for_pass); } } } -start client c1 { txreq -req GET rxresp expect resp.status == 200 expect resp.http.foo == 1 txreq -req PURGE rxresp expect resp.status == 200 } -run delay 0.5 client c2 { txreq -req GET rxresp expect resp.status == 200 expect resp.http.foo == 2 txreq -req PURGE rxresp expect resp.status == 200 } -run delay 0.5 client c3 { txreq -req GET rxresp expect resp.status == 200 expect resp.http.foo == 3 txreq -req GET rxresp expect resp.status == 200 expect resp.http.foo == 3 } -run }}} Output snippets: {{{ kristian at freud:~$ varnishtest foo.vtc | grep -v 'vsl|' [...] * top 0.0 TEST foo.vtc starting [...] ** c1 0.3 Starting client [...] 
**** c1 0.3 txreq| GET / HTTP/1.1\r\n **** c1 0.3 txreq| \r\n *** c1 0.3 rxresp *** s1 0.3 accepted fd 4 *** s1 0.3 rxreq **** s1 0.3 rxhdr| GET / HTTP/1.1\r\n **** s1 0.3 rxhdr| X-Varnish: 1001\r\n **** s1 0.3 rxhdr| Accept-Encoding: gzip\r\n **** s1 0.3 rxhdr| Host: 127.0.0.1\r\n **** s1 0.3 rxhdr| \r\n **** s1 0.3 http[ 0] | GET **** s1 0.3 http[ 1] | / **** s1 0.3 http[ 2] | HTTP/1.1 **** s1 0.3 http[ 3] | X-Varnish: 1001 **** s1 0.3 http[ 4] | Accept-Encoding: gzip **** s1 0.3 http[ 5] | Host: 127.0.0.1 **** s1 0.3 bodylen = 0 *** s1 0.3 txresp **** s1 0.3 txresp| HTTP/1.1 200 Ok\r\n **** s1 0.3 txresp| foo:1\r\n **** s1 0.3 txresp| Content-Length: 0\r\n **** s1 0.3 txresp| \r\n *** s1 0.3 rxreq **** c1 0.3 rxhdr| HTTP/1.1 200 Ok\r\n **** c1 0.3 rxhdr| foo:1\r\n **** c1 0.3 rxhdr| Content-Length: 0\r\n **** c1 0.3 rxhdr| Accept-Ranges: bytes\r\n **** c1 0.3 rxhdr| Date: Thu, 13 Oct 2011 13:09:57 GMT\r\n **** c1 0.3 rxhdr| X-Varnish: 1001\r\n **** c1 0.3 rxhdr| Age: 0\r\n **** c1 0.3 rxhdr| Via: 1.1 varnish\r\n **** c1 0.3 rxhdr| Connection: keep-alive\r\n **** c1 0.3 rxhdr| \r\n **** c1 0.3 http[ 0] | HTTP/1.1 **** c1 0.3 http[ 1] | 200 **** c1 0.3 http[ 2] | Ok **** c1 0.3 http[ 3] | foo:1 **** c1 0.3 http[ 4] | Content-Length: 0 **** c1 0.3 http[ 5] | Accept-Ranges: bytes **** c1 0.3 http[ 6] | Date: Thu, 13 Oct 2011 13:09:57 GMT **** c1 0.3 http[ 7] | X-Varnish: 1001 **** c1 0.3 http[ 8] | Age: 0 **** c1 0.3 http[ 9] | Via: 1.1 varnish **** c1 0.3 http[10] | Connection: keep-alive **** c1 0.3 bodylen = 0 *** c1 0.3 expect **** c1 0.3 EXPECT resp.status (200) == 200 (200) match *** c1 0.3 expect **** c1 0.3 EXPECT resp.http.foo (1) == 1 (1) match *** c1 0.3 txreq **** c1 0.3 txreq| PURGE / HTTP/1.1\r\n **** c1 0.3 txreq| \r\n *** c1 0.3 rxresp **** c1 0.3 rxhdr| HTTP/1.1 200 PURGED\r\n **** c1 0.3 rxhdr| Server: Varnish\r\n **** c1 0.3 rxhdr| Content-Type: text/html; charset=utf-8\r\n **** c1 0.3 rxhdr| Retry-After: 5\r\n **** c1 0.3 rxhdr| Content-Length: 
374\r\n **** c1 0.3 rxhdr| Accept-Ranges: bytes\r\n **** c1 0.3 rxhdr| Date: Thu, 13 Oct 2011 13:09:57 GMT\r\n **** c1 0.3 rxhdr| X-Varnish: 1002\r\n **** c1 0.3 rxhdr| Age: 0\r\n **** c1 0.3 rxhdr| Via: 1.1 varnish\r\n **** c1 0.3 rxhdr| Connection: close\r\n **** c1 0.3 rxhdr| \r\n **** c1 0.3 http[ 0] | HTTP/1.1 **** c1 0.3 http[ 1] | 200 **** c1 0.3 http[ 2] | PURGED **** c1 0.3 http[ 3] | Server: Varnish **** c1 0.3 http[ 4] | Content-Type: text/html; charset=utf-8 **** c1 0.3 http[ 5] | Retry-After: 5 **** c1 0.3 http[ 6] | Content-Length: 374 **** c1 0.3 http[ 7] | Accept-Ranges: bytes **** c1 0.3 http[ 8] | Date: Thu, 13 Oct 2011 13:09:57 GMT **** c1 0.3 http[ 9] | X-Varnish: 1002 **** c1 0.3 http[10] | Age: 0 **** c1 0.3 http[11] | Via: 1.1 varnish **** c1 0.3 http[12] | Connection: close **** c1 0.3 body| \n [...] **** c1 0.3 bodylen = 374 *** c1 0.3 expect **** c1 0.3 EXPECT resp.status (200) == 200 (200) match *** c1 0.3 closing fd 10 ** c1 0.3 Ending *** top 0.3 delay *** top 0.3 delaying 0.5 second(s) *** top 0.8 client ** c2 0.8 Starting client ** c2 0.8 Waiting for client *** c2 0.8 Connect to 127.0.0.1 59523 *** c2 0.8 connected fd 10 from 127.0.0.1 36136 to 127.0.0.1 59523 *** c2 0.8 txreq **** c2 0.8 txreq| GET / HTTP/1.1\r\n **** c2 0.8 txreq| \r\n *** c2 0.8 rxresp **** s1 0.8 rxhdr| GET / HTTP/1.1\r\n **** s1 0.8 rxhdr| X-Varnish: 1003\r\n **** s1 0.8 rxhdr| Accept-Encoding: gzip\r\n **** s1 0.8 rxhdr| Host: 127.0.0.1\r\n **** s1 0.8 rxhdr| \r\n **** s1 0.8 http[ 0] | GET **** s1 0.8 http[ 1] | / **** s1 0.8 http[ 2] | HTTP/1.1 **** s1 0.8 http[ 3] | X-Varnish: 1003 **** s1 0.8 http[ 4] | Accept-Encoding: gzip **** s1 0.8 http[ 5] | Host: 127.0.0.1 **** s1 0.8 bodylen = 0 *** s1 0.8 txresp **** s1 0.8 txresp| HTTP/1.1 200 Ok\r\n **** s1 0.8 txresp| foo:2\r\n **** s1 0.8 txresp| Content-Length: 0\r\n **** s1 0.8 txresp| \r\n *** s1 0.8 rxreq **** c2 0.8 rxhdr| HTTP/1.1 200 Ok\r\n **** c2 0.8 rxhdr| foo:2\r\n **** c2 0.8 rxhdr| Content-Length: 
0\r\n **** c2 0.8 rxhdr| Accept-Ranges: bytes\r\n **** c2 0.8 rxhdr| Date: Thu, 13 Oct 2011 13:09:57 GMT\r\n **** c2 0.8 rxhdr| X-Varnish: 1003\r\n **** c2 0.8 rxhdr| Age: 0\r\n **** c2 0.8 rxhdr| Via: 1.1 varnish\r\n **** c2 0.8 rxhdr| Connection: keep-alive\r\n **** c2 0.8 rxhdr| \r\n **** c2 0.8 http[ 0] | HTTP/1.1 **** c2 0.8 http[ 1] | 200 **** c2 0.8 http[ 2] | Ok **** c2 0.8 http[ 3] | foo:2 **** c2 0.8 http[ 4] | Content-Length: 0 **** c2 0.8 http[ 5] | Accept-Ranges: bytes **** c2 0.8 http[ 6] | Date: Thu, 13 Oct 2011 13:09:57 GMT **** c2 0.8 http[ 7] | X-Varnish: 1003 **** c2 0.8 http[ 8] | Age: 0 **** c2 0.8 http[ 9] | Via: 1.1 varnish **** c2 0.8 http[10] | Connection: keep-alive **** c2 0.8 bodylen = 0 *** c2 0.8 expect **** c2 0.8 EXPECT resp.status (200) == 200 (200) match *** c2 0.8 expect **** c2 0.8 EXPECT resp.http.foo (2) == 2 (2) match *** c2 0.8 txreq **** c2 0.8 txreq| PURGE / HTTP/1.1\r\n **** c2 0.8 txreq| \r\n *** c2 0.8 rxresp **** c2 0.8 rxhdr| HTTP/1.1 200 PURGEPASS\r\n **** c2 0.8 rxhdr| Server: Varnish\r\n **** c2 0.8 rxhdr| Content-Type: text/html; charset=utf-8\r\n **** c2 0.8 rxhdr| Retry-After: 5\r\n **** c2 0.8 rxhdr| Content-Length: 383\r\n **** c2 0.8 rxhdr| Accept-Ranges: bytes\r\n **** c2 0.8 rxhdr| Date: Thu, 13 Oct 2011 13:09:57 GMT\r\n **** c2 0.8 rxhdr| X-Varnish: 1004\r\n **** c2 0.8 rxhdr| Age: 0\r\n **** c2 0.8 rxhdr| Via: 1.1 varnish\r\n **** c2 0.8 rxhdr| Connection: close\r\n **** c2 0.8 rxhdr| \r\n **** c2 0.8 http[ 0] | HTTP/1.1 **** c2 0.8 http[ 1] | 200 **** c2 0.8 http[ 2] | PURGEPASS **** c2 0.8 http[ 3] | Server: Varnish **** c2 0.8 http[ 4] | Content-Type: text/html; charset=utf-8 **** c2 0.8 http[ 5] | Retry-After: 5 **** c2 0.8 http[ 6] | Content-Length: 383 **** c2 0.8 http[ 7] | Accept-Ranges: bytes **** c2 0.8 http[ 8] | Date: Thu, 13 Oct 2011 13:09:57 GMT **** c2 0.8 http[ 9] | X-Varnish: 1004 **** c2 0.8 http[10] | Age: 0 **** c2 0.8 http[11] | Via: 1.1 varnish **** c2 0.8 http[12] | Connection: close 
**** c2 0.8 body| \n [...] **** c2 0.8 bodylen = 383 *** c2 0.8 expect **** c2 0.8 EXPECT resp.status (200) == 200 (200) match *** c2 0.8 closing fd 10 ** c2 0.8 Ending *** top 0.8 delay *** top 0.8 delaying 0.5 second(s) *** top 1.3 client ** c3 1.3 Starting client ** c3 1.3 Waiting for client *** c3 1.3 Connect to 127.0.0.1 59523 *** c3 1.3 connected fd 10 from 127.0.0.1 36137 to 127.0.0.1 59523 *** c3 1.3 txreq **** c3 1.3 txreq| GET / HTTP/1.1\r\n **** c3 1.3 txreq| \r\n *** c3 1.3 rxresp **** s1 1.3 rxhdr| GET / HTTP/1.1\r\n **** s1 1.3 rxhdr| X-Varnish: 1005\r\n **** s1 1.3 rxhdr| Host: 127.0.0.1\r\n **** s1 1.3 rxhdr| \r\n **** s1 1.3 http[ 0] | GET **** s1 1.3 http[ 1] | / **** s1 1.3 http[ 2] | HTTP/1.1 **** s1 1.3 http[ 3] | X-Varnish: 1005 **** s1 1.3 http[ 4] | Host: 127.0.0.1 **** s1 1.3 bodylen = 0 *** s1 1.3 txresp **** s1 1.3 txresp| HTTP/1.1 200 Ok\r\n **** s1 1.3 txresp| foo:3\r\n **** s1 1.3 txresp| Content-Length: 0\r\n **** s1 1.3 txresp| \r\n *** s1 1.3 rxreq **** c3 1.3 rxhdr| HTTP/1.1 200 Ok\r\n **** c3 1.3 rxhdr| foo:3\r\n **** c3 1.3 rxhdr| Content-Length: 0\r\n **** c3 1.3 rxhdr| Accept-Ranges: bytes\r\n **** c3 1.3 rxhdr| Date: Thu, 13 Oct 2011 13:09:58 GMT\r\n **** c3 1.3 rxhdr| X-Varnish: 1005\r\n **** c3 1.3 rxhdr| Age: 0\r\n **** c3 1.3 rxhdr| Via: 1.1 varnish\r\n **** c3 1.3 rxhdr| Connection: keep-alive\r\n **** c3 1.3 rxhdr| \r\n **** c3 1.3 http[ 0] | HTTP/1.1 **** c3 1.3 http[ 1] | 200 **** c3 1.3 http[ 2] | Ok **** c3 1.3 http[ 3] | foo:3 **** c3 1.3 http[ 4] | Content-Length: 0 **** c3 1.3 http[ 5] | Accept-Ranges: bytes **** c3 1.3 http[ 6] | Date: Thu, 13 Oct 2011 13:09:58 GMT **** c3 1.3 http[ 7] | X-Varnish: 1005 **** c3 1.3 http[ 8] | Age: 0 **** c3 1.3 http[ 9] | Via: 1.1 varnish **** c3 1.3 http[10] | Connection: keep-alive **** c3 1.3 bodylen = 0 *** c3 1.3 expect **** c3 1.3 EXPECT resp.status (200) == 200 (200) match *** c3 1.3 expect **** c3 1.3 EXPECT resp.http.foo (3) == 3 (3) match *** c3 1.3 txreq **** c3 1.3 
txreq| GET / HTTP/1.1\r\n **** c3 1.3 txreq| \r\n *** c3 1.3 rxresp **** s1 1.3 rxhdr| GET / HTTP/1.1\r\n **** s1 1.3 rxhdr| X-Varnish: 1006\r\n **** s1 1.3 rxhdr| Host: 127.0.0.1\r\n **** s1 1.3 rxhdr| \r\n **** s1 1.3 http[ 0] | GET **** s1 1.3 http[ 1] | / **** s1 1.3 http[ 2] | HTTP/1.1 **** s1 1.3 http[ 3] | X-Varnish: 1006 **** s1 1.3 http[ 4] | Host: 127.0.0.1 **** s1 1.3 bodylen = 0 *** s1 1.3 txresp **** s1 1.3 txresp| HTTP/1.1 200 Ok\r\n **** s1 1.3 txresp| foo:4\r\n **** s1 1.3 txresp| Content-Length: 0\r\n **** s1 1.3 txresp| \r\n *** s1 1.3 shutting fd 4 ** s1 1.3 Ending **** c3 1.3 rxhdr| HTTP/1.1 200 Ok\r\n **** c3 1.3 rxhdr| foo:4\r\n **** c3 1.3 rxhdr| Content-Length: 0\r\n **** c3 1.3 rxhdr| Accept-Ranges: bytes\r\n **** c3 1.3 rxhdr| Date: Thu, 13 Oct 2011 13:09:58 GMT\r\n **** c3 1.3 rxhdr| X-Varnish: 1006\r\n **** c3 1.3 rxhdr| Age: 0\r\n **** c3 1.3 rxhdr| Via: 1.1 varnish\r\n **** c3 1.3 rxhdr| Connection: keep-alive\r\n **** c3 1.3 rxhdr| \r\n **** c3 1.3 http[ 0] | HTTP/1.1 **** c3 1.3 http[ 1] | 200 **** c3 1.3 http[ 2] | Ok **** c3 1.3 http[ 3] | foo:4 **** c3 1.3 http[ 4] | Content-Length: 0 **** c3 1.3 http[ 5] | Accept-Ranges: bytes **** c3 1.3 http[ 6] | Date: Thu, 13 Oct 2011 13:09:58 GMT **** c3 1.3 http[ 7] | X-Varnish: 1006 **** c3 1.3 http[ 8] | Age: 0 **** c3 1.3 http[ 9] | Via: 1.1 varnish **** c3 1.3 http[10] | Connection: keep-alive **** c3 1.3 bodylen = 0 *** c3 1.3 expect **** c3 1.3 EXPECT resp.status (200) == 200 (200) match *** c3 1.3 expect ---- c3 1.3 EXPECT resp.http.foo (4) == 3 (3) failed * top 1.3 RESETTING after foo.vtc ** s1 1.3 Waiting for server **** s1 1.3 macro undef s1_addr **** s1 1.3 macro undef s1_port **** s1 1.3 macro undef s1_sock ** v1 2.3 Wait ** v1 2.3 R 22659 Status: 0000 * top 2.3 TEST foo.vtc FAILED # top TEST foo.vtc FAILED (2.349) exit=1 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 17 11:35:42 2011 From: varnish-bugs at varnish-cache.org 
(Varnish) Date: Mon, 17 Oct 2011 11:35:42 -0000 Subject: [Varnish] #1031: error 200 "long string" overflows obj.ws effectively unsetting obj.response which subsequently causes Bad Things In-Reply-To: <046.70006b389b787dd13b443bf4e69a6684@varnish-cache.org> References: <046.70006b389b787dd13b443bf4e69a6684@varnish-cache.org> Message-ID: <055.117e47d5aec3d2b0f11af3580650b38d@varnish-cache.org> #1031: error 200 "long string" overflows obj.ws effectively unsetting obj.response which subsequently causes Bad Things ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Kristian Lyngstol ): * status: new => closed * resolution: => fixed Comment: (In [601ee74203b8d0088251c43c5e5de380dabdcba0]) Ensure obj->response is set sensibly for errors The http_PutProtocol() and http_PutResponse() would, in the case of workspace overflow, leave the headers as NULL and log a SLT_LostHeader. This would make Varnish assert correctly later when writing to the wire, as these are mandated by HTTP. This commit changes them to set the fields to static strings instead ("HTTP/1.1" and "Lost Response") when failing to write them to the workspace. This leaves enough information to complete the protocol in the case of overflow. The patch also increases the synthetic object's workspace from static 1024 to param->http_resp_size. This leaves more (and configurable) room for manipulating the headers of the synthetic object in vcl_error. This whole thing has been a collaboration between Martin and myself. I'll leave it a mystery who wrote what line of code, which part of the comment and contributed what to the test-case. In all fairness, it's not a perfect solution, but a far step closer to one.
So it sort of, kinda, more or less, for now, until we get a better solution: Fixes: #1031 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 17 12:11:51 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Oct 2011 12:11:51 -0000 Subject: [Varnish] #1030: ban_lurker doesn't sleep for 1 sec when nothing can be done In-Reply-To: <043.e139d9ea98433bae1b334f53b588b82b@varnish-cache.org> References: <043.e139d9ea98433bae1b334f53b588b82b@varnish-cache.org> Message-ID: <052.364c2ffcb8d5cf7c20b75ca8e2423212@varnish-cache.org> #1030: ban_lurker doesn't sleep for 1 sec when nothing can be done ----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: closed Priority: low | Milestone: Component: varnishd | Version: trunk Severity: minor | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Kristian Lyngstol ): * status: new => closed * resolution: => fixed Comment: (In [fa3b136f2169a71b63603835c69441ca37913507]) Ensure ban lurker sleeps 1.0s on failure As per documentation, the ban lurker sleeps ban_lurker_sleep when it is successful, but on failure it should only sleep 1.0s. No point hammering the ban list every 0.01s if bans aren't even used. 
Fixes #1030 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 17 12:23:09 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Oct 2011 12:23:09 -0000 Subject: [Varnish] #1029: ESI mixes compressed and noncompressed output In-Reply-To: <044.230cf0a51ac93a0bf51dedbebb3d7c54@varnish-cache.org> References: <044.230cf0a51ac93a0bf51dedbebb3d7c54@varnish-cache.org> Message-ID: <053.c385fb2335d587c4dcf2b77cadd2f27e@varnish-cache.org> #1029: ESI mixes compressed and noncompressed output ----------------------+----------------------------------------------------- Reporter: sthing | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by kristian): This really does sound like a dupe of #899. This should also be avoidable if you use beresp.do_gzip, which means Varnish will compress the content regardless of what apache says. I'm leaving it open for now, as I'm not entirely satisfied with the "solution" in #899 and want a discussion first. 
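The beresp.do_gzip workaround mentioned above would look roughly like this in VCL (a sketch for Varnish 3.x; restricting it to text-like content types is my own illustrative guard, not part of the ticket):

```vcl
sub vcl_fetch {
    # Have Varnish gzip the object itself, regardless of what the
    # backend (e.g. Apache) negotiated, so ESI composition deals
    # with a single, known encoding.
    if (beresp.http.Content-Type ~ "text|xml|json|javascript") {
        set beresp.do_gzip = true;
    }
}
```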
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 17 12:43:18 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Oct 2011 12:43:18 -0000 Subject: [Varnish] #1027: signal 6 on calling error in vcl_deliver In-Reply-To: <041.c65e75d5bdfe801f5ce705c67f514e8b@varnish-cache.org> References: <041.c65e75d5bdfe801f5ce705c67f514e8b@varnish-cache.org> Message-ID: <050.dc770b63cf43fdd3694e8994bc9def8e@varnish-cache.org> #1027: signal 6 on calling error in vcl_deliver -------------------+-------------------------------------------------------- Reporter: kwy | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Comment(by kristian): Test case: {{{ varnishtest "Test if you can error in vcl_deliver" server s1 { rxreq txresp -status 200 rxreq txresp -status 200 } -start varnish v1 -vcl+backend { sub vcl_deliver { error 201 "ok"; } } -start client c1 { txreq -req GET rxresp expect resp.status == 201 } -run }}} Comment: So, hmm, there are two parts to this. One is that VCC and Varnish disagree on where error is allowed (i.e. it should either work or give a VCL compile error). The second part is that we don't have error in deliver... Pretty sure we had it a short while ago. Ah, yes, here we go. 714e0ef684edef8a370c10676e00fe8411894b91 https://www.varnish-cache.org/trac/changeset/714e0ef684edef8a370c10676e00fe8411894b91 So that explains why you can't do it. I'm removing this from the list of allowed VCL returns in deliver now, and that turns this into a feature request of sorts. Though considering we had error in vcl_deliver in the past, it might pass as a regression?
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 17 12:58:01 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Oct 2011 12:58:01 -0000 Subject: [Varnish] #1027: signal 6 on calling error in vcl_deliver In-Reply-To: <041.c65e75d5bdfe801f5ce705c67f514e8b@varnish-cache.org> References: <041.c65e75d5bdfe801f5ce705c67f514e8b@varnish-cache.org> Message-ID: <050.0a340a604d67a545289aa4b2cd6f3f6c@varnish-cache.org> #1027: signal 6 on calling error in vcl_deliver ---------------------+------------------------------------------------------ Reporter: kwy | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Resolution: fixed | Keywords: ---------------------+------------------------------------------------------ Changes (by Kristian Lyngstol ): * status: new => closed * resolution: => fixed Comment: (In [e18a6ab53fbae30b633fbe5f040b5686bec6ea4d]) Formally remove error from vcl_deliver VCC Note that error wasn't actually working in vcl_deliver, and this just puts VCC in line with the rest of Varnish. Syntax errors are better than assert errors. Re #1027 I'll leave it for later discussion to see if we close #1027, which is technically a feature request now, though a request for a feature we used to have (not sure how well it worked). 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 17 12:59:17 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Oct 2011 12:59:17 -0000 Subject: [Varnish] #1027: signal 6 on calling error in vcl_deliver In-Reply-To: <041.c65e75d5bdfe801f5ce705c67f514e8b@varnish-cache.org> References: <041.c65e75d5bdfe801f5ce705c67f514e8b@varnish-cache.org> Message-ID: <050.58d08ee8425b9f95147fb338a93fd10e@varnish-cache.org> #1027: signal 6 on calling error in vcl_deliver -----------------------+---------------------------------------------------- Reporter: kwy | Type: defect Status: reopened | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Resolution: | Keywords: -----------------------+---------------------------------------------------- Changes (by kristian): * status: closed => reopened * resolution: fixed => Comment: GOD DARN IT. I actually /looked up the syntax for trac references/ and then that @!#!@#... So what I was saying was: oops. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 17 13:33:13 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Oct 2011 13:33:13 -0000 Subject: [Varnish] #884: Assert error in res_WriteDirObj(), cache_response.c line 334 - while fetching large obj In-Reply-To: <043.9f330782fff822ddcda54115aca81505@varnish-cache.org> References: <043.9f330782fff822ddcda54115aca81505@varnish-cache.org> Message-ID: <052.347067353eb29cf24bb6eb622ee4b634@varnish-cache.org> #884: Assert error in res_WriteDirObj(), cache_response.c line 334 - while fetching large obj --------------------+------------------------------------------------------- Reporter: perbu | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | --------------------+------------------------------------------------------- Changes (by kristian): * status: new => closed * resolution: => fixed Comment: This has been fixed for a while now, so I'm closing it. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Oct 19 13:39:05 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 19 Oct 2011 13:39:05 -0000 Subject: [Varnish] #1034: Dual storage depending on object size Message-ID: <048.4d94e6c8f53143e48e3077958f94e7e6@varnish-cache.org> #1034: Dual storage depending on object size ------------------------+--------------------------------------------------- Reporter: gdelacroix | Type: enhancement Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Keywords: | ------------------------+--------------------------------------------------- Hi! One of my rare problems with Varnish is choosing between storage size and performance. Having a large storage size means using disk. Having good performance means using RAM storage.
Wouldn't it be possible to allow Varnish to use both storage types, with small objects stored in memory and...guess what...big objects on disk! The small-object size limit would be a startup setting. Balancing between two Varnish instances (one on RAM, one on disk) is not possible since we don't know the size of the object before receiving it, but maybe Varnish could choose the right storage after fetching the object. To my eyes, the only downside of this scheme would be the additional work on lookup, since either storage could potentially contain the object. The best solution would be to look up the RAM cache first (best performance for small objects). Would this feature be possible? Is it already in the pipeline? Thanks! -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Oct 19 13:43:54 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 19 Oct 2011 13:43:54 -0000 Subject: [Varnish] #1034: Dual storage depending on object size In-Reply-To: <048.4d94e6c8f53143e48e3077958f94e7e6@varnish-cache.org> References: <048.4d94e6c8f53143e48e3077958f94e7e6@varnish-cache.org> Message-ID: <057.0b29b6b971b4a9e6b6f164b3b82d3172@varnish-cache.org> #1034: Dual storage depending on object size ------------------------+--------------------------------------------------- Reporter: gdelacroix | Type: enhancement Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Keywords: | ------------------------+--------------------------------------------------- Comment(by gdelacroix): Note: I know Varnish stores objects in system buffers, even with file storage, but we can't select what's in memory, and we really don't want those fat video files polluting our RAM cache ;o) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Oct 19 14:30:07 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 19 Oct 2011 14:30:07 -0000 Subject: [Varnish] #1034: Dual storage
depending on object size In-Reply-To: <048.4d94e6c8f53143e48e3077958f94e7e6@varnish-cache.org> References: <048.4d94e6c8f53143e48e3077958f94e7e6@varnish-cache.org> Message-ID: <057.b40db011d12547039ecb394731565d91@varnish-cache.org> #1034: Dual storage depending on object size -------------------------+-------------------------------------------------- Reporter: gdelacroix | Type: enhancement Status: closed | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Resolution: invalid | Keywords: -------------------------+-------------------------------------------------- Changes (by kristian): * status: new => closed * resolution: => invalid Comment: Greetings, We don't use the bug tracker to track feature requests, so I'm closing the bug. Feel free to send a mail to one of the mail lists (-dev or -misc). That said, there are several flaws in your logic. You can use -sfile as much as you like and it will use whatever memory you have. It will only write to disk if it has to. And if you really want to, you can mix -sfile and -smalloc and address each of them individually in VCL, granted, you still have to make some assumptions since you can't actually know the size of an object until after it's fetched, at which point it's a little late to figure out where to put it. As for 'system buffers', that's mmap() and it'll figure out what actually needs to be in memory by itself - it knows better than your VCL what you're actually using. Anyway, please continue this on -misc. And I do recommend you read up on mmap() and the memory management of modern operating systems. 
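The -sfile/-smalloc mix suggested in the comment above might be addressed from VCL roughly like this. A sketch, not from the ticket: the storage names `ram`/`disk`, the 8-digit Content-Length heuristic, and the `-s name=kind,...` startup line are illustrative assumptions for Varnish 3.x, which lets you pick a storage per object via `beresp.storage`:

```vcl
# Hypothetical startup:
#   varnishd -s ram=malloc,2G -s disk=file,/data/cache,100G ...
sub vcl_fetch {
    # A Content-Length of 8 or more digits means roughly >= 10 MB.
    # The header may be absent (chunked encoding), in which case
    # the object falls through to the default storage.
    if (beresp.http.Content-Length ~ "[0-9]{8,}") {
        set beresp.storage = "disk";
    } else {
        set beresp.storage = "ram";
    }
}
```

As Kristian notes, this only works after the fetch has started, so the decision has to be made from response headers rather than the actual body size.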
Regards, Kristian -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Oct 21 10:52:42 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 21 Oct 2011 10:52:42 -0000 Subject: [Varnish] #1035: Port numbers are not sanitized, e.g: 1234124124 Message-ID: <046.a11ee676e6c734d3daeec6b064574295@varnish-cache.org> #1035: Port numbers are not sanitized, e.g: 1234124124 ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: lowest | Milestone: Component: varnishd | Version: trunk Severity: trivial | Keywords: ----------------------+----------------------------------------------------- Varnish will happily accept a -a :124124124 option and overflow. Not a big problem, but slightly unexpected when testing. (Mostly filing this as a reminder) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 24 08:27:02 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Oct 2011 08:27:02 -0000 Subject: [Varnish] #1036: Cannot allocate memory in 3.0.2-rc1 Message-ID: <046.1bac7e1a64473d3766c9e4eb84d7b7a2@varnish-cache.org> #1036: Cannot allocate memory in 3.0.2-rc1 ----------------------+----------------------------------------------------- Reporter: nicholas | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | ----------------------+----------------------------------------------------- Varnish panics and restarts on a regular basis; it goes out of memory in proportion to the amount allocated with -s malloc. The box has 32G of memory, and the attached graph is of -s malloc,5G. My theory is that LRU is not kicking in; I see no trace of it in any munin graphs. The OS is Scientific Linux 6, which is a new platform for us to run Varnish on. We will try Varnish 3.0.0 to see if the OS is playing tricks on us.
Greetings Nicholas -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 24 09:30:49 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Oct 2011 09:30:49 -0000 Subject: [Varnish] #1036: Cannot allocate memory in 3.0.2-rc1 In-Reply-To: <046.1bac7e1a64473d3766c9e4eb84d7b7a2@varnish-cache.org> References: <046.1bac7e1a64473d3766c9e4eb84d7b7a2@varnish-cache.org> Message-ID: <055.9f63924b07b8ea1dd1e5de1adba1b5b1@varnish-cache.org> #1036: Cannot allocate memory in 3.0.2-rc1 ----------------------+----------------------------------------------------- Reporter: nicholas | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | ----------------------+----------------------------------------------------- Comment(by nicholas): Meh. We are using the fallback director to use the same appservers for login stuff across dns round-robin. We didn't notice that the fallback director was first introduced in >3.0.0. This makes it tricky to test 3.0.0 in production. I reproduced the error in a test environment on centos5, using -s malloc,4M and 4 wgets crawling different parts of the sites. 3.0.0-2 behaves as expected, 3.0.2-rc1 crashes with the same error: {{{ Last panic at: Mon, 24 Oct 2011 09:18:37 GMT Assert error in VGZ_Ibuf(), cache_gzip.c line 222: Condition((vg->vz.avail_in) == 0) not true. 
errno = 12 (Cannot allocate memory) thread = (cache-worker) ident = Linux,2.6.18-194.32.1.el5xen,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42c1b6: /usr/sbin/varnishd [0x42c1b6] 0x4223aa: /usr/sbin/varnishd(VGZ_Ibuf+0x7a) [0x4223aa] 0x4229e9: /usr/sbin/varnishd [0x4229e9] 0x4211cd: /usr/sbin/varnishd(FetchBody+0x3fd) [0x4211cd] 0x414fb8: /usr/sbin/varnishd [0x414fb8] 0x417686: /usr/sbin/varnishd(CNT_Session+0x9f6) [0x417686] 0x42e9c8: /usr/sbin/varnishd [0x42e9c8] 0x42dbab: /usr/sbin/varnishd [0x42dbab] 0x379e40673d: /lib64/libpthread.so.0 [0x379e40673d] 0x33822d3f6d: /lib64/libc.so.6(clone+0x6d) [0x33822d3f6d] ... }}} Should be reproducible? Looks like a regression in 3.0.2-rc1? :-) Nicholas -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 24 10:22:21 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Oct 2011 10:22:21 -0000 Subject: [Varnish] #1036: Cannot allocate memory in 3.0.2-rc1 In-Reply-To: <046.1bac7e1a64473d3766c9e4eb84d7b7a2@varnish-cache.org> References: <046.1bac7e1a64473d3766c9e4eb84d7b7a2@varnish-cache.org> Message-ID: <055.08e702cfcfbf715d8098c217f96b4dae@varnish-cache.org> #1036: Cannot allocate memory in 3.0.2-rc1 ----------------------+----------------------------------------------------- Reporter: nicholas | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Changes (by martin): * owner: => martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 24 10:25:07 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Oct 2011 10:25:07 -0000 Subject: [Varnish] #1035: Port numbers are not sanitized, e.g: 1234124124 In-Reply-To: <046.a11ee676e6c734d3daeec6b064574295@varnish-cache.org> References: <046.a11ee676e6c734d3daeec6b064574295@varnish-cache.org> 
Message-ID: <055.b5ef81754ab8c14629e31efe52fa030b@varnish-cache.org> #1035: Port numbers are not sanitized, e.g: 1234124124 ----------------------+----------------------------------------------------- Reporter: kristian | Owner: kristian Type: defect | Status: assigned Priority: lowest | Milestone: Component: varnishd | Version: trunk Severity: trivial | Keywords: ----------------------+----------------------------------------------------- Changes (by kristian): * owner: => kristian * status: new => assigned -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 24 10:56:34 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Oct 2011 10:56:34 -0000 Subject: [Varnish] #1025: varnishncsa gives continuous segfault. In-Reply-To: <054.5fafb649c6e723745a4efa31f34e6774@varnish-cache.org> References: <054.5fafb649c6e723745a4efa31f34e6774@varnish-cache.org> Message-ID: <063.98097dbe3bcee906067a730b0f955e3a@varnish-cache.org> #1025: varnishncsa gives continuous segfault. -------------------------------+-------------------------------------------- Reporter: jonathan.labanca | Type: defect Status: closed | Priority: high Milestone: | Component: varnishncsa Version: 3.0.1 | Severity: major Resolution: worksforme | Keywords: -------------------------------+-------------------------------------------- Changes (by tfheen): * status: new => closed * resolution: => worksforme Comment: No response from submitter; closing. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 24 11:47:07 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Oct 2011 11:47:07 -0000 Subject: [Varnish] #1037: Assert in VGZ_Destroy Message-ID: <043.47d2d009e17023c4deb071119393d9c8@varnish-cache.org> #1037: Assert in VGZ_Destroy ----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- While trying to figure out what triggered #1036, I triggered this assert: {{{ Last panic at: Mon, 24 Oct 2011 11:19:05 GMT Assert error in VGZ_Destroy(), cache_gzip.c line 426: Condition((deflateEnd(&vg->vz)) == 0) not true. thread = (cache-worker) ident = Linux,3.0.0-1-amd64,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42f2d5: pan_ic+d5 0x425c50: VGZ_Destroy+140 0x41dc71: vfp_esi_end+171 0x423f5f: FetchBody+88f 0x415aa8: cnt_fetchbody+5b8 0x417865: CNT_Session+905 0x430cfd: Pool_Work_Thread+1bd 0x43fca2: wrk_thread_real+202 0x7f0ac00b2b40: _end+7f0abfa32e10 0x7f0abfdfd36d: _end+7f0abf77d63d sp = 0x7f0aafb02040 { fd = 14, id = 14, xid = 1795924191, client = 127.0.0.1 50335, step = STP_FETCHBODY, handling = deliver, err_code = 200, err_reason = (null), restarts = 0, esi_level = 0 flags = do_esi is_gzip bodystatus = 3 ws = 0x7f0aafb021a8 { id = "sess", {s,f,r,e} = {0x7f0aafb02c48,+192,(nil),+65536}, }, http[req] = { ws = 0x7f0aafb021a8[sess] "GET", "/esi.html?1", "HTTP/1.1", "User-Agent: Wget/1.13 (linux-gnu)", "Accept: */*", "Host: localhost", "Connection: Keep-Alive", "X-Forwarded-For: 127.0.0.1", }, worker = 0x7f0ab04eda20 { ws = 0x7f0ab04edcf0 { id = "wrk", {s,f,r,e} = {0x7f0ab04db9d0,+8744,(nil),+65536}, }, http[bereq] = { ws = 0x7f0ab04edcf0[wrk] "GET", "/esi.html?1", "HTTP/1.1", "User-Agent: Wget/1.13 
(linux-gnu)", "Accept: */*", "Host: localhost", "X-Forwarded-For: 127.0.0.1", "X-Varnish: 1795924191", "Accept-Encoding: gzip", }, http[beresp] = { ws = 0x7f0ab04edcf0[wrk] "HTTP/1.1", "200", "OK", "Date: Mon, 24 Oct 2011 11:19:05 GMT", "Server: Apache/2.2.21 (Debian)", "Last-Modified: Mon, 24 Oct 2011 11:19:01 GMT", "ETag: "8807fe-100021-4b00997054d78"", "Accept-Ranges: bytes", "Vary: Accept-Encoding", "Content-Encoding: gzip", "Transfer-Encoding: chunked", "Content-Type: text/html", }, }, vcl = { srcname = { "input", "Default", }, }, obj = 0x7f0aaf3b3000 { xid = 1795924191, ws = 0x7f0aaf3b3018 { id = "obj", {s,f,r,e} = {0x7f0aaf3b3200,+280,(nil),+336}, }, http[obj] = { ws = 0x7f0aaf3b3018[obj] "HTTP/1.1", "OK", "Date: Mon, 24 Oct 2011 11:19:05 GMT", "Server: Apache/2.2.21 (Debian)", "Last-Modified: Mon, 24 Oct 2011 11:19:01 GMT", "ETag: "8807fe-100021-4b00997054d78"", "Vary: Accept-Encoding", "Content-Encoding: gzip", "Content-Type: text/html", }, len = 131072, store = { 131072 { 1f 8b 08 00 00 00 00 00 00 03 02 00 00 00 ff ff |................| b2 49 2d ce b4 ca cc 4b ce 29 4d 49 55 28 2e 4a |.I-....K.)MIU(.J| b6 55 d7 07 8a 18 e9 65 94 e4 e6 a8 2b e8 db 01 |.U.....e....+...| 00 00 00 ff ff 00 0d 40 f2 bf 0a a3 5f 15 87 e2 |....... at ...._...| [131008 more] }, }, }, }, }}} The assert occured right after I enabled mod_deflate on the backend. I've been unable to reproduce it. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 24 12:10:53 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Oct 2011 12:10:53 -0000 Subject: [Varnish] #1037: Assert in VGZ_Destroy In-Reply-To: <043.47d2d009e17023c4deb071119393d9c8@varnish-cache.org> References: <043.47d2d009e17023c4deb071119393d9c8@varnish-cache.org> Message-ID: <052.06c3d4db0992784dbf1337fb80a979b5@varnish-cache.org> #1037: Assert in VGZ_Destroy ----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by scoof): To reproduce on my laptop (this seems to be highly timing dependent): Place http://static.nerd.dk/esi.html and http://static.nerd.dk/esi2.html on an apache backend with mod_deflate enabled. Use attached vcl. 
varnishd command line: varnishd -P /var/run/varnishd.pid -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,4m Execute: for a in `seq 1 1000` ; do wget -qO /dev/null http://localhost/esi.html?$a & done -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 24 13:21:38 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Oct 2011 13:21:38 -0000 Subject: [Varnish] #1036: Cannot allocate memory in 3.0.2-rc1 In-Reply-To: <046.1bac7e1a64473d3766c9e4eb84d7b7a2@varnish-cache.org> References: <046.1bac7e1a64473d3766c9e4eb84d7b7a2@varnish-cache.org> Message-ID: <055.8b30cff5032b0550a326e650c0cdafc8@varnish-cache.org> #1036: Cannot allocate memory in 3.0.2-rc1 ----------------------+----------------------------------------------------- Reporter: nicholas | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by martin): Added a test case that will reproduce the assert. There might be more to this though, as Varnish should have been able to free enough data to not hit the memory limit. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 24 14:58:40 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Oct 2011 14:58:40 -0000 Subject: [Varnish] #1038: Assert error in ESI_DeliverChild Message-ID: <043.63231cb8ec7af88e55db7080a6f81953@varnish-cache.org> #1038: Assert error in ESI_DeliverChild ----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Varnish with gzip and esi asserts when doing many ESI requests. 
To recreate, generate ESI files with many ESI includes: for a in `seq 1 1000`; do echo "" >> esimany.html; echo abc$a > esimany$a.html; done When varnish is fetching esimany.html and fragments from the backend, everything works, but when fetching from cache, it asserts: Last panic at: Mon, 24 Oct 2011 14:51:10 GMT Assert error in ESI_DeliverChild(), cache_esi_deliver.c line 493: Condition((dbits) != 0) not true. thread = (cache-worker) ident = Linux,3.0.0-1-amd64,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42f2d5: pan_ic+d5 0x41dac1: ESI_DeliverChild+491 0x432508: RES_WriteObj+4a8 0x4175f7: CNT_Session+697 0x41d326: ESI_Deliver+896 0x432308: RES_WriteObj+2a8 0x4175f7: CNT_Session+697 0x430cfd: Pool_Work_Thread+1bd 0x43fca2: wrk_thread_real+202 0x7f642bb89b40: _end+7f642b509e10 sp = 0x7f6420c07040 { fd = 15, id = 15, xid = 2007150718, client = 127.0.0.1 47825, step = STP_DELIVER, handling = deliver, restarts = 0, esi_level = 1 flags = bodystatus = 0 ws = 0x7f6420c071a8 { id = "sess", {s,f,r,e} = {0x7f6420c07c48,+384,(nil),+65536}, }, http[req] = { ws = 0x7f6420c071a8[sess] "GET", "/esimany432.html", "HTTP/1.1", "User-Agent: curl/7.21.7 (x86_64-pc-linux-gnu) libcurl/7.21.7 OpenSSL/1.0.0e zlib/1.2.3.4 libidn/1.22 libssh2/1.2.8 librtmp/2.3", "Host: localhost", "Accept: */*", "X-Forwarded-For: 127.0.0.1", "Accept-Encoding: gzip", }, worker = 0x7f641c7eea20 { ws = 0x7f641c7eecf0 { overflow id = "wrk", {s,f,r,e} = {0x7f641c7dc9d0,+65536,(nil),+65536}, }, http[resp] = { ws = 0x7f641c7eecf0[wrk] "HTTP/1.1", "OK", "Server: Apache/2.2.21 (Debian)", "Last-Modified: Mon, 24 Oct 2011 14:49:56 GMT", "ETag: "8815a2-7-4b00c894c3d8e"", "Content-Type: text/html", "Content-Encoding: gzip", "Age: 1", "Via: 1.1 varnish", }, }, vcl = { srcname = { "input", "Default", }, }, obj = 0x7f641b6d3800 { xid = 2007150717, ws = 0x7f641b6d3818 { id = "obj", {s,f,r,e} = {0x7f641b6d39f0,+248,(nil),+280}, }, http[obj] = { ws = 0x7f641b6d3818[obj] "HTTP/1.1", "OK", "Date: Mon, 24 Oct 2011 
14:51:08 GMT", "Server: Apache/2.2.21 (Debian)", "Last-Modified: Mon, 24 Oct 2011 14:49:56 GMT", "ETag: "8815a2-7-4b00c894c3d8e"", "Content-Type: text/html", "Content-Encoding: gzip", "Content-Length: 39", }, len = 39, store = { 39 { 1f 8b 08 00 00 00 00 00 00 03 02 00 00 00 ff ff |................| 4a 4c 4a 36 31 36 e2 02 00 00 00 ff ff 03 00 b3 |JLJ616..........| b2 c2 7f 07 00 00 00 |.......| }, }, }, }, It looks like the assert happens at around the same time that varnishlog starts logging LostHeader. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 24 15:02:45 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Oct 2011 15:02:45 -0000 Subject: [Varnish] #1038: Assert error in ESI_DeliverChild In-Reply-To: <043.63231cb8ec7af88e55db7080a6f81953@varnish-cache.org> References: <043.63231cb8ec7af88e55db7080a6f81953@varnish-cache.org> Message-ID: <052.7ee5ef73b7b928fed687ee1e3fa01b1f@varnish-cache.org> #1038: Assert error in ESI_DeliverChild ----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by scoof): Fragment from varnishlog when the problem appears: 15 Hash c /esimany453.html 15 Hash c localhost 15 VCL_return c hash 15 Hit c 629844764 15 VCL_call c hit deliver 15 VCL_call c deliver deliver 15 VCL_call c recv lookup 15 VCL_call c hash 15 Hash c /esimany454.html 15 Hash c localhost 15 VCL_return c hash 15 Hit c 629844764 15 VCL_call c hit deliver 15 VCL_call c deliver deliver 15 VCL_call c recv lookup 15 VCL_call c hash 15 Hash c /esimany455.html 15 Hash c localhost 15 VCL_return c hash 15 Hit c 629844764 15 VCL_call c hit deliver 15 LostHeader c Transfer-Encodi 15 LostHeader c Date: Mon, 24 O 15 LostHeader c X-Varnish: 6298 15 LostHeader c 
Connect 15 VCL_call c deliver deliver 15 Interrupted c SessionOpen 0 CLI - Rd vcl.load "boot" ./vcl._ItwceHt.so 0 WorkThread - 0x7f64216fba20 start 0 WorkThread - 0x7f6420efaa20 start 0 CLI - Wr 200 36 Loaded "./vcl._ItwceHt.so" as "boot" 0 CLI - Rd vcl.use "boot" 0 CLI - Wr 200 0 0 CLI - Rd start 0 CLI - Wr 200 0 0 WorkThread - 0x7f641f8f5a20 start 0 WorkThread - 0x7f641f0f4a20 start 0 WorkThread - 0x7f641e8f3a20 start 0 WorkThread - 0x7f641e0f2a20 start 0 WorkThread - 0x7f641d0f0a20 start 0 WorkThread - 0x7f641d8f1a20 start 0 WorkThread - 0x7f641c0eea20 start 0 WorkThread - 0x7f641c8efa20 start 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1319468534 1.0 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 24 15:03:10 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Oct 2011 15:03:10 -0000 Subject: [Varnish] #1038: Assert error in ESI_DeliverChild In-Reply-To: <043.63231cb8ec7af88e55db7080a6f81953@varnish-cache.org> References: <043.63231cb8ec7af88e55db7080a6f81953@varnish-cache.org> Message-ID: <052.7914d1b9db231ad3d8a85e4b84b1175b@varnish-cache.org> #1038: Assert error in ESI_DeliverChild ----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by scoof): Properly formatted: {{{ 15 Hash c /esimany453.html 15 Hash c localhost 15 VCL_return c hash 15 Hit c 629844764 15 VCL_call c hit deliver 15 VCL_call c deliver deliver 15 VCL_call c recv lookup 15 VCL_call c hash 15 Hash c /esimany454.html 15 Hash c localhost 15 VCL_return c hash 15 Hit c 629844764 15 VCL_call c hit deliver 15 VCL_call c deliver deliver 15 VCL_call c recv lookup 15 VCL_call c hash 15 Hash c /esimany455.html 15 Hash c localhost 15 VCL_return c hash 15 Hit c 629844764 15 VCL_call c hit 
deliver 15 LostHeader c Transfer-Encodi 15 LostHeader c Date: Mon, 24 O 15 LostHeader c X-Varnish: 6298 15 LostHeader c Connect 15 VCL_call c deliver deliver 15 Interrupted c SessionOpen 0 CLI - Rd vcl.load "boot" ./vcl._ItwceHt.so 0 WorkThread - 0x7f64216fba20 start 0 WorkThread - 0x7f6420efaa20 start 0 CLI - Wr 200 36 Loaded "./vcl._ItwceHt.so" as "boot" 0 CLI - Rd vcl.use "boot" 0 CLI - Wr 200 0 0 CLI - Rd start 0 CLI - Wr 200 0 0 WorkThread - 0x7f641f8f5a20 start 0 WorkThread - 0x7f641f0f4a20 start 0 WorkThread - 0x7f641e8f3a20 start 0 WorkThread - 0x7f641e0f2a20 start 0 WorkThread - 0x7f641d0f0a20 start 0 WorkThread - 0x7f641d8f1a20 start 0 WorkThread - 0x7f641c0eea20 start 0 WorkThread - 0x7f641c8efa20 start 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1319468534 1.0 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 24 15:07:11 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Oct 2011 15:07:11 -0000 Subject: [Varnish] #1039: gzipped content is broken when mixing ESI & compression and having a HIT Message-ID: <042.c23c019a7f100aa652ac04b85ea9460a@varnish-cache.org> #1039: gzipped content is broken when mixing ESI & compression and having a HIT ----------------------+----------------------------------------------------- Reporter: niko | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.1 | Severity: normal Keywords: ESI gzip | ----------------------+----------------------------------------------------- Basically every time I have a cache HIT when requesting gzipped content the unzipping fails with a "gzip: stdin: unexpected end of file". To reproduce feel free to hit this URL with w3m: w3m -dump http://api.laut.fm/stations?broken_gzip Be sure to have a cache HIT. When having a HIT the cache expires, so the next request is a MISS again.
I posted the IRC log with @scoof (thanks for the help, mate), the varnish log, the panic.show, some syslog lines and my VCL in this gist: https://gist.github.com/6236cbfbdd3885a706f6 When not accepting compressed content everything works fine. I could reproduce this with 3.0.0 and 3.0.1. (I was unsure whether to attach the stuff in the gist as files. It seems more accessible as a gist to me. Just drop a note and I'll include them as files) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 24 15:09:31 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Oct 2011 15:09:31 -0000 Subject: [Varnish] #1039: gzipped content is broken when mixing ESI & compression and having a HIT In-Reply-To: <042.c23c019a7f100aa652ac04b85ea9460a@varnish-cache.org> References: <042.c23c019a7f100aa652ac04b85ea9460a@varnish-cache.org> Message-ID: <051.15210f4b805b4f70671078062efb43ed@varnish-cache.org> #1039: gzipped content is broken when mixing ESI & compression and having a HIT ------------------------+--------------------------------------------------- Reporter: niko | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.1 | Severity: normal Resolution: duplicate | Keywords: ESI gzip ------------------------+--------------------------------------------------- Changes (by scoof): * status: new => closed * resolution: => duplicate Comment: See #1038 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 24 20:38:34 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Oct 2011 20:38:34 -0000 Subject: [Varnish] #1040: truncated headers with varnishncsa logging Message-ID: <044.9f5cb50357455abeda8760d679cd870c@varnish-cache.org> #1040: truncated headers with varnishncsa logging --------------------+------------------------------------------------------- Reporter: mamico | Type: defect Status: new | Priority: normal Milestone: |
Component: varnishncsa Version: trunk | Severity: normal Keywords: | --------------------+------------------------------------------------------- I'm trying to log access with varnishncsa, and I need to also record long headers (e.g. Cookie) in my logs. But it appears that all headers are truncated to 1024 chars (on 64bit systems) or to 256 chars (on 32bit). Is there a config parameter to increase the header size? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Oct 24 20:47:53 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 24 Oct 2011 20:47:53 -0000 Subject: [Varnish] #1040: truncated headers with varnishncsa logging In-Reply-To: <044.9f5cb50357455abeda8760d679cd870c@varnish-cache.org> References: <044.9f5cb50357455abeda8760d679cd870c@varnish-cache.org> Message-ID: <053.4d3e59cfa293311a0ccaa84eee2bee1e@varnish-cache.org> #1040: truncated headers with varnishncsa logging ----------------------+----------------------------------------------------- Reporter: mamico | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishncsa Version: trunk | Severity: normal Resolution: invalid | Keywords: | ----------------------+----------------------------------------------------- Changes (by scoof): * status: new => closed * resolution: => invalid Comment: shm_reclen. Please use the mailing lists for questions. Trac is only for bugs.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Oct 25 07:53:14 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 25 Oct 2011 07:53:14 -0000 Subject: [Varnish] #1038: Assert error in ESI_DeliverChild In-Reply-To: <043.63231cb8ec7af88e55db7080a6f81953@varnish-cache.org> References: <043.63231cb8ec7af88e55db7080a6f81953@varnish-cache.org> Message-ID: <052.61edfc9e65210372b8c30bbecaee9b3a@varnish-cache.org> #1038: Assert error in ESI_DeliverChild ----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [f6baebcc2931036353356b16662e70de0363a1fe]) Also snapshot the worker thread workspace around esi:include processing. Convert a few http_PrintfHeader() to http_SetHeader() for good measure: There is no reason to waste workspace on compiled in strings. 
Fixes #1038 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Oct 25 09:47:59 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 25 Oct 2011 09:47:59 -0000 Subject: [Varnish] #1036: Cannot allocate memory in 3.0.2-rc1 In-Reply-To: <046.1bac7e1a64473d3766c9e4eb84d7b7a2@varnish-cache.org> References: <046.1bac7e1a64473d3766c9e4eb84d7b7a2@varnish-cache.org> Message-ID: <055.6019106b7436d2e6565035d39a383d94@varnish-cache.org> #1036: Cannot allocate memory in 3.0.2-rc1 ----------------------+----------------------------------------------------- Reporter: nicholas | Owner: martin Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: This is fixed by a2f873f20f4ee3b05ada950b168a25523550c99c: Register buffer allocation failures on vgz's and make failure to clean those up non-fatal.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Oct 25 09:49:01 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 25 Oct 2011 09:49:01 -0000 Subject: [Varnish] #1037: Assert in VGZ_Destroy In-Reply-To: <043.47d2d009e17023c4deb071119393d9c8@varnish-cache.org> References: <043.47d2d009e17023c4deb071119393d9c8@varnish-cache.org> Message-ID: <052.aec54da4cf5e70123490d1bf05304e1f@varnish-cache.org> #1037: Assert in VGZ_Destroy ----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: I think this is a duplicate of #1036 and it should now be fixed by a2f873f20f4ee3b05ada950b168a25523550c99c If not, please reopen. 
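Several of these tickets (#1037, #1038, #1039) exercise the same kind of ESI test tree. The generator one-liner quoted in ticket #1038 lost its include tag to the tracker's HTML rendering (it shows `echo ""`); the sketch below rebuilds an equivalent tree — the exact `<esi:include src=...>` markup is an assumption about what the stripped tag looked like, and the filenames come from the ticket:

```python
import os

def make_esi_tree(directory, n=1000):
    """Write esimany.html with n <esi:include> tags plus n tiny fragments."""
    os.makedirs(directory, exist_ok=True)
    parent = os.path.join(directory, "esimany.html")
    with open(parent, "w") as f:
        for a in range(1, n + 1):
            # Assumed reconstruction of the stripped include tag:
            f.write('<esi:include src="esimany%d.html"/>\n' % a)
            frag_path = os.path.join(directory, "esimany%d.html" % a)
            with open(frag_path, "w") as frag:
                frag.write("abc%d\n" % a)   # matches `echo abc$a > esimany$a.html`
    return parent
```

With `beresp.do_esi` enabled for these objects, the first (backend) delivery worked while the cached delivery tripped the `ESI_DeliverChild()` assert.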
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Oct 25 10:08:18 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 25 Oct 2011 10:08:18 -0000 Subject: [Varnish] #1037: Assert in VGZ_Destroy In-Reply-To: <043.47d2d009e17023c4deb071119393d9c8@varnish-cache.org> References: <043.47d2d009e17023c4deb071119393d9c8@varnish-cache.org> Message-ID: <052.7bfe8eea3bd82a899e5c0c48008e615f@varnish-cache.org> #1037: Assert in VGZ_Destroy ----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Changes (by scoof): * status: closed => reopened * resolution: fixed => Comment: Now fails with: {{{ Last panic at: Tue, 25 Oct 2011 10:06:04 GMT Missing errorhandling code in vfp_esi_end(), cache_esi_fetch.c line 387: Condition((vef->error) == 0) not true.thread = (cache-worker) ident = Linux,3.0.0-1-amd64,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x4303c5: pan_ic+d5 0x41dd13: vfp_esi_end+1b3 0x424533: FetchBody+953 0x415ae4: cnt_fetchbody+5d4 0x4178b5: CNT_Session+8f5 0x431ded: Pool_Work_Thread+1bd 0x440ed2: wrk_thread_real+202 0x7f7bf2511b40: _end+7f7bf1e90690 0x7f7bf225c36d: _end+7f7bf1bdaebd sp = 0x7f7be1802040 { fd = 18, id = 18, xid = 244938314, client = 127.0.0.1 41977, step = STP_FETCHBODY, handling = deliver, err_code = 200, err_reason = (null), restarts = 0, esi_level = 0 flags = do_esi is_gzip bodystatus = 3 ws = 0x7f7be18021a8 { id = "sess", {s,f,r,e} = {0x7f7be1802c40,+192,(nil),+65536}, }, http[req] = { ws = 0x7f7be18021a8[sess] "GET", "/esi.html?347", "HTTP/1.1", "User-Agent: Wget/1.13 (linux-gnu)", "Accept: */*", "Host: localhost", "Connection: Keep-Alive", "X-Forwarded-For: 127.0.0.1", }, worker = 0x7f7be52f3a10 { ws = 0x7f7be52f3ce0 { id = 
"wrk", {s,f,r,e} = {0x7f7be52e19c0,+33080,(nil),+65536}, }, http[bereq] = { ws = 0x7f7be52f3ce0[wrk] "GET", "/esi.html?347", "HTTP/1.1", "User-Agent: Wget/1.13 (linux-gnu)", "Accept: */*", "Host: localhost", "X-Forwarded-For: 127.0.0.1", "X-Varnish: 244938314", "Accept-Encoding: gzip", }, http[beresp] = { ws = 0x7f7be52f3ce0[wrk] "HTTP/1.1", "200", "OK", "Date: Tue, 25 Oct 2011 10:06:04 GMT", "Server: Apache/2.2.21 (Debian)", "Last-Modified: Mon, 24 Oct 2011 12:00:15 GMT", "ETag: "8807fe-100021-4b00a2a774d39"", "Accept-Ranges: bytes", "Vary: Accept-Encoding", "Content-Encoding: gzip", "Transfer-Encoding: chunked", "Content-Type: text/html", }, }, vcl = { srcname = { "input", "Default", }, }, obj = 0x7f7be181a400 { xid = 244938314, ws = 0x7f7be181a418 { id = "obj", {s,f,r,e} = {0x7f7be181a600,+280,(nil),+336}, }, http[obj] = { ws = 0x7f7be181a418[obj] "HTTP/1.1", "OK", "Date: Tue, 25 Oct 2011 10:06:04 GMT", "Server: Apache/2.2.21 (Debian)", "Last-Modified: Mon, 24 Oct 2011 12:00:15 GMT", "ETag: "8807fe-100021-4b00a2a774d39"", "Vary: Accept-Encoding", "Content-Encoding: gzip", "Content-Type: text/html", }, len = 524288, store = { 131072 { 1f 8b 08 00 00 00 00 00 00 03 02 00 00 00 ff ff |................| b2 49 2d ce b4 ca cc 4b ce 29 4d 49 55 28 2e 4a |.I-....K.)MIU(.J| b6 55 d7 07 8a 18 e9 65 94 e4 e6 a8 2b e8 db 01 |.U.....e....+...| 00 00 00 ff ff 00 0d 40 f2 bf 0a a3 5f 15 87 e2 |....... 
at ...._...| [131008 more] }, 131072 { e9 5b 4c bd 4e 2b 57 f6 2a c8 98 60 90 63 fd 34 |.[L.N+W.*..`.c.4| 95 bf 0f 4e 90 4d eb 26 5d 90 6f 1d be 32 fc 27 |...N.M.&].o..2.'| 6a db 77 53 5f 76 2e a9 ac ec a3 75 b7 f0 2f d4 |j.wS_v.....u../.| 57 5b 53 bc 07 69 42 7a e4 29 23 38 1e 0e 4b 4b |W[S..iBz.)#8..KK| [131008 more] }, 131072 { 3f 76 49 5e 3f 39 39 4b bb 10 9e bc 43 9b 45 e5 |?vI^?99K....C.E.| 64 cd 2a 77 fb 04 17 99 5b 12 1d 6f f4 88 e7 ee |d.*w....[..o....| 94 c1 99 40 26 2d 19 97 61 e2 58 59 c5 24 fb 2a |...@&-..a.XY.$.*| 9a d3 fa fd ba db 4c 0c 88 2b 65 d5 59 55 b7 f5 |......L..+e.YU..| [131008 more] }, 131072 { 93 e6 06 5f d6 7b eb 81 14 be 7d fd 45 68 91 9b |..._.{....}.Eh..| 61 df 6d a1 6d 15 ec d2 35 e8 2a 51 af c9 75 de |a.m.m...5.*Q..u.| a7 6f 4c de 94 98 68 0d eb 76 55 52 cd e5 8f c8 |.oL...h..vUR....| 27 49 d9 bb 1b c3 85 0d fd 32 76 66 bc 5f cc e9 |'I.......2vf._..| [131008 more] }, }, }, }, }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Oct 25 10:12:41 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 25 Oct 2011 10:12:41 -0000 Subject: [Varnish] #1037: XXXassert in vfp_esi_end (was: Assert in VGZ_Destroy) In-Reply-To: <043.47d2d009e17023c4deb071119393d9c8@varnish-cache.org> References: <043.47d2d009e17023c4deb071119393d9c8@varnish-cache.org> Message-ID: <052.ef3eac9857eb01e1e78d0ed329c8ff6d@varnish-cache.org> #1037: XXXassert in vfp_esi_end ----------------------+----------------------------------------------------- Reporter: scoof | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Oct 26 05:00:24 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 26 Oct 2011 05:00:24 -0000 Subject: [Varnish] #1041: 
sess_timeout being applied during HTTP request Message-ID: <044.b04da0d62332d14ff8ca139f1d963a82@varnish-cache.org> #1041: sess_timeout being applied during HTTP request --------------------+------------------------------------------------------- Reporter: insyte | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: | --------------------+------------------------------------------------------- If I manually enter a POST request (via netcat) and stop typing after entering a few bytes of the body, after 5 seconds I receive a 503 from varnish and "FetchError c backend write error: 11 (Resource temporarily unavailable)" is logged to the varnish log. The time before the 503 error changes with the value of "sess_timeout". Tested at 10 and 15 seconds. We are occasionally receiving complaints about failed POSTs, primarily from users with slow connections. This seems closely related, if not identical, to both #849 and #748. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Oct 26 05:57:52 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 26 Oct 2011 05:57:52 -0000 Subject: [Varnish] #1041: sess_timeout being applied during HTTP request In-Reply-To: <044.b04da0d62332d14ff8ca139f1d963a82@varnish-cache.org> References: <044.b04da0d62332d14ff8ca139f1d963a82@varnish-cache.org> Message-ID: <053.02c304764bd85a7a06997d37426b8850@varnish-cache.org> #1041: sess_timeout being applied during HTTP request --------------------+------------------------------------------------------- Reporter: insyte | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: | --------------------+------------------------------------------------------- Comment(by insyte): Forgot to say: I've tested on 3.0.1 and 3.0.2-rc1. 
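The manual netcat procedure from this ticket can be scripted. This is an assumption-laden illustration only (path, host, port, and byte counts are made up), not an official test case: send a POST whose declared body is larger than what is actually transmitted, then stall past `sess_timeout` and read the premature 503.

```python
import socket

def build_partial_post(path="/form", body_len=1000, sent=b"partial"):
    """Build a POST whose body is deliberately incomplete."""
    assert len(sent) < body_len          # body must stay unfinished
    head = ("POST %s HTTP/1.1\r\n"
            "Host: example.com\r\n"
            "Content-Length: %d\r\n"
            "Connection: close\r\n\r\n" % (path, body_len))
    return head.encode("ascii") + sent

def send_and_stall(host, port, timeout=30):
    # Requires a running varnish instance, so not executed here;
    # on 3.0.x the recv() returns a 503 once sess_timeout expires.
    with socket.create_connection((host, port)) as s:
        s.sendall(build_partial_post())
        s.settimeout(timeout)
        return s.recv(4096)
```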
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Oct 26 08:55:56 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 26 Oct 2011 08:55:56 -0000 Subject: [Varnish] #1042: Error in "Multiple Subroutines" Message-ID: <047.985f1cafc09629e68ce36e9d5b2d51c0@varnish-cache.org> #1042: Error in "Multiple Subroutines" -----------------------+---------------------------------------------------- Reporter: sherrmann | Type: documentation Status: new | Priority: normal Milestone: | Component: documentation Version: 3.0.0 | Severity: normal Keywords: | -----------------------+---------------------------------------------------- The documentation (https://www.varnish-cache.org/docs/trunk/reference/vcl.html#multiple-subroutines) states: "If multiple subroutines with the same name are defined, they are concatenated in the order in which they appear in the source." This is only true for the builtin subroutines. Trying to do {{{
sub foo {}
sub foo {}
}}} gives the error "Function foo redefined" -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Oct 26 13:14:51 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 26 Oct 2011 13:14:51 -0000 Subject: [Varnish] #1043: Missing errorhandling code in FetchBody(), cache_fetch.c line 516: Condition((w->vfp->end(sp)) == 0) not true.errno = 12 (Cannot allocate memory) Message-ID: <044.f26ea7d89ef9f4ed1db5e04ad434ba61@varnish-cache.org> #1043: Missing errorhandling code in FetchBody(), cache_fetch.c line 516: Condition((w->vfp->end(sp)) == 0) not true.errno = 12 (Cannot allocate memory) --------------------+------------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.1 Severity: normal | Keywords: | --------------------+------------------------------------------------------- Tested on 3.0.2-rc1, with patch from #1036 applied.
When having do_gzip and failing to free enough memory through LRU (exceeding nuke_limit), this assert sometimes is triggered. The error stems from vfp_gzip_end() returning -1 when VGZ_ObufStorage() is called and fails. See attached panic message. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Oct 26 13:44:53 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 26 Oct 2011 13:44:53 -0000 Subject: [Varnish] #1044: Missing errorhandling code in vfp_esi_end(), cache_esi_fetch.c line 388: Condition((vef->error) == 0) not true.thread = (cache-worker) Message-ID: <044.f32ff7647da85d2e7aa5afd5516992d5@varnish-cache.org> #1044: Missing errorhandling code in vfp_esi_end(), cache_esi_fetch.c line 388: Condition((vef->error) == 0) not true.thread = (cache-worker) --------------------+------------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.1 Severity: normal | Keywords: --------------------+------------------------------------------------------- Tested on 3.0.2-rc1 with patch from #1036 applied. If VGZ_ObufStorage() should fail in vfp_vep_callback() (cache_esi_fetch.c), vef->error is set. This makes vfp_esi_end() assert on missing errorhandling code. See attached panic message. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Oct 29 00:22:58 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 29 Oct 2011 00:22:58 -0000 Subject: [Varnish] #897: sess_mem "leak" on hyper-threaded cpu In-Reply-To: <046.214e041eae3c0d463d92b586ac7bbd29@varnish-cache.org> References: <046.214e041eae3c0d463d92b586ac7bbd29@varnish-cache.org> Message-ID: <055.d07a804971f642bd85b815347c9548a4@varnish-cache.org> #897: sess_mem "leak" on hyper-threaded cpu ----------------------+----------------------------------------------------- Reporter: askalski | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | Keywords: sess_mem leak n_sess race condition ----------------------+----------------------------------------------------- Comment(by JakaJancar): I believe I'm getting affected by this. Varnish gets 100000 sess and sess_mem, at which point it's using 1 GB of memory and stops accepting most new requests. I'm getting 500 connections/s on an 8-core HT-enabled machine and I get to 100k in ~15 minutes. Is this solved in 3.0? Will it ever be in 2.1.6?
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Oct 29 00:37:27 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 29 Oct 2011 00:37:27 -0000 Subject: [Varnish] #897: sess_mem "leak" on hyper-threaded cpu In-Reply-To: <046.214e041eae3c0d463d92b586ac7bbd29@varnish-cache.org> References: <046.214e041eae3c0d463d92b586ac7bbd29@varnish-cache.org> Message-ID: <055.ab4be1e8b4942e14f2f553072d08805b@varnish-cache.org> #897: sess_mem "leak" on hyper-threaded cpu ----------------------+----------------------------------------------------- Reporter: askalski | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | Keywords: sess_mem leak n_sess race condition ----------------------+----------------------------------------------------- Comment(by JakaJancar): Another post-mortem from me: http://pastebin.com/0usH0vyT -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Oct 29 06:12:16 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 29 Oct 2011 06:12:16 -0000 Subject: [Varnish] #1045: Ban lurker doesn't work anymore Message-ID: <042.4946905ce0b79e243e705a144d3ed1c5@varnish-cache.org> #1045: Ban lurker doesn't work anymore ------------------------+--------------------------------------------------- Reporter: Yvan | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.2 | Severity: major Keywords: ban lurker | ------------------------+--------------------------------------------------- I've just upgraded from 3.0.1 to 3.0.2, using Varnish's Debian packages (deb http://repo.varnish-cache.org/debian/ squeeze varnish-3.0). It seems that the ban lurker isn't clearing out the ban list, or is doing so too slowly.
I've just done a test (on a production server, very low traffic, about 5k objects in cache):
- restarted varnish (to empty the ban list)
- set ban_lurker_sleep to 0.001s
- sent a ban rule, such as: obj.http.x-host == mydomain.com && obj.http.x-url == /index.html
- waited for the ban rule to disappear from "varnishadm ban.list" (using "watch")
On 3.0.1, it takes about 4 seconds. On 3.0.2 it takes 168 seconds. Until the upgrade, setting ban_lurker_sleep to 0.001s was enough to empty the ban list in about 1h. Setting it to 0.000001s after the upgrade only sees the ban list increase (about 100k rules in 24h, even though most of these were in the Gone state). -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Oct 29 11:55:21 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 29 Oct 2011 11:55:21 -0000 Subject: [Varnish] #1045: Ban lurker doesn't work anymore In-Reply-To: <042.4946905ce0b79e243e705a144d3ed1c5@varnish-cache.org> References: <042.4946905ce0b79e243e705a144d3ed1c5@varnish-cache.org> Message-ID: <051.6ef0e517de25e9674617756ac3792fee@varnish-cache.org> #1045: Ban lurker doesn't work anymore ------------------------+--------------------------------------------------- Reporter: Yvan | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.2 | Severity: major Keywords: ban lurker | ------------------------+--------------------------------------------------- Comment(by scoof): Proposed fix attached, awaiting somebody to bless this before committing -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Oct 30 13:11:15 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 30 Oct 2011 13:11:15 -0000 Subject: [Varnish] #1046: exp2 portability Message-ID: <048.e2a3240d9558155d1a81fc80c16a9cc2@varnish-cache.org> #1046: exp2 portability ------------------------+--------------------------------------------------- Reporter: msporleder |
Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.2 | Severity: normal Keywords: | ------------------------+--------------------------------------------------- exp2 isn't in netbsd 5 so varnish fails to compile. The following patches should work:

--- configure.ac.orig	2011-10-30 12:53:05.000000000 +0000
+++ configure.ac
@@ -380,6 +380,8 @@ else
 ac_cv_func_port_create=no
 fi

+AC_CHECK_FUNCS([exp2])
+
 AM_MISSING_HAS_RUN
 AC_CHECK_PROGS(PYTHON, [python3 python3.1 python3.2 python2.7 python2.6 python2.5 python2 python], [AC_MSG_ERROR([Python is needed to build Varnish, please install python.])])

--- bin/varnishd/cache_dir_random.c.orig	2011-10-24 07:25:09.000000000 +0000
+++ bin/varnishd/cache_dir_random.c
@@ -62,6 +62,11 @@
 #include "vsha256.h"
 #include "vend.h"

+#ifndef HAVE_EXP2
+ #define EXP2_32 4294967296
+ #define EXP2_31 2147483648
+#endif
+
 /*--------------------------------------------------------------------*/

 struct vdi_random_host {
@@ -97,7 +102,11 @@ vdi_random_sha(const char *input, ssize_
 	SHA256_Init(&ctx);
 	SHA256_Update(&ctx, input, len);
 	SHA256_Final(sign, &ctx);
+#ifndef HAVE_EXP2
+	return (vle32dec(sign) / EXP2_32);
+#else
 	return (vle32dec(sign) / exp2(32));
+#endif
 }

 /*
@@ -119,11 +128,19 @@ vdi_random_init_seed(const struct vdi_ra
 		break;
 	case c_hash:
 		AN(sp->digest);
+#ifndef HAVE_EXP2
+		retval = vle32dec(sp->digest) / EXP2_32;
+#else
 		retval = vle32dec(sp->digest) / exp2(32);
+#endif
 		break;
 	case c_random:
 	default:
+#ifndef HAVE_EXP2
+		retval = random() / EXP2_31;
+#else
 		retval = random() / exp2(31);
+#endif
 		break;
 	}
 	return (retval);

-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Oct 30 13:37:07 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 30 Oct 2011 13:37:07 -0000 Subject: [Varnish] #1047: regsuball, infinite loops and
confusing behaviours ------------------------------+--------------------------------------------- Reporter: ctrix | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: regsuball freeze | ------------------------------+--------------------------------------------- Something like the following: regsuball(req.http.Cookie, "^[;]*", ""); causes a tight loop that never exits, consuming all CPU and exhausting all available client slots. The cause is that the regexp always matches; the fix is trivial. Less trivial is understanding the problem while debugging the hangs. I suggest, as a fix, breaking the loop somewhere or, at least, adding a warning to the logs. (This was spotted on 3.0.2, if it matters.) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Oct 30 21:52:20 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 30 Oct 2011 21:52:20 -0000 Subject: [Varnish] #1047: regsuball, infinite loops and confusing behaviours In-Reply-To: <043.7e26ef971a60cee89742c13c637f7c1e@varnish-cache.org> References: <043.7e26ef971a60cee89742c13c637f7c1e@varnish-cache.org> Message-ID: <052.af319d149f58b645ec07894cd69f0625@varnish-cache.org> #1047: regsuball, infinite loops and confusing behaviours ------------------------------+--------------------------------------------- Reporter: ctrix | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: regsuball freeze | ------------------------------+--------------------------------------------- Comment(by scoof): I don't know if libpcre has mechanisms to continue searching, but the attached diff fixes this for me.
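The failure mode is easy to state: `"^[;]*"` can match the empty string, and a substitute-all loop that retries at the same offset after a zero-length match never advances. A minimal sketch in Python — illustrative only, not Varnish's actual pcre loop — of a substitute-all with the termination guard the ticket asks for:

```python
import re

def regsuball(pattern, repl, subject):
    """Substitute-all that terminates even when pattern matches ""."""
    rx = re.compile(pattern)
    out = []
    pos = 0
    while pos <= len(subject):
        m = rx.search(subject, pos)
        if m is None:
            out.append(subject[pos:])
            break
        out.append(subject[pos:m.start()])
        out.append(repl)
        if m.end() == m.start():
            # Zero-length match: copy one character through and advance,
            # instead of retrying at the same offset forever.
            if m.end() < len(subject):
                out.append(subject[m.end()])
            pos = m.end() + 1
        else:
            pos = m.end()
    return "".join(out)

print(regsuball(r"^[;]*", "", ";;x=1"))  # → x=1, instead of spinning forever
```

The equivalent guard in C is to force the match offset forward by at least one byte (copying it to the output) whenever pcre reports an empty match.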
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Oct 30 22:51:10 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 30 Oct 2011 22:51:10 -0000 Subject: [Varnish] #1045: Ban lurker doesn't work anymore In-Reply-To: <042.4946905ce0b79e243e705a144d3ed1c5@varnish-cache.org> References: <042.4946905ce0b79e243e705a144d3ed1c5@varnish-cache.org> Message-ID: <051.2be58c4d86780cbc32c8499bae446825@varnish-cache.org> #1045: Ban lurker doesn't work anymore ------------------------+--------------------------------------------------- Reporter: Yvan | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.2 | Severity: major Keywords: ban lurker | ------------------------+--------------------------------------------------- Comment(by kristian): Hmm, I see where this went wrong. There is no indication in ban_check_object() whether the check was OK or not, only if an object was banned. We blindly assume that return 0 means sleep 1.0s. In #1030 we corrected the basic issue, but didn't take into account what happens when the ban list is perfectly ban-lurker-friendly but objects simply can't be banned. Scoof: I'm not too happy with the solution in your patch because it negates large parts of the fix in #1030. Phk: Seems like we need three return values from ban_check_object(). I suppose it's dirty to "hide" that in BAN_CheckObject (e.g: ret = ban_check_object(..); if (ret == 0 || ret == 1) return 0; else return 1; )? The third return value would be "nothing banned, but bans updated", which shouldn't be relevant outside of cache_ban.c. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Oct 30 22:52:56 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 30 Oct 2011 22:52:56 -0000 Subject: [Varnish] #1045: Ban lurker doesn't work anymore In-Reply-To: <042.4946905ce0b79e243e705a144d3ed1c5@varnish-cache.org> References: <042.4946905ce0b79e243e705a144d3ed1c5@varnish-cache.org> Message-ID: <051.d6bf7e76cbc96721d5dfe663ecffa85f@varnish-cache.org> #1045: Ban lurker doesn't work anymore ------------------------+--------------------------------------------------- Reporter: Yvan | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.2 | Severity: major Keywords: ban lurker | ------------------------+--------------------------------------------------- Comment(by kristian): Let me try that again: Phk: Seems like we need three return values from ban_check_object(). I suppose it's dirty to "hide" that in BAN_CheckObject (e.g: {{{ int BAN_CheckObject(...) { ret = ban_check_object(..); if (ret == 0 || ret == 1) return 0; else return 1; } }}} The third return value would be "nothing banned, but bans updated", which shouldn't be relevant outside of cache_ban.c. 
--
Ticket URL: Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Oct 31 06:45:22 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 31 Oct 2011 06:45:22 -0000
Subject: [Varnish] #1047: regsuball, infinite loops and confusing behaviours
In-Reply-To: <043.7e26ef971a60cee89742c13c637f7c1e@varnish-cache.org>
References: <043.7e26ef971a60cee89742c13c637f7c1e@varnish-cache.org>
Message-ID: <052.9b4427bab85d080b1726113b742195a6@varnish-cache.org>

#1047: regsuball, infinite loops and confusing behaviours
------------------------------+---------------------------------------------
 Reporter:  ctrix             |       Type:  defect
   Status:  new               |   Priority:  normal
Milestone:                    |  Component:  varnishd
  Version:  3.0.2             |   Severity:  normal
 Keywords:  regsuball freeze  |
------------------------------+---------------------------------------------

Comment(by scoof):

 Replying to [comment:1 scoof]:
 > I don't know if libpcre has mechanisms to continue searching, but the
 > attached diff fixes this for me.

 No. Ignore that patch, it's not correct.

--
Ticket URL: Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Oct 31 11:25:15 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 31 Oct 2011 11:25:15 -0000
Subject: [Varnish] #1046: exp2 portability
In-Reply-To: <048.e2a3240d9558155d1a81fc80c16a9cc2@varnish-cache.org>
References: <048.e2a3240d9558155d1a81fc80c16a9cc2@varnish-cache.org>
Message-ID: <057.deee8fbdd16fc67aee1af8ee56a7b3d6@varnish-cache.org>

#1046: exp2 portability
------------------------+---------------------------------------------------
 Reporter:  msporleder  |       Type:  defect
   Status:  new         |   Priority:  normal
Milestone:              |  Component:  build
  Version:  3.0.2       |   Severity:  normal
 Keywords:              |
------------------------+---------------------------------------------------

Description changed by kristian:

Old description:

> exp2 isn't in netbsd 5 so varnish fails to compile.
The following
> patches should work:
>
> --- configure.ac.orig	2011-10-30 12:53:05.000000000 +0000
> +++ configure.ac
> @@ -380,6 +380,8 @@ else
>  	ac_cv_func_port_create=no
>  fi
>
> +AC_CHECK_FUNCS([exp2])
> +
>  AM_MISSING_HAS_RUN
>  AC_CHECK_PROGS(PYTHON, [python3 python3.1 python3.2 python2.7 python2.6
>  python2.5 python2 python], [AC_MSG_ERROR([Python is needed to build
>  Varnish, please install python.])])
>
> --- bin/varnishd/cache_dir_random.c.orig	2011-10-24 07:25:09.000000000 +0000
> +++ bin/varnishd/cache_dir_random.c
> @@ -62,6 +62,11 @@
>  #include "vsha256.h"
>  #include "vend.h"
>
> +#ifndef HAVE_EXP2
> + #define EXP2_32 4294967296
> + #define EXP2_31 2147483648
> +#endif
> +
>  /*--------------------------------------------------------------------*/
>
>  struct vdi_random_host {
> @@ -97,7 +102,11 @@ vdi_random_sha(const char *input, ssize_
>  	SHA256_Init(&ctx);
>  	SHA256_Update(&ctx, input, len);
>  	SHA256_Final(sign, &ctx);
> +#ifndef HAVE_EXP2
> +	return (vle32dec(sign) / EXP2_32);
> +#else
>  	return (vle32dec(sign) / exp2(32));
> +#endif
>  }
>
>  /*
> @@ -119,11 +128,19 @@ vdi_random_init_seed(const struct vdi_ra
>  		break;
>  	case c_hash:
>  		AN(sp->digest);
> +#ifndef HAVE_EXP2
> +		retval = vle32dec(sp->digest) / EXP2_32;
> +#else
>  		retval = vle32dec(sp->digest) / exp2(32);
> +#endif
>  		break;
>  	case c_random:
>  	default:
> +#ifndef HAVE_EXP2
> +		retval = random() / EXP2_31;
> +#else
>  		retval = random() / exp2(31);
> +#endif
>  		break;
>  	}
>  	return (retval);

New description:

 exp2 isn't in netbsd 5 so varnish fails to compile.
The following
 patches should work:

 {{{
 --- configure.ac.orig	2011-10-30 12:53:05.000000000 +0000
 +++ configure.ac
 @@ -380,6 +380,8 @@ else
  	ac_cv_func_port_create=no
  fi

 +AC_CHECK_FUNCS([exp2])
 +
  AM_MISSING_HAS_RUN
  AC_CHECK_PROGS(PYTHON, [python3 python3.1 python3.2 python2.7 python2.6
  python2.5 python2 python], [AC_MSG_ERROR([Python is needed to build
  Varnish, please install python.])])

 --- bin/varnishd/cache_dir_random.c.orig	2011-10-24 07:25:09.000000000 +0000
 +++ bin/varnishd/cache_dir_random.c
 @@ -62,6 +62,11 @@
  #include "vsha256.h"
  #include "vend.h"

 +#ifndef HAVE_EXP2
 + #define EXP2_32 4294967296
 + #define EXP2_31 2147483648
 +#endif
 +
  /*--------------------------------------------------------------------*/

  struct vdi_random_host {
 @@ -97,7 +102,11 @@ vdi_random_sha(const char *input, ssize_
  	SHA256_Init(&ctx);
  	SHA256_Update(&ctx, input, len);
  	SHA256_Final(sign, &ctx);
 +#ifndef HAVE_EXP2
 +	return (vle32dec(sign) / EXP2_32);
 +#else
  	return (vle32dec(sign) / exp2(32));
 +#endif
  }

  /*
 @@ -119,11 +128,19 @@ vdi_random_init_seed(const struct vdi_ra
  		break;
  	case c_hash:
  		AN(sp->digest);
 +#ifndef HAVE_EXP2
 +		retval = vle32dec(sp->digest) / EXP2_32;
 +#else
  		retval = vle32dec(sp->digest) / exp2(32);
 +#endif
  		break;
  	case c_random:
  	default:
 +#ifndef HAVE_EXP2
 +		retval = random() / EXP2_31;
 +#else
  		retval = random() / exp2(31);
 +#endif
  		break;
  	}
  	return (retval);
 }}}

--

--
Ticket URL: Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Oct 31 11:37:02 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 31 Oct 2011 11:37:02 -0000
Subject: [Varnish] #1042: Error in "Multiple Subroutines"
In-Reply-To: <047.985f1cafc09629e68ce36e9d5b2d51c0@varnish-cache.org>
References: <047.985f1cafc09629e68ce36e9d5b2d51c0@varnish-cache.org>
Message-ID: <056.6d1ee12b66bf53449c8307226d75a775@varnish-cache.org>

#1042: Error in "Multiple Subroutines"
---------------------------+------------------------------------------------
 Reporter:  sherrmann      |      Owner:  scoof
     Type:
 documentation             |     Status:  new
  Priority:  normal        |  Milestone:
 Component:  documentation |    Version:  3.0.0
  Severity:  normal        |   Keywords:
---------------------------+------------------------------------------------

Changes (by scoof):

 * owner:  => scoof

--
Ticket URL: Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Oct 31 13:40:28 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 31 Oct 2011 13:40:28 -0000
Subject: [Varnish] #1041: sess_timeout being applied during HTTP request
In-Reply-To: <044.b04da0d62332d14ff8ca139f1d963a82@varnish-cache.org>
References: <044.b04da0d62332d14ff8ca139f1d963a82@varnish-cache.org>
Message-ID: <053.90c36e7035cc84e3fb5f5a1585b8e490@varnish-cache.org>

#1041: sess_timeout being applied during HTTP request
--------------------+-------------------------------------------------------
 Reporter:  insyte  |      Owner:  tfheen
     Type:  defect  |     Status:  new
 Priority:  normal  |  Milestone:
Component:  build   |    Version:  3.0.0
 Severity:  normal  |   Keywords:
--------------------+-------------------------------------------------------

Changes (by tfheen):

 * owner:  => tfheen

--
Ticket URL: Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Oct 31 14:22:28 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 31 Oct 2011 14:22:28 -0000
Subject: [Varnish] #1043: Missing errorhandling code in FetchBody(),
 cache_fetch.c line 516: Condition((w->vfp->end(sp)) == 0) not true.
 errno = 12 (Cannot allocate memory)
In-Reply-To: <044.f26ea7d89ef9f4ed1db5e04ad434ba61@varnish-cache.org>
References: <044.f26ea7d89ef9f4ed1db5e04ad434ba61@varnish-cache.org>
Message-ID: <053.7ddebf391c59575a549f2d5c3e90b57d@varnish-cache.org>

#1043: Missing errorhandling code in FetchBody(), cache_fetch.c line 516:
Condition((w->vfp->end(sp)) == 0) not true. errno = 12 (Cannot allocate
memory)
--------------------+-------------------------------------------------------
 Reporter:  martin  |      Owner:
     Type:  defect  |     Status:  closed
 Priority:  normal  |  Milestone:
Component:  build   |    Version:  3.0.1
 Severity:  normal  | Resolution:  fixed
 Keywords:          |
--------------------+-------------------------------------------------------

Changes (by Poul-Henning Kamp):

 * status:  new => closed
 * resolution:  => fixed

Comment:

 (In [6e4e013f407c6685241c7a4c88a72dbd671102ba]) Overhaul the detection
 and reporting of fetch errors, to properly catch trouble that
 materializes only when we destroy the VGZ instance.

 Fixes #1037
 Fixes #1043
 Fixes #1044

--
Ticket URL: Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Oct 31 14:22:30 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 31 Oct 2011 14:22:30 -0000
Subject: [Varnish] #1037: XXXassert in vfp_esi_end
In-Reply-To: <043.47d2d009e17023c4deb071119393d9c8@varnish-cache.org>
References: <043.47d2d009e17023c4deb071119393d9c8@varnish-cache.org>
Message-ID: <052.d3ec308dc6a0873abe776b7596244080@varnish-cache.org>

#1037: XXXassert in vfp_esi_end
----------------------+-----------------------------------------------------
 Reporter:  scoof     |      Owner:
     Type:  defect    |     Status:  closed
 Priority:  normal    |  Milestone:
Component:  varnishd  |    Version:  trunk
 Severity:  normal    | Resolution:  fixed
 Keywords:            |
----------------------+-----------------------------------------------------

Changes (by Poul-Henning Kamp):

 * status:  reopened => closed
 * resolution:  => fixed

Comment:

 (In [6e4e013f407c6685241c7a4c88a72dbd671102ba]) Overhaul the detection
 and reporting of fetch errors, to properly catch trouble that
 materializes only when we destroy the VGZ instance.
 Fixes #1037
 Fixes #1043
 Fixes #1044

--
Ticket URL: Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Mon Oct 31 14:22:33 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Mon, 31 Oct 2011 14:22:33 -0000
Subject: [Varnish] #1044: Missing errorhandling code in vfp_esi_end(),
 cache_esi_fetch.c line 388: Condition((vef->error) == 0) not true.
 thread = (cache-worker)
In-Reply-To: <044.f32ff7647da85d2e7aa5afd5516992d5@varnish-cache.org>
References: <044.f32ff7647da85d2e7aa5afd5516992d5@varnish-cache.org>
Message-ID: <053.963b35dfd06a45e2d356c39ddf67c86c@varnish-cache.org>

#1044: Missing errorhandling code in vfp_esi_end(), cache_esi_fetch.c line
388: Condition((vef->error) == 0) not true. thread = (cache-worker)
--------------------+-------------------------------------------------------
 Reporter:  martin  |      Owner:
     Type:  defect  |     Status:  closed
 Priority:  normal  |  Milestone:
Component:  build   |    Version:  3.0.1
 Severity:  normal  | Resolution:  fixed
 Keywords:          |
--------------------+-------------------------------------------------------

Changes (by Poul-Henning Kamp):

 * status:  new => closed
 * resolution:  => fixed

Comment:

 (In [6e4e013f407c6685241c7a4c88a72dbd671102ba]) Overhaul the detection
 and reporting of fetch errors, to properly catch trouble that
 materializes only when we destroy the VGZ instance.

 Fixes #1037
 Fixes #1043
 Fixes #1044

--
Ticket URL: Varnish
The Varnish HTTP Accelerator