From varnish-bugs at varnish-cache.org Mon Jun 1 09:15:58 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Jun 2015 09:15:58 -0000 Subject: [Varnish] #1743: Workspace exhaustion under load In-Reply-To: <043.51a23a57d9c85027357d34c46e32d875@varnish-cache.org> References: <043.51a23a57d9c85027357d34c46e32d875@varnish-cache.org> Message-ID: <058.907dace39f551a3f614d209f06186e93@varnish-cache.org> #1743: Workspace exhaustion under load ----------------------------------+---------------------------------- Reporter: geoff | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: workspace, esi, load | ----------------------------------+---------------------------------- Comment (by geoff): It turns out that this problem was caused by a bug in our VCL, and does not indicate a problem with workspace management under load in varnishd. We had VCL code that changed req.url in vcl_recv() under certain circumstances; this should only have been done at ESI level 0, but the code did not check the ESI level. That meant the URL was replaced at every ESI level, and the response at that URL also had deep ESI nesting, as mentioned above. Not only had we increased max_esi_depth to allow the deep ESI nesting, we also had an "ESI include tree" that expands widely in breadth. The effect was so much ESI expansion that the workspaces were exhausted. After fixing the VCL so that the req.url replacement is only done at ESI level 0, we've been able to repeat the load tests with no workspace problems. This ticket can be closed now; thanks to phk for the help.
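The corrected approach can be sketched in VCL; this is a minimal illustration rather than the actual production code, and the regsub pattern is a made-up placeholder:

{{{
sub vcl_recv {
    # Only rewrite the URL for the top-level request. Without this
    # guard the rewrite fires again at every ESI include level,
    # multiplying the ESI expansion and exhausting workspaces.
    if (req.esi_level == 0) {
        set req.url = regsub(req.url, "^/old/", "/new/");
    }
}
}}}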
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 1 09:39:18 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Jun 2015 09:39:18 -0000 Subject: [Varnish] #1743: Workspace exhaustion under load In-Reply-To: <043.51a23a57d9c85027357d34c46e32d875@varnish-cache.org> References: <043.51a23a57d9c85027357d34c46e32d875@varnish-cache.org> Message-ID: <058.e3dd6f556603ae937cda84fc4092eff8@varnish-cache.org> #1743: Workspace exhaustion under load ----------------------------------+---------------------------------- Reporter: geoff | Owner: Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: worksforme Keywords: workspace, esi, load | ----------------------------------+---------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 1 11:13:21 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Jun 2015 11:13:21 -0000 Subject: [Varnish] #1739: overflow on "o ws - Assert error in VFP_Push(), cache/cache_fetch_proc.c line 200: In-Reply-To: <043.f71a854c635b7ea5de5ec292dbcbe25b@varnish-cache.org> References: <043.f71a854c635b7ea5de5ec292dbcbe25b@varnish-cache.org> Message-ID: <058.13e44aaca1ad4dfd15b70fe8abe39efe@varnish-cache.org> #1739: overflow on "o ws - Assert error in VFP_Push(), cache/cache_fetch_proc.c line 200: ----------------------+-------------------- Reporter: slink | Owner: slink Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Changes (by slink): * owner: => slink -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 1 11:30:54 2015 From: varnish-bugs at varnish-cache.org 
(Varnish) Date: Mon, 01 Jun 2015 11:30:54 -0000 Subject: [Varnish] #1745: Assert error in Req_Cleanup(), cache/cache_req.c line 171: Condition((req->vsl->wid) != 0) not true. Message-ID: <043.f9edb0f35e69ac5f413c059d90b1e0f3@varnish-cache.org> #1745: Assert error in Req_Cleanup(), cache/cache_req.c line 171: Condition((req->vsl->wid) != 0) not true. ----------------------+------------------- Reporter: slink | Owner: slink Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+------------------- {{{varnishd (varnish-trunk revision 2b1ac1c)}}} {{{ Jun 1 13:08:40 c12 Assert error in Req_Cleanup(), cache/cache_req.c line 171: Jun 1 13:08:40 c12 Condition((req->vsl->wid) != 0) not true. Jun 1 13:08:40 c12 errno = 131 (Connection reset by peer) Jun 1 13:08:40 c12 thread = (cache-worker) Jun 1 13:08:40 c12 version = varnish-trunk revision 2b1ac1c Jun 1 13:08:40 c12 ident = -jsolaris,-smalloc,-smalloc,-hcritbit,ports Jun 1 13:08:40 c12 Backtrace: Jun 1 13:08:40 c12 8087824: pan_backtrace+0x14 Jun 1 13:08:40 c12 8087b20: pan_ic+0x1d9 Jun 1 13:08:40 c12 808add3: Req_Cleanup+0x238 Jun 1 13:08:40 c12 80a394e: HTTP1_Session+0x166 Jun 1 13:08:40 c12 8091252: SES_Proto_Req+0x104 Jun 1 13:08:40 c12 808997c: Pool_Work_Thread+0x432 Jun 1 13:08:40 c12 809e311: WRK_Thread+0x1ce Jun 1 13:08:40 c12 8089a58: pool_thread+0x7d Jun 1 13:08:40 c12 fec0cd56: libc.so.1'_thrp_setup+0x7e [0xfec0cd56] Jun 1 13:08:40 c12 fec0cfe0: libc.so.1'_lwp_start+0x0 [0xfec0cfe0] Jun 1 13:08:40 c12 req = 9b23660 { Jun 1 13:08:40 c12 sp = e267d48, vxid = 0, step = R_STP_RESTART, Jun 1 13:08:40 c12 req_body = R_BODY_INIT, Jun 1 13:08:40 c12 restarts = 0, esi_level = 0, Jun 1 13:08:40 c12 sp = e267d48 { Jun 1 13:08:40 c12 fd = -1, vxid = 383115, Jun 1 13:08:40 c12 client = 212.23.139.98 49945, Jun 1 13:08:40 c12 step = S_STP_H1NEWREQ, Jun 1 13:08:40 c12 }, Jun 1 13:08:40 c12 ws = 9b23794 { Jun 1 13:08:40 c12 id = "req", 
Jun 1 13:08:40 c12 {s,f,r,e} = {9b24f38,9b24f38,+12288,+18184}, Jun 1 13:08:40 c12 }, Jun 1 13:08:40 c12 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 1 11:39:50 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Jun 2015 11:39:50 -0000 Subject: [Varnish] #1746: Assertion failure "start > 0 && start < olen * 8" in ved_stripgzip(), cache/cache_esi_deliver.c (rev 15773a9) Message-ID: <043.dc31ec3dfd189c2092c4e0a23480e2b7@varnish-cache.org> #1746: Assertion failure "start > 0 && start < olen * 8" in ved_stripgzip(), cache/cache_esi_deliver.c (rev 15773a9) -------------------+---------------------- Reporter: geoff | Type: defect Status: new | Priority: normal Milestone: Later | Component: varnishd Version: trunk | Severity: normal Keywords: | -------------------+---------------------- Testing a trunk build, rev 15773a9 built on Friday May 29th, this assertion failure is reproducible: {{{ #2 0x000000000043cb24 in pan_ic (func=0x4826c0 "ved_stripgzip", file=0x48238e "cache/cache_esi_deliver.c", line=466, cond=0x4825af "start > 0 && start < olen * 8", err=0, kind=VAS_ASSERT) at cache/cache_panic.c:574 #3 0x000000000041db86 in ved_stripgzip (req=0x2b56cc60a020) at cache/cache_esi_deliver.c:466 #4 0x000000000041e4bc in ESI_DeliverChild (req=0x2b56cc60a020, bo=0x0) at cache/cache_esi_deliver.c:631 }}} From the panic messages and varnishlogs we've seen so far, it seems to have been an ESI-included 204 No content response in every case. I'll attach a full stack trace. We can't post the panic message publicly, but I can send one separately if needed. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 1 11:54:18 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Jun 2015 11:54:18 -0000 Subject: [Varnish] #1745: Assert error in Req_Cleanup(), cache/cache_req.c line 171: Condition((req->vsl->wid) != 0) not true. 
In-Reply-To: <043.f9edb0f35e69ac5f413c059d90b1e0f3@varnish-cache.org> References: <043.f9edb0f35e69ac5f413c059d90b1e0f3@varnish-cache.org> Message-ID: <058.7ed56c43831abd5b3f0a846ae9cccf1e@varnish-cache.org> #1745: Assert error in Req_Cleanup(), cache/cache_req.c line 171: Condition((req->vsl->wid) != 0) not true. ----------------------+--------------------- Reporter: slink | Owner: slink Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+--------------------- Changes (by Nils Goroll ): * status: new => closed * resolution: => fixed Comment: In [a72ab6cf297b97b25d7a392ef0202b8cfec8a6a0]: {{{ #!CommitTicketReference repository="" revision="a72ab6cf297b97b25d7a392ef0202b8cfec8a6a0" We don't have a vsl when coming from error in HTTP1_Session() Fixes #1745 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 1 12:01:02 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Jun 2015 12:01:02 -0000 Subject: [Varnish] #1746: Assertion failure "start > 0 && start < olen * 8" in ved_stripgzip(), cache/cache_esi_deliver.c (rev 15773a9) In-Reply-To: <043.dc31ec3dfd189c2092c4e0a23480e2b7@varnish-cache.org> References: <043.dc31ec3dfd189c2092c4e0a23480e2b7@varnish-cache.org> Message-ID: <058.849aea86d4d0177dd57d5df769aa91bc@varnish-cache.org> #1746: Assertion failure "start > 0 && start < olen * 8" in ved_stripgzip(), cache/cache_esi_deliver.c (rev 15773a9) ----------------------+-------------------- Reporter: geoff | Owner: Type: defect | Status: new Priority: normal | Milestone: Later Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by geoff): I've gone through about 10 coredumps, and it was the same situation in each case -- the assertion failure as shown above, after receiving a 204 No content response in ESI 
delivery. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 1 12:18:57 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Jun 2015 12:18:57 -0000 Subject: [Varnish] #1744: Unable to provide sfile size as a percentage of free space In-Reply-To: <046.06bf44fb0e5dc52a4e04ab86e66b3298@varnish-cache.org> References: <046.06bf44fb0e5dc52a4e04ab86e66b3298@varnish-cache.org> Message-ID: <061.70145032153d18281d85c8f08c4aa1b3@varnish-cache.org> #1744: Unable to provide sfile size as a percentage of free space ----------------------+-------------------- Reporter: fgaillot | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by martin): Yes, the possibility of specifying disk size as percentage of underlying file system has been removed, though the corresponding part in the user guide was not updated to reflect this. Thanks for pointing that out. 
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 1 12:21:38 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Jun 2015 12:21:38 -0000 Subject: [Varnish] #1744: Unable to provide sfile size as a percentage of free space In-Reply-To: <046.06bf44fb0e5dc52a4e04ab86e66b3298@varnish-cache.org> References: <046.06bf44fb0e5dc52a4e04ab86e66b3298@varnish-cache.org> Message-ID: <061.90e382581430753d64c1aa0c63b282f4@varnish-cache.org> #1744: Unable to provide sfile size as a percentage of free space ----------------------+----------------------------------------------- Reporter: fgaillot | Owner: Martin Blix Grydeland Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------- Changes (by Martin Blix Grydeland ): * status: new => closed * owner: => Martin Blix Grydeland * resolution: => fixed Comment: In [977b831d7c21b392104a4156e4ea5af1d9d68f65]: {{{ #!CommitTicketReference repository="" revision="977b831d7c21b392104a4156e4ea5af1d9d68f65" Update the users guide to for new -sfile syntax It used to be possible to specify -sfile size in percentage. This feature has been removed. Update the users guide to reflect this. 
Fixes: #1744 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 1 12:25:07 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Jun 2015 12:25:07 -0000 Subject: [Varnish] #1744: Unable to provide sfile size as a percentage of free space In-Reply-To: <046.06bf44fb0e5dc52a4e04ab86e66b3298@varnish-cache.org> References: <046.06bf44fb0e5dc52a4e04ab86e66b3298@varnish-cache.org> Message-ID: <061.d995ffff2898b57af4613320b3526901@varnish-cache.org> #1744: Unable to provide sfile size as a percentage of free space ----------------------+----------------------------------------------- Reporter: fgaillot | Owner: Martin Blix Grydeland Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------- Comment (by Martin Blix Grydeland ): In [57d2e7a8dedd214fee2ecae436bcfe4d67d228e9]: {{{ #!CommitTicketReference repository="" revision="57d2e7a8dedd214fee2ecae436bcfe4d67d228e9" Update the users guide to for new -sfile syntax It used to be possible to specify -sfile size in percentage. This feature has been removed. Update the users guide to reflect this. 
Fixes: #1744 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 1 13:04:04 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Jun 2015 13:04:04 -0000 Subject: [Varnish] #1747: VSL API: assertion failure "c->next.ptr < c->end" in vsl_cursor.c/vslc_vsm_next() in load tests Message-ID: <043.682f47782860e7013b59f5f4b02bcc9a@varnish-cache.org> #1747: VSL API: assertion failure "c->next.ptr < c->end" in vsl_cursor.c/vslc_vsm_next() in load tests ---------------------------------+------------------------ Reporter: geoff | Type: defect Status: new | Priority: normal Milestone: Varnish 4.0 release | Component: varnishlog Version: 4.0.3 | Severity: normal Keywords: | ---------------------------------+------------------------ This is more about the VSL API, since I've been getting the error from three apps that use it. Two of them I made myself, and I'll take care of those, but the other one is varnishlog, hence the ticket. These all occurred during load tests; I have not seen the problem except under load. {{{ (gdb) bt #0 0x00007fe48f817625 in raise () from /lib64/libc.so.6 #1 0x00007fe48f818e05 in abort () from /lib64/libc.so.6 #2 0x00007fe49045681d in VAS_Fail_default ( func=0x7fe49047500e "vslc_vsm_next", file=0x7fe490474c7c "vsl_cursor.c", line=130, cond=0x7fe490474d19 "c->next.ptr < c->end", err=0, kind=VAS_ASSERT) at ../libvarnish/vas.c:67 #3 0x00007fe4904602ed in vslc_vsm_next (cursor=0x1e6c648) at vsl_cursor.c:130 #4 0x00007fe49046179a in VSL_Next (cursor=0x1e6c648) at vsl_cursor.c:470 #5 0x00007fe490467867 in vslq_next (vslq=0x1e6cc90) at vsl_dispatch.c:1228 #6 0x00007fe490467d40 in VSLQ_Dispatch (vslq=0x1e6cc90, func=0x4020f5 , priv=0x0) at vsl_dispatch.c:1308 #7 0x00000000004030bb in VUT_Main () at ../../lib/libvarnishtools/vut.c:335 #8 0x000000000040207b in main (argc=11, argv=0x7fff2be25728) at varnishlog.c:161 }}} vsl_cursor.c at line 130 appears to be unchanged in the current trunk.
I have gdb prints of the contents of cursor->priv_data at the offending frames, all of which look something like this: {{{ (gdb) p *c $1 = {magic = 1295582118, cursor = {rec = {ptr = 0x0, priv = 0}, priv_tbl = 0x7fe49067d240, priv_data = 0x1e6c640}, options = 3, vsm = 0x1e6c200, vf = {chunk = 0x7fe46f6f57e0, b = 0x7fe46f6f5888, e = 0x7fe48f6f5938, priv = 498506351, class = "Log\000\000\000\000", type = "\000\000\000\000\000\000\000", ident = '\000' }, head = 0x7fe46f6f5888, end = 0x7fe48f6f5938, segsize = 16777219, next = {ptr = 0x7fe48f6facec, priv = 2098056490}} }}} I have backtraces and "p *c" from the other apps and coredumps which I can add (but I suspect that they all tell the same story). -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 1 15:36:13 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 01 Jun 2015 15:36:13 -0000 Subject: [Varnish] #1748: varnishncsa: logged spaces in userid Message-ID: <045.5fca919bea31bb4e8bef6de6ab760adc@varnish-cache.org> #1748: varnishncsa: logged spaces in userid ---------------------+------------------------- Reporter: mandark | Type: defect Status: new | Priority: normal Milestone: | Component: varnishncsa Version: 3.0.5 | Severity: normal Keywords: | ---------------------+------------------------- It may be normal, yet I think it's not: if a user agent uses spaces as a basic auth login, like: curl --user '- - - - -:-' 0 Varnish logs: 127.0.0.1 - - - - - - [01/Jun/2015:17:19:02 +0200] "GET http://127.0.0.1/ HTTP/1.1" 404 1675 "-" "curl/7.26.0" What's wrong? Nothing at first, yet I think the NCSA format is a great one because the number of fields is constant: no field can contain a space except the user agent, and the user agent is last, so there is no ambiguity. Because of this, some parsers don't use regular expressions to parse the NCSA log format, but a simpler and faster "split" or "cut" like method.
The behavior of logging spaces in the userid breaks those parsers (and probably also regex-based parsers that don't expect a space here; I have not checked whether such parsers exist). I also think this behavior may be bad in the sense that breaking those parsers could help hide an attack. The impact is limited, though: since basic auth splits on ":" we can't inject a false date (as a date contains ":"), followed by a false verb, a false path, etc., pushing the true log entry behind an injected user-agent. Yet I have no idea how to remove or cleanly encode those spaces without breaking every existing parser/logger.
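The breakage described above is easy to reproduce with a minimal sketch; the split-based parser here is a hypothetical stand-in for the fast "split"-style parsers mentioned, and the second log line is the one from the report:

```python
# A naive NCSA parser that splits on spaces, as some fast log
# parsers do, assuming the userid field never contains a space.
def parse_ncsa(line):
    fields = line.split(" ")
    return {
        "host": fields[0],
        "ident": fields[1],
        "user": fields[2],
        "time": fields[3].lstrip("["),
    }

normal = '127.0.0.1 - frank [01/Jun/2015:17:19:02 +0200] "GET / HTTP/1.1" 200 123 "-" "curl/7.26.0"'
spaced = '127.0.0.1 - - - - - - [01/Jun/2015:17:19:02 +0200] "GET / HTTP/1.1" 404 1675 "-" "curl/7.26.0"'

print(parse_ncsa(normal)["time"])  # 01/Jun/2015:17:19:02
print(parse_ncsa(spaced)["time"])  # "-": the spaces in the userid shifted every later field
```

With the spaced userid, every field after the second is off by five positions, which is exactly what breaks split-based consumers.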
Fixes: #1746 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 8 09:04:16 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Jun 2015 09:04:16 -0000 Subject: [Varnish] #1749: Assert error in HTTP_Encode Message-ID: <045.f0d6f243cf4ee9f71aadc083cfce83da@varnish-cache.org> #1749: Assert error in HTTP_Encode --------------------------+---------------------- Reporter: llavaud | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: major Keywords: Assert error | --------------------------+---------------------- {{{ Jun 6 08:46:19 webcache15 varnishd[278087]: Child (278106) Panic message: Assert error in HTTP_Encode(), cache/cache_http.c line 822: Condition(fm->nhd < fm->shd) not true. thread = (cache-worker) version = varnish-trunk revision cbb2f5d ident = Linux,3.2.0-4-amd64,x86_64,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x434564: pan_ic+0x134 0x42db4d: HTTP_Encode+0x5fd 0x422585: vbf_beresp2obj+0x275 0x4237ac: vbf_fetch_thread+0x5bc 0x436d77: Pool_Work_Thread+0x357 0x449c03: WRK_Thread+0x103 0x435afb: pool_thread+0x2b 0x7f74f1cedb50: /lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7f74f1cedb50] 0x7f74f1a3795d: /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f74f1a3795d] busyobj = 0x7f5b1e913020 { ws = 0x7f5b1e9130e0 { OVERFLOW id = ""o", {s,f,r,e} = {0x7f5b1e914fa0,+6504,(nil),+254072}, }, refcnt = 2 retries = 0 failed = 0 state = 1 is_do_esi is_do_gzip is_uncacheable is_is_gunzip bodystatus = 2 (chunked), filters = ESI_GZIP=0 }, http[bereq] = { ws = 0x7f5b1e9130e0["o] "GET", "myuri", "HTTP/1.1", "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", "User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.81 Safari/537.36", "Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4", "Host: myhost", "Surrogate-Capability: abc=ESI/1.0", "X-Forwarded-For: myip", "Cookie: 
xtvrn=$517037$; lauser_token=9fjJSF7D7hlvZ6kIsGIYfZiCW3XNdNTr7FZ_xhxKtLc; xtan=-; xtant=1; A2DCAPPV2=6082%3A1433411810%3A980120873; A2DCAPPVBD2=6082%3A1433411810%3A980120873", "X-Varnish: 407192649", }, http[beresp] = { ws = 0x7f5b1e9130e0["o] "HTTP/1.1", -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 8 16:21:05 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Jun 2015 16:21:05 -0000 Subject: [Varnish] #1750: Fail more gracefully on -l >= 4GB Message-ID: <043.043bbd8ec6de89449726e0316939ec56@varnish-cache.org> #1750: Fail more gracefully on -l >= 4GB ---------------------------------+---------------------- Reporter: geoff | Type: defect Status: new | Priority: low Milestone: Varnish 4.0 release | Component: varnishd Version: 4.0.3 | Severity: minor Keywords: | ---------------------------------+---------------------- Varnish cannot run with a VSM file size of 4GB or more -- as discussed with Martin on IRC, the VSM API cannot accommodate sizes that large (for example due to unsigned ints as offsets). But the varnishd command succeeds with -l 4g -- the management process starts, and then the child process fails an assert. So the invocation appears to be normal, although Varnish fails almost immediately. On my system, there was a message in /var/log/messages indicating failure of the child due to SIGABRT, but there was no core dump (although I always get them otherwise). And since the management process also stopped, there was no panic message, so there was nothing useful to diagnose the problem. I suggest checking -l against the maximum on invocation, and failing right away to stderr. 
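The invocation-time check suggested above could look roughly like this sketch; the 4GB limit follows from the 32-bit offsets mentioned in the ticket, but the function name and error message are hypothetical, not Varnish's actual code:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical startup validation of the -l (shared memory log)
 * size: since the VSM API uses unsigned 32-bit offsets, a file of
 * 4GB or more cannot work and should be rejected to stderr right
 * away, before the child process starts and fails an assert. */
#define VSM_MAX_SIZE (1ULL << 32)

static int
check_vsm_size(uint64_t bytes)
{
    if (bytes >= VSM_MAX_SIZE) {
        fprintf(stderr, "Error: -l size must be less than 4G\n");
        return (-1);
    }
    return (0);
}
```

Rejecting the value in the management process makes the failure visible on the invoking terminal instead of as a silent child abort.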
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 8 16:38:02 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 08 Jun 2015 16:38:02 -0000 Subject: [Varnish] #1750: Fail more gracefully on -l >= 4GB In-Reply-To: <043.043bbd8ec6de89449726e0316939ec56@varnish-cache.org> References: <043.043bbd8ec6de89449726e0316939ec56@varnish-cache.org> Message-ID: <058.2cfc69dc36bd2e5fe55cc91b4b1e0821@varnish-cache.org> #1750: Fail more gracefully on -l >= 4GB ----------------------+---------------------------------- Reporter: geoff | Owner: Type: defect | Status: new Priority: low | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: minor | Resolution: Keywords: | ----------------------+---------------------------------- Comment (by geoff): Also, the varnishd documentation (both 4.0 and trunk) has this to say about -l: "Scaling suffixes like 'k', 'M' can be used up to (E)xabytes." Obviously there won't be any exabytes. Better to document the 4GB limit. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 9 09:15:26 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 09 Jun 2015 09:15:26 -0000 Subject: [Varnish] #1748: varnishncsa: logged spaces in userid In-Reply-To: <045.5fca919bea31bb4e8bef6de6ab760adc@varnish-cache.org> References: <045.5fca919bea31bb4e8bef6de6ab760adc@varnish-cache.org> Message-ID: <060.bfc5463d381157c5f3e2fd8003f10f35@varnish-cache.org> #1748: varnishncsa: logged spaces in userid -------------------------+-------------------- Reporter: mandark | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishncsa | Version: 3.0.5 Severity: normal | Resolution: Keywords: | -------------------------+-------------------- Comment (by Dridi): Hi, It's the same behaviour as httpd's logging.
Varnishncsa's default format is the combined log format from httpd's documentation (https://httpd.apache.org/docs/current/logs.html#combined). If you expect usernames containing spaces, you should change the pattern to: {{{ %h %l "%u" %t "%r" %s %b "%{Referer}i" "%{User-agent}i" }}} If your log parser breaks on valid NCSA output, you should report that to the parser's authors instead. You said: > What's wrong ? Nothing at first, yet I think the NCSA format is a great one because the number of fields is constant as no field can contain space but the user agent, and, the user agent is last so there is no ambiguity. The %r pattern represents the HTTP request's start-line, which by definition contains one or two spaces (one space if you're using HTTP/0.9). For this reason it needs to be quoted to be seen as a single field, and as you can see, it is. Nor is there a constant number of fields: per the documentation link above, the combined format is one of the defaults in httpd, and there is also common, which is a subset of combined. There's nothing wrong with varnishncsa.
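With the quoted %u format suggested above, the fields can be recovered unambiguously even with spaces in the username; the regex parser below is an illustrative sketch, not part of varnishncsa:

```python
import re

# Matches the modified format suggested above:
# %h %l "%u" %t "%r" %s %b "%{Referer}i" "%{User-agent}i"
LOG_RE = re.compile(
    r'^(?P<host>\S+) (?P<ident>\S+) "(?P<user>[^"]*)" '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"$')

line = ('127.0.0.1 - "- - - - -" [01/Jun/2015:17:19:02 +0200] '
        '"GET http://127.0.0.1/ HTTP/1.1" 404 1675 "-" "curl/7.26.0"')

m = LOG_RE.match(line)
print(m.group("user"))    # the quoting keeps "- - - - -" as one field
print(m.group("status"))  # 404
```

Quoting resolves the ambiguity the same way it already does for the %r request line.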
Best Regards, Dridi -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 9 10:17:34 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 09 Jun 2015 10:17:34 -0000 Subject: [Varnish] #1748: varnishncsa: logged spaces in userid In-Reply-To: <045.5fca919bea31bb4e8bef6de6ab760adc@varnish-cache.org> References: <045.5fca919bea31bb4e8bef6de6ab760adc@varnish-cache.org> Message-ID: <060.be841140e2e3ce51f43b6d4b0cef1d85@varnish-cache.org> #1748: varnishncsa: logged spaces in userid -------------------------+---------------------- Reporter: mandark | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishncsa | Version: 3.0.5 Severity: normal | Resolution: wontfix Keywords: | -------------------------+---------------------- Changes (by Dridi): * status: new => closed * resolution: => wontfix -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 9 11:36:32 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 09 Jun 2015 11:36:32 -0000 Subject: [Varnish] #1679: After upgrade, custom vmods are not moved so vcl does not compile In-Reply-To: <047.cc71b4a105b78ca45075c21afdb459ae@varnish-cache.org> References: <047.cc71b4a105b78ca45075c21afdb459ae@varnish-cache.org> Message-ID: <062.a2907c99e8559e42a76c4758f0dd70f5@varnish-cache.org> #1679: After upgrade, custom vmods are not moved so vcl does not compile -----------------------+----------------------- Reporter: razvanphp | Owner: lkarsten Type: defect | Status: closed Priority: normal | Milestone: Component: vmod | Version: 4.0.3 Severity: normal | Resolution: fixed Keywords: | -----------------------+----------------------- Comment (by lkarsten): Yes, /usr/lib/varnish/vmods/ is how we intend to keep it. 
The generic way would be to get it from pkg-config: {{{ lkarsten at IMMER ~> pkg-config --variable=vmoddir varnishapi /usr/lib/varnish/vmods }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 9 15:24:36 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 09 Jun 2015 15:24:36 -0000 Subject: [Varnish] #1742: varnishlog -B produces a file varnishlog -r cannot read In-Reply-To: <043.98d45c119b409c806f03334de170ddaf@varnish-cache.org> References: <043.98d45c119b409c806f03334de170ddaf@varnish-cache.org> Message-ID: <058.32a632fec15a5c75a8197a46f0aefcf7@varnish-cache.org> #1742: varnishlog -B produces a file varnishlog -r cannot read -------------------------+------------------------------------------------- Reporter: Dridi | Owner: Dridi Boukelmoune Type: defect | Priority: normal | Status: closed Component: varnishlog | Milestone: Severity: normal | Version: 4.0.3 Keywords: vsl vsm | Resolution: fixed -------------------------+------------------------------------------------- Comment (by Dridi Boukelmoune ): In [299bc3d6d0f12994a5722ee5b40279a0bba94c76]: {{{ #!CommitTicketReference repository="" revision="299bc3d6d0f12994a5722ee5b40279a0bba94c76" Document varnishlog -w/-r with more details Fixes #1742 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 9 15:24:36 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 09 Jun 2015 15:24:36 -0000 Subject: [Varnish] #1742: varnishlog -B produces a file varnishlog -r cannot read In-Reply-To: <043.98d45c119b409c806f03334de170ddaf@varnish-cache.org> References: <043.98d45c119b409c806f03334de170ddaf@varnish-cache.org> Message-ID: <058.6db785b7e08c7745e2917efae3fe3574@varnish-cache.org> #1742: varnishlog -B produces a file varnishlog -r cannot read -------------------------+------------------------------------------------- Reporter: Dridi | Owner: Dridi Boukelmoune Type: defect | Priority: normal | 
Status: closed Component: varnishlog | Milestone: Severity: normal | Version: 4.0.3 Keywords: vsl vsm | Resolution: fixed -------------------------+------------------------------------------------- Changes (by Dridi Boukelmoune ): * status: new => closed * owner: => Dridi Boukelmoune * resolution: => fixed Comment: In [45553c62dbd025b1aa522129992a7c7e65b4df45]: {{{ #!CommitTicketReference repository="" revision="45553c62dbd025b1aa522129992a7c7e65b4df45" Document varnishlog -w/-r with more details Fixes #1742 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 15 07:46:10 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Jun 2015 07:46:10 -0000 Subject: [Varnish] #1751: varnishapi.pc and varnish.m4 not installed by rpm Message-ID: <053.02eb5f02907f0cbf4040e4a0a6eb4e97@varnish-cache.org> #1751: varnishapi.pc and varnish.m4 not installed by rpm -----------------------------+----------------------- Reporter: askbjoernhansen | Type: defect Status: new | Priority: normal Milestone: | Component: packaging Version: 4.0.3 | Severity: normal Keywords: | -----------------------------+----------------------- According to https://lassekarstensen.wordpress.com/2013/12/19/converting-a-varnish-3-0-vmod-to-4-0/ varnish.m4 and varnishapi.pc should be part of the "standard installation". It doesn't appear to be the case for the RPMs. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 15 08:44:25 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Jun 2015 08:44:25 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.bc6fccf273fe1160511cc6b5ae617834@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true.
----------------------------------+---------------------------------- Reporter: zaterio@? | Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by lkarsten): Still crashing: {{{ Last panic at: Sun, 14 Jun 2015 04:56:04 GMT Assert error in tcp_handle(), cache/cache_backend_tcp.c line 96: Condition((vbc->backend) != NULL) not true. thread = (cache-epoll) version = varnish-trunk revision 0fd62ec ident = Linux,3.13.0-54-generic,x86_64,-junix,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x43ab27: pan_backtrace+0x19 0x43af0f: pan_ic+0x299 0x414914: tcp_handle+0x1ae 0x47a559: Wait_Call+0x1db 0x47b25b: vwe_thread+0x5a4 0x7f1e94674182: libpthread.so.0(+0x8182) [0x7f1e94674182] 0x7f1e943a147d: libc.so.6(clone+0x6d) [0x7f1e943a147d] }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 15 11:03:22 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 15 Jun 2015 11:03:22 -0000 Subject: [Varnish] #1751: varnishapi.pc and varnish.m4 not installed by rpm In-Reply-To: <053.02eb5f02907f0cbf4040e4a0a6eb4e97@varnish-cache.org> References: <053.02eb5f02907f0cbf4040e4a0a6eb4e97@varnish-cache.org> Message-ID: <068.0dcf31035bd672b76d1269a44c38a403@varnish-cache.org> #1751: varnishapi.pc and varnish.m4 not installed by rpm -----------------------------+------------------------- Reporter: askbjoernhansen | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: packaging | Version: 4.0.3 Severity: normal | Resolution: worksforme Keywords: | -----------------------------+------------------------- Changes (by lkarsten): * status: new => closed * resolution: => worksforme Comment: On Redhat systems the pkg-config files are in varnish-libs-devel. 
{{{ [root at el6 ~]# rpm -ql varnish-libs-devel | egrep '(m4|.pc)' /usr/lib64/pkgconfig/varnishapi.pc /usr/share/aclocal/varnish.m4 }}} {{{ [root at el6 ~]# rpm -qi varnish-libs-devel Name : varnish-libs-devel Relocations: (not relocatable) Version : 4.0.3 Vendor: (none) Release : 1.el6 Build Date: Wed Feb 18 15:20:36 2015 Install Date: Mon Jun 8 13:57:18 2015 Build Host: nancy.varnish-software.com Group : System Environment/Libraries Source RPM: varnish-4.0.3-1.el6.src.rpm Size : 324567 License: BSD Signature : RSA/SHA1, Wed Feb 18 16:03:42 2015, Key ID 60e7c096c4deffeb URL : https://www.varnish-cache.org/ Summary : Development files for varnish-libs Description : Development files for varnish-libs Varnish Cache is a high-performance HTTP accelerator -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 17 07:30:47 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 17 Jun 2015 07:30:47 -0000 Subject: [Varnish] #1752: Varnish redirect Message-ID: <044.17cd063e7fa58d42f006633dda539c28@varnish-cache.org> #1752: Varnish redirect -----------------------------------+--------------------------- Reporter: cdie2k | Type: documentation Status: new | Priority: normal Milestone: Varnish 2.1 release | Component: build Version: 2.1.4 | Severity: normal Keywords: redirect php no cache | -----------------------------------+--------------------------- Hello, I am using varnishd (varnish-2.1.4 SVN 5447M) and am trying to work out how to: 1) have Varnish send all PHP traffic directly to the backends (no cached content); 2) have all non-PHP content follow the normal Varnish behavior (served from cache, and fetched from the backends on a miss). I was not able to write a config file for this, and I could not find a way to do it on the internet. Is it possible with this version?
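[Editor's note: a minimal VCL sketch of what this ticket asks for, written in modern (4.0) syntax rather than the reporter's 2.1.x; the backend address and the `.php` URL pattern are illustrative assumptions, not taken from the ticket.]

```vcl
vcl 4.0;

backend default {
    .host = "127.0.0.1";   # hypothetical backend address
    .port = "8080";
}

sub vcl_recv {
    # Send all PHP traffic straight to the backend, uncached.
    if (req.url ~ "\.php(\?.*)?$") {
        return (pass);
    }
    # Everything else falls through to the built-in VCL and is
    # looked up in / inserted into the cache as normal.
}
```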
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 17 12:29:56 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 17 Jun 2015 12:29:56 -0000 Subject: [Varnish] #1752: Varnish redirect In-Reply-To: <044.17cd063e7fa58d42f006633dda539c28@varnish-cache.org> References: <044.17cd063e7fa58d42f006633dda539c28@varnish-cache.org> Message-ID: <059.a3544544af9dfcde5cddd842e238bc6e@varnish-cache.org> #1752: Varnish redirect -----------------------------------+---------------------------------- Reporter: cdie2k | Owner: Type: documentation | Status: closed Priority: normal | Milestone: Varnish 2.1 release Component: build | Version: 2.1.4 Severity: normal | Resolution: invalid Keywords: redirect php no cache | -----------------------------------+---------------------------------- Changes (by lkarsten): * status: new => closed * resolution: => invalid Comment: Please use the varnish-misc@ email list for getting help. You should also know that Varnish 2.1.4 is fairly old and shouldn't be used any more. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jun 19 11:22:54 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 19 Jun 2015 11:22:54 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.1496d6dde7997c4c056e25acabb33c2b@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? 
| Owner: phk Type: defect | Status: needinfo Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by lkarsten): Got a panic with -Wpoll as well: {{{ Last panic at: Thu, 18 Jun 2015 07:19:15 GMT Assert error in VBT_Close(), cache/cache_backend_tcp.c line 321: Condition(vbc->state == VBC_STATE_USED) not true. thread = (cache-worker) version = varnish-trunk revision 25ae15e ident = Linux,3.13.0-55-generic,x86_64,-junix,-smalloc,-smalloc,-hcritbit,poll Backtrace: 0x43ab27: pan_backtrace+0x19 0x43af0f: pan_ic+0x299 0x4157bc: VBT_Close+0x109 0x410541: vbe_dir_finish+0x2a3 0x410a93: vbe_dir_gethdrs+0x399 0x41bcc1: VDI_GetHdr+0x140 0x4260db: vbf_stp_startfetch+0x333 0x428a01: vbf_fetch_thread+0x3af 0x455657: Pool_Work_Thread+0x51d 0x454287: WRK_Thread+0x2a9 busyobj = 0x7f1c5589c020 { ws = 0x7f1c5589c0e0 { id = "bo", {s,f,r,e} = {0x7f1c5589df98,+224,(nil),+57440}, }, refcnt = 2 retries = 0 failed = 0 state = 1 flags = { do_stream do_pass uncacheable } bodystatus = 0 (none), }, http[bereq] = { ws = 0x7f1c5589c0e0[bo] "POST", "/pdf/IPV6WHY.PDF", "HTTP/1.1", "User-Agent: Mozilla/5.0", "Accept: */*", "Content-Type: application/x-www-form-urlencoded", "Host: hyse.org", "Content-Length: 395", "X-Forwarded-For: 61.164.253.70", "X-Varnish: 65709", }, http[beresp] = { ws = 0x7f1c5589c0e0[bo] }, objcore (FETCH) = 0x7f1c540f2d80 { refcnt = 2 flags = 0x102 objhead = 0x7f1c668ef500 stevedore = (nil) } } }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 24 09:59:15 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 24 Jun 2015 09:59:15 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. 
In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.84f39831f8c29a9d95ebfeb89a5aaafb@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Changes (by Poul-Henning Kamp ): * status: needinfo => closed * resolution: => fixed Comment: In [bd702410005f0963dab2ed93c1407ddaa577db12]: {{{ #!CommitTicketReference repository="" revision="bd702410005f0963dab2ed93c1407ddaa577db12" Make sure we don't finish off a backend connection while the waiter is still engaged with it. Simplify the "do a single retry" logic slightly while here. Add more asserts. This probably fixes #1675. Initial diagnosis by: martin }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 24 17:06:07 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 24 Jun 2015 17:06:07 -0000 Subject: [Varnish] #1753: Bakend flapping and High load Message-ID: <055.c248038e6b81e813234f043d436513ed@varnish-cache.org> #1753: Bakend flapping and High load ---------------------------------+-------------------- Reporter: zaterio@? 
| Type: defect Status: new | Priority: normal Milestone: Varnish 4.0 release | Component: build Version: trunk | Severity: normal Keywords: | ---------------------------------+-------------------- Between 30 minutes and 2 hours of uptime, varnish indicates that one of my backends is not healthy: {{{ Backend name Admin Probe boot.b1 probe Healthy 5/5 boot.vod probe Sick 2/5 boot.b2 probe Healthy 5/5 boot.b3 probe Healthy 5/5 }}} Every 5 seconds this backend goes sick in varnishadm; from the first time it is detected unhealthy, CPU load rises and varnish no longer responds to curl. Strangely, the backend always responds to curl requests with 200 (curl in a while loop). The vod backend is nginx, and the probe URL (/test) is its status page. {{{ backend vod { .host = "127.0.0.1"; .port = "1000"; .probe = { .url = "/test"; .interval = 5s; .timeout = 5s; .window = 5; .threshold = 3; } } }}} {{{ curl 127.0.0.1:1000/test Active connections: 67 server accepts handled requests 3142415 3142415 3142398 Reading: 0 Writing: 50 Waiting: 17 }}} {{{ varnish> panic.show 300 Child has not panicked or panic has been cleared }}} {{{ varnish> status 200 Child in state running }}} {{{ varnishd -V varnishd (varnish-trunk revision 746b4da) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2015 Varnish Software AS }}} In varnishstat, VBE was flapping; I captured it every 1 second: {{{ root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 15536286954897674387 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18444492273887459327 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 17845854207609320379 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18446744071427850111 .
Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 12354031126361541413 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18302623353715294207 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 13017343740530059032 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18446708819527426047 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 13359589301309196377 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18446744073709543423 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 3592070478478106612 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18446743935857131519 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 1717656466256125366 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18446603198782177279 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 6961815821132309689 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18446741874686296063 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . 
Happy health probes VBE.boot.vod.happy 15945693309856776194 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 0 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 14640273067908615597 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18302488130965471231 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 7320422237346507443 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18446744073709518847 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 3193800528665906738 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18443858954913054719 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 15152390158794437337 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18374646605195247615 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 10284485890731071394 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18442081869529743231 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 7147808378940885556 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . 
Happy health probes VBE.boot.b3.happy 17870283321137692671 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 16139634276151714427 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18446744065111220223 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 17234149873749130718 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18432107323379407871 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 3761480790024349081 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18302628885096791999 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 14911115919654808265 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18148376200282602879 . Happy health probes root at varnish01:/etc/varnish# varnishstat -1|grep VBE|grep happy VBE.boot.b1.happy 18446744073709551615 . Happy health probes VBE.boot.vod.happy 4134780051288534766 . Happy health probes VBE.boot.b2.happy 18446744073709551615 . Happy health probes VBE.boot.b3.happy 18410703182061633535 . Happy health probes }}} before this machine, with this same configuration, presented #1675, with the same frecuency. 
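[Editor's note: the `happy` gauges above are easier to read as bitmaps. Per the varnish-counters documentation, each VBE.*.happy value records the recent health-probe results one bit per probe, so 18446744073709551615 (2^64 - 1) means every recorded probe succeeded, while boot.vod's constantly changing values have many zero bits. A small sketch decoding such a value; the bit order (newest probe in the low bit) is an assumption here, not stated in the ticket.]

```python
# Decode a VBE.<name>.happy gauge from varnishstat: a bitmap of recent
# health-probe results, one bit per probe (1 = probe succeeded).
def decode_happy(value, window=64):
    """Return (failed_probes, bit_string) for one happy value."""
    bits = format(value & (2 ** window - 1), "0{}b".format(window))
    return bits.count("0"), bits

# b1/b2's constant 18446744073709551615 decodes to zero failed probes:
failed, _ = decode_happy(18446744073709551615)
print(failed)  # prints 0

# boot.vod's first sample above shows how many of the 64 probes failed:
failed, bits = decode_happy(15536286954897674387)
print(failed, bits[:8])
```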
I upgrade to 746b4da3cafc6146fbcea65dbc741ccf7353d66e in order to fix #1675 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 25 12:32:10 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 25 Jun 2015 12:32:10 -0000 Subject: [Varnish] #1747: VSL API: assertion failure "c->next.ptr < c->end" in vsl_cursor.c/vslc_vsm_next() in load tests In-Reply-To: <043.682f47782860e7013b59f5f4b02bcc9a@varnish-cache.org> References: <043.682f47782860e7013b59f5f4b02bcc9a@varnish-cache.org> Message-ID: <058.0227a32559b98f5e5141297f7e248b42@varnish-cache.org> #1747: VSL API: assertion failure "c->next.ptr < c->end" in vsl_cursor.c/vslc_vsm_next() in load tests ------------------------+---------------------------------- Reporter: geoff | Owner: slink Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishlog | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ------------------------+---------------------------------- Changes (by slink): * owner: => slink -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jun 26 12:38:39 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 26 Jun 2015 12:38:39 -0000 Subject: [Varnish] #1747: VSL API: assertion failure "c->next.ptr < c->end" in vsl_cursor.c/vslc_vsm_next() in load tests In-Reply-To: <043.682f47782860e7013b59f5f4b02bcc9a@varnish-cache.org> References: <043.682f47782860e7013b59f5f4b02bcc9a@varnish-cache.org> Message-ID: <058.8688d9612552abe6fb247f5cb9d08f4f@varnish-cache.org> #1747: VSL API: assertion failure "c->next.ptr < c->end" in vsl_cursor.c/vslc_vsm_next() in load tests ------------------------+---------------------------------- Reporter: geoff | Owner: slink Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: varnishlog | Version: 4.0.3 Severity: normal | Resolution: fixed Keywords: | 
------------------------+---------------------------------- Changes (by Nils Goroll ): * status: new => closed * resolution: => fixed Comment: In [35cf78131c2aaaf8227c31ba92eb577c5bea9855]: {{{ #!CommitTicketReference repository="" revision="35cf78131c2aaaf8227c31ba92eb577c5bea9855" Simplify vsl segment management, fixing spurious vsl overruns vsl sequence and segment updates didn't happen atomically, so vslc_vsm_check could report spurious overruns. Replace sequence and segment index with a counter (segment_n), which always increments (with overflow after UINT_MAX). The actual segment index will be segment_n % VSL_SEGMENTS. Overrun detection by calculation of the difference between two segment numbers becomes simple and safe because we only ever access/update a single integer. Update the shared memory log head. (struct VSLC_ptr).priv is now the the equivalent of segment_n from the reader side. It gets initialized once and is maintained independently. Patch prepared in collaboration with Martin Blix Grydeland Fixes: #1747 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 29 06:42:58 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Jun 2015 06:42:58 -0000 Subject: [Varnish] #1753: Bakend flapping and High load In-Reply-To: <055.c248038e6b81e813234f043d436513ed@varnish-cache.org> References: <055.c248038e6b81e813234f043d436513ed@varnish-cache.org> Message-ID: <070.df76028b25583a37cae24f142776b07c@varnish-cache.org> #1753: Bakend flapping and High load -----------------------+---------------------------------- Reporter: zaterio@? | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: build | Version: trunk Severity: normal | Resolution: Keywords: | -----------------------+---------------------------------- Comment (by phk): First: We did change the probing code recently, so I'm not ruling out that there may be bugs. 
Try doing a "backend.list -p" CLI command, that should show you which exact part of the probe is missing. From the above varnishstat I can see that b1 and b2 probe flawlessly, b3 has some errors and vod has many errors. Also, varnishlog would likely contain clues. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 29 08:30:14 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Jun 2015 08:30:14 -0000 Subject: [Varnish] #1749: Assert error in HTTP_Encode In-Reply-To: <045.f0d6f243cf4ee9f71aadc083cfce83da@varnish-cache.org> References: <045.f0d6f243cf4ee9f71aadc083cfce83da@varnish-cache.org> Message-ID: <060.fb50640196c4737f9d8ab074db17779f@varnish-cache.org> #1749: Assert error in HTTP_Encode --------------------------+---------------------------------------- Reporter: llavaud | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: fixed Keywords: Assert error | --------------------------+---------------------------------------- Changes (by Poul-Henning Kamp ): * owner: => Poul-Henning Kamp * status: new => closed * resolution: => fixed Comment: In [5177b9950ca3ba6c44178cb51e8e2637e7d6d6c7]: {{{ #!CommitTicketReference repository="" revision="5177b9950ca3ba6c44178cb51e8e2637e7d6d6c7" Fix an assert which was (safe) off by one. Strengthen another assert for increased safety. 
Fixes: #1749 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 29 09:09:13 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Jun 2015 09:09:13 -0000 Subject: [Varnish] #1733: Unstable test-cases In-Reply-To: <041.e3e9abd44efc9f6e731fb370c77db66e@varnish-cache.org> References: <041.e3e9abd44efc9f6e731fb370c77db66e@varnish-cache.org> Message-ID: <056.3fc74f1817a38b931b411dc301c245b8@varnish-cache.org> #1733: Unstable test-cases -------------------------+--------------------- Reporter: phk | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishtest | Version: trunk Severity: normal | Resolution: Keywords: | -------------------------+--------------------- Comment (by phk): Rerunning this test, it's now u00000 which is by far the most unstable test -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 29 10:24:50 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Jun 2015 10:24:50 -0000 Subject: [Varnish] #1753: Bakend flapping and High load In-Reply-To: <055.c248038e6b81e813234f043d436513ed@varnish-cache.org> References: <055.c248038e6b81e813234f043d436513ed@varnish-cache.org> Message-ID: <070.2c01ef81dc4bedaddad080e3fab53ded@varnish-cache.org> #1753: Bakend flapping and High load -----------------------+---------------------------------- Reporter: zaterio@? 
| Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: build | Version: trunk Severity: normal | Resolution: Keywords: | -----------------------+---------------------------------- Comment (by phk): I fixed a race in the backend probe code, please try trunk at 6179e316776ba8aae31a1b5dfe7ae4d226e0e15e or later -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 29 11:06:30 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Jun 2015 11:06:30 -0000 Subject: [Varnish] #1750: Fail more gracefully on -l >= 4GB In-Reply-To: <043.043bbd8ec6de89449726e0316939ec56@varnish-cache.org> References: <043.043bbd8ec6de89449726e0316939ec56@varnish-cache.org> Message-ID: <058.9f2ff793f6b3022397ed02317f8c97aa@varnish-cache.org> #1750: Fail more gracefully on -l >= 4GB ----------------------+---------------------------------- Reporter: geoff | Owner: martin Type: defect | Status: new Priority: low | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.3 Severity: minor | Resolution: Keywords: | ----------------------+---------------------------------- Changes (by martin): * owner: => martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 29 11:29:28 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Jun 2015 11:29:28 -0000 Subject: [Varnish] #1701: Bans keep increasing, despite ban lurker removing them In-Reply-To: <044.5cae992690f0e4f5183b9aae2c094e59@varnish-cache.org> References: <044.5cae992690f0e4f5183b9aae2c094e59@varnish-cache.org> Message-ID: <059.6b10579bd943f6f3cd65f12bb6650210@varnish-cache.org> #1701: Bans keep increasing, despite ban lurker removing them --------------------+---------------------- Reporter: rbartl | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.6 Severity: normal | Resolution: wontfix Keywords: | 
--------------------+---------------------- Changes (by lkarsten): * status: new => closed * resolution: => wontfix Comment: 3.0 is not supported any longer; see if you can reproduce this with 4.0.3. (A possible soft workaround: increasing cli_timeout should help with varnishd being killed.) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 29 12:24:34 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Jun 2015 12:24:34 -0000 Subject: [Varnish] #1754: n_objecthead increase just before oomkill Message-ID: <046.913b54b05f4537fbf8fe340cac2f9093@varnish-cache.org> #1754: n_objecthead increase just before oomkill ----------------------+------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+------------------- In 4.0.3 we've seen on some occasions that systems running with long TTLs + LRU to clean up, with storage tuned to the size of physical memory, have been subject to the out of memory handling (oomkiller) in Linux. A rise in n_objecthead, not correlated with n_object or n_objectcore, was seen in the hours before on some varnishstat plots. The assumption is that something starts to leak that is not accounted for as normal storage, and the system (tuned just to the edge of storage) goes over the safe limit after a while. We don't have the full picture here; this ticket is filed to have somewhere to keep it. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 29 15:00:16 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 29 Jun 2015 15:00:16 -0000 Subject: [Varnish] #1740: Condition((ObjGetSpace(wrk, req->objcore, &sz, &ptr)) != 0) not true.
In-Reply-To: <055.b30ac21bd1012ab374c23fc4ba7edb26@varnish-cache.org> References: <055.b30ac21bd1012ab374c23fc4ba7edb26@varnish-cache.org> Message-ID: <070.f02bab433f9b87ac7319fd1f78183824@varnish-cache.org> #1740: Condition((ObjGetSpace(wrk, req->objcore, &sz, &ptr)) != 0) not true. -----------------------+----------------------------------------------- Reporter: zaterio@? | Owner: Martin Blix Grydeland Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | -----------------------+----------------------------------------------- Changes (by Martin Blix Grydeland ): * owner: => Martin Blix Grydeland * status: new => closed * resolution: => fixed Comment: In [7fdf6d09fbf30c53f51e9da085fb1a3a8d1d5cc5]: {{{ #!CommitTicketReference repository="" revision="7fdf6d09fbf30c53f51e9da085fb1a3a8d1d5cc5" Fix out of storage condition when failing to allocate transient synth Do not assert if the transient stevedore fails to provide storage for the synth body. Log an SLT_Error record when failing on out of storage. Document that synth objects are created on transient storage. Fix an objcore leak that would come if we failed to create the object. 
Fixes: #1740 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 30 14:59:24 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 30 Jun 2015 14:59:24 -0000 Subject: [Varnish] #1755: The (struct backend).n_conn counter is never decremented Message-ID: <043.496bde3b5afd5f126177e7dbadac9b7c@varnish-cache.org> #1755: The (struct backend).n_conn counter is never decremented ----------------------+------------------- Reporter: Dridi | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: blocker | Keywords: ----------------------+------------------- You'll eventually reach the maximum number of connections for a given backend and never be able to reach this backend again. More on the mailing list: - https://www.varnish-cache.org/lists/pipermail/varnish-dev/2015-June/008379.html - https://www.varnish-cache.org/lists/pipermail/varnish-dev/2015-June/008380.html Test case: {{{ varnishtest "The (struct backend).n_conn counter is never decremented" server s1 { rxreq txresp rxreq txresp } -start varnish v1 -vcl { backend s1 { .host = "${s1_addr}"; .port = "${s1_port}"; .max_connections = 1; } } -start client c1 { txreq -url "/foo" rxresp expect resp.status == 200 txreq -url "/bar" rxresp expect resp.status == 200 } -run varnish v1 -expect backend_busy == 0 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 30 17:38:07 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 30 Jun 2015 17:38:07 -0000 Subject: [Varnish] #1756: /etc/init.d/varnish reload always returns 0 on debian/ubuntu Message-ID: <044.b6984289b41a3c6207dadd00b7d1e379@varnish-cache.org> #1756: /etc/init.d/varnish reload always returns 0 on debian/ubuntu --------------------------------------------+-------------------- Reporter: fleish | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: unknown |
Severity: normal Keywords: init init.d reload exit return | --------------------------------------------+-------------------- This is similar to the issue reported in ticket #981, but continues on despite the fix. Despite the reload command "/usr/share/varnish/reload-vcl -q" returning 1, the closing "exit 0" in the script means that the exit status of "/etc/init.d/varnish reload" is always 0, even when the reload fails (due to a syntax error, for example). As a workaround, I've removed the closing "exit 0" from the script, but there might be a better solution. I am running varnishd (varnish-3.0.7 revision f544cd8) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 30 23:00:22 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 30 Jun 2015 23:00:22 -0000 Subject: [Varnish] #1753: Bakend flapping and High load In-Reply-To: <055.c248038e6b81e813234f043d436513ed@varnish-cache.org> References: <055.c248038e6b81e813234f043d436513ed@varnish-cache.org> Message-ID: <070.a9589093ad161b6bbaa3cabf02af88f3@varnish-cache.org> #1753: Bakend flapping and High load -----------------------+---------------------------------- Reporter: zaterio@? | Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: build | Version: trunk Severity: normal | Resolution: Keywords: | -----------------------+---------------------------------- Comment (by zaterio@?): {{{ varnishd -V varnishd (varnish-trunk revision 7585d89) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2015 Varnish Software AS }}} backend flapping: no high load: no varnish stops answering curl after 30 minutes to 2 hours of uptime. No firewall rules. logging method: {{{ varnishlog -g raw > file }}} VSL tag "Backend_health" always shows "Still healthy" for all probes, in all cases. Reverting to trunk e05ac94a893090707ea95bf16578b13388917165 does not show this problem.
In vcl I do not have any status 909 (backends only responds with standars status codes), however: {{{ cat file|grep -w 909|grep Debug 0 Debug - "909 HTTP/1.1" 0 Debug - "909 HTTP/1.1" 0 Debug - "909 HTTP/1.1" 0 Debug - "909 HTTP/1.1" 0 Debug - "909 HTTP/1.1" 0 Debug - "909 HTTP/1.1" 0 Debug - "909 HTTP/1.1" 0 Debug - "909 HTTP/1.1" 0 Debug - "909 HTTP/1.1" 0 Debug - "909 HTTP/1.1" 0 Debug - "909 HTTP/1.1" 0 Debug - "909 HTTP/1.1" 0 Debug - "909 HTTP/1.1" 0 Debug - "909 HTTP/1.1" 0 Debug - "909 HTTP/1.1" }}} initially, SeessClose is primary by RX_TIMEOUT, when the problem begins to manifest, SessClose changes to RESP_CLOSE, and then only REM_CLOSE. {{{ cat file|grep SessClose 28181773 SessClose c RX_TIMEOUT 8.288 29230644 SessClose c RX_TIMEOUT 10.043 30017083 SessClose c RX_TIMEOUT 5.969 27427924 SessClose c RX_TIMEOUT 11.128 30115316 SessClose c RX_TIMEOUT 5.018 27034843 SessClose c REM_CLOSE 0.058 29361159 SessClose c RX_TIMEOUT 6.682 26018627 SessClose c RX_TIMEOUT 6.862 28051101 SessClose c RX_TIMEOUT 5.007 27886957 SessClose c RX_TIMEOUT 10.863 29689242 SessClose c RX_TIMEOUT 5.008 28051103 SessClose c RX_TIMEOUT 5.023 30213501 SessClose c RX_TIMEOUT 5.009 24314349 SessClose c RX_TIMEOUT 5.840 27034839 SessClose c RX_TIMEOUT 5.457 24839580 SessClose c RX_TIMEOUT 7.157 27721798 SessClose c RX_TIMEOUT 5.009 29426488 SessClose c RX_TIMEOUT 9.512 27460946 SessClose c TX_PIPE 79.635 28968293 SessClose c RX_TIMEOUT 6.148 29361165 SessClose c RX_TIMEOUT 6.041 27034847 SessClose c RESP_CLOSE 0.527 29689240 SessClose c RX_TIMEOUT 6.073 29361153 SessClose c RX_TIMEOUT 8.122 29329091 SessClose c REM_CLOSE 1.721 29591055 SessClose c RX_TIMEOUT 5.974 28542401 SessClose c REM_CLOSE 15.868 28116504 SessClose c RX_TIMEOUT 9.612 30017085 SessClose c RX_TIMEOUT 7.173 29754997 SessClose c RX_TIMEOUT 12.243 27034835 SessClose c RX_TIMEOUT 7.454 29787524 SessClose c RX_TIMEOUT 8.181 30017087 SessClose c RX_TIMEOUT 5.001 28509327 SessClose c RX_TIMEOUT 7.706 29230634 SessClose c 
RX_TIMEOUT 23.206 24839582 SessClose c RX_TIMEOUT 6.868 27886963 SessClose c RX_TIMEOUT 5.024 29361163 SessClose c RX_TIMEOUT 7.618 27526567 SessClose c RX_TIMEOUT 5.010 28181776 SessClose c RX_TIMEOUT 5.026 28509325 SessClose c RX_TIMEOUT 8.446 30311461 SessClose c RX_TIMEOUT 9.973 30343214 SessClose c RX_TIMEOUT 6.021 27721803 SessClose c RX_TIMEOUT 6.663 27721795 SessClose c RX_TIMEOUT 7.225 28574964 SessClose c RX_TIMEOUT 7.488 29951370 SessClose c RX_TIMEOUT 8.841 30311466 SessClose c RX_TIMEOUT 7.357 28116507 SessClose c RX_TIMEOUT 6.894 30115325 SessClose c REM_CLOSE 0.057 17368373 SessClose c RX_TIMEOUT 9.951 29689244 SessClose c RX_TIMEOUT 7.652 24314350 SessClose c RX_TIMEOUT 8.642 29525555 SessClose c RX_TIMEOUT 5.764 24839585 SessClose c RX_TIMEOUT 6.207 29525557 SessClose c RX_TIMEOUT 5.008 28051107 SessClose c RX_TIMEOUT 8.053 30148308 SessClose c RX_TIMEOUT 5.618 30246079 SessClose c RX_TIMEOUT 5.009 28935006 SessClose c RX_TIMEOUT 271.592 29787527 SessClose c RX_TIMEOUT 7.326 27721808 SessClose c RX_TIMEOUT 5.879 27721800 SessClose c RX_TIMEOUT 8.358 29034079 SessClose c RESP_CLOSE 5.992 28968297 SessClose c RX_TIMEOUT 8.133 27526546 SessClose c RX_TIMEOUT 23.798 29852821 SessClose c RX_TIMEOUT 7.227 27526565 SessClose c RX_TIMEOUT 7.299 29525560 SessClose c RX_TIMEOUT 5.009 30115328 SessClose c REM_CLOSE 1.334 29164565 SessClose c RX_TIMEOUT 8.844 25035834 SessClose c RX_TIMEOUT 5.023 30279078 SessClose c RX_TIMEOUT 8.482 29394129 SessClose c RX_TIMEOUT 5.009 30343210 SessClose c RX_TIMEOUT 11.158 25986792 SessClose c RX_TIMEOUT 11.840 28116509 SessClose c RX_TIMEOUT 7.006 27755577 SessClose c RX_TIMEOUT 5.033 28639985 SessClose c RX_TIMEOUT 5.034 30049545 SessClose c RX_TIMEOUT 7.580 27526570 SessClose c RX_TIMEOUT 5.973 30343217 SessClose c RX_TIMEOUT 7.161 30311468 SessClose c RX_TIMEOUT 7.866 28542411 SessClose c RX_TIMEOUT 11.771 28542414 SessClose c RX_TIMEOUT 5.009 29787531 SessClose c RX_TIMEOUT 6.515 30049547 SessClose c RX_TIMEOUT 5.001 
29885736 SessClose c RX_TIMEOUT 6.875 30049548 SessClose c RX_TIMEOUT 5.007 27721806 SessClose c RX_TIMEOUT 8.294 28181779 SessClose c RX_TIMEOUT 7.268 25035837 SessClose c RX_TIMEOUT 5.888 17368376 SessClose c RX_TIMEOUT 7.159 29787529 SessClose c RX_TIMEOUT 6.923 29591058 SessClose c RX_TIMEOUT 7.026 30148311 SessClose c RX_TIMEOUT 7.122 28051109 SessClose c RX_TIMEOUT 7.540 17368382 SessClose c RX_TIMEOUT 5.000 30311471 SessClose c RX_TIMEOUT 5.681 28803460 SessClose c RX_TIMEOUT 5.000 28574967 SessClose c RX_TIMEOUT 7.845 27755574 SessClose c RX_TIMEOUT 8.741 17368385 SessClose c RX_TIMEOUT 5.001 29591060 SessClose c RX_TIMEOUT 5.534 27034841 SessClose c RX_TIMEOUT 6.681 29755000 SessClose c RX_TIMEOUT 5.954 29689246 SessClose c RX_TIMEOUT 8.551 28246672 SessClose c RX_TIMEOUT 14.269 29394131 SessClose c RX_TIMEOUT 7.027 27721810 SessClose c RX_TIMEOUT 8.100 29787532 SessClose c RX_TIMEOUT 7.368 29230655 SessClose c RX_TIMEOUT 8.801 27526575 SessClose c RX_TIMEOUT 5.014 26443810 SessClose c RESP_CLOSE 0.006 29197598 SessClose c REM_CLOSE 120.276 27328552 SessClose c RESP_CLOSE 0.001 26018636 SessClose c RESP_CLOSE 7.402 26148873 SessClose c RESP_CLOSE 0.004 30474243 SessClose c RESP_CLOSE 0.004 26148875 SessClose c RESP_CLOSE 0.004 29230508 SessClose c REM_CLOSE 183.285 24248396 SessClose c RESP_CLOSE 0.005 26836999 SessClose c RESP_CLOSE 5.606 25460789 SessClose c RESP_CLOSE 5.261 17760298 SessClose c RESP_CLOSE 6.270 19071084 SessClose c RESP_CLOSE 5.795 28869524 SessClose c REM_CLOSE 120.014 28968261 SessClose c REM_CLOSE 122.986 25100292 SessClose c RESP_CLOSE 5.864 27853914 SessClose c REM_CLOSE 180.076 23363627 SessClose c RESP_CLOSE 0.001 30572545 SessClose c RESP_CLOSE 7.637 17956935 SessClose c REM_CLOSE 0.053 22478889 SessClose c RESP_CLOSE 4.894 22511625 SessClose c RESP_CLOSE 6.387 21856270 SessClose c REM_CLOSE 2.135 20217896 SessClose c RESP_CLOSE 5.854 21692422 SessClose c REM_CLOSE 4.879 21299207 SessClose c RESP_CLOSE 6.373 19398706 SessClose c 
RESP_CLOSE 5.090 19791900 SessClose c REM_CLOSE 1.515 20086797 SessClose c RESP_CLOSE 5.741 16810059 SessClose c REM_CLOSE 0.036 18415647 SessClose c RESP_CLOSE 5.386 30932993 SessClose c RESP_CLOSE 6.758 17268746 SessClose c RESP_CLOSE 6.856 30965761 SessClose c RESP_CLOSE 5.294 16580613 SessClose c RESP_CLOSE 5.987 15007779 SessClose c RESP_CLOSE 6.436 31064067 SessClose c RESP_CLOSE 6.832 31096835 SessClose c RESP_CLOSE 6.364 13271061 SessClose c RESP_CLOSE 6.068 13172746 SessClose c REM_CLOSE 0.025 11239436 SessClose c REM_CLOSE 0.024 12910609 SessClose c RESP_CLOSE 6.656 12484614 SessClose c RESP_CLOSE 5.623 11534350 SessClose c RESP_CLOSE 7.308 31358979 SessClose c RESP_CLOSE 5.770 9175068 SessClose c RESP_CLOSE 5.707 7634962 SessClose c RESP_CLOSE 6.553 7634966 SessClose c RESP_CLOSE 6.927 8945682 SessClose c RESP_CLOSE 6.848 7340070 SessClose c RESP_CLOSE 5.315 6324241 SessClose c RESP_CLOSE 4.749 6225936 SessClose c RESP_CLOSE 5.050 4849677 SessClose c REM_CLOSE 1.917 3506225 SessClose c RESP_CLOSE 6.466 4522007 SessClose c RESP_CLOSE 6.754 3964957 SessClose c RESP_CLOSE 5.354 2031678 SessClose c REM_CLOSE 0.025 4128772 SessClose c REM_CLOSE 0.026 3637251 SessClose c RESP_CLOSE 5.736 3244046 SessClose c RESP_CLOSE 6.622 2326567 SessClose c RESP_CLOSE 5.311 31850497 SessClose c REM_CLOSE 0.024 2326569 SessClose c REM_CLOSE 0.024 2326571 SessClose c RESP_CLOSE 6.405 2129932 SessClose c REM_CLOSE 1.393 1671195 SessClose c RESP_CLOSE 5.623 1441809 SessClose c RESP_CLOSE 5.513 917535 SessClose c REM_CLOSE 0.652 1015817 SessClose c RESP_CLOSE 5.739 786434 SessClose c REM_CLOSE 0.515 32374785 SessClose c REM_CLOSE 0.000 32768001 SessClose c REM_CLOSE 0.000 32931845 SessClose c REM_CLOSE 0.000 33226753 SessClose c REM_CLOSE 0.000 33226754 SessClose c REM_CLOSE 0.000 33193985 SessClose c REM_CLOSE 1.984 33554433 SessClose c REM_CLOSE 0.000 33685507 SessClose c REM_CLOSE 0.037 34078721 SessClose c REM_CLOSE 0.000 34701313 SessClose c REM_CLOSE 0.000 34766849 
SessClose c REM_CLOSE 0.000 34766850 SessClose c REM_CLOSE 0.000 34766851 SessClose c REM_CLOSE 0.000 35061763 SessClose c REM_CLOSE 0.002 35061769 SessClose c REM_CLOSE 0.000 35323909 SessClose c REM_CLOSE 0.000 35389441 SessClose c REM_CLOSE 0.000 35717123 SessClose c REM_CLOSE 0.000 35749891 SessClose c REM_CLOSE 0.000 35782659 SessClose c REM_CLOSE 0.000 35880961 SessClose c REM_CLOSE 0.000 }}}
Another interesting log:
{{{
cat file.log | grep Debug|grep -v RES|grep -v Broken|grep -v retrying|grep -v "Connection reset by peer"|grep -v 909
27526281 Debug c "Write error, retval = -1, len = 140631, errno = Resource temporarily unavailable"
29197599 Debug c "Write error, retval = -1, len = 700846, errno = Resource temporarily unavailable"
29820301 Debug c "Write error, retval = -1, len = 1109783, errno = Resource temporarily unavailable"
28869525 Debug c "Write error, retval = -1, len = 152421, errno = Resource temporarily unavailable"
28378768 Debug c "Write error, retval = -1, len = 958763, errno = Resource temporarily unavailable"
27853916 Debug c "Write error, retval = -1, len = 549933, errno = Resource temporarily unavailable"
}}}
Another example (after restarting Varnish):
{{{ cat file|grep SessClose 13697065 SessClose c RX_TIMEOUT 10.752 9896076 SessClose c RX_TIMEOUT 21.189 15728645 SessClose c RX_TIMEOUT 5.999 15368203 SessClose c RX_TIMEOUT 5.000 10813550 SessClose c RX_TIMEOUT 5.000 14614548 SessClose c RX_TIMEOUT 27.981 10616975 SessClose c RX_TIMEOUT 6.484 11698293 SessClose c RX_TIMEOUT 6.110 13697071 SessClose c RX_TIMEOUT 9.640 14614569 SessClose c RX_TIMEOUT 7.206 12451907 SessClose c RX_TIMEOUT 6.656 10223758 SessClose c REM_CLOSE 23.312 12451915 SessClose c RX_TIMEOUT 6.239 14811178 SessClose c TX_PIPE 0.045 13828163 SessClose c RX_TIMEOUT 7.665 14221353 SessClose c RX_TIMEOUT 5.010 15826949 SessClose c RX_TIMEOUT 5.019 1638558 SessClose c RX_TIMEOUT 6.976 14745673 SessClose c TX_PIPE 0.037 14155791 SessClose c TX_PIPE 42.360 13828166 SessClose c 
TX_PIPE 5.336 15663111 SessClose c RX_TIMEOUT 5.871 14811170 SessClose c RX_TIMEOUT 6.891 6848788 SessClose c RX_TIMEOUT 8.531 12582985 SessClose c RX_TIMEOUT 8.832 15269916 SessClose c RX_TIMEOUT 6.397 7176360 SessClose c REM_CLOSE 7.211 12451913 SessClose c RX_TIMEOUT 7.059 1638561 SessClose c RX_TIMEOUT 7.368 14745676 SessClose c TX_PIPE 0.286 10813546 SessClose c RX_TIMEOUT 15.983 14450716 SessClose c RX_TIMEOUT 10.746 14811152 SessClose c RX_TIMEOUT 15.734 15761414 SessClose c RX_TIMEOUT 5.007 15138832 SessClose c RX_TIMEOUT 7.806 11403424 SessClose c RX_TIMEOUT 16.144 11403435 SessClose c RX_TIMEOUT 15.877 15368210 SessClose c RX_TIMEOUT 5.000 15859720 SessClose c RX_TIMEOUT 5.000 14516261 SessClose c RX_TIMEOUT 5.017 14319688 SessClose c RX_TIMEOUT 19.693 15564818 SessClose c RX_TIMEOUT 6.239 15859721 SessClose c RX_TIMEOUT 5.003 14221355 SessClose c RX_TIMEOUT 5.004 15302678 SessClose c RX_TIMEOUT 12.262 12353661 SessClose c RX_TIMEOUT 7.196 14516263 SessClose c RX_TIMEOUT 5.513 15302692 SessClose c RX_TIMEOUT 6.276 14450727 SessClose c RX_TIMEOUT 5.009 15368206 SessClose c RX_TIMEOUT 8.205 12419190 SessClose c RX_TIMEOUT 7.357 14745670 SessClose c REM_CLOSE 9.053 6848795 SessClose c RX_TIMEOUT 6.026 10223768 SessClose c RX_TIMEOUT 12.442 14319692 SessClose c RX_TIMEOUT 6.306 14450719 SessClose c RX_TIMEOUT 7.926 14221357 SessClose c RX_TIMEOUT 6.714 14221359 SessClose c RX_TIMEOUT 5.001 15663114 SessClose c RX_TIMEOUT 6.867 11960446 SessClose c RX_TIMEOUT 11.245 14450729 SessClose c RX_TIMEOUT 6.668 15859717 SessClose c RX_TIMEOUT 7.886 15663108 SessClose c RX_TIMEOUT 10.451 15302697 SessClose c RX_TIMEOUT 7.992 15106089 SessClose c RX_TIMEOUT 15.843 14319701 SessClose c RX_TIMEOUT 6.969 14221360 SessClose c RX_TIMEOUT 5.961 15990785 SessClose c RX_TIMEOUT 5.076 14450724 SessClose c RX_TIMEOUT 7.928 15958017 SessClose c RX_TIMEOUT 7.158 15958022 SessClose c RX_TIMEOUT 5.013 15368212 SessClose c RX_TIMEOUT 6.159 15794181 SessClose c RX_TIMEOUT 5.025 
15958029 SessClose c RX_TIMEOUT 5.001 14516268 SessClose c RX_TIMEOUT 7.593 12877868 SessClose c RX_TIMEOUT 23.554 15794186 SessClose c RX_TIMEOUT 5.009 15859724 SessClose c RX_TIMEOUT 5.035 15859728 SessClose c RX_TIMEOUT 5.008 15794189 SessClose c RX_TIMEOUT 5.007 15859730 SessClose c RX_TIMEOUT 5.055 15794191 SessClose c RX_TIMEOUT 5.015 15663117 SessClose c RX_TIMEOUT 7.499 16089089 SessClose c RX_TIMEOUT 5.615 15302703 SessClose c RX_TIMEOUT 7.730 6848798 SessClose c RX_TIMEOUT 10.057 15204381 SessClose c RX_TIMEOUT 9.821 14778385 SessClose c RX_TIMEOUT 7.749 15106097 SessClose c RX_TIMEOUT 10.413 12353633 SessClose c RX_TIMEOUT 34.255 15794184 SessClose c RX_TIMEOUT 10.964 9175194 SessClose c REM_CLOSE 0.048 14778388 SessClose c RX_TIMEOUT 6.667 15106101 SessClose c RX_TIMEOUT 6.855 15106107 SessClose c RX_TIMEOUT 5.001 16220161 SessClose c RX_TIMEOUT 6.373 15368219 SessClose c RX_TIMEOUT 7.956 15106111 SessClose c RX_TIMEOUT 6.038 15368221 SessClose c RX_TIMEOUT 6.502 15368225 SessClose c RX_TIMEOUT 5.002 15368230 SessClose c RX_TIMEOUT 6.896 16416771 SessClose c RX_TIMEOUT 6.389 16449537 SessClose c RX_TIMEOUT 6.652 16121876 SessClose c RX_TIMEOUT 5.001 16351239 SessClose c RX_TIMEOUT 5.824 16351241 SessClose c RX_TIMEOUT 6.161 16547841 SessClose c RX_TIMEOUT 7.863 16351243 SessClose c RX_TIMEOUT 6.342 15007762 SessClose c REM_CLOSE 47.493 16416783 SessClose c RESP_CLOSE 0.002 13074447 SessClose c RESP_CLOSE 0.002 17235973 SessClose c RESP_CLOSE 0.006 13074449 SessClose c RESP_CLOSE 0.005 17530881 SessClose c RESP_CLOSE 0.007 18251781 SessClose c REM_CLOSE 0.000 18382849 SessClose c REM_CLOSE 0.000 18350083 SessClose c REM_CLOSE 0.001 18382851 SessClose c REM_CLOSE 0.001 18350085 SessClose c REM_CLOSE 0.000 18382853 SessClose c REM_CLOSE 0.001 18415617 SessClose c REM_CLOSE 0.000 10158138 SessClose c REM_CLOSE 0.000 18415619 SessClose c REM_CLOSE 0.000 10158140 SessClose c REM_CLOSE 0.001 18415621 SessClose c REM_CLOSE 0.000 10158142 SessClose c REM_CLOSE 
0.000 18415623 SessClose c REM_CLOSE 0.001 10158144 SessClose c REM_CLOSE 0.002 18415625 SessClose c REM_CLOSE 0.047 18415627 SessClose c REM_CLOSE 0.000 18513921 SessClose c REM_CLOSE 0.000 18415629 SessClose c REM_CLOSE 0.000 18513923 SessClose c REM_CLOSE 0.000 18415631 SessClose c REM_CLOSE 0.001 18513925 SessClose c REM_CLOSE 0.000 18546691 SessClose c REM_CLOSE 0.000 18579459 SessClose c REM_CLOSE 0.000 19890177 SessClose c REM_CLOSE 3.239 20185089 SessClose c REM_CLOSE 0.258 11829273 SessClose c REM_CLOSE 0.050 20578305 SessClose c REM_CLOSE 1.879 20709377 SessClose c REM_CLOSE 11.948 10321987 SessClose c REM_CLOSE 18.187 21626881 SessClose c REM_CLOSE 11.227 22183939 SessClose c REM_CLOSE 1.251 22478851 SessClose c REM_CLOSE 0.049 22511619 SessClose c REM_CLOSE 0.609 23166977 SessClose c REM_CLOSE 0.050 23625729 SessClose c REM_CLOSE 0.354 24543233 SessClose c REM_CLOSE 0.048 25559043 SessClose c RESP_CLOSE 0.048 25591812 SessClose c RESP_CLOSE 0.025 25559045 SessClose c RESP_CLOSE 1.206 25591815 SessClose c RESP_CLOSE 1.226 25591817 SessClose c RESP_CLOSE 0.754 25591819 SessClose c RESP_CLOSE 0.803 25755653 SessClose c RESP_CLOSE 1.179 25853953 SessClose c RESP_CLOSE 2.062 25853955 SessClose c RESP_CLOSE 1.590 26411009 SessClose c REM_CLOSE 4.109 5341188 SessClose c REM_CLOSE 0.021 30015489 SessClose c REM_CLOSE 0.088 32079873 SessClose c REM_CLOSE 0.000 32079874 SessClose c REM_CLOSE 0.000 32374787 SessClose c REM_CLOSE 0.000 32374788 SessClose c REM_CLOSE 0.000 32374790 SessClose c REM_CLOSE 0.000 32374791 SessClose c REM_CLOSE 0.000 32866308 SessClose c REM_CLOSE 0.000 32866309 SessClose c REM_CLOSE 0.000 32899073 SessClose c REM_CLOSE 0.000 32931841 SessClose c REM_CLOSE 0.000 32931842 SessClose c REM_CLOSE 0.000 32964609 SessClose c REM_CLOSE 0.000 33062913 SessClose c REM_CLOSE 0.000 33193985 SessClose c REM_CLOSE 0.000 33193986 SessClose c REM_CLOSE 0.000 33193987 SessClose c REM_CLOSE 0.000 33226753 SessClose c REM_CLOSE 0.000 33292291 SessClose c 
REM_CLOSE 0.000 33292296 SessClose c REM_CLOSE 0.000 33357825 SessClose c REM_CLOSE 0.000 33456129 SessClose c REM_CLOSE 0.001 33456131 SessClose c REM_CLOSE 0.000 33456132 SessClose c REM_CLOSE 0.000 33685512 SessClose c REM_CLOSE 0.001 33685518 SessClose c REM_CLOSE 0.000 33685520 SessClose c REM_CLOSE 0.000 33685522 SessClose c REM_CLOSE 0.000 33685523 SessClose c REM_CLOSE 0.000 34373633 SessClose c REM_CLOSE 0.000 34537473 SessClose c REM_CLOSE 0.001 34701315 SessClose c REM_CLOSE 0.000 34701317 SessClose c REM_CLOSE 0.000 34701321 SessClose c REM_CLOSE 0.000 34799617 SessClose c REM_CLOSE 0.000 34799618 SessClose c REM_CLOSE 0.000 34799619 SessClose c REM_CLOSE 0.000 34865153 SessClose c REM_CLOSE 0.000 34897925 SessClose c REM_CLOSE 0.000 34897926 SessClose c REM_CLOSE 0.000 34897927 SessClose c REM_CLOSE 0.000 34897928 SessClose c REM_CLOSE 0.000 34930689 SessClose c REM_CLOSE 0.000 35160069 SessClose c REM_CLOSE 0.000 35160070 SessClose c REM_CLOSE 0.000 35160076 SessClose c REM_CLOSE 0.000 35160077 SessClose c REM_CLOSE 0.000 35291137 SessClose c REM_CLOSE 0.000 35323905 SessClose c REM_CLOSE 0.001 35422209 SessClose c REM_CLOSE 0.001 35454982 SessClose c REM_CLOSE 0.000 35553281 SessClose c REM_CLOSE 0.000 35782657 SessClose c REM_CLOSE 0.000 36306947 SessClose c REM_CLOSE 0.000 36306950 SessClose c REM_CLOSE 0.000 36339713 SessClose c REM_CLOSE 0.040 36405255 SessClose c REM_CLOSE 0.107 36470787 SessClose c REM_CLOSE 0.000 36470788 SessClose c REM_CLOSE 0.000 36438023 SessClose c REM_CLOSE 0.127 36438031 SessClose c REM_CLOSE 0.000 36438032 SessClose c REM_CLOSE 0.000 36438035 SessClose c REM_CLOSE 0.000 36438036 SessClose c REM_CLOSE 0.000 36503563 SessClose c REM_CLOSE 0.020 36503565 SessClose c REM_CLOSE 0.000 36503566 SessClose c REM_CLOSE 0.000 36503569 SessClose c REM_CLOSE 0.000 36601859 SessClose c REM_CLOSE 0.000 36601860 SessClose c REM_CLOSE 0.000 36601861 SessClose c REM_CLOSE 0.000 36569095 SessClose c REM_CLOSE 0.119 36929543 SessClose c 
REM_CLOSE 0.000 36929546 SessClose c REM_CLOSE 0.075 37224457 SessClose c REM_CLOSE 0.086 37257217 SessClose c REM_CLOSE 0.000 37257220 SessClose c REM_CLOSE 0.000 37355523 SessClose c REM_CLOSE 0.332 37486597 SessClose c REM_CLOSE 0.125 37617669 SessClose c REM_CLOSE 0.017 37617675 SessClose c REM_CLOSE 0.000 37617676 SessClose c REM_CLOSE 0.000 37617677 SessClose c REM_CLOSE 0.000 37683201 SessClose c REM_CLOSE 0.073 38109189 SessClose c REM_CLOSE 0.000 38141953 SessClose c REM_CLOSE 0.108 38469633 SessClose c REM_CLOSE 0.025 38469637 SessClose c REM_CLOSE 0.000 38469644 SessClose c REM_CLOSE 0.000 38535169 SessClose c REM_CLOSE 0.000 38535172 SessClose c REM_CLOSE 0.000 38535173 SessClose c REM_CLOSE 0.121 38699009 SessClose c REM_CLOSE 0.077 38699025 SessClose c REM_CLOSE 0.046 38699031 SessClose c REM_CLOSE 0.000 38731783 SessClose c REM_CLOSE 0.000 38731784 SessClose c REM_CLOSE 0.000 38731785 SessClose c REM_CLOSE 0.000 38731786 SessClose c REM_CLOSE 0.000 38764545 SessClose c REM_CLOSE 0.097 38764549 SessClose c REM_CLOSE 0.000 38764550 SessClose c REM_CLOSE 0.000 38764551 SessClose c REM_CLOSE 0.000 38862849 SessClose c REM_CLOSE 0.001 38928385 SessClose c REM_CLOSE 0.001 38993921 SessClose c REM_CLOSE 0.000 39124993 SessClose c REM_CLOSE 0.057 39550977 SessClose c REM_CLOSE 0.094 39550981 SessClose c REM_CLOSE 0.000 39550982 SessClose c REM_CLOSE 0.000 39550983 SessClose c REM_CLOSE 0.000 39747585 SessClose c REM_CLOSE 0.000 39780355 SessClose c REM_CLOSE 0.106 39780361 SessClose c REM_CLOSE 0.016 39813125 SessClose c REM_CLOSE 0.000 39813126 SessClose c REM_CLOSE 0.000 39813127 SessClose c REM_CLOSE 0.000 39878657 SessClose c REM_CLOSE 0.000 39911427 SessClose c REM_CLOSE 0.000 39911428 SessClose c REM_CLOSE 0.000 39944193 SessClose c REM_CLOSE 0.284 39944195 SessClose c REM_CLOSE 0.000 39944202 SessClose c REM_CLOSE 0.000 39944203 SessClose c REM_CLOSE 0.000 40009729 SessClose c REM_CLOSE 0.000 40042507 SessClose c REM_CLOSE 0.000 40108033 SessClose c 
REM_CLOSE 0.099 40108035 SessClose c REM_CLOSE 0.000 40140801 SessClose c REM_CLOSE 0.000 40140806 SessClose c REM_CLOSE 0.000 40173569 SessClose c REM_CLOSE 0.000 40173572 SessClose c REM_CLOSE 0.000 38928389 SessClose c REM_CLOSE 0.000 40271875 SessClose c REM_CLOSE 0.000 40271876 SessClose c REM_CLOSE 0.000 40304641 SessClose c REM_CLOSE 0.000 38928393 SessClose c REM_CLOSE 0.001 40435713 SessClose c REM_CLOSE 0.000 40468481 SessClose c REM_CLOSE 0.001 40501249 SessClose c REM_CLOSE 0.001 40534017 SessClose c REM_CLOSE 0.000 40566785 SessClose c REM_CLOSE 0.000 40566789 SessClose c REM_CLOSE 0.000 40566793 SessClose c REM_CLOSE 0.000 40566795 SessClose c REM_CLOSE 0.000 40632321 SessClose c REM_CLOSE 0.000 40632323 SessClose c REM_CLOSE 0.175 40665091 SessClose c REM_CLOSE 0.000 40665092 SessClose c REM_CLOSE 0.000 40665095 SessClose c REM_CLOSE 0.013 40665097 SessClose c REM_CLOSE 0.000 40665098 SessClose c REM_CLOSE 0.182 40566796 SessClose c REM_CLOSE 0.000 40697857 SessClose c REM_CLOSE 0.142 40730625 SessClose c REM_CLOSE 0.000 40632325 SessClose c REM_CLOSE 0.001 40632327 SessClose c REM_CLOSE 0.000 40632328 SessClose c REM_CLOSE 0.000 40763393 SessClose c REM_CLOSE 0.000 40796161 SessClose c REM_CLOSE 0.000 40828929 SessClose c REM_CLOSE 0.001 40828931 SessClose c REM_CLOSE 0.000 40828932 SessClose c REM_CLOSE 0.001 40861697 SessClose c REM_CLOSE 0.001 40894465 SessClose c REM_CLOSE 0.001 40894467 SessClose c REM_CLOSE 0.059 40894469 SessClose c REM_CLOSE 0.000 40992769 SessClose c REM_CLOSE 0.000 41058307 SessClose c REM_CLOSE 0.054 41058309 SessClose c REM_CLOSE 0.000 41058310 SessClose c REM_CLOSE 0.000 41091073 SessClose c REM_CLOSE 0.000 41189379 SessClose c REM_CLOSE 0.025 41189381 SessClose c REM_CLOSE 0.000 41189383 SessClose c REM_CLOSE 0.000 41189386 SessClose c REM_CLOSE 0.000 41189389 SessClose c REM_CLOSE 0.000 41320449 SessClose c REM_CLOSE 0.000 41320452 SessClose c REM_CLOSE 0.000 41353217 SessClose c REM_CLOSE 0.000 41385987 SessClose c 
REM_CLOSE 0.049 41385991 SessClose c REM_CLOSE 0.058 41385995 SessClose c REM_CLOSE 0.000 41418753 SessClose c REM_CLOSE 0.000 41418757 SessClose c REM_CLOSE 0.000 41451523 SessClose c REM_CLOSE 0.000 41517059 SessClose c REM_CLOSE 0.000 41517062 SessClose c REM_CLOSE 0.013 41549825 SessClose c REM_CLOSE 0.043 41582593 SessClose c REM_CLOSE 0.000 41582594 SessClose c REM_CLOSE 0.000 41582596 SessClose c REM_CLOSE 0.000 41582599 SessClose c REM_CLOSE 0.000 41713665 SessClose c REM_CLOSE 0.000 41811969 SessClose c REM_CLOSE 0.000 41779203 SessClose c REM_CLOSE 0.000 42008577 SessClose c REM_CLOSE 0.000 42008580 SessClose c REM_CLOSE 0.124 42041349 SessClose c REM_CLOSE 0.000 42270721 SessClose c REM_CLOSE 0.308 42270739 SessClose c REM_CLOSE 0.013 42303491 SessClose c REM_CLOSE 0.000 42303492 SessClose c REM_CLOSE 0.000 42303493 SessClose c REM_CLOSE 0.000 42336257 SessClose c REM_CLOSE 0.000 42369025 SessClose c REM_CLOSE 0.000 42500103 SessClose c REM_CLOSE 0.000 42532865 SessClose c REM_CLOSE 0.000 42532867 SessClose c REM_CLOSE 0.000 42532868 SessClose c REM_CLOSE 0.108 42565635 SessClose c REM_CLOSE 0.000 42663937 SessClose c REM_CLOSE 0.175 42663939 SessClose c REM_CLOSE 0.000 42663942 SessClose c REM_CLOSE 0.000 42696705 SessClose c REM_CLOSE 2.000 42729473 SessClose c REM_CLOSE 0.000 42827787 SessClose c REM_CLOSE 0.111 42991619 SessClose c REM_CLOSE 0.000 42991620 SessClose c REM_CLOSE 0.000 43024385 SessClose c REM_CLOSE 0.101 43155457 SessClose c REM_CLOSE 0.000 43155460 SessClose c REM_CLOSE 0.000 43155470 SessClose c REM_CLOSE 0.000 43155473 SessClose c REM_CLOSE 0.000 43188227 SessClose c REM_CLOSE 0.089 43384841 SessClose c REM_CLOSE 0.000 43384846 SessClose c REM_CLOSE 0.000 43417603 SessClose c REM_CLOSE 0.000 43417606 SessClose c REM_CLOSE 0.065 43417608 SessClose c REM_CLOSE 0.014 43483137 SessClose c REM_CLOSE 0.133 43515905 SessClose c REM_CLOSE 0.000 43548675 SessClose c REM_CLOSE 0.000 43548678 SessClose c REM_CLOSE 0.000 43548681 SessClose c 
REM_CLOSE 0.000 43548682 SessClose c REM_CLOSE 0.107 43548684 SessClose c REM_CLOSE 0.000 43548686 SessClose c REM_CLOSE 0.000 43548687 SessClose c REM_CLOSE 0.000 43548688 SessClose c REM_CLOSE 0.000 43548691 SessClose c REM_CLOSE 0.063 43581443 SessClose c REM_CLOSE 0.000 43614211 SessClose c REM_CLOSE 0.000 43646977 SessClose c REM_CLOSE 0.053 43909123 SessClose c REM_CLOSE 0.000 43909124 SessClose c REM_CLOSE 0.000 43909125 SessClose c REM_CLOSE 0.000 43909126 SessClose c REM_CLOSE 0.000 43941889 SessClose c REM_CLOSE 0.000 44007427 SessClose c REM_CLOSE 0.121 44498945 SessClose c REM_CLOSE 0.102 44564483 SessClose c REM_CLOSE 0.000 44564494 SessClose c REM_CLOSE 0.000 44597251 SessClose c REM_CLOSE 0.000 44564495 SessClose c REM_CLOSE 0.076 44597256 SessClose c REM_CLOSE 0.041 44728321 SessClose c REM_CLOSE 0.000 44728322 SessClose c REM_CLOSE 0.135 44728328 SessClose c REM_CLOSE 0.000 44761089 SessClose c REM_CLOSE 0.000 44793879 SessClose c REM_CLOSE 0.000 44826625 SessClose c REM_CLOSE 0.000 44826626 SessClose c REM_CLOSE 0.048 45285377 SessClose c REM_CLOSE 0.000 45285384 SessClose c REM_CLOSE 0.012 45285388 SessClose c REM_CLOSE 0.000 45547527 SessClose c REM_CLOSE 0.324 45547529 SessClose c REM_CLOSE 0.000 45547540 SessClose c REM_CLOSE 0.000 45547543 SessClose c REM_CLOSE 0.013 45547559 SessClose c REM_CLOSE 0.020 45547567 SessClose c REM_CLOSE 0.000 45547570 SessClose c REM_CLOSE 0.000 45547573 SessClose c REM_CLOSE 0.000 45580291 SessClose c REM_CLOSE 0.000 45580296 SessClose c REM_CLOSE 0.055 45580304 SessClose c REM_CLOSE 0.000 45613057 SessClose c REM_CLOSE 0.000 45875201 SessClose c REM_CLOSE 0.000 45973505 SessClose c REM_CLOSE 0.000 46039041 SessClose c REM_CLOSE 0.000 46039048 SessClose c REM_CLOSE 0.000 46104581 SessClose c REM_CLOSE 0.000 46170113 SessClose c REM_CLOSE 0.034 46301189 SessClose c REM_CLOSE 0.000 46301190 SessClose c REM_CLOSE 0.000 46301197 SessClose c REM_CLOSE 0.115 46563331 SessClose c REM_CLOSE 0.000 46596099 SessClose c 
REM_CLOSE 0.000 46694403 SessClose c REM_CLOSE 0.276 46694405 SessClose c REM_CLOSE 0.000 46694406 SessClose c REM_CLOSE 0.000 46694407 SessClose c REM_CLOSE 0.000 46694408 SessClose c REM_CLOSE 0.000 46694409 SessClose c REM_CLOSE 0.000 46727175 SessClose c REM_CLOSE 0.225 46727187 SessClose c REM_CLOSE 0.168 46727197 SessClose c REM_CLOSE 0.000 46727198 SessClose c REM_CLOSE 0.000 46727205 SessClose c REM_CLOSE 0.000 46727210 SessClose c REM_CLOSE 0.191 46727220 SessClose c REM_CLOSE 0.000 46792705 SessClose c REM_CLOSE 0.000 46792706 SessClose c REM_CLOSE 0.000 46792711 SessClose c REM_CLOSE 0.000 47120385 SessClose c REM_CLOSE 0.153 47218689 SessClose c REM_CLOSE 0.074 47284233 SessClose c REM_CLOSE 0.000 47415301 SessClose c REM_CLOSE 0.109 47579143 SessClose c REM_CLOSE 0.000 47579146 SessClose c REM_CLOSE 0.060 47611905 SessClose c REM_CLOSE 0.000 47644673 SessClose c REM_CLOSE 0.000 47644674 SessClose c REM_CLOSE 0.000 47677441 SessClose c REM_CLOSE 0.000 47677442 SessClose c REM_CLOSE 0.000 47710209 SessClose c REM_CLOSE 0.063 47742977 SessClose c REM_CLOSE 0.000 47742978 SessClose c REM_CLOSE 0.097 47775745 SessClose c REM_CLOSE 0.000 47775761 SessClose c REM_CLOSE 0.000 47775770 SessClose c REM_CLOSE 0.047 47775780 SessClose c REM_CLOSE 0.000 47808513 SessClose c REM_CLOSE 0.000 47808514 SessClose c REM_CLOSE 0.000 44498953 SessClose c REM_CLOSE 179.998 }}} {{{ cat file.log| grep Debug|grep -v RES|grep -v Broken|grep -v retrying|grep -v "Connection reset by peer"|grep -v 909 6979596 Debug c "Write error, retval = -1, len = 836290, errno = Resource temporarily unavailable" 1638486 Debug c "Write error, retval = -1, len = 717552, errno = Resource temporarily unavailable" 44498954 Debug c "Write error, retval = -1, len = 1330963, errno = Resource temporarily unavailable" }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator