From varnish-bugs at varnish-cache.org Mon Sep 2 13:30:20 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 02 Sep 2013 13:30:20 -0000 Subject: [Varnish] #1334: DNS Director with hostname resolving to multiple IPs is not possible In-Reply-To: <043.991b3b2d7f57c8dd72a70935f5412ba5@varnish-cache.org> References: <043.991b3b2d7f57c8dd72a70935f5412ba5@varnish-cache.org> Message-ID: <058.80d1cbb7e2cb34995fbeccd0c12866de@varnish-cache.org> #1334: DNS Director with hostname resolving to multiple IPs is not possible --------------------------------+-------------------- Reporter: timrh | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: Keywords: dns director .host | --------------------------------+-------------------- Comment (by perbu): The DNS director will create directors based on the content of a DNS zone. So, if you define 192.168.0.0/24 as a backend set and foo.com.internal resolves to 192.168.0.1 and 192.168.0.2, it should round-robin between these two. Please note that the DNS director will most likely be sunsetted in the next major release. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 4 10:06:47 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 04 Sep 2013 10:06:47 -0000 Subject: [Varnish] #1335: Assert error in cnt_miss(), cache/cache_req_fsm.c (fryer) Message-ID: <046.302b1276d6a887a917068e85d5fcbaad@varnish-cache.org> #1335: Assert error in cnt_miss(), cache/cache_req_fsm.c (fryer) ----------------------+------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+------------------- Found a Varnishd child panic on master (209f797) with fryer. Should be pretty simple to reproduce. A single siege process, 15 concurrent connections with keepalive. 
1 single 1K object hit repeatedly. Child crashes in vcl_miss after 120 seconds. startargs: 2013-09-04 11:06:54,021 - fry - INFO - run[fryer1]: /opt/varnish/sbin/varnishd -T localhost:6082 -a :6081 -f /opt/varnish/etc/testsuite.vcl -smalloc,20M -p thread_pool_max=3000 -p thread_pool_min=1000 -P /opt/varnish/var/varnishd.pid {{{ Last panic at: Wed, 04 Sep 2013 09:14:48 GMT Assert error in cnt_miss(), cache/cache_req_fsm.c line 561: Condition((req->objcore) != NULL) not true. thread = (cache-worker) ident = Linux,3.2.0-51-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42e6b7: ObjIterEnd+5d7 0x433a02: CNT_Request+21a2 0x429ca8: HTTP1_Session+4b8 0x4352b8: RFC2616_Do_Cond+148 0x4364e5: SES_pool_accept_task+1f5 0x430af3: Pool_Work_Thread+c3 0x443a38: WRK_SumStat+108 0x7f0e0bb7be9a: _end+7f0e0b4f8702 0x7f0e0b8a8ccd: _end+7f0e0b225535 req = 0x7f0dc4060a00 { sp = 0x7f0dc80008e0, vxid = 1079740951, step = R_STP_MISS, req_body = R_BODY_NONE, restarts = 0, esi_level = 0 sp = 0x7f0dc80008e0 { fd = 24, vxid = 262145, client = 194.31.39.161 57623, step = S_STP_WORKING, }, worker = 0x7f0df3e12c80 { ws = 0x7f0df3e12e78 { id = "wrk", {s,f,r,e} = {0x7f0df3e12470,0x7f0df3e12470,(nil),+2048}, }, VCL::method = 0x0, VCL::return = fetch, }, ws = 0x7f0dc4060ba8 { id = "req", {s,f,r,e} = {0x7f0dc40621b0,+256,(nil),+59472}, }, http[req] = { ws = 0x7f0dc4060ba8[req] "GET", "/cacheabledata/set_hot1/index.html", "HTTP/1.1", "Host: fryer1.varnish-software.com:6081", "Accept: */*", "User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.70)", "Connection: keep-alive", "X-Forwarded-For: 194.31.39.161", "Accept-Encoding: gzip", }, vcl = { srcname = { "input", "Default", }, }, }, }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 4 10:09:20 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 04 Sep 2013 10:09:20 -0000 Subject: [Varnish] #1335: Assert error in cnt_miss(), cache/cache_req_fsm.c (fryer) In-Reply-To: 
<046.302b1276d6a887a917068e85d5fcbaad@varnish-cache.org> References: <046.302b1276d6a887a917068e85d5fcbaad@varnish-cache.org> Message-ID: <061.4f2b6d7757b2643329f1ca89a50945c4@varnish-cache.org> #1335: Assert error in cnt_miss(), cache/cache_req_fsm.c (fryer) ----------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Description changed by lkarsten: Old description: > Found a Varnishd child panic on master (209f797) with fryer. > > Should be pretty simple to reproduce. A single siege process, 15 > concurrent connections with keepalive. 1 single 1K object hit repeatedly. > > Child crashes in vcl_miss after 120 seconds. > > startargs: > 2013-09-04 11:06:54,021 - fry - INFO - run[fryer1]: > /opt/varnish/sbin/varnishd -T localhost:6082 -a :6081 -f > /opt/varnish/etc/testsuite.vcl -smalloc,20M -p thread_pool_max=3000 -p > thread_pool_min=1000 -P /opt/varnish/var/varnishd.pid > > {{{ > Last panic at: Wed, 04 Sep 2013 09:14:48 GMT > Assert error in cnt_miss(), cache/cache_req_fsm.c line 561: > Condition((req->objcore) != NULL) not true. 
> thread = (cache-worker) > ident = Linux,3.2.0-51-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll > Backtrace: > 0x42e6b7: ObjIterEnd+5d7 > 0x433a02: CNT_Request+21a2 > 0x429ca8: HTTP1_Session+4b8 > 0x4352b8: RFC2616_Do_Cond+148 > 0x4364e5: SES_pool_accept_task+1f5 > 0x430af3: Pool_Work_Thread+c3 > 0x443a38: WRK_SumStat+108 > 0x7f0e0bb7be9a: _end+7f0e0b4f8702 > 0x7f0e0b8a8ccd: _end+7f0e0b225535 > req = 0x7f0dc4060a00 { > sp = 0x7f0dc80008e0, vxid = 1079740951, step = R_STP_MISS, > req_body = R_BODY_NONE, > restarts = 0, esi_level = 0 > sp = 0x7f0dc80008e0 { > fd = 24, vxid = 262145, > client = 194.31.39.161 57623, > step = S_STP_WORKING, > }, > worker = 0x7f0df3e12c80 { > ws = 0x7f0df3e12e78 { > id = "wrk", > {s,f,r,e} = {0x7f0df3e12470,0x7f0df3e12470,(nil),+2048}, > }, > VCL::method = 0x0, > VCL::return = fetch, > }, > ws = 0x7f0dc4060ba8 { > id = "req", > {s,f,r,e} = {0x7f0dc40621b0,+256,(nil),+59472}, > }, > http[req] = { > ws = 0x7f0dc4060ba8[req] > "GET", > "/cacheabledata/set_hot1/index.html", > "HTTP/1.1", > "Host: fryer1.varnish-software.com:6081", > "Accept: */*", > "User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.70)", > "Connection: keep-alive", > "X-Forwarded-For: 194.31.39.161", > "Accept-Encoding: gzip", > }, > vcl = { > srcname = { > "input", > "Default", > }, > }, > }, > }}} New description: Found a Varnishd child panic on master (209f797) with fryer. Should be pretty simple to reproduce. A single siege process, 15 concurrent connections with keepalive. 1 single 1K object hit repeatedly. Child crashes in vcl_miss after 120 seconds. startargs: 2013-09-04 11:06:54,021 - fry - INFO - run[fryer1]: /opt/varnish/sbin/varnishd -T localhost:6082 -a :6081 -f /opt/varnish/etc/testsuite.vcl -smalloc,20M -p thread_pool_max=3000 -p thread_pool_min=1000 -P /opt/varnish/var/varnishd.pid {{{ Last panic at: Wed, 04 Sep 2013 09:14:48 GMT Assert error in cnt_miss(), cache/cache_req_fsm.c line 561: Condition((req->objcore) != NULL) not true. 
thread = (cache-worker) ident = Linux,3.2.0-51-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42e6b7: ObjIterEnd+5d7 0x433a02: CNT_Request+21a2 0x429ca8: HTTP1_Session+4b8 0x4352b8: RFC2616_Do_Cond+148 0x4364e5: SES_pool_accept_task+1f5 0x430af3: Pool_Work_Thread+c3 0x443a38: WRK_SumStat+108 0x7f0e0bb7be9a: _end+7f0e0b4f8702 0x7f0e0b8a8ccd: _end+7f0e0b225535 req = 0x7f0dc4060a00 { sp = 0x7f0dc80008e0, vxid = 1079740951, step = R_STP_MISS, req_body = R_BODY_NONE, restarts = 0, esi_level = 0 sp = 0x7f0dc80008e0 { fd = 24, vxid = 262145, client = 194.31.39.161 57623, step = S_STP_WORKING, }, worker = 0x7f0df3e12c80 { ws = 0x7f0df3e12e78 { id = "wrk", {s,f,r,e} = {0x7f0df3e12470,0x7f0df3e12470,(nil),+2048}, }, VCL::method = 0x0, VCL::return = fetch, }, ws = 0x7f0dc4060ba8 { id = "req", {s,f,r,e} = {0x7f0dc40621b0,+256,(nil),+59472}, }, http[req] = { ws = 0x7f0dc4060ba8[req] "GET", "/cacheabledata/set_hot1/index.html", "HTTP/1.1", "Host: fryer1.varnish-software.com:6081", "Accept: */*", "User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.70)", "Connection: keep-alive", "X-Forwarded-For: 194.31.39.161", "Accept-Encoding: gzip", }, vcl = { srcname = { "input", "Default", }, }, }, }}} testsuite.vcl: {{{ backend default { .host = "localhost"; .port = "80"; } }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 4 10:09:50 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 04 Sep 2013 10:09:50 -0000 Subject: [Varnish] #1335: Assert error in cnt_miss(), cache/cache_req_fsm.c (fryer) In-Reply-To: <046.302b1276d6a887a917068e85d5fcbaad@varnish-cache.org> References: <046.302b1276d6a887a917068e85d5fcbaad@varnish-cache.org> Message-ID: <061.277ba7966b81a1f4b69045b93ce13bfa@varnish-cache.org> #1335: Assert error in cnt_miss(), cache/cache_req_fsm.c (fryer) ----------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | 
Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Description changed by lkarsten: Old description: > Found a Varnishd child panic on master (209f797) with fryer. > > Should be pretty simple to reproduce. A single siege process, 15 > concurrent connections with keepalive. 1 single 1K object hit repeatedly. > > Child crashes in vcl_miss after 120 seconds. > > startargs: > 2013-09-04 11:06:54,021 - fry - INFO - run[fryer1]: > /opt/varnish/sbin/varnishd -T localhost:6082 -a :6081 -f > /opt/varnish/etc/testsuite.vcl -smalloc,20M -p thread_pool_max=3000 -p > thread_pool_min=1000 -P /opt/varnish/var/varnishd.pid > > {{{ > Last panic at: Wed, 04 Sep 2013 09:14:48 GMT > Assert error in cnt_miss(), cache/cache_req_fsm.c line 561: > Condition((req->objcore) != NULL) not true. > thread = (cache-worker) > ident = Linux,3.2.0-51-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll > Backtrace: > 0x42e6b7: ObjIterEnd+5d7 > 0x433a02: CNT_Request+21a2 > 0x429ca8: HTTP1_Session+4b8 > 0x4352b8: RFC2616_Do_Cond+148 > 0x4364e5: SES_pool_accept_task+1f5 > 0x430af3: Pool_Work_Thread+c3 > 0x443a38: WRK_SumStat+108 > 0x7f0e0bb7be9a: _end+7f0e0b4f8702 > 0x7f0e0b8a8ccd: _end+7f0e0b225535 > req = 0x7f0dc4060a00 { > sp = 0x7f0dc80008e0, vxid = 1079740951, step = R_STP_MISS, > req_body = R_BODY_NONE, > restarts = 0, esi_level = 0 > sp = 0x7f0dc80008e0 { > fd = 24, vxid = 262145, > client = 194.31.39.161 57623, > step = S_STP_WORKING, > }, > worker = 0x7f0df3e12c80 { > ws = 0x7f0df3e12e78 { > id = "wrk", > {s,f,r,e} = {0x7f0df3e12470,0x7f0df3e12470,(nil),+2048}, > }, > VCL::method = 0x0, > VCL::return = fetch, > }, > ws = 0x7f0dc4060ba8 { > id = "req", > {s,f,r,e} = {0x7f0dc40621b0,+256,(nil),+59472}, > }, > http[req] = { > ws = 0x7f0dc4060ba8[req] > "GET", > "/cacheabledata/set_hot1/index.html", > "HTTP/1.1", > "Host: fryer1.varnish-software.com:6081", > "Accept: */*", > "User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.70)", > "Connection: 
keep-alive", > "X-Forwarded-For: 194.31.39.161", > "Accept-Encoding: gzip", > }, > vcl = { > srcname = { > "input", > "Default", > }, > }, > }, > }}} > > testsuite.vcl: > {{{ > backend default { > .host = "localhost"; > .port = "80"; > } > }}} New description: Found a Varnishd child panic on master (209f797) with fryer. Should be pretty simple to reproduce. A single siege process, 15 concurrent connections with keepalive. 1 single 1K object hit repeatedly. Child crashes after 120 seconds. startargs: 2013-09-04 11:06:54,021 - fry - INFO - run[fryer1]: /opt/varnish/sbin/varnishd -T localhost:6082 -a :6081 -f /opt/varnish/etc/testsuite.vcl -smalloc,20M -p thread_pool_max=3000 -p thread_pool_min=1000 -P /opt/varnish/var/varnishd.pid {{{ Last panic at: Wed, 04 Sep 2013 09:14:48 GMT Assert error in cnt_miss(), cache/cache_req_fsm.c line 561: Condition((req->objcore) != NULL) not true. thread = (cache-worker) ident = Linux,3.2.0-51-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42e6b7: ObjIterEnd+5d7 0x433a02: CNT_Request+21a2 0x429ca8: HTTP1_Session+4b8 0x4352b8: RFC2616_Do_Cond+148 0x4364e5: SES_pool_accept_task+1f5 0x430af3: Pool_Work_Thread+c3 0x443a38: WRK_SumStat+108 0x7f0e0bb7be9a: _end+7f0e0b4f8702 0x7f0e0b8a8ccd: _end+7f0e0b225535 req = 0x7f0dc4060a00 { sp = 0x7f0dc80008e0, vxid = 1079740951, step = R_STP_MISS, req_body = R_BODY_NONE, restarts = 0, esi_level = 0 sp = 0x7f0dc80008e0 { fd = 24, vxid = 262145, client = 194.31.39.161 57623, step = S_STP_WORKING, }, worker = 0x7f0df3e12c80 { ws = 0x7f0df3e12e78 { id = "wrk", {s,f,r,e} = {0x7f0df3e12470,0x7f0df3e12470,(nil),+2048}, }, VCL::method = 0x0, VCL::return = fetch, }, ws = 0x7f0dc4060ba8 { id = "req", {s,f,r,e} = {0x7f0dc40621b0,+256,(nil),+59472}, }, http[req] = { ws = 0x7f0dc4060ba8[req] "GET", "/cacheabledata/set_hot1/index.html", "HTTP/1.1", "Host: fryer1.varnish-software.com:6081", "Accept: */*", "User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.70)", "Connection: keep-alive", 
"X-Forwarded-For: 194.31.39.161", "Accept-Encoding: gzip", }, vcl = { srcname = { "input", "Default", }, }, }, }}} testsuite.vcl: {{{ backend default { .host = "localhost"; .port = "80"; } }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 4 23:01:38 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 04 Sep 2013 23:01:38 -0000 Subject: [Varnish] #1336: varnishlog fails to parse log records > 1016 bytes Message-ID: <045.8dadb7898aa21ef58549ee630dfa15bb@varnish-cache.org> #1336: varnishlog fails to parse log records > 1016 bytes ---------------------+------------------------ Reporter: mkasick | Type: defect Status: new | Priority: normal Milestone: | Component: varnishlog Version: 3.0.4 | Severity: normal Keywords: | ---------------------+------------------------ There is a bug present in the 3.0 branch, including HEAD (presently 0a7e6caa0c6bd93003d4733e96f5ea054ac84cbc), whereby varnishlog fails to parse log records > 1016 bytes from a log file (as opposed to shared memory). Specifically, the bug is present in lib/libvarnishapi/vsl.c:162:vsl_nextlog where vsl->rbuf is reallocated to support log entries larger than 256 words. Here, the new word-size of rbuf is calculated and assigned to "l", and after the buffer is reallocated, the log record is read with it's size specified using the same, now incorrect "l" value. Attached is a patch that addresses this issue by assigning the new buffer size to a new variable, "nl", leaving "l" untouched for the following read operation. Steps to trigger this bug: 1. Start varnishd with `-p shm_reclen=1017` or larger; any reasonable VCL with a defined default backend will do. 2. Start varnishlog writing to a file: `varnishlog -w /tmp/varnish.log` 3. Make a large request: `ruby -r socket -e 'Socket.tcp("localhost", 80) {|s| s << "GET #{(0...2*1024).step(8).map {|n| ("%8d" % n).gsub(" ", ".")}.join} HTTP/1.0\r\n\r\n"}` 4. 
Read the log entries from the file: `varnishlog -r /tmp/varnish.log` Expected output: The full client request including the ReqEnd record. Actual output: varnishlog stops abruptly after the large RxURL record. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 5 11:33:05 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 05 Sep 2013 11:33:05 -0000 Subject: [Varnish] #1337: Bad assert causes panic on invalid http status codes Message-ID: <041.8fd8fff6778724baf73c9fbe4b97f40c@varnish-cache.org> #1337: Bad assert causes panic on invalid http status codes -------------------+---------------------- Reporter: mha | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.4 | Severity: major Keywords: | -------------------+---------------------- A backend returning http status code 008 (for example) causes varnish to panic due to an assert on line 103 of cache_http.c: {{{ const char * http_StatusMessage(unsigned status) { struct http_msg *mp; assert(status >= 100 && status <= 999); for (mp = http_msg; mp->nbr != 0 && mp->nbr <= status; mp++) if (mp->nbr == status) return (mp->txt); return ("Unknown Error"); } }}} While this is not a valid http status code, surely the system shouldn't panic on that - it should just return an error. 
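One defensive rework (a sketch only, not the project's committed fix) would drop the assert and let out-of-range status codes fall through to the generic message. The http_msg table below is abbreviated to two hypothetical entries for illustration:

```c
#include <stddef.h>

struct http_msg { unsigned nbr; const char *txt; };

/* Abbreviated stand-in for the real table, which is terminated
 * by an entry with nbr == 0. */
static const struct http_msg http_msg[] = {
    { 200, "OK" },
    { 404, "Not Found" },
    { 0, NULL }
};

/* Return a reason phrase; invalid codes (e.g. "008" from a broken
 * backend) get "Unknown Error" instead of tripping an assert. */
const char *
http_StatusMessage(unsigned status)
{
    const struct http_msg *mp;

    if (status < 100 || status > 999)
        return ("Unknown Error");
    for (mp = http_msg; mp->nbr != 0 && mp->nbr <= status; mp++)
        if (mp->nbr == status)
            return (mp->txt);
    return ("Unknown Error");
}
```

With this shape, a malformed backend status simply surfaces as an error response rather than a child panic.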
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 5 11:35:16 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 05 Sep 2013 11:35:16 -0000 Subject: [Varnish] #1337: Bad assert causes panic on invalid http status codes In-Reply-To: <041.8fd8fff6778724baf73c9fbe4b97f40c@varnish-cache.org> References: <041.8fd8fff6778724baf73c9fbe4b97f40c@varnish-cache.org> Message-ID: <056.67c387d622ae35280b1422b8fff42792@varnish-cache.org> #1337: Bad assert causes panic on invalid http status codes ----------------------+-------------------- Reporter: mha | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.4 Severity: major | Resolution: Keywords: | ----------------------+-------------------- Comment (by mha): Some panic dump data: Last panic at: Sat, 24 Aug 2013 19:06:06 GMT Assert error in http_StatusMessage(), cache_http.c line 103: Condition(status >= 100 && status <= 999) not true. thread = (cache-worker) ident = Linux,2.6.32-358.11.1.el6.x86_64,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42f718: /usr/sbin/varnishd() [0x42f718] 0x429d05: /usr/sbin/varnishd(http_StatusMessage+0x75) [0x429d05] 0x42d892: /usr/sbin/varnishd(http_DissectResponse+0x152) [0x42d892] 0x424a02: /usr/sbin/varnishd(FetchHdr+0x412) [0x424a02] 0x41679c: /usr/sbin/varnishd() [0x41679c] 0x4191ad: /usr/sbin/varnishd(CNT_Session+0x67d) [0x4191ad] 0x431461: /usr/sbin/varnishd() [0x431461] 0x3d37a07851: /lib64/libpthread.so.0() [0x3d37a07851] 0x3d372e890d: /lib64/libc.so.6(clone+0x6d) [0x3d372e890d] sp = 0x7f61450a7008 { fd = 172, id = 172, xid = 1298842466, client = 127.0.6.1 24072, step = STP_FETCH, handling = pass, restarts = 0, esi_level = 0 flags = bodystatus = 4 ws = 0x7f61450a7080 { id = "sess", {s,f,r,e} = {0x7f61450a7c78,+1960,(nil),+65536}, }, -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Sep 6 09:01:34 2013 From: varnish-bugs at 
varnish-cache.org (Varnish) Date: Fri, 06 Sep 2013 09:01:34 -0000 Subject: [Varnish] #1338: varnishlog manpage is out of date Message-ID: <041.23d893a15dc8f4dda9f4dd6568d0590b@varnish-cache.org> #1338: varnishlog manpage is out of date -------------------+--------------------------- Reporter: mha | Type: documentation Status: new | Priority: normal Milestone: | Component: documentation Version: 3.0.4 | Severity: normal Keywords: | -------------------+--------------------------- The `varnishlog` manpage contains a list of "currently defined" tags. This list is quite out of date - as an example, the `Gzip` tags are not there. It should be updated, or, if possible, the list should be autogenerated from the source code by the build system. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Sep 6 14:33:51 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 06 Sep 2013 14:33:51 -0000 Subject: [Varnish] #1323: suffix-byte-range requests larger than the file size don't work In-Reply-To: <047.84cb6dd391a524ebaf1be3c574302f07@varnish-cache.org> References: <047.84cb6dd391a524ebaf1be3c574302f07@varnish-cache.org> Message-ID: <062.d73052101e1c87a3ff1c0eb99eda5818@varnish-cache.org> #1323: suffix-byte-range requests larger than the file size don't work ---------------------------+-------------------- Reporter: gquintard | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.4 Severity: normal | Resolution: Keywords: range request | ---------------------------+-------------------- Changes (by tfheen): * owner: => phk -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 9 09:58:11 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Sep 2013 09:58:11 -0000 Subject: [Varnish] #1339: vcl_fetch : configuration order Message-ID: <046.6b076be364c89ac3be902541277ade15@varnish-cache.org> #1339: vcl_fetch : 
configuration order -------------------------------------+-------------------- Reporter: flafolie | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.3 | Severity: normal Keywords: beresp.ttl hit_for_pass | -------------------------------------+-------------------- I'm not sure this is the right section to submit this problem. It only occurs under high traffic on the server; when traffic is normal, there is no problem. With a lot of requests for a file of the form file.js?v=36, I notice abnormal CPU usage by the Varnish process (no problem on the Apache web servers); it seems Varnish loops. The file is never cached by Varnish, and Varnish takes more than 30 seconds to finally deliver it. While the problem is occurring, if I request file.js?v=37 (or anything different from the version currently in use, v=36), the file is delivered without delay. I can work around the problem by swapping the order of these two tests in vcl_fetch:

{{{
if (!beresp.ttl > 0s) {
    return (hit_for_pass);
}
if (req.request == "GET" && req.url ~ "\.(css|js|gif|jpg|png|ico|swf|doc|ppt|pps|xls|pdf|mp3|zip|avi|htc)(\?.*)?$") {
    unset beresp.http.set-cookie;
    set beresp.http.cache-control = "max-age=2592000";
    set beresp.ttl = 30d;
    return (deliver);
}
}}}

In this order I get the problem; with the order reversed I do not. In the reversed order the file is cached correctly and its Age header increases normally. Is there a reason for this? 
Thank you -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 9 10:16:30 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Sep 2013 10:16:30 -0000 Subject: [Varnish] #1338: varnishlog mangpage is out of date In-Reply-To: <041.23d893a15dc8f4dda9f4dd6568d0590b@varnish-cache.org> References: <041.23d893a15dc8f4dda9f4dd6568d0590b@varnish-cache.org> Message-ID: <056.02b9db528397d1b966795fb805653d0f@varnish-cache.org> #1338: varnishlog mangpage is out of date ---------------------------+--------------------- Reporter: mha | Owner: martin Type: documentation | Status: new Priority: normal | Milestone: Component: documentation | Version: 3.0.4 Severity: normal | Resolution: Keywords: | ---------------------------+--------------------- Changes (by martin): * owner: => martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 9 10:17:11 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Sep 2013 10:17:11 -0000 Subject: [Varnish] #1337: Bad assert causes panic on invalid http status codes In-Reply-To: <041.8fd8fff6778724baf73c9fbe4b97f40c@varnish-cache.org> References: <041.8fd8fff6778724baf73c9fbe4b97f40c@varnish-cache.org> Message-ID: <056.5037ca1d3d58819720c4b4aa8ee4d467@varnish-cache.org> #1337: Bad assert causes panic on invalid http status codes ----------------------+-------------------- Reporter: mha | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.4 Severity: major | Resolution: Keywords: | ----------------------+-------------------- Changes (by phk): * owner: => phk -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 9 10:17:30 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Sep 2013 10:17:30 -0000 Subject: [Varnish] #1339: vcl_fetch : configuration order In-Reply-To: 
<046.6b076be364c89ac3be902541277ade15@varnish-cache.org> References: <046.6b076be364c89ac3be902541277ade15@varnish-cache.org> Message-ID: <061.f6c69458724817711c1c7b73be684212@varnish-cache.org> #1339: vcl_fetch : configuration order -------------------------------------+---------------------- Reporter: flafolie | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: invalid Keywords: beresp.ttl hit_for_pass | -------------------------------------+---------------------- Changes (by scoof): * status: new => closed * resolution: => invalid Comment: This is not actionable as a bug. Please seek help on the mailing lists or forum to figure out if this really is a bug. I'd say it's most likely a case of misuse of hit_for_pass, and that you're serializing the requests for the uncacheable content. By returning hit_for_pass with beresp.ttl set to <=0s, you are not recording the fact that this object should not be cached, so subsequent requests queue up behind the busy object. Please get back to us if you have more information, such as logs showing that this really is a bug. 
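For reference, the usual way to avoid that serialization in 3.0-style VCL is to give the hit-for-pass marker a positive lifetime, for example (a sketch, not the reporter's actual configuration; the 120s value is arbitrary):

{{{
sub vcl_fetch {
    if (beresp.ttl <= 0s) {
        # Cache the "do not cache" decision itself for two minutes,
        # so concurrent misses are passed to the backend in parallel
        # instead of queueing behind one busy object.
        set beresp.ttl = 120s;
        return (hit_for_pass);
    }
}
}}}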
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 9 10:18:14 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Sep 2013 10:18:14 -0000 Subject: [Varnish] #1335: Assert error in cnt_miss(), cache/cache_req_fsm.c (fryer) In-Reply-To: <046.302b1276d6a887a917068e85d5fcbaad@varnish-cache.org> References: <046.302b1276d6a887a917068e85d5fcbaad@varnish-cache.org> Message-ID: <061.e2dc992070c3d6cc3cb492786eef75f2@varnish-cache.org> #1335: Assert error in cnt_miss(), cache/cache_req_fsm.c (fryer) ----------------------+-------------------- Reporter: lkarsten | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Changes (by phk): * owner: => phk -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 9 10:18:22 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Sep 2013 10:18:22 -0000 Subject: [Varnish] #1336: varnishlog fails to parse log records > 1016 bytes In-Reply-To: <045.8dadb7898aa21ef58549ee630dfa15bb@varnish-cache.org> References: <045.8dadb7898aa21ef58549ee630dfa15bb@varnish-cache.org> Message-ID: <060.d67b4582590c588a60edc438f31c5e77@varnish-cache.org> #1336: varnishlog fails to parse log records > 1016 bytes ------------------------+--------------------- Reporter: mkasick | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishlog | Version: 3.0.4 Severity: normal | Resolution: Keywords: | ------------------------+--------------------- Changes (by martin): * owner: => martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 9 10:21:54 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Sep 2013 10:21:54 -0000 Subject: [Varnish] #1334: DNS Director with hostname resolving to multiple IPs is not 
possible In-Reply-To: <043.991b3b2d7f57c8dd72a70935f5412ba5@varnish-cache.org> References: <043.991b3b2d7f57c8dd72a70935f5412ba5@varnish-cache.org> Message-ID: <058.686df27ad0f72d3a42cb00170ebba0a9@varnish-cache.org> #1334: DNS Director with hostname resolving to multiple IPs is not possible --------------------------------+------------------------- Reporter: timrh | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: worksforme Keywords: dns director .host | --------------------------------+------------------------- Changes (by martin): * status: new => closed * resolution: => worksforme -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 9 10:28:50 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Sep 2013 10:28:50 -0000 Subject: [Varnish] #1333: ESI include parsing HTTPS urls as relative not absolute In-Reply-To: <045.6b8080c61995515ca77fff449e5494d0@varnish-cache.org> References: <045.6b8080c61995515ca77fff449e5494d0@varnish-cache.org> Message-ID: <060.7e00cc7a87377bfb6f26c5e7b22bd929@varnish-cache.org> #1333: ESI include parsing HTTPS urls as relative not absolute ----------------------+---------------------- Reporter: rabbitt | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: unknown Severity: normal | Resolution: Keywords: | ----------------------+---------------------- Changes (by phk): * owner: => phk -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 9 12:18:35 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Sep 2013 12:18:35 -0000 Subject: [Varnish] #1335: Assert error in cnt_miss(), cache/cache_req_fsm.c (fryer) In-Reply-To: <046.302b1276d6a887a917068e85d5fcbaad@varnish-cache.org> References: <046.302b1276d6a887a917068e85d5fcbaad@varnish-cache.org> Message-ID: 
<061.f18c9aa5c92a1942c50e88f1a837193b@varnish-cache.org> #1335: Assert error in cnt_miss(), cache/cache_req_fsm.c (fryer) ----------------------+--------------------- Reporter: lkarsten | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+--------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: In [8f06d0f81741bd680eebdd0cc923206b642c0204]: {{{ #!CommitTicketReference repository="" revision="8f06d0f81741bd680eebdd0cc923206b642c0204" If vcl_hit{} returns fetch{} without a BUSY object, treat is as a pass, it will be anyway. Fixes #1335 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 9 14:30:20 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Sep 2013 14:30:20 -0000 Subject: [Varnish] #1340: Multiple backend requests per miss Message-ID: <046.428f3fe760f6bc854cf7d82980f4a006@varnish-cache.org> #1340: Multiple backend requests per miss ----------------------+------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Keywords: ----------------------+------------------- In master (8f06d0f) varnishd sends multiple requests to the backend when an object times out. Expected behaviour: As in 3.0; a single backend request goes out and the connections are either kept waiting or given a graced version. default.vcl in use. Setup: apache2 with mod_expires and max-age=5s, varnish in front of this, and siege sends requests to a single url. 
varnishd startup args: 2013-09-09 16:23:36,985 - fry - INFO - run[fryer1]: /opt/varnish/sbin/varnishd -T localhost:6082 -a :6081 -f /opt/varnish/etc/testsuite.vcl -smalloc,20M -p thread_pool_max=3000 -p thread_pool_min=1000 -P /opt/varnish/var/varnishd.pid Siege args: 2013-09-09 16:23:41,160 - fry - INFO - run[fryer2]: ulimit -n 65548; siege -t 30s -c 15 http://fryer1.varnish- software.com:6081/cacheabledata/set_hot1/index.html apache2 access log: (first req is initial miss) {{{ 127.0.0.1 - - [09/Sep/2013:16:23:41 +0200] "GET /cacheabledata/set_hot1/index.html HTTP/1.1" 200 884 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [09/Sep/2013:16:23:46 +0200] "GET /cacheabledata/set_hot1/index.html HTTP/1.1" 200 884 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [09/Sep/2013:16:23:46 +0200] "GET /cacheabledata/set_hot1/index.html HTTP/1.1" 200 884 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [09/Sep/2013:16:23:46 +0200] "GET /cacheabledata/set_hot1/index.html HTTP/1.1" 200 884 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [09/Sep/2013:16:23:46 +0200] "GET /cacheabledata/set_hot1/index.html HTTP/1.1" 200 884 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [09/Sep/2013:16:23:46 +0200] "GET /cacheabledata/set_hot1/index.html HTTP/1.1" 200 884 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [09/Sep/2013:16:23:46 +0200] "GET /cacheabledata/set_hot1/index.html HTTP/1.1" 200 884 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [09/Sep/2013:16:23:46 +0200] "GET /cacheabledata/set_hot1/index.html HTTP/1.1" 200 884 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [09/Sep/2013:16:23:46 +0200] "GET /cacheabledata/set_hot1/index.html HTTP/1.1" 200 884 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" }}} Counters after completed siege run (30s): {{{ LCK.backend.creat 1 LCK.backend.destroy 0 LCK.backend.locks 193 LCK.ban.creat 1 LCK.ban.destroy 0 LCK.ban.locks 65 LCK.busyobj.creat 89 LCK.busyobj.destroy 89 
LCK.busyobj.locks 519 LCK.cli.creat 1 LCK.cli.destroy 0 LCK.cli.locks 25 LCK.exp.creat 1 LCK.exp.destroy 0 LCK.exp.locks 37 LCK.hcb.creat 1 LCK.hcb.destroy 0 LCK.hcb.locks 2 LCK.hcl.creat 0 LCK.hcl.destroy 0 LCK.hcl.locks 0 LCK.herder.creat 0 LCK.herder.destroy 0 LCK.herder.locks 0 LCK.hsl.creat 0 LCK.hsl.destroy 0 LCK.hsl.locks 0 LCK.lru.creat 2 LCK.lru.destroy 0 LCK.lru.locks 20 LCK.mempool.creat 6 LCK.mempool.destroy 0 LCK.mempool.locks 665 LCK.nbusyobj.creat 0 LCK.nbusyobj.destroy 0 LCK.nbusyobj.locks 0 LCK.objhdr.creat 18 LCK.objhdr.destroy 0 LCK.objhdr.locks 5058036 LCK.sess.creat 15 LCK.sess.destroy 15 LCK.sess.locks 0 LCK.sessmem.creat 0 LCK.sessmem.destroy 0 LCK.sessmem.locks 0 LCK.sma.creat 2 LCK.sma.destroy 0 LCK.sma.locks 350 LCK.smf.creat 0 LCK.smf.destroy 0 LCK.smf.locks 0 LCK.smp.creat 0 LCK.smp.destroy 0 LCK.smp.locks 0 LCK.sms.creat 1 LCK.sms.destroy 0 LCK.sms.locks 0 LCK.vbp.creat 1 LCK.vbp.destroy 0 LCK.vbp.locks 0 LCK.vcapace.creat 1 LCK.vcapace.destroy 0 LCK.vcapace.locks 0 LCK.vcl.creat 1 LCK.vcl.destroy 0 LCK.vcl.locks 195 LCK.vxid.creat 1 LCK.vxid.destroy 0 LCK.vxid.locks 46 LCK.wq.creat 3 LCK.wq.destroy 0 LCK.wq.locks 4372 LCK.wstat.creat 1 LCK.wstat.destroy 0 LCK.wstat.locks 1264553 MAIN.backend_busy 0 MAIN.backend_conn 15 MAIN.backend_fail 0 MAIN.backend_recycle 89 MAIN.backend_req 89 MAIN.backend_retry 0 MAIN.backend_reuse 74 MAIN.backend_toolate 0 MAIN.backend_unhealthy 0 MAIN.bans 1 MAIN.bans_added 1 MAIN.bans_deleted 0 MAIN.bans_dups 0 MAIN.bans_gone 1 MAIN.bans_persisted_bytes 13 MAIN.bans_persisted_fragmentation 0 MAIN.bans_req 0 MAIN.bans_tested 0 MAIN.bans_tests_tested 0 MAIN.busy_sleep 17 MAIN.busy_wakeup 17 MAIN.cache_hit 1264208 MAIN.cache_hitpass 0 MAIN.cache_miss 6 MAIN.client_req 1264297 MAIN.client_req_400 0 MAIN.client_req_413 0 MAIN.client_req_417 0 MAIN.dir_dns_cache_full 0 MAIN.dir_dns_failed 0 MAIN.dir_dns_hit 0 MAIN.dir_dns_lookups 0 MAIN.esi_errors 0 MAIN.esi_warnings 0 MAIN.fetch_1xx 0 MAIN.fetch_204 0 
MAIN.fetch_304 0 MAIN.fetch_bad 0 MAIN.fetch_chunked 0 MAIN.fetch_close 0 MAIN.fetch_eof 0 MAIN.fetch_failed 0 MAIN.fetch_head 0 MAIN.fetch_length 89 MAIN.fetch_oldhttp 0 MAIN.fetch_zero 0 MAIN.hcb_insert 1 MAIN.hcb_lock 1 MAIN.hcb_nolock 1264297 MAIN.losthdr 0 MAIN.n_backend 1 MAIN.n_expired 3 MAIN.n_gunzip 89 MAIN.n_gzip 0 MAIN.n_lru_moved 11 MAIN.n_lru_nuked 0 MAIN.n_object 3 MAIN.n_objectcore 19 MAIN.n_objecthead 17 MAIN.n_vampireobject 0 MAIN.n_vcl 1 MAIN.n_vcl_avail 1 MAIN.n_vcl_discard 0 MAIN.n_waitinglist 15 MAIN.pools 2 MAIN.s_bodybytes 676398895 MAIN.s_error 0 MAIN.s_fetch 89 MAIN.s_hdrbytes 537495411 MAIN.s_pass 83 MAIN.s_pipe 0 MAIN.s_req 1264297 MAIN.s_sess 15 MAIN.sess_closed 0 MAIN.sess_conn 15 MAIN.sess_drop 0 MAIN.sess_dropped 0 MAIN.sess_fail 0 MAIN.sess_herd 62 MAIN.sess_pipe_overflow 0 MAIN.sess_pipeline 0 MAIN.sess_queued 0 MAIN.sess_readahead 0 MAIN.shm_cont 96781 MAIN.shm_cycles 17 MAIN.shm_flushes 0 MAIN.shm_records 54370348 MAIN.shm_writes 2528937 MAIN.sms_balloc 0 MAIN.sms_bfree 0 MAIN.sms_nbytes 0 MAIN.sms_nobj 0 MAIN.sms_nreq 0 MAIN.thread_queue_len 0 MAIN.threads 2000 MAIN.threads_created 2000 MAIN.threads_destroyed 0 MAIN.threads_failed 0 MAIN.threads_limited 0 MAIN.uptime 32 MAIN.vmods 0 MAIN.vsm_cooling 0 MAIN.vsm_free 973296 MAIN.vsm_overflow 0 MAIN.vsm_overflowed 0 MAIN.vsm_used 83961312 MEMPOOL.busyobj.allocs 89 MEMPOOL.busyobj.frees 89 MEMPOOL.busyobj.live 0 MEMPOOL.busyobj.pool 15 MEMPOOL.busyobj.randry 5 MEMPOOL.busyobj.recycle 84 MEMPOOL.busyobj.surplus 0 MEMPOOL.busyobj.sz_needed 65568 MEMPOOL.busyobj.sz_wanted 65536 MEMPOOL.busyobj.timeout 0 MEMPOOL.busyobj.toosmall 0 MEMPOOL.req0.allocs 31 MEMPOOL.req0.frees 31 MEMPOOL.req0.live 0 MEMPOOL.req0.pool 10 MEMPOOL.req0.randry 0 MEMPOOL.req0.recycle 31 MEMPOOL.req0.surplus 0 MEMPOOL.req0.sz_needed 65568 MEMPOOL.req0.sz_wanted 65536 MEMPOOL.req0.timeout 6 MEMPOOL.req0.toosmall 0 MEMPOOL.req1.allocs 46 MEMPOOL.req1.frees 46 MEMPOOL.req1.live 0 MEMPOOL.req1.pool 10 
MEMPOOL.req1.randry 0 MEMPOOL.req1.recycle 46 MEMPOOL.req1.surplus 0 MEMPOOL.req1.sz_needed 65568 MEMPOOL.req1.sz_wanted 65536 MEMPOOL.req1.timeout 9 MEMPOOL.req1.toosmall 0 MEMPOOL.sess0.allocs 6 MEMPOOL.sess0.frees 6 MEMPOOL.sess0.live 0 MEMPOOL.sess0.pool 10 MEMPOOL.sess0.randry 0 MEMPOOL.sess0.recycle 6 MEMPOOL.sess0.surplus 0 MEMPOOL.sess0.sz_needed 528 MEMPOOL.sess0.sz_wanted 496 MEMPOOL.sess0.timeout 6 MEMPOOL.sess0.toosmall 0 MEMPOOL.sess1.allocs 9 MEMPOOL.sess1.frees 9 MEMPOOL.sess1.live 0 MEMPOOL.sess1.pool 10 MEMPOOL.sess1.randry 0 MEMPOOL.sess1.recycle 9 MEMPOOL.sess1.surplus 0 MEMPOOL.sess1.sz_needed 528 MEMPOOL.sess1.sz_wanted 496 MEMPOOL.sess1.timeout 9 MEMPOOL.sess1.toosmall 0 MEMPOOL.vbc.allocs 15 MEMPOOL.vbc.frees 0 MEMPOOL.vbc.live 15 MEMPOOL.vbc.pool 10 MEMPOOL.vbc.randry 4 MEMPOOL.vbc.recycle 11 MEMPOOL.vbc.surplus 0 MEMPOOL.vbc.sz_needed 120 MEMPOOL.vbc.sz_wanted 88 MEMPOOL.vbc.timeout 0 MEMPOOL.vbc.toosmall 0 MGT.child_died 0 MGT.child_dump 0 MGT.child_exit 0 MGT.child_panic 0 MGT.child_start 1 MGT.child_stop 0 MGT.uptime 33 SMA.Transient.c_bytes 11747000 SMA.Transient.c_fail 0 SMA.Transient.c_freed 11351168 SMA.Transient.c_req 178 SMA.Transient.g_alloc 6 SMA.Transient.g_bytes 395832 SMA.Transient.g_space 0 SMA.s0.c_bytes 0 SMA.s0.c_fail 0 SMA.s0.c_freed 0 SMA.s0.c_req 0 SMA.s0.g_alloc 0 SMA.s0.g_bytes 0 SMA.s0.g_space 20971520 VBE.default(127.0.0.1,,80).happy 0 VBE.default(127.0.0.1,,80).vcls 1 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 12 12:24:26 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 12 Sep 2013 12:24:26 -0000 Subject: [Varnish] #1341: Assert error in VBO_waitlen(), cache/cache_busyobj.c Message-ID: <046.744aeb614ae2d3396c09a0492d773d2d@varnish-cache.org> #1341: Assert error in VBO_waitlen(), cache/cache_busyobj.c ----------------------+------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: 
varnishd | Version: trunk Severity: normal | Keywords: ----------------------+------------------- 2013-09-12 14:08:17,636 - fry - INFO - varnishd (varnish-trunk revision 53e6f23) 2013-09-12 14:08:17,637 - fry - INFO - run[fryer1]: /opt/varnish/sbin/varnishd -T localhost:6082 -a :6081 -f /opt/varnish/etc/testsuite.vcl -smalloc,20M -p thread_pool_max=3000 -p thread_pool_min=1000 -P /opt/varnish/var/varnishd.pid vcl has only (trivial) backend definition. standard siege benchmark run with 15 concurrent connections. Siege output has a lot of the following, which I haven't seen on earlier revisions: {{{ [alert] HTTP: unable to determine chunk size [alert] HTTP: unable to determine chunk size [alert] HTTP: unable to determine chunk size }}} panic.show output: {{{ 2013-09-12 14:08:23,771 - fry - WARNING - panic.show output: Last panic at: Thu, 12 Sep 2013 12:08:18 GMT Assert error in VBO_waitlen(), cache/cache_busyobj.c line 231: Condition(l <= bo->fetch_obj->len) not true. thread = (cache-worker) ident = Linux,3.2.0-51-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42f2f7: ObjIterEnd+637 0x4172c4: VBO_waitlen+c4 0x42eae2: ObjIter+52 0x4348dd: CNT_Request+266d 0x4352f8: V1D_Deliver+708 0x4325dc: CNT_Request+36c 0x42a728: HTTP1_Session+4b8 0x435dd8: RFC2616_Do_Cond+148 0x437005: SES_pool_accept_task+1f5 0x431733: Pool_Work_Thread+c3 req = 0x7f7cfc090a90 { sp = 0x7f7cf00019e0, vxid = 1073840136, step = R_STP_DELIVER, req_body = R_BODY_NONE, err_code = 200, err_reason = (null), restarts = 0, esi_level = 0 sp = 0x7f7cf00019e0 { fd = 17, vxid = 98305, client = 194.31.39.161 36757, step = S_STP_WORKING, }, worker = 0x7f7d11f8dc80 { ws = 0x7f7d11f8de78 { id = "wrk", {s,f,r,e} = {0x7f7d11f8d470,+80,+2048,+2048}, }, VCL::method = 0x0, VCL::return = deliver, }, ws = 0x7f7cfc090c38 { id = "req", {s,f,r,e} = {0x7f7cfc092290,+344,(nil),+59392}, }, http[req] = { ws = 0x7f7cfc090c38[req] "GET", "/cacheabledata/set_medialibrary1/sources/file21", "HTTP/1.1", "Host: 
fryer1.varnish-software.com:6081", "Accept: */*", "User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.70)", "Connection: keep-alive", "X-Forwarded-For: 194.31.39.161", "Accept-Encoding: gzip", }, http[resp] = { ws = 0x7f7cfc090c38[req] "HTTP/1.1", "200", "OK", "Server: Apache/2.2.22 (Ubuntu)", "Last-Modified: Mon, 26 Aug 2013 14:06:44 GMT", "ETag: "820ad0-500000-4e4da442e7251"", "Cache-Control: max-age=31104000", "Expires: Sun, 07 Sep 2014 12:08:17 GMT", "Date: Thu, 12 Sep 2013 12:08:17 GMT", "X-Varnish: 98312", "Age: 0", "Via: 1.1 varnish", "Transfer-Encoding: chunked", "Connection: keep-alive", "Accept-Ranges: bytes", }, vcl = { srcname = { "input", "Default", }, }, obj = 0x7f7cac0008c0 { vxid = 2147581961, http[obj] = { ws = 0x7f7cf8090cc0[obj] "HTTP/1.1", "OK", "Date: Thu, 12 Sep 2013 12:08:17 GMT", "Server: Apache/2.2.22 (Ubuntu)", "Last-Modified: Mon, 26 Aug 2013 14:06:44 GMT", "ETag: "820ad0-500000-4e4da442e7251"", "Cache-Control: max-age=31104000", "Expires: Sun, 07 Sep 2014 12:08:17 GMT", }, len = 0, store = { 2621440 { 00 00 20 92 23 e4 e2 41 00 00 00 7d 6f bb cb 41 |.. .#..A...}o..A| 00 00 00 4a 23 23 c1 41 00 00 00 1e af ff d6 41 |...J##.A.......A| 00 00 20 19 66 46 e6 41 00 00 60 85 3f 11 ee 41 |.. .fF.A..`.?..A| 00 00 60 41 5d d9 ea 41 00 00 20 1a 4d fd ea 41 |..`A]..A.. .M..A| [2621376 more] }, 131072 { 00 00 c0 17 02 21 dc 41 00 00 80 7f b6 1a d3 41 |.....!.A.......A| 00 00 a0 49 62 e3 e0 41 00 00 80 2a 6c 0d ed 41 |...Ib..A...*l..A| 00 00 00 46 86 ca b7 41 00 00 e0 b2 a7 eb ea 41 |...F...A.......A| 00 00 e0 16 08 42 e3 41 00 00 00 d3 4b 7d d0 41 |.....B.A....K}.A| [131008 more] }, 131072 { 00 00 40 f9 cd 92 dc 41 00 00 80 08 82 1d ef 41 |.. at ....A.......A| 00 00 80 b9 6c bd dd 41 00 00 a0 24 6b d9 ed 41 |....l..A...$k..A| 00 00 00 12 3f 10 e3 41 00 00 60 7c 96 d6 e1 41 |....?..A..`|...A| 00 00 60 ba 75 7f e3 41 00 00 40 4d 3f e0 eb 41 |..`.u..A.. 
at M?..A| [131008 more] }, 131072 { 00 00 00 75 4c bc c1 41 00 00 80 f2 23 d3 ce 41 |...uL..A....#..A| 00 00 a0 86 3b 0e ea 41 00 00 80 fa ed cf e8 41 |....;..A.......A| 00 00 40 8b 08 bf dc 41 00 00 e0 ee 38 24 e4 41 |.. at ....A....8$.A| 00 00 e0 b3 13 95 e0 41 00 00 80 dc e4 76 dc 41 |.......A.....v.A| [131008 more] }, 131072 { 00 00 a0 81 b4 5f ef 41 00 00 a0 ba 0a 57 e5 41 |....._.A.....W.A| 00 00 00 38 76 f3 d3 41 00 00 00 6f 0a ad b0 41 |...8v..A...o...A| 00 00 c0 3b 9c d3 e9 41 00 00 60 c5 0b 8e ea 41 |...;...A..`....A| 00 00 80 3a ca 0d d1 41 00 00 a0 2b 73 26 eb 41 |...:...A...+s&.A| [131008 more] }, 131072 { 00 00 40 97 9d e5 e6 41 00 00 00 68 d5 15 da 41 |.. at ....A...h...A| 00 00 00 09 51 8b da 41 00 00 80 89 57 83 e1 41 |....Q..A....W..A| 00 00 00 5d 84 35 d2 41 00 00 40 ae be 89 d1 41 |...].5.A.. at ....A| 00 00 80 b8 b7 35 ee 41 00 00 40 86 a2 44 ec 41 |.....5.A.. at ..D.A| [131008 more] }, 131072 { 00 00 00 1c 3e b7 9d 41 00 00 00 15 23 34 c0 41 |....>..A....#4.A| 00 00 00 d6 a7 2c b7 41 00 00 80 24 64 4e d2 41 |.....,.A...$dN.A| 00 00 00 20 37 48 6c 41 00 00 00 51 5a 77 ed 41 |... 7HlA...QZw.A| 00 00 00 18 b5 aa cc 41 00 00 20 7c 4e bd eb 41 |.......A.. |N..A| [131008 more] }, 131072 { 00 00 80 ce 7e 54 e6 41 00 00 00 5c d0 be e7 41 |....~T.A...\...A| 00 00 00 c0 fd 9f d1 41 00 00 80 67 7a 4e e7 41 |.......A...gzN.A| 00 00 40 4d 4b 5a e9 41 00 00 40 62 b7 82 e1 41 |.. at MKZ.A..@b...A| 00 00 00 b6 5c b9 ee 41 00 00 00 27 bf 46 d9 41 |....\..A...'.F.A| [131008 more] }, 131072 { 00 00 c0 15 0d fa e8 41 00 00 40 72 0c c2 e5 41 |.......A.. at r...A| 00 00 40 a5 fb 4c d3 41 00 00 00 7f ef 39 c5 41 |.. at ..L.A.....9.A| 00 00 40 63 f0 67 db 41 00 00 c0 3f 51 20 d1 41 |.. 
at c.g.A...?Q .A| 00 00 00 48 00 f9 88 41 00 00 00 b5 f2 ee cf 41 |...H...A.......A| [131008 more] }, 131072 { 00 00 60 16 b9 8e ee 41 00 00 60 39 25 ae e7 41 |..`....A..`9%..A| 00 00 c0 45 81 1a d2 41 00 00 00 0e 22 aa a4 41 |...E...A...."..A| 00 00 60 ac ec fd e0 41 00 00 a0 30 08 6e e5 41 |..`....A...0.n.A| 00 00 a0 91 f6 37 e3 41 00 00 a0 fa 83 69 e8 41 |.....7.A.....i.A| [131008 more] }, 131072 { 00 00 00 10 8e d8 a2 41 00 00 00 e2 9a f6 cd 41 |.......A.......A| 00 00 00 4f 40 ea ce 41 00 00 a0 ba a5 59 eb 41 |...O at ..A.....Y.A| 00 00 80 a0 15 26 c4 41 00 00 00 33 64 8c cd 41 |.....&.A...3d..A| 00 00 c0 46 45 70 d0 41 00 00 00 1a 63 fc d4 41 |...FEp.A....c..A| [131008 more] }, 131072 { 00 00 80 f1 b6 64 ed 41 00 00 40 d7 46 e7 ea 41 |.....d.A.. at .F..A| 00 00 c0 1c 8e dc e3 41 00 00 00 73 e6 dd c7 41 |.......A...s...A| 00 00 00 7d 66 5f d5 41 00 00 c0 4e a4 fa d6 41 |...}f_.A...N...A| 00 00 a0 07 3f cc e8 41 00 00 80 12 b8 29 ec 41 |....?..A.....).A| [131008 more] }, 131072 { 00 00 60 c1 9b d7 eb 41 00 00 c0 b4 c5 22 e2 41 |..`....A.....".A| 00 00 a0 9e 2c 57 e6 41 00 00 80 5b 36 4c c2 41 |....,W.A...[6L.A| 00 00 e0 91 5e 75 e7 41 00 00 80 ae aa b9 e2 41 |....^u.A.......A| 00 00 40 f6 dc f1 e2 41 00 00 80 d1 68 37 df 41 |.. at ....A....h7.A| [131008 more] }, 131072 { 00 00 80 59 c7 a6 ec 41 00 00 c0 54 65 47 ed 41 |...Y...A...TeG.A| 00 00 80 b2 3e 2f cf 41 00 00 00 b4 1f 6d bc 41 |....>/.A.....m.A| 00 00 00 9c 3b f9 dd 41 00 00 60 bd c2 36 ee 41 |....;..A..`..6.A| 00 00 00 f8 0b a6 85 41 00 00 c0 96 64 62 e6 41 |.......A....db.A| [131008 more] }, 131072 { 00 00 00 09 01 47 e8 41 00 00 80 6a b7 0d c8 41 |.....G.A...j...A| 00 00 40 5a 2c ae e4 41 00 00 e0 a1 fa a1 e2 41 |.. at Z,..A.......A| 00 00 80 aa bb 9e c4 41 00 00 80 83 f2 cb e8 41 |.......A.......A| 00 00 00 d3 40 2b d4 41 00 00 80 01 3e d2 d1 41 |.... 
at +.A....>..A| [131008 more] }, 131072 { 00 00 a0 61 79 86 e1 41 00 00 80 1e 21 60 e5 41 |...ay..A....!`.A| 00 00 80 3d 01 b6 c8 41 00 00 20 68 75 4d ea 41 |...=...A.. huM.A| 00 00 80 be 34 72 cb 41 00 00 40 72 dc a0 e9 41 |....4r.A.. at r...A| 00 00 a0 9e cc c8 e1 41 00 00 e0 24 73 b6 e5 41 |.......A...$s..A| [131008 more] }, 131072 { 00 00 00 be 21 b7 c4 41 00 00 20 54 cf 53 e8 41 |....!..A.. T.S.A| 00 00 00 42 c3 27 e7 41 00 00 00 58 ac 07 9b 41 |...B.'.A...X...A| 00 00 20 21 10 be e2 41 00 00 20 ac 13 36 e7 41 |.. !...A.. ..6.A| 00 00 00 a4 1c 09 a2 41 00 00 20 16 fc 79 e3 41 |.......A.. ..y.A| [131008 more] }, 131072 { 00 00 80 44 75 df c0 41 00 00 e0 ef 9a 07 e6 41 |...Du..A.......A| 00 00 80 69 f5 25 dd 41 00 00 00 40 f0 db e7 41 |...i.%.A... at ...A| 00 00 00 ae 28 9b c1 41 00 00 80 61 21 2b c5 41 |....(..A...a!+.A| 00 00 80 d5 06 c1 d3 41 00 00 00 b3 0d 88 e9 41 |.......A.......A| [131008 more] }, 131072 { 00 00 a0 f8 2b b5 e7 41 00 00 00 44 f6 21 b9 41 |....+..A...D.!.A| 00 00 40 59 83 c5 ef 41 00 00 80 67 b6 58 cb 41 |.. at Y...A...g.X.A| 00 00 20 52 bd 30 e6 41 00 00 40 93 86 b6 e6 41 |.. R.0.A.. at ....A| 00 00 c0 a6 b5 58 d7 41 00 00 c0 df f6 52 eb 41 |.....X.A.....R.A| [131008 more] }, 131072 { 00 00 c0 89 be 99 d5 41 00 00 00 10 be fc b1 41 |.......A.......A| 00 00 c0 b7 c5 01 ea 41 00 00 40 b8 6b fe ea 41 |.......A.. at .k..A| 00 00 20 c7 91 81 e1 41 00 00 00 51 10 77 e2 41 |.. 
....A...Q.w.A| 00 00 60 7d 66 22 e7 41 00 00 c0 e4 8b a8 ee 41 |..`}f".A.......A| [131008 more] }, }, }, }, }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 12 13:01:02 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 12 Sep 2013 13:01:02 -0000 Subject: [Varnish] #1341: Assert error in VBO_waitlen(), cache/cache_busyobj.c In-Reply-To: <046.744aeb614ae2d3396c09a0492d773d2d@varnish-cache.org> References: <046.744aeb614ae2d3396c09a0492d773d2d@varnish-cache.org> Message-ID: <061.e82811bdc98a189bcc9ec2fe6d757fca@varnish-cache.org> #1341: Assert error in VBO_waitlen(), cache/cache_busyobj.c ----------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by lkarsten): The objects being requested are 5MB in size. Tested with different malloc storage sizes. Asserts on 20MB, 50MB, seems to work on bigger cache sizes. Having a little trouble of reproducing this after the initial 3-5 runs. Siege gets stuck and eats all the cpu it can get. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 12 14:27:04 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 12 Sep 2013 14:27:04 -0000 Subject: [Varnish] #1342: Assert error in hsh_rush(), cache/cache_hash.c line 534: Message-ID: <046.1a2248d39f503a7625f8419094025f1a@varnish-cache.org> #1342: Assert error in hsh_rush(), cache/cache_hash.c line 534: ----------------------+------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+------------------- I have seen an assert in current master (53e6f23) This happened once, I've not been able to reproduce it after ~6 subsequent runs. 
30 concurrent siege workers. (pulls ~170MB/s, 2x 1GE bonded) startup args: {{{ /opt/varnish/sbin/varnishd -T localhost:6082 -a :6081 -f /opt/varnish/etc/testsuite.vcl -smalloc,50M -p thread_pool_max=3000 -p thread_pool_min=1000 -P /opt/varnish/var/varnishd.pid }}} vcl: {{{ backend default { .host = "127.0.0.1"; .port = "80"; } sub vcl_backend_response { set beresp.do_stream = false; }## end }}} {{{ Last panic at: Thu, 12 Sep 2013 14:00:30 GMT Assert error in hsh_rush(), cache/cache_hash.c line 534: Condition((req->wrk) == 0) not true. thread = (cache-worker) ident = Linux,3.2.0-51-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42f327: ObjIterEnd+637 0x423081: VGZ_Destroy+971 0x4244c1: HSH_DerefObjCore+b1 0x416c95: VBO_DerefBusyObj+165 0x41f02e: EXP_Init+a1e 0x431763: Pool_Work_Thread+c3 0x444598: WRK_SumStat+108 0x7fbd4a110e9a: _end+7fbd49a8c702 0x7fbd49e3dccd: _end+7fbd497b9535 busyobj = 0x7fbd180008e0 { ws = 0x7fbd18000980 { id = "bo", {s,f,r,e} = {0x7fbd18002890,+32792,(nil),+57424}, }, is_gunzip bodystatus = 4 (length), }, ws = 0x7fbd18000b10 { id = "obj", {s,f,r,e} = {0x7fbc980115b0,+256,(nil),+264}, }, } }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Sep 13 11:33:59 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 13 Sep 2013 11:33:59 -0000 Subject: [Varnish] #1343: Storage file handling fails if file exists Message-ID: <046.7e806953bab79cec3b63fad3eb7638f4@varnish-cache.org> #1343: Storage file handling fails if file exists ----------------------+------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: minor | Keywords: ----------------------+------------------- When starting Varnish with -sfile,/tmp/file,SIZE , and the SIZE has changed since last run, the error message output when the file exists is confusing. 
Expected result (for me): File is truncated to new size, Varnishd starts as usual. {{{ root at fryer1:~# /opt/varnish/sbin/varnishd -T localhost:6082 -a :6081 -f /opt/varnish/etc/testsuite.vcl -s file,/tmp/frosk,350G -d Platform: Linux,3.2.0-51-generic,x86_64,-sfile,-smalloc,-hcritbit 200 273 ----------------------------- Varnish Cache CLI 1.0 ----------------------------- Linux,3.2.0-51-generic,x86_64,-sfile,-smalloc,-hcritbit varnish-trunk revision 09548e6 Type 'help' for command list. Type 'quit' to close CLI session. Type 'start' to launch worker process. quit 500 22 Closing CLI connection root at fryer1:~# ls -l /tmp total 46128 drwx------ 2 root root 4096 Sep 9 14:17 aheaQ147 drwx------ 2 root root 4096 Sep 9 15:24 am4tAtn17I drwx------ 2 root root 4096 Sep 9 14:17 am4tWn0ufF -rw------- 1 root root 375809638400 Sep 13 13:28 frosk [..] root at fryer1:~# /opt/varnish/sbin/varnishd -T localhost:6082 -a :6081 -f /opt/varnish/etc/testsuite.vcl -s file,/tmp/frosk,10G -d WARNING: (-sfile) file size reduced to 344769626112 (80% of available disk space) Platform: Linux,3.2.0-51-generic,x86_64,-sfile,-smalloc,-hcritbit 200 273 ----------------------------- Varnish Cache CLI 1.0 ----------------------------- Linux,3.2.0-51-generic,x86_64,-sfile,-smalloc,-hcritbit varnish-trunk revision 09548e6 Type 'help' for command list. Type 'quit' to close CLI session. Type 'start' to launch worker process. 
quit 500 22 Closing CLI connection root at fryer1:~# root at fryer1:~# ls -l /tmp/frosk -rw------- 1 root root 344769626112 Sep 13 13:28 /tmp/frosk }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Sep 13 11:34:20 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 13 Sep 2013 11:34:20 -0000 Subject: [Varnish] #1343: Storage file handling is confused if file exists (was: Storage file handling fails if file exists) In-Reply-To: <046.7e806953bab79cec3b63fad3eb7638f4@varnish-cache.org> References: <046.7e806953bab79cec3b63fad3eb7638f4@varnish-cache.org> Message-ID: <061.4cbb5565f8a5c18876269e4fcb281920@varnish-cache.org> #1343: Storage file handling is confused if file exists ----------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: minor | Resolution: Keywords: | ----------------------+-------------------- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Sep 13 17:36:14 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 13 Sep 2013 17:36:14 -0000 Subject: [Varnish] #1344: Varnish Panic Messages: http_SetH(), Tcheck() in version 2.1.5 Message-ID: <046.59f950dde6ddcda1c09ba3732a0f206b@varnish-cache.org> #1344: Varnish Panic Messages: http_SetH(), Tcheck() in version 2.1.5 ----------------------+-------------------- Reporter: zkisling | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 2.1.5 | Severity: normal Keywords: | ----------------------+-------------------- Our Varnish expert recently left our organization and we are seeking some advice on Varnish restarts involving Panic messages found on Varnish version 2.1.5. We have 3 Varnish servers across our stack all running the same version. We have discovered Varnish restarts that occur about 10 times an hour due to Panic messages. 
These panic messages are: Panic message: Assert error in Tcheck(), cache.h line 747: Panic message: Assert error in http_SetH(), cache_http.c line 656: After research we see that these errors have been fixed in Varnish 3.*.* With our current setup we are looking for options to fix this without upgrading to Varnish 3.*.*. varnishstat -1 output: {{{ client_conn 420639 180.92 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 420638 180.92 Client requests received cache_hit 636898 273.93 Cache hits cache_hitpass 654 0.28 Cache hits for pass cache_miss 61884 26.62 Cache misses backend_conn 85515 36.78 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 61913 26.63 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 61914 26.63 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused fetch_head 36 0.02 Fetch head fetch_length 94274 40.55 Fetch with Length fetch_chunked 49693 21.37 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 3174 1.37 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 103 . N struct sess_mem n_sess 18446744073709551614 . N struct sess n_object 60059 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 60149 . N struct objectcore n_objecthead 59108 . N struct objecthead n_smf 0 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 0 . N large free smf n_vbe_conn 2 . N struct vbe_conn n_wrk 1200 . 
N worker threads n_wrk_create 1200 0.52 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 71261 30.65 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 141 0.06 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 14 . N backends n_expired 1813 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 180990 . N LRU moved objects n_deathrow 0 . N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 607488 261.29 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 420639 180.92 Total Sessions s_req 420639 180.92 Total Requests s_pipe 232 0.10 Total pipe s_pass 85306 36.69 Total pass s_fetch 147178 63.30 Total fetch s_hdrbytes 161409628 69423.50 Total header bytes s_bodybytes 21518168795 9255126.36 Total body bytes sess_closed 420639 180.92 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 0 0.00 Session Linger sess_herd 0 0.00 Session herd shm_records 29693679 12771.47 SHM records shm_writes 2373635 1020.92 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 4607 1.98 SHM MTX contention shm_cycles 13 0.01 SHM cycles through buffer sm_nreq 0 0.00 allocator requests sm_nobj 0 . outstanding allocations sm_balloc 0 . bytes allocated sm_bfree 0 . bytes free sma_nreq 227407 97.81 SMA allocator requests sma_nobj 140508 . SMA outstanding allocations sma_nbytes 6450545316 . SMA outstanding bytes sma_balloc 9858678676 . SMA bytes allocated sma_bfree 3408133360 . SMA bytes free sms_nreq 1048 0.45 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 9542 . SMS bytes allocated sms_bfree 9542 . 
SMS bytes freed backend_req 147196 63.31 Backend requests made n_vcl 11 0.00 N vcl total n_vcl_avail 11 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 877 . N total active purges n_purge_add 1122 0.48 N new purges added n_purge_retire 245 0.11 N old purges deleted n_purge_obj_test 160768 69.15 N objects tested n_purge_re_test 3407768 1465.71 N regexps tested against n_purge_dups 973 0.42 N duplicate purges removed hcb_nolock 699433 300.83 HCB Lookups without lock hcb_lock 59334 25.52 HCB Lookups with lock hcb_insert 59333 25.52 HCB Inserts esi_parse 23278 10.01 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 2325 1.00 Client uptime backend_retry 6 0.00 Backend conn. retry dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) }}} And log output: {{{ Panic message: Assert error in http_SetH(), cache_http.c line 656: Condition((fm) != 0) not true. thread = (cache-worker) ident = Linux,2.6.18-308.24.1.el5,x86_64,-smalloc,-hcritbit,epoll... Panic message: Assert error in Tcheck(), cache.h line 747: Condition((t.b) != 0) not true. thread = (cache-worker) ident = Linux,2.6.18-308.24.1.el5,x86_64,-smalloc,-hcritbit,epoll... 
}}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Sep 14 13:14:31 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 14 Sep 2013 13:14:31 -0000 Subject: [Varnish] #1342: Assert error in hsh_rush(), cache/cache_hash.c line 534: In-Reply-To: <046.1a2248d39f503a7625f8419094025f1a@varnish-cache.org> References: <046.1a2248d39f503a7625f8419094025f1a@varnish-cache.org> Message-ID: <061.830510693e0abcfa84fe8f4abf7e0d39@varnish-cache.org> #1342: Assert error in hsh_rush(), cache/cache_hash.c line 534: ----------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by lkarsten): Reproduced in the nightly fryer run last night. This happened 184 seconds after load was applied. Same VCL as in initial report. 2013-09-14 00:01:24,804 - fry - INFO - run[fryer1]: /opt/varnish/sbin/varnishd -T localhost:6082 -a :6081 -f /opt/varnish/etc/testsuite.vcl -smalloc,0.8G -p thread_pool_max=3000 -p default_ttl=86400 -p thread_pool_min=1000 -P /opt/varnish/var/varnishd.pid 2013-09-14 00:01:27,776 - fry - INFO - run[fryer2]: wget -q -O set_imagehosting1.txt http://fryer1.varnish- software.com/cacheabledata/urlsets/set_imagehosting1.txt 2013-09-14 00:01:28,214 - fry - INFO - run[fryer2]: ulimit -n 65548; siege -t 1H -c 20 --file=set_imagehosting1.txt {{{ 2013-09-14 00:04:33,476 - fry - WARNING - panic.show output: Last panic at: Fri, 13 Sep 2013 22:04:32 GMT Assert error in hsh_rush(), cache/cache_hash.c line 534: Condition((req->wrk) == 0) not true. 
thread = (cache-worker) ident = Linux,3.2.0-51-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtr ace: 0x42f2f7: ObjIterEnd+637 0x423051: VGZ_Destroy+971 0x423d66: HSH_Unbusy+f6 0x41f60c: EXP_Init+102c 0x431733: Pool_Work_Thread+c3 0x444568: WRK_SumStat+108 0x7fcc8731be9a: _end+7fcc86c97702 0x7fcc87048ccd: _end+7fcc869c4535 busyobj = 0x7fcc4c080a60 { ws = 0x7fcc4c080b00 { id = "bo", {s,f,r,e} = {0x7fcc4c082a10,+32792,(nil),+57424}, }, is_gunzip bodystatus = 4 (length), }, ws = 0x7fcc4c080c90 { id = "obj", {s,f,r,e} = {0x7fcbec2a0590,+216,(nil),+224}, }, } }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 16 09:14:33 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 16 Sep 2013 09:14:33 -0000 Subject: [Varnish] #1342: Assert error in hsh_rush(), cache/cache_hash.c line 534: In-Reply-To: <046.1a2248d39f503a7625f8419094025f1a@varnish-cache.org> References: <046.1a2248d39f503a7625f8419094025f1a@varnish-cache.org> Message-ID: <061.ccdd9d8919960b4547f12601afeaf9be@varnish-cache.org> #1342: Assert error in hsh_rush(), cache/cache_hash.c line 534: ----------------------+---------------------------------------- Reporter: lkarsten | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+---------------------------------------- Changes (by Poul-Henning Kamp ): * owner: => Poul-Henning Kamp * status: new => closed * resolution: => fixed Comment: In [2c1606779d86b5afee891465031f4f159b3e29d6]: {{{ #!CommitTicketReference repository="" revision="2c1606779d86b5afee891465031f4f159b3e29d6" Clear the req->wrk while holding the oh mutex, to close a race against the busy-list. 
Fixes #1342 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 16 10:19:30 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 16 Sep 2013 10:19:30 -0000 Subject: [Varnish] #1333: ESI include parsing HTTPS urls as relative not absolute In-Reply-To: <045.6b8080c61995515ca77fff449e5494d0@varnish-cache.org> References: <045.6b8080c61995515ca77fff449e5494d0@varnish-cache.org> Message-ID: <060.0d14f7496974f0b3251d68cb86260ab3@varnish-cache.org> #1333: ESI include parsing HTTPS urls as relative not absolute ----------------------+---------------------- Reporter: rabbitt | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: unknown Severity: normal | Resolution: fixed Keywords: | ----------------------+---------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: In [69ad7389fe3aee4481aa4f291461bc03e1bc172a]: {{{ #!CommitTicketReference repository="" revision="69ad7389fe3aee4481aa4f291461bc03e1bc172a" By default ignore an with src="https://..." Feature +esi_ignore_https treats it as http://... instead. Default is "safe" in order to not expose any data by accident. 
Fixes #1333 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 16 10:23:08 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 16 Sep 2013 10:23:08 -0000 Subject: [Varnish] #1139: default_keep makes objects stay around too long In-Reply-To: <044.8c1a9c748ce289914d1adcd0203efe6a@varnish-cache.org> References: <044.8c1a9c748ce289914d1adcd0203efe6a@varnish-cache.org> Message-ID: <059.7c3d9949f1a3cc47000ae10600670dc6@varnish-cache.org> #1139: default_keep makes objects stay around too long ----------------------+-------------------- Reporter: martin | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by martin): From discussion during bugwash 2013-09-16: * Remove the -1 being treated as default logic from the TTL routines * Set up new objects with the default values as part of the initial TTL set up This should give us the least surprising behavior, where change of default values will never affect existing objects. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 16 10:24:12 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 16 Sep 2013 10:24:12 -0000 Subject: [Varnish] #1343: Storage file handling is confused if file exists In-Reply-To: <046.7e806953bab79cec3b63fad3eb7638f4@varnish-cache.org> References: <046.7e806953bab79cec3b63fad3eb7638f4@varnish-cache.org> Message-ID: <061.ff48cbb47d11708a43e315024e7db834@varnish-cache.org> #1343: Storage file handling is confused if file exists ----------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: minor | Resolution: Keywords: | ----------------------+-------------------- Comment (by tfheen): Based on IRC conversation: - If size is specified on the command line, the file should be adjusted to match. - If the storage is persistent and the size does not match, error out (with a sensible message) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 16 10:27:22 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 16 Sep 2013 10:27:22 -0000 Subject: [Varnish] #1344: Varnish Panic Messages: http_SetH(), Tcheck() in version 2.1.5 In-Reply-To: <046.59f950dde6ddcda1c09ba3732a0f206b@varnish-cache.org> References: <046.59f950dde6ddcda1c09ba3732a0f206b@varnish-cache.org> Message-ID: <061.a35e41b18767fb9e5c226c18b154da2f@varnish-cache.org> #1344: Varnish Panic Messages: http_SetH(), Tcheck() in version 2.1.5 ----------------------+------------------------ Reporter: zkisling | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 2.1.5 Severity: normal | Resolution: duplicate Keywords: | ----------------------+------------------------ Changes (by scoof): * status: new => closed * resolution: => duplicate Comment: Duplicate of #1031. 
Please only use the bug tracker for current bugs. This is not a support system. This will not be fixed in 2.1. You're free to backport the patch(es) to 3.0, and you may be able to get some help doing so on the mailing lists. The root cause (according to #1031) seems to be an overflow of the obj workspace; you may be able to get some help on the mailing lists to avoid that if you post your VCL. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 16 10:29:35 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 16 Sep 2013 10:29:35 -0000 Subject: [Varnish] #1340: Multiple backend requests per miss In-Reply-To: <046.428f3fe760f6bc854cf7d82980f4a006@varnish-cache.org> References: <046.428f3fe760f6bc854cf7d82980f4a006@varnish-cache.org> Message-ID: <061.7ed5f89f363e9d5e32f1e0bd51a3ca68@varnish-cache.org> #1340: Multiple backend requests per miss ----------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by lkarsten): As discussed on IRC, I'll retest this with streaming/conditionals disabled.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 18 09:29:36 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 18 Sep 2013 09:29:36 -0000 Subject: [Varnish] #1340: Multiple backend requests per miss In-Reply-To: <046.428f3fe760f6bc854cf7d82980f4a006@varnish-cache.org> References: <046.428f3fe760f6bc854cf7d82980f4a006@varnish-cache.org> Message-ID: <061.aef5166e0b7093dea12069850da2e4d2@varnish-cache.org> #1340: Multiple backend requests per miss ----------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by lkarsten): Retested and confirmed in latest master with conditionals and streaming disabled. {{{ 127.0.0.1 - - [18/Sep/2013:11:27:17 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [18/Sep/2013:11:27:22 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [18/Sep/2013:11:27:22 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [18/Sep/2013:11:27:22 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [18/Sep/2013:11:27:22 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [18/Sep/2013:11:27:22 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [18/Sep/2013:11:27:22 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [18/Sep/2013:11:27:22 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" 
"JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [18/Sep/2013:11:27:22 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [18/Sep/2013:11:27:22 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [18/Sep/2013:11:27:22 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [18/Sep/2013:11:27:22 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [18/Sep/2013:11:27:22 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [18/Sep/2013:11:27:22 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [18/Sep/2013:11:27:22 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" 127.0.0.1 - - [18/Sep/2013:11:27:22 +0200] "GET /cacheabledata/set_tiny1/index.txt HTTP/1.1" 200 306 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.70)" }}} varnishd (varnish-trunk revision 731d208) root at fryer2:~# siege -c 15 http://fryer1:6081/cacheabledata/set_tiny1/index.txt root at fryer1:/# /opt/varnish/sbin/varnishd -T localhost:6082 -a :6081 -f /opt/varnish/etc/testsuite.vcl -smalloc,8G -p thread_pool_max=3000 -p default_ttl=5 -p thread_pool_min=1000 {{{ root at fryer1:~# cat /opt/varnish/etc/testsuite.vcl backend default { .host = "127.0.0.1"; .port = "80"; } sub vcl_recv { unset req.http.if-modified-since; unset req.http.if-none-match; } sub vcl_backend_response { set beresp.do_stream = false; } root at fryer1:~# }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 18 10:05:46 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 18 Sep 2013 10:05:46 -0000 Subject: 
[Varnish] #1340: Multiple backend requests per miss In-Reply-To: <046.428f3fe760f6bc854cf7d82980f4a006@varnish-cache.org> References: <046.428f3fe760f6bc854cf7d82980f4a006@varnish-cache.org> Message-ID: <061.eed05338a849969823aefe72410e75f9@varnish-cache.org> #1340: Multiple backend requests per miss ----------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by lkarsten): I took a varnishlog dump of a subsequent run. root at fryer1:~# varnishlog -r foo.log | LC_ALL=c grep MISS - Debug "XXXX MISS" - VCL_call MISS - VCL_call MISS root at fryer1:~# Entire varnishlog file (~230MB, 40MB gzip-9ed) is here (temporarily): http://hyse.org/1340/foo.log.gz -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 18 11:00:22 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 18 Sep 2013 11:00:22 -0000 Subject: [Varnish] #1341: Assert error in VBO_waitlen(), cache/cache_busyobj.c In-Reply-To: <046.744aeb614ae2d3396c09a0492d773d2d@varnish-cache.org> References: <046.744aeb614ae2d3396c09a0492d773d2d@varnish-cache.org> Message-ID: <061.35c1807c6761dc597c8a82ade59a781d@varnish-cache.org> #1341: Assert error in VBO_waitlen(), cache/cache_busyobj.c ----------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by lkarsten): I can not reproduce this any more with streaming disabled. 
(on 3f0f848) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 18 11:17:32 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 18 Sep 2013 11:17:32 -0000 Subject: [Varnish] #1341: Assert error in VBO_waitlen(), cache/cache_busyobj.c In-Reply-To: <046.744aeb614ae2d3396c09a0492d773d2d@varnish-cache.org> References: <046.744aeb614ae2d3396c09a0492d773d2d@varnish-cache.org> Message-ID: <061.0de471d50e750f108badc04b4363e9f9@varnish-cache.org> #1341: Assert error in VBO_waitlen(), cache/cache_busyobj.c ----------------------+-------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by lkarsten): As a data point: with streaming on (not disabled), very strange things happen. I don't get the Varnish assert, but siege appears to get stuck and eats all the CPU it can get (as before). This is probably related to bad timeout handling in siege (best guess).
Here are the last logged entries before it gets stuck: {{{ * << Request >> 92 - Begin req 91 - ReqMethod GET - ReqURL /cacheabledata/set_medialibrary1/file106.bin - ReqProtocol HTTP/1.1 - ReqHeader Host: fryer1.varnish-software.com:6081 - ReqHeader Accept: */* - ReqHeader Accept-Encoding: gzip - ReqHeader User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.70) - ReqHeader Connection: close - ReqStart 194.31.39.161 38079 - VCL_call RECV - VCL_return hash - VCL_call HASH - VCL_return lookup - Debug "XXXX MISS" - VCL_call MISS - VCL_return fetch - Link bereq 93 - VCL_call DELIVER - VCL_return deliver - Debug "RES_MODE 8" - RespProtocol HTTP/1.1 - RespStatus 200 - RespResponse OK - RespHeader Server: Apache/2.2.22 (Ubuntu) - RespHeader Last-Modified: Wed, 18 Sep 2013 10:27:12 GMT - RespHeader ETag: "82468f-7c5bf0-4e6a5e163aefb" - RespHeader Cache-Control: max-age=31104000 - RespHeader Expires: Sat, 13 Sep 2014 11:14:55 GMT - RespHeader Content-Type: application/octet-stream - RespHeader Date: Wed, 18 Sep 2013 11:14:55 GMT - RespHeader X-Varnish: 92 - RespHeader Age: 0 - RespHeader Via: 1.1 varnish - RespHeader Transfer-Encoding: chunked - RespHeader Connection: close - RespHeader Accept-Ranges: bytes - Debug "Hit idle send timeout, wrote = 4255670/7281991; retrying" - Debug "Write error, retval = -1, len = 3026321, errno = Bad address" - Debug "XXX REF 2" - ReqEnd 1379502895.270712376 1379502895.270682573 -0.000737429 0.000707626 -0.000737429 - End * << Session >> 91 - Begin sess - SessOpen 194.31.39.161 38079 :6081 194.31.39.160 6081 1379502895.270683 12 - Link req 92 - SessClose REM_CLOSE 0.868 1 0 0 1 394 8150000 - End }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 18 11:29:19 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 18 Sep 2013 11:29:19 -0000 Subject: [Varnish] #1345: varnishlog -d does not exit after dump is completed Message-ID: <046.42c68e867a537b9306aafc62a4483ad0@varnish-cache.org> 
#1345: varnishlog -d does not exit after dump is completed ------------------------+--------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishlog | Version: unknown Severity: normal | Keywords: ------------------------+--------------------- varnishlog in master (1ca91d9) invoked with "-d" does not exit after outputting the entire log. Requests arriving after that are logged as usual. Expected behaviour: output the entire log segment and then exit (as in 3.0). This is very useful when taking diagnostic dumps of the log. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 19 14:59:44 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 19 Sep 2013 14:59:44 -0000 Subject: [Varnish] #1346: 400 Error In Varnish Message-ID: <047.fff92aea95de6991a62db12fef97bbe3@varnish-cache.org> #1346: 400 Error In Varnish -----------------------+---------------------- Reporter: abhilashn | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: | -----------------------+---------------------- Hi Team, We were getting 400 errors for our application URL due to Varnish. It was resolved by restarting Varnish. Can you help us understand what can cause 400 errors on the server? We are running varnish-3.0.2-1.el5. Thank you.
-- With Best Regards, Abhilash V Nair -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 19 15:53:39 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 19 Sep 2013 15:53:39 -0000 Subject: [Varnish] #979: sp->wrk->h_content_length is not cleared when do_stream and restart in vcl_deliver In-Reply-To: <044.5d092cfd81b0ae7cedd771400862058b@varnish-cache.org> References: <044.5d092cfd81b0ae7cedd771400862058b@varnish-cache.org> Message-ID: <059.618c1b0647dff0d0719dd588487eec37@varnish-cache.org> #979: sp->wrk->h_content_length is not cleared when do_stream and restart in vcl_deliver ----------------------+----------------------- Reporter: martin | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Changes (by gquintard): * status: closed => reopened * resolution: fixed => Comment: Sorry to dig out old corpses, but it seems this test only works because the vfp is not set in cnt_fetchbody. If the server replies with gzip, we get a segfault.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Sep 22 18:07:36 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 22 Sep 2013 18:07:36 -0000 Subject: [Varnish] #1346: 400 Error In Varnish In-Reply-To: <047.fff92aea95de6991a62db12fef97bbe3@varnish-cache.org> References: <047.fff92aea95de6991a62db12fef97bbe3@varnish-cache.org> Message-ID: <062.d3e59ef7cc6e3bd7ff2286fda605a9a8@varnish-cache.org> #1346: 400 Error In Varnish -----------------------+---------------------- Reporter: abhilashn | Owner: Type: defect | Status: closed Priority: lowest | Milestone: Component: varnishd | Version: 3.0.2 Severity: trivial | Resolution: invalid Keywords: | -----------------------+---------------------- Changes (by perbu): * priority: high => lowest * status: new => closed * resolution: => invalid * severity: normal => trivial Comment: This does not seem to be a bug. Please ask on the webforums or the varnish-misc list. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 23 10:48:17 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Sep 2013 10:48:17 -0000 Subject: [Varnish] #1347: varnishtest: cache_hitpass counter does not count hit_for_pass Message-ID: <040.e7cb1743e83e491b15d7ace8e8d4ffed@varnish-cache.org> #1347: varnishtest: cache_hitpass counter does not count hit_for_pass -------------------+------------------------- Reporter: lo | Type: defect Status: new | Priority: normal Milestone: | Component: varnishtest Version: 3.0.4 | Severity: normal Keywords: | -------------------+------------------------- {{{ varnishtest "Hit for pass counter" server s1 { rxreq txresp } -start varnish v1 -vcl+backend { sub vcl_fetch { return(hit_for_pass); } } -start client c1 { txreq -url "/" rxresp } -run varnish v1 -expect cache_hitpass == 1 }}} Results in: {{{ ---- v1 1.4 Not true: cache_hitpass (0) == 1 (1) }}} This is Varnish compiled from git tag
varnish-3.0.4 running on Debian Wheezy. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 23 12:32:52 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Sep 2013 12:32:52 -0000 Subject: [Varnish] #1348: varnishadm still loops on piped input Message-ID: <041.61299e9631a40ea2849cd0fac22e72d1@varnish-cache.org> #1348: varnishadm still loops on piped input -----------------------------+-------------------- Reporter: ghp | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.4 | Severity: normal Keywords: varnishadm pipe | -----------------------------+-------------------- varnishadm still hangs (with high CPU usage) when commands are piped to it. @ echo quit | varnishadm 200 ----------------------------- Varnish Cache CLI 1.0 ----------------------------- Linux,2.6.32-279.el6.x86_64,x86_64,-smalloc,-smalloc,-hcritbit varnish-3.0.4 revision 9f83e8f Type 'help' for command list. Type 'quit' to close CLI session. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 24 14:53:18 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 24 Sep 2013 14:53:18 -0000 Subject: [Varnish] #1349: No exact match on varnishadm backend.set_health Message-ID: <046.8a05e196ff21820ddf1411b60bf8db31@varnish-cache.org> #1349: No exact match on varnishadm backend.set_health --------------------------------------------+---------------------- Reporter: tmagnien | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.3 | Severity: normal Keywords: exact match backend.set_health | --------------------------------------------+---------------------- Hi, When using varnishadm backend.set_health, the matcher does only a partial (prefix) match and can catch several backends whose names begin the same way, e.g. backend5 / backend500.
The faulty line is in cache_backend_cfg.c, line 387: {{{ if (name_b != NULL && strncmp(b->vcl_name, name_b, name_l) != 0) }}} The match is done on the length of the matcher, not on the length of the actual backend name. Tested on 3.0.3; did not check in trunk. Regards, Thierry -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 24 17:20:57 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 24 Sep 2013 17:20:57 -0000 Subject: [Varnish] #1350: IMS+Vary asserts Message-ID: <043.94ca1de2e0fa693c674e71707fedece7@varnish-cache.org> #1350: IMS+Vary asserts ----------------------+------------------- Reporter: scoof | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+------------------- varnishd (varnish-trunk revision 00090ea) See attached test case {{{ Last panic at: Tue, 24 Sep 2013 16:53:45 GMT Assert error in http_GetHdr(), cache/cache_http.c line 264: Condition(l == strlen(hdr + 1)) not true.
thread = (cache-worker) ident = Linux,3.9-1-686-pae,i686,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x80797ce: ObjIterEnd+6ee 0x80700a9: http_GetHdr+109 0x80851a4: VRY_Match+f4 0x806e04c: HSH_Lookup+39c 0x807d9de: CNT_Request+54e 0x8073c5b: HTTP1_Session+1fb 0x8081373: RFC2616_Do_Cond+173 0x8082aa9: SES_pool_accept_task+259 0x807c0e5: Pool_Work_Thread+e5 0x8092575: WRK_SumStat+125 req = 0x90c7748 { sp = 0xaa405108, vxid = 1073774598, step = R_STP_LOOKUP, req_body = R_BODY_NONE, restarts = 0, esi_level = 0 sp = 0xaa405108 { fd = 16, vxid = 32773, client = 127.0.0.1 56213, step = S_STP_WORKING, }, worker = 0xa9586160 { ws = 0xa9586324 { id = "wrk", {s,f,r,e} = {0xa9585948,0xa9585948,(nil),+2048}, }, VCL::method = 0x0, VCL::return = lookup, }, ws = 0x90c785c { id = "req", {s,f,r,e} = {0x90c8828,+124,+20256,+20256}, }, http[req] = { ws = 0x90c785c[req] "GET", "/", "HTTP/1.1", "User-Agent: curl/7.31.0", "Host: localhost:6081", "Accept: */*", "X-Forwarded-For: 127.0.0.1", }, vcl = { srcname = { "input", "Default", }, }, }, }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 26 10:48:02 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 26 Sep 2013 10:48:02 -0000 Subject: [Varnish] #1351: varnishlog segfault in vtx_scan_linktag Message-ID: <046.027c3895b7f89072c5e8e2654f1a32d2@varnish-cache.org> #1351: varnishlog segfault in vtx_scan_linktag ------------------------+------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishlog | Version: trunk Severity: normal | Keywords: ------------------------+------------------- varnishlog segmentation faults in current master. (00090ea) Produce with: ``varnishlog > /dev/null &``, crashes after 3-5 seconds. varnishd is running with reduced shmlog size (-l 1M), and ~10kreq/s against a single small object. 
12:37:55 [ INFO] run[fryer1]: /opt/varnish/sbin/varnishd -T localhost:6082 -a :6081 -f /opt/varnish/etc/testsuite.vcl -smalloc,50M -l1M -p default_ttl=86400 -p thread_pool_min=100 12:37:57 [DEBUG] scp[fryer2]: ['scp', 'templates/siegerc.tmpl', 'root at fryer2.varnish-software.com:.siegerc'] 12:37:57 [ INFO] run[fryer2]: ulimit -n 65548; siege -t 60m -c 3 http://fryer1.varnish-software.com:6081/cacheabledata/set_hot1/index.html {{{ Program terminated with signal 11, Segmentation fault. #0 0x00007f40e9e8cc3a in vtx_scan_linktag (vslq=0x2305430, vtx=0x2305720, ptr=0x23273a0) at vsl_dispatch.c:596 596 assert(VSL_TAG(ptr) == SLT_Link); (gdb) bt full #0 0x00007f40e9e8cc3a in vtx_scan_linktag (vslq=0x2305430, vtx=0x2305720, ptr=0x23273a0) at vsl_dispatch.c:596 i = 0 c_type = 32576 c_vxid = 36860832 c_vtx = 0x23057c0 __func__ = "vtx_scan_linktag" #1 0x00007f40e9e8cf7e in vtx_scan (vslq=0x2305430, vtx=0x2305720) at vsl_dispatch.c:662 ptr = 0x23273a0 tag = SLT_Link ret = 0x0 __func__ = "vtx_scan" #2 0x00007f40e9e8e3db in VSLQ_Dispatch (vslq=0x2305430, func=0x401790 , priv=0x0) at vsl_dispatch.c:954 c = 0x2305328 i = 1 tag = SLT_Link len = 5 vxid = 65537 vtx = 0x2305720 now = 4.9406564584124654e-317 __func__ = "VSLQ_Dispatch" #3 0x00000000004028d8 in VUT_Main (func=0x401790 , priv=0x0) at ../../lib/libvarnishtools/vut.c:297 c = 0x0 i = 0 __func__ = "VUT_Main" #4 0x0000000000401ba0 in main (argc=1, argv=0x7fff55ca7918) at varnishlog.c:83 opt = -1 '\377' (gdb) }}} vcl is standard but with streaming+conditionals disabled. {{{ vcl 4.0; # Autogenerated by varnish-fry. 
backend default { .host = "127.0.0.1"; .port = "80"; } sub vcl_recv { unset req.http.if-modified-since; unset req.http.if-none- match; } sub vcl_backend_response { set beresp.do_stream = false; } }}} A rerun to confirm (this time to stdout, not /dev/null) made it crash with this (possibly unrelated) backtrace: {{{ (gdb) bt full #0 0x00007fb44718b425 in raise () from /lib/x86_64-linux-gnu/libc.so.6 No symbol table info available. #1 0x00007fb44718eb8b in abort () from /lib/x86_64-linux-gnu/libc.so.6 No symbol table info available. #2 0x00007fb447c3c41d in VAS_Fail_default ( func=0x7fb447c52704 "vslc_vtx_next", file=0x7fb447c51e7e "vsl_dispatch.c", line=212, cond=0x7fb447c51f48 "c->offset <= c->vtx->len", err=0, kind=VAS_ASSERT) at ../libvarnish/vas.c:67 No locals. #3 0x00007fb447c47c71 in vslc_vtx_next (cursor=0xd3bca0) at vsl_dispatch.c:212 c = 0xd3bc98 chunk = 0xd3bce0 __func__ = "vslc_vtx_next" #4 0x00007fb447c463dc in VSL_Next (cursor=0xd3bca0) at vsl_cursor.c:464 tbl = 0x7fb447e578c0 __func__ = "VSL_Next" #5 0x00007fb447c4d234 in VSL_PrintTransactions (vsl=0xd3b010, pt=0x7fff54bdb760, fo=0x7fb44750e260) at vsl.c:346 t = 0x7fff54bdb780 i = 0 delim = 1 verbose = 0 __func__ = "VSL_PrintTransactions" #6 0x00007fb447c4a754 in vslq_callback (vslq=0xd3b430, vtx=0x0, func=0x401790 , priv=0x0) at vsl_dispatch.c:754 n = 1 vtxs = 0x7fff54bdb7b0 trans = 0x7fff54bdb780 ptrans = 0x7fff54bdb760 i = 1 j = 1 __func__ = "vslq_callback" #7 0x00007fb447c4b448 in VSLQ_Dispatch (vslq=0xd3b430, func=0x401790 , priv=0x0) at vsl_dispatch.c:957 c = 0xd3b328 i = 1 tag = SLT__Batch len = 288 vxid = 4309726 ---Type to continue, or q to quit--- vtx = 0xd3bc00 now = 4.9406564584124654e-317 __func__ = "VSLQ_Dispatch" #8 0x00000000004028d8 in VUT_Main (func=0x401790 , priv=0x0) at ../../lib/libvarnishtools/vut.c:297 c = 0x0 i = 0 __func__ = "VUT_Main" #9 0x0000000000401ba0 in main (argc=1, argv=0x7fff54bdba18) at varnishlog.c:83 opt = -1 '\377' }}} -- Ticket URL: Varnish The Varnish HTTP 
Accelerator From varnish-bugs at varnish-cache.org Thu Sep 26 20:54:37 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 26 Sep 2013 20:54:37 -0000 Subject: [Varnish] #1347: varnishtest: cache_hitpass counter does not count hit_for_pass In-Reply-To: <040.e7cb1743e83e491b15d7ace8e8d4ffed@varnish-cache.org> References: <040.e7cb1743e83e491b15d7ace8e8d4ffed@varnish-cache.org> Message-ID: <055.868fa6c88cf54e3da7b43779419c6cfc@varnish-cache.org> #1347: varnishtest: cache_hitpass counter does not count hit_for_pass -------------------------+-------------------- Reporter: lo | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishtest | Version: 3.0.4 Severity: normal | Resolution: Keywords: | -------------------------+-------------------- Comment (by lo): Ok, somebody please close this ticket. The counters cache_hit, cache_hitpass and cache_miss are incremented during lookup. See [source:bin/varnishtest/tests/r00425.vtc] for an example of how it works. -- Ticket URL: Varnish The Varnish HTTP Accelerator