From varnish-bugs at varnish-cache.org Fri Jun 1 06:38:08 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 01 Jun 2012 06:38:08 -0000 Subject: [Varnish] #1143: Slackware64-Current segfault In-Reply-To: <045.bdeacc2343bf5d909edbf2b7a1de20e2@varnish-cache.org> References: <045.bdeacc2343bf5d909edbf2b7a1de20e2@varnish-cache.org> Message-ID: <054.e28228011df3c2d858321d7d21e98eb6@varnish-cache.org> #1143: Slackware64-Current segfault ---------------------+------------------------------------------------------ Reporter: nanashi | Type: defect Status: new | Priority: highest Milestone: | Component: varnishd Version: 3.0.2 | Severity: critical Keywords: | ---------------------+------------------------------------------------------ Comment(by nanashi): Rebuilt with --enable-debugging-symbols and it seems now that it's working in x86_64. Is it normal? regards -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jun 1 10:08:13 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 01 Jun 2012 10:08:13 -0000 Subject: [Varnish] #1144: Assert error when (backend name + ipv4 + ipv6) is too long Message-ID: <046.dafd3fb5cd62c5c06f9019c1688a5997@varnish-cache.org> #1144: Assert error when (backend name + ipv4 + ipv6) is too long ----------------------+----------------------------------------------------- Reporter: tmagnien | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: major Keywords: | ----------------------+----------------------------------------------------- Hi, I experienced this assert error on a varnish-3.0.2s instance : {{{ Assert error in VSM__Alloc(), vsm.c line 185: Condition(snprintf(sha->ident, sizeof sha->ident, "%s", ident) < sizeof sha->ident) not true. 
thread = (cache-main) ident = Linux,2.6.39-bpo.2-amd64,x86_64,-smalloc,-smalloc,-hcritbit,no_waiter Backtrace: 0x42fd18: /usr/sbin/varnishd() [0x42fd18] 0x452775: /usr/sbin/varnishd(VSM__Alloc+0xbc5) [0x452775] 0x434e43: /usr/sbin/varnishd(VSM_Alloc+0x43) [0x434e43] 0x4115f3: /usr/sbin/varnishd(VBE_AddBackend+0x1c3) [0x4115f3] 0x40f984: /usr/sbin/varnishd(VRT_init_dir_simple+0xe4) [0x40f984] 0x7f91b23c8dbd: ./vcl.4lmdKkVI.so(+0x1cdbd) [0x7f91b23c8dbd] 0x436ffe: /usr/sbin/varnishd() [0x436ffe] 0x7f91bcc19bde: /usr/lib/varnish/libvarnish.so(+0x7bde) [0x7f91bcc19bde] 0x7f91bcc1a0bd: /usr/lib/varnish/libvarnish.so(+0x80bd) [0x7f91bcc1a0bd] 0x7f91bcc1cf01: /usr/lib/varnish/libvarnish.so(+0xaf01) [0x7f91bcc1cf01] }}} It occurs because in VBE_AddBackend we reserve a 128 char buf to store vcl_name, ipv4_addr, ipv6_addr and port. Then we call VSM_Alloc and try to put this buf into the ident field of VSM_chunk struct, which is only 64 char long. I've looked into trunk code and it seems we could encounter the same problem, at a different place : in VSM_common_alloc : {{{ assert(strlen(ident) < sizeof(vr->chunk->ident)); }}} Regards, Thierry -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jun 1 12:36:49 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 01 Jun 2012 12:36:49 -0000 Subject: [Varnish] #1145: PRIV_CALL returns same object on different function parameters Message-ID: <045.d31ac89d2324228d6429c6c179b921a6@varnish-cache.org> #1145: PRIV_CALL returns same object on different function parameters -----------------------+---------------------------------------------------- Reporter: tobixen | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.2 | Severity: normal Keywords: PRIV_CALL | -----------------------+---------------------------------------------------- I'm on varnish 3.0.2 ... 
git rev 8015b1ab869a60cb220b5d20ef7b6d183cea64ab VCL_PRIV returns same object on different function call parameters. This causes the test suit of my vmod to break - https://github.com/tobixen /libvmod-ratelimit - and it also causes the caching in std.fileread to break. Consider this VCL code: import std; backend default { .host = "80.91.37.212"; .port = "80"; } sub vcl_deliver { set resp.http.foo = std.fileread("/tmp/" + req.http.host); } And then I did this: $ grep aftenposten /etc/hosts 127.0.0.1 aftenposten.no www.aftenposten.no $ cd /tmp/ $ echo www > www.aftenposten.no $ echo no-www > aftenposten.no $ sudo kill `cat /var/run/varnish.pid ` $ sudo /usr/local/sbin/varnishd -P /var/run/varnish.pid -a :80 -T localhost:6082 -u varnish -g varnish -s file,/var/lib/varnish/varnish_storage.bin,2G -f ~/varnish.vcl $ curl -sI http://www.aftenposten.no/ | grep foo foo: www $ curl -sI http://aftenposten.no/ | grep foo foo: www $ sudo kill `cat /var/run/varnish.pid ` $ sudo /usr/local/sbin/varnishd -P /var/run/varnish.pid -a :80 -T localhost:6082 -u varnish -g varnish -s file,/var/lib/varnish/varnish_storage.bin,2G -f ~/varnish.vcl $ curl -sI http://aftenposten.no/ | grep foo foo: no-www $ curl -sI http://www.aftenposten.no/ | grep foo foo: no-www -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jun 1 12:39:35 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 01 Jun 2012 12:39:35 -0000 Subject: [Varnish] #1145: PRIV_CALL returns same object on different function parameters In-Reply-To: <045.d31ac89d2324228d6429c6c179b921a6@varnish-cache.org> References: <045.d31ac89d2324228d6429c6c179b921a6@varnish-cache.org> Message-ID: <054.b6c1a37147e999f732d4fee8d54b4d1d@varnish-cache.org> #1145: PRIV_CALL returns same object on different function parameters -----------------------+---------------------------------------------------- Reporter: tobixen | Type: defect Status: new | Priority: normal Milestone: | Component: build 
Version: 3.0.2 | Severity: normal Keywords: PRIV_CALL | -----------------------+---------------------------------------------------- Description changed by tfheen: Old description: > I'm on varnish 3.0.2 ... git rev 8015b1ab869a60cb220b5d20ef7b6d183cea64ab > > VCL_PRIV returns same object on different function call parameters. This > causes the test suit of my vmod to break - https://github.com/tobixen > /libvmod-ratelimit - and it also causes the caching in std.fileread to > break. > > Consider this VCL code: > > import std; > > backend default { > .host = "80.91.37.212"; > .port = "80"; > } > > sub vcl_deliver { > set resp.http.foo = std.fileread("/tmp/" + req.http.host); > } > > And then I did this: > > $ grep aftenposten /etc/hosts > 127.0.0.1 aftenposten.no www.aftenposten.no > > $ cd /tmp/ > > $ echo www > www.aftenposten.no > > $ echo no-www > aftenposten.no > > $ sudo kill `cat /var/run/varnish.pid ` > > $ sudo /usr/local/sbin/varnishd -P /var/run/varnish.pid -a :80 -T > localhost:6082 -u varnish -g varnish -s > file,/var/lib/varnish/varnish_storage.bin,2G -f ~/varnish.vcl > > $ curl -sI http://www.aftenposten.no/ | grep foo > foo: www > > $ curl -sI http://aftenposten.no/ | grep foo > foo: www > > $ sudo kill `cat /var/run/varnish.pid ` > > $ sudo /usr/local/sbin/varnishd -P /var/run/varnish.pid -a :80 -T > localhost:6082 -u varnish -g varnish -s > file,/var/lib/varnish/varnish_storage.bin,2G -f ~/varnish.vcl > > $ curl -sI http://aftenposten.no/ | grep foo > foo: no-www > > $ curl -sI http://www.aftenposten.no/ | grep foo > foo: no-www New description: I'm on varnish 3.0.2 ... git rev 8015b1ab869a60cb220b5d20ef7b6d183cea64ab VCL_PRIV returns same object on different function call parameters. This causes the test suit of my vmod to break - https://github.com/tobixen /libvmod-ratelimit - and it also causes the caching in std.fileread to break. 
Consider this VCL code: {{{ import std; backend default { .host = "80.91.37.212"; .port = "80"; } sub vcl_deliver { set resp.http.foo = std.fileread("/tmp/" + req.http.host); } }}} And then I did this: {{{ $ grep aftenposten /etc/hosts 127.0.0.1 aftenposten.no www.aftenposten.no $ cd /tmp/ $ echo www > www.aftenposten.no $ echo no-www > aftenposten.no $ sudo kill `cat /var/run/varnish.pid ` $ sudo /usr/local/sbin/varnishd -P /var/run/varnish.pid -a :80 -T localhost:6082 -u varnish -g varnish -s file,/var/lib/varnish/varnish_storage.bin,2G -f ~/varnish.vcl $ curl -sI http://www.aftenposten.no/ | grep foo foo: www $ curl -sI http://aftenposten.no/ | grep foo foo: www $ sudo kill `cat /var/run/varnish.pid ` $ sudo /usr/local/sbin/varnishd -P /var/run/varnish.pid -a :80 -T localhost:6082 -u varnish -g varnish -s file,/var/lib/varnish/varnish_storage.bin,2G -f ~/varnish.vcl $ curl -sI http://aftenposten.no/ | grep foo foo: no-www $ curl -sI http://www.aftenposten.no/ | grep foo foo: no-www }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jun 1 15:54:29 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 01 Jun 2012 15:54:29 -0000 Subject: [Varnish] #1145: PRIV_CALL returns same object on different function parameters In-Reply-To: <045.d31ac89d2324228d6429c6c179b921a6@varnish-cache.org> References: <045.d31ac89d2324228d6429c6c179b921a6@varnish-cache.org> Message-ID: <054.dc89d527c5c9244b5dc0fdaddb9a9e96@varnish-cache.org> #1145: PRIV_CALL returns same object on different function parameters -----------------------+---------------------------------------------------- Reporter: tobixen | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.2 | Severity: normal Keywords: PRIV_CALL | -----------------------+---------------------------------------------------- Comment(by tobixen): The m00004.vtc should have caught this bug - but it passes. 
However, I've managed to write up a vtc-file that fails - see the attachment. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jun 1 16:00:28 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 01 Jun 2012 16:00:28 -0000 Subject: [Varnish] #1145: PRIV_CALL returns same object on different function parameters In-Reply-To: <045.d31ac89d2324228d6429c6c179b921a6@varnish-cache.org> References: <045.d31ac89d2324228d6429c6c179b921a6@varnish-cache.org> Message-ID: <054.d307e6c09cc7793bf73895a737043fa8@varnish-cache.org> #1145: PRIV_CALL returns same object on different function parameters -----------------------+---------------------------------------------------- Reporter: tobixen | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.2 | Severity: normal Keywords: PRIV_CALL | -----------------------+---------------------------------------------------- Comment(by tobixen): (So maybe I've misunderstood the "call"-part of the PRIV_CALL object, but anyway I'll insist it's a bug that the fileread function cannot be run with a variable input parameter).
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Jun 3 21:49:35 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 03 Jun 2012 21:49:35 -0000 Subject: [Varnish] #1146: Persistent: When dropping empty segments, it will leak objects from LRU_Alloc, and not reset the free_offset to reclaim the space Message-ID: <044.1aadbdec81ec327f20ce81ac5a318c96@varnish-cache.org> #1146: Persistent: When dropping empty segments, it will leak objects from LRU_Alloc, and not reset the free_offset to reclaim the space ----------------------+----------------------------------------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Keywords: ----------------------+----------------------------------------------------- The optimization in smp_close_seg() does not free the LRU object, and does not reset the free_offset so that the space this segment held can be reclaimed. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 4 08:43:17 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 Jun 2012 08:43:17 -0000 Subject: [Varnish] #1147: Assert error in VRT_l_beresp_do_stream(), cache/cache_vrt_var.c line 200: Message-ID: <046.e3699cff47fed762400f3a4ea6e68e2a@varnish-cache.org> #1147: Assert error in VRT_l_beresp_do_stream(), cache/cache_vrt_var.c line 200: ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- From fryer (https://www.varnish-cache.org/lists/pipermail/varnish- test/2012-June/001084.html): {{{ 2012-06-03 19:32:39 [2,23]: httperf-lru-nostream-gzip(httperf): Starting test 2012-06-03 19:36:06 WARNING [0,207]: httperf-lru-nostream-gzip(httperf): Panic detected. I think! 2012-06-03 19:36:06 WARNING [0, 0]: httperf-lru-nostream-gzip(httperf): Last panic at: Sun, 03 Jun 2012 19:34:54 GMT Assert error in VRT_l_beresp_do_stream(), cache/cache_vrt_var.c line 200: Condition((sp) != NULL) not true. 
thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d48: pan_ic+d8 0x43c9fd: VRT_l_beresp_do_stream+8d 0x7fca059f5919: _end+7fca05372151 0x43939e: VCL_fetch_method+4e 0x4186a9: cnt_fetch+479 0x41accd: CNT_Session+42d 0x4366cd: ses_pool_task+fd 0x433552: Pool_Work_Thread+112 0x4407f8: wrk_thread_real+c8 0x7fca139a29ca: _end+7fca1331f202 sp = 0x7fca04702c20 { fd = 22, id = 22, xid = 1132068845, client = 10.20.100.9 14923, step = STP_FETCH, handling = deliver, err_code = 200, err_reason = (null), restarts = 0, esi_level = 0 busyobj = 0x7fca04013020 { ws = 0x7fca04013070 { id = "bo", {s,f,r,e} = {0x7fca04014aa0,+512,(nil),+58752}, }, do_stream bodystatus = 3 (chunked), }, http[bereq] = { ws = 0x7fca04013070[bo] "GET", "/1/5/9/1/6/9.html", "HTTP/1.1", "User-Agent: httperf/0.9.0", "Host: 10.20.100.12", "X-Forwarded-For: 10.20.100.9", "X-Varnish: 1132068845", "Accept-Encoding: gzip", }, http[beresp] = { ws = 0x7fca04013070[bo] "HTTP/1.1", "200", "OK", "Server: nginx/0.7.65", "Date: Sun, 03 Jun 2012 19:34:54 GMT", "Content-Type: text/plain", "Last-Modified: Sun, 03 Jun 2012 19:32:42 GMT", "Transfer-Encoding: chunked", "Connection: keep-alive", "Content-Encoding: gzip", }, ws = 0x7fca0480b158 { id = "req", {s,f,r,e} = {0x7fca0480c730,+136,(nil),+59632}, }, http[req] = { ws = 0x7fca0480b158[req] "GET", "/1/5/9/1/6/9.html", "HTTP/1.1", "User-Agent: httperf/0.9.0", "Host: 10.20.100.12", "X-Forwarded-For: 10.20.100.9", }, worker = 0x7fca04926c60 { ws = 0x7fca04926e20 { id = "wrk", {s,f,r,e} = {0x7fca04926450,0x7fca04926450,(nil),+2048}, }, }, vcl = { srcname = { "input", "Default", }, }, }, 2012-06-03 19:36:06 WARNING [0, 0]: httperf-lru-nostream-gzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 203 stat: 71 diff: 132). Did we crash? 
2012-06-03 19:36:06 WARNING [0, 0]: httperf-lru-nostream-gzip(httperf): Out of bounds: n_lru_nuked(0) less than lower boundary 80000 2012-06-03 19:36:06 WARNING [0, 0]: httperf-lru-nostream-gzip(httperf): Out of bounds: client_req(262234) less than lower boundary 1999720 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): Load: 21:36:06 up 3 days, 7:59, 3 users, load average: 0.61, 0.49, 0.20 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): Test name: httperf-lru-nostream-gzip 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): Varnish options: 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): -t=3600 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): -s=malloc,30M 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): Varnish parameters: 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): thread_pool_max=5000 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): nuke_limit=250 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): http_gzip_support=on 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): thread_pool_min=200 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): Payload size (excludes headers): 10K 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): Branch: master 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): Number of clients involved: 24 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): Type of test: httperf 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): Test iterations: 1 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): Runtime: 203 seconds 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): VCL: backend foo { .host = "localhost"; .port = "80"; } sub vcl_fetch { set beresp.do_stream = false; } 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): Number of total connections: 200000 2012-06-03 19:36:06 [1, 0]: 
httperf-lru-nostream-gzip(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations. 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): Requests per connection: 10 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): Extra options to httperf: --wset=1000000,0.1 2012-06-03 19:36:06 [1, 0]: httperf-lru-nostream-gzip(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num- conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.12 --wset=1000000,0.1 }}} It's fairly consistent, in the sense that only 6-10 of 26 tests succeed. Note that run-time is a couple of minutes which would equate to quite a lot of traffic. This specific test is designed to pressure the LRU mechanisms. Only seen in master. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 4 08:48:49 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 Jun 2012 08:48:49 -0000 Subject: [Varnish] #1148: Fryer: Assert error in VRT_r_req_restarts() Message-ID: <046.c570150359d62b5943f39b991a4d1773@varnish-cache.org> #1148: Fryer: Assert error in VRT_r_req_restarts() ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Only applies to trunk. Fryer fails (see: https://www.varnish-cache.org/lists/pipermail/varnish- test/2012-June/001084.html) Possibly a duplicate of #1147. {{{ 2012-06-03 19:36:13 [2, 6]: httperf-lru-nostream-gzip-deflateoff(httperf): Starting test 2012-06-03 19:40:39 WARNING [0,266]: httperf-lru-nostream-gzip- deflateoff(httperf): Panic detected. I think! 
2012-06-03 19:40:39 WARNING [0, 0]: httperf-lru-nostream-gzip- deflateoff(httperf): Last panic at: Sun, 03 Jun 2012 19:38:27 GMT Assert error in VRT_r_req_restarts(), cache/cache_vrt_var.c line 365: Condition((sp) != NULL) not true. thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d48: pan_ic+d8 0x43bbe5: VRT_r_req_restarts+65 0x7f567bafc3e8: _end+7f567b478c20 0x4399fb: VCL_recv_method+4b 0x4164c5: cnt_recv+1e5 0x41ae1d: CNT_Session+57d 0x4366cd: ses_pool_task+fd 0x433552: Pool_Work_Thread+112 0x4407f8: wrk_thread_real+c8 0x7f568a06c9ca: _end+7f56899e9202 sp = 0x7f5680623a20 { fd = 21, id = 21, xid = 284089736, client = 10.20.100.8 6598, step = STP_RECV, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7f567c939158 { id = "req", {s,f,r,e} = {0x7f567c93a730,+488,(nil),+59632}, }, http[req] = { ws = 0x7f567c939158[req] "GET", "/0/6/6/8/7/4.html", "HTTP/1.1", "User-Agent: httperf/0.9.0", "Host: 10.20.100.12", }, worker = 0x7f5679ecec60 { ws = 0x7f5679ecee20 { id = "wrk", {s,f,r,e} = {0x7f5679ece450,0x7f5679ece450,(nil),+2048}, }, }, vcl = { srcname = { "input", "Default", }, }, }, 2012-06-03 19:40:39 WARNING [0, 0]: httperf-lru-nostream-gzip- deflateoff(httperf): Varnishstat uptime and measured run-time is too large (measured: 263 stat: 132 diff: 131). Did we crash? 
2012-06-03 19:40:40 WARNING [0, 0]: httperf-lru-nostream-gzip- deflateoff(httperf): Out of bounds: n_lru_nuked(0) less than lower boundary 80000 2012-06-03 19:40:40 WARNING [0, 0]: httperf-lru-nostream-gzip- deflateoff(httperf): Out of bounds: client_req(216621) less than lower boundary 1999720 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Load: 21:40:40 up 3 days, 8:03, 3 users, load average: 0.26, 0.65, 0.36 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Test name: httperf-lru-nostream-gzip-deflateoff 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Varnish options: 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): -t=3600 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): -s=malloc,30M 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Varnish parameters: 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): thread_pool_max=5000 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): nuke_limit=250 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): http_gzip_support=on 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): thread_pool_min=200 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Payload size (excludes headers): 10K 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Branch: master 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Number of clients involved: 24 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Type of test: httperf 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Test iterations: 1 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Runtime: 263 seconds 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): VCL: backend foo { .host = 
"localhost"; .port = "80"; } sub vcl_fetch { set beresp.do_stream = false; set beresp.do_gzip = true; } 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Number of total connections: 200000 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Note: connections are subject to rounding when divided among clients. Expect slight deviations. 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Requests per connection: 10 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Extra options to httperf: --wset=1000000,0.1 2012-06-03 19:40:40 [1, 0]: httperf-lru-nostream-gzip-deflateoff(httperf): Httperf command (last client): httperf --hog --timeout 60 --num-calls 10 --num-conns 8333 --port 8080 --burst-length 10 --client 23/24 --server 10.20.100.12 --wset=1000000,0.1 }}} This specific test pressures the LRU system but other tests design to pressure other aspects (like cold-vs-hot hits) also exhibit the same nature. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 4 08:49:26 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 Jun 2012 08:49:26 -0000 Subject: [Varnish] #1147: Assert error in VRT_l_beresp_do_stream(), cache/cache_vrt_var.c line 200: In-Reply-To: <046.e3699cff47fed762400f3a4ea6e68e2a@varnish-cache.org> References: <046.e3699cff47fed762400f3a4ea6e68e2a@varnish-cache.org> Message-ID: <055.a3f7fa4a1c4bae100311ca027b5d6938@varnish-cache.org> #1147: Assert error in VRT_l_beresp_do_stream(), cache/cache_vrt_var.c line 200: ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by kristian): See also #1148 for a similar, but not necessarily identical issue. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 4 08:51:24 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 Jun 2012 08:51:24 -0000 Subject: [Varnish] #1145: PRIV_CALL returns same object on different function parameters In-Reply-To: <045.d31ac89d2324228d6429c6c179b921a6@varnish-cache.org> References: <045.d31ac89d2324228d6429c6c179b921a6@varnish-cache.org> Message-ID: <054.a6df551aadea8d53a886e705e238ce14@varnish-cache.org> #1145: PRIV_CALL returns same object on different function parameters -----------------------+---------------------------------------------------- Reporter: tobixen | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.2 | Severity: normal Keywords: PRIV_CALL | -----------------------+---------------------------------------------------- Comment(by phk): PRIV_CALL works as it should work. The relevant question is if it should be used in std.fileread(), and if so, if it should be used the way it is now. Basically std.fileread() only looks at its argument the first time it is called and that probably breaks POLA pretty badly. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 4 09:20:23 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 Jun 2012 09:20:23 -0000 Subject: [Varnish] #1145: PRIV_CALL returns same object on different function parameters In-Reply-To: <045.d31ac89d2324228d6429c6c179b921a6@varnish-cache.org> References: <045.d31ac89d2324228d6429c6c179b921a6@varnish-cache.org> Message-ID: <054.5f72e0dbefc64b8f34cdcdcbf0c082bd@varnish-cache.org> #1145: PRIV_CALL returns same object on different function parameters ----------------------+----------------------------------------------------- Reporter: tobixen | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.2 | Severity: normal Resolution: fixed | Keywords: PRIV_CALL ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [344a709ccf9559f3d8e5d7a0a9a35c6e94705f0f]) std.fileread() should not blindly return whatever file it returned last without checking if the filename changed. 
Fixes #1145 Testcase by: tobixen -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 4 11:38:59 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 Jun 2012 11:38:59 -0000 Subject: [Varnish] #1144: Assert error when (backend name + ipv4 + ipv6) is too long In-Reply-To: <046.dafd3fb5cd62c5c06f9019c1688a5997@varnish-cache.org> References: <046.dafd3fb5cd62c5c06f9019c1688a5997@varnish-cache.org> Message-ID: <055.5ca5697ae6b3888615aa3440f4a75cc0@varnish-cache.org> #1144: Assert error when (backend name + ipv4 + ipv6) is too long -----------------------+---------------------------------------------------- Reporter: tmagnien | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: major Resolution: fixed | Keywords: -----------------------+---------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [a9e5d3503f8fea332c01cdf6576a980947b39b07]) Increase the id field in VSM to 128 bytes to make space for 64 backend VCL name + IPv4, IPv6 and portnumber. 
Fixes #1144 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 4 15:54:05 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 Jun 2012 15:54:05 -0000 Subject: [Varnish] #1149: Varnishadm output buffer problems Message-ID: <046.fc0bec9370f5be5338c400f8f20e6216@varnish-cache.org> #1149: Varnishadm output buffer problems ----------------------+----------------------------------------------------- Reporter: adriansw | Type: defect Status: new | Priority: normal Milestone: | Component: varnishadm Version: 3.0.2 | Severity: major Keywords: | ----------------------+----------------------------------------------------- Hello, I am trying to run some varnishadm commands on a remote cache version 3.0.2, and running into problems: # varnishadm -T my.remote.cache:6082 -S /etc/varnish/secret- my.remote.cache debug.health CLI communication error (body) Command failed with error code 400 # varnishadm -T my.remote.cache:6082 -S /etc/varnish/secret- my.remote.cache "param.show -l" CLI communication error (body) Command failed with error code 400 ...so commands with big output fail. But commands with small output succeed: # varnishadm -T my.remote.cache:6082 -S /etc/varnish/secret- my.remote.cache "param.show user" user nobody (65534) Default is... I tried increasing cli_buffer so debug.health gets printed, even though it's the input buffer. Of course it made no difference. Please advise how to fix it, or if it's a bug. Thank you. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 6 06:08:56 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 06 Jun 2012 06:08:56 -0000 Subject: [Varnish] #1147: Assert error in VRT_l_beresp_do_stream(), cache/cache_vrt_var.c line 200: In-Reply-To: <046.e3699cff47fed762400f3a4ea6e68e2a@varnish-cache.org> References: <046.e3699cff47fed762400f3a4ea6e68e2a@varnish-cache.org> Message-ID: <055.b0b1e68a7d7b4e1087569d58f6a4d137@varnish-cache.org> #1147: Assert error in VRT_l_beresp_do_stream(), cache/cache_vrt_var.c line 200: ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by kristian): Fryer has more data after the additional assert() statements. The details can be found at https://www.varnish-cache.org/lists/pipermail/varnish- test/2012-June/001095.html. Here's the ones from cnt_first: {{{ Assert error in cnt_first(), cache/cache_center.c line 943: Condition(req->sp == sp) not true. 
thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x41b021: CNT_Session+781 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7ffa9a6e89ca: _end+7ffa9a064202 0x7ffa9a445cdd: _end+7ffa99dc1515 sp = 0x7ffa8b42ad20 { fd = 21, id = 21, xid = 0, client = , step = STP_FIRST, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7ffa8b09a158 { id = "req", {s,f,r,e} = {0x7ffa8b09b730,0x7ffa8b09b730,(nil),+59632}, }, http[req] = { ws = (nil)[] }, worker = 0x7ffa8b69ec60 { ws = 0x7ffa8b69ee20 { id = "wrk", {s,f,r,e} = {0x7ffa8b69e450,0x7ffa8b69e450,(nil),+2048}, }, }, }, 2012-06-05 20:21:14 WARNING [0, 0]: cold-gzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 209 stat: 68 diff: 141). Did we crash? -- Assert error in cnt_first(), cache/cache_center.c line 943: Condition(req->sp == sp) not true. thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x41b021: CNT_Session+781 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7fa322f879ca: _end+7fa322903202 0x7fa322ce4cdd: _end+7fa322660515 sp = 0x7fa318cdef20 { fd = 22, id = 22, xid = 0, client = , step = STP_FIRST, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7fa31119d158 { id = "req", {s,f,r,e} = {0x7fa31119e730,0x7fa31119e730,(nil),+59632}, }, http[req] = { ws = (nil)[] }, worker = 0x7fa312faec60 { ws = 0x7fa312faee20 { id = "wrk", {s,f,r,e} = {0x7fa312fae450,0x7fa312fae450,(nil),+2048}, }, }, }, 2012-06-05 20:53:03 WARNING [0, 0]: httperf-lru-stream-nogzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 198 stat: 52 diff: 146). Did we crash? -- Assert error in cnt_first(), cache/cache_center.c line 943: Condition(req->sp == sp) not true. 
thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x41b021: CNT_Session+781 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7fd7bc7a69ca: _end+7fd7bc122202 0x7fd7bc503cdd: _end+7fd7bbe7f515 sp = 0x7fd7b2519c20 { fd = 26, id = 26, xid = 0, client = , step = STP_FIRST, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7fd7accbd158 { id = "req", {s,f,r,e} = {0x7fd7accbe730,0x7fd7accbe730,(nil),+59632}, }, http[req] = { ws = (nil)[] }, worker = 0x7fd7ae41dc60 { ws = 0x7fd7ae41de20 { id = "wrk", {s,f,r,e} = {0x7fd7ae41d450,0x7fd7ae41d450,(nil),+2048}, }, }, }, 2012-06-05 21:00:26 WARNING [0, 0]: cold-nogzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 402 stat: 96 diff: 306). Did we crash? -- Assert error in cnt_first(), cache/cache_center.c line 943: Condition(req->sp == sp) not true. thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x41b021: CNT_Session+781 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7f9c5b7d59ca: _end+7f9c5b151202 0x7f9c5b532cdd: _end+7f9c5aeae515 sp = 0x7f9c39abd220 { fd = 30, id = 30, xid = 0, client = , step = STP_FIRST, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7f9c4a402158 { id = "req", {s,f,r,e} = {0x7f9c4a403730,0x7f9c4a403730,(nil),+59632}, }, http[req] = { ws = (nil)[] }, worker = 0x7f9c4d474c60 { ws = 0x7f9c4d474e20 { id = "wrk", {s,f,r,e} = {0x7f9c4d474450,0x7f9c4d474450,(nil),+2048}, }, }, }, 2012-06-05 21:43:52 WARNING [0, 0]: cold-default(httperf): Varnishstat uptime and measured run-time is too large (measured: 201 stat: 100 diff: 101). Did we crash? -- Assert error in cnt_first(), cache/cache_center.c line 943: Condition(req->sp == sp) not true. 
thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x41b021: CNT_Session+781 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7f9c5b7d59ca: _end+7f9c5b151202 0x7f9c5b532cdd: _end+7f9c5aeae515 sp = 0x7f9c4e40c320 { fd = 38, id = 38, xid = 0, client = , step = STP_FIRST, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7f9c4c110158 { id = "req", {s,f,r,e} = {0x7f9c4c111730,0x7f9c4c111730,(nil),+59632}, }, http[req] = { ws = (nil)[] }, worker = 0x7f9c4d93dc60 { ws = 0x7f9c4d93de20 { id = "wrk", {s,f,r,e} = {0x7f9c4d93d450,0x7f9c4d93d450,(nil),+2048}, }, }, }, 2012-06-05 21:47:20 WARNING [0, 0]: cold-default(httperf): Varnishstat uptime and measured run-time is too large (measured: 410 stat: 142 diff: 268). Did we crash? -- Assert error in cnt_first(), cache/cache_center.c line 943: Condition(req->sp == sp) not true. thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x41b021: CNT_Session+781 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7fa1564f99ca: _end+7fa155e75202 0x7fa156256cdd: _end+7fa155bd2515 sp = 0x7fa155e20620 { fd = 12, id = 12, xid = 0, client = , step = STP_FIRST, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7fa146e02158 { id = "req", {s,f,r,e} = {0x7fa146e03730,0x7fa146e03730,(nil),+59632}, }, http[req] = { ws = (nil)[] }, worker = 0x7fa14773dc60 { ws = 0x7fa14773de20 { id = "wrk", {s,f,r,e} = {0x7fa14773d450,0x7fa14773d450,(nil),+2048}, }, }, }, 2012-06-05 22:12:58 WARNING [0, 0]: purge-fail(httperf): Varnishstat uptime and measured run-time is too large (measured: 250 stat: 37 diff: 213). Did we crash? }}} And here are all the asserts from that run: {{{ Assert error in VCL_recv_method(), ../../include/tbl/vcl_returns.h line 27: Condition((req->sp) != NULL) not true. 
thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x43a3b5: VCL_recv_method+1a5 0x4164c5: cnt_recv+1e5 0x41ae1d: CNT_Session+57d 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7f9865e0b9ca: _end+7f9865787202 0x7f9865b68cdd: _end+7f98654e4515 sp = 0x7f985bc11f20 { fd = 16, id = 16, xid = 2028750955, client = 10.20.100.8 16605, step = STP_RECV, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7f9856257158 { id = "req", {s,f,r,e} = {0x7f9856258730,+568,(nil),+59632}, }, http[req] = { ws = 0x7f9856257158[req] "GET", "/1/8/4/8/0/8.html", "HTTP/1.1", "User-Agent: httperf/0.9.0", "Host: 10.20.100.12", }, worker = 0x7f9855e4ac60 { ws = 0x7f9855e4ae20 { id = "wrk", {s,f,r,e} = {0x7f9855e4a450,0x7f9855e4a450,(nil),+2048}, }, -- Assert error in VCL_fetch_method(), ../../include/tbl/vcl_returns.h line 56: Condition((req->sp) != NULL) not true. thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x439875: VCL_fetch_method+1a5 0x4186a9: cnt_fetch+479 0x41accd: CNT_Session+42d 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7f5f6f0879ca: _end+7f5f6ea03202 0x7f5f6ede4cdd: _end+7f5f6e760515 sp = 0x7f5f6000af20 { fd = 17, id = 17, xid = 1023144019, client = 10.20.100.9 15642, step = STP_FETCH, handling = fetch, err_code = 200, err_reason = (null), restarts = 0, esi_level = 0 busyobj = 0x7f5f5f913020 { ws = 0x7f5f5f913070 { id = "bo", {s,f,r,e} = {0x7f5f5f914aa0,+10488,(nil),+58752}, }, do_stream bodystatus = 4 (length), }, http[bereq] = { ws = 0x7f5f5f913070[bo] "GET", "/1/7/4/9/8/4.html", "HTTP/1.1", "User-Agent: httperf/0.9.0", "Host: 10.20.100.12", "X-Forwarded-For: 10.20.100.9", -- Assert error in VCL_recv_method(), ../../include/tbl/vcl_returns.h line 27: Condition((req->sp) != NULL) not true. 
thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x43a3b5: VCL_recv_method+1a5 0x4164c5: cnt_recv+1e5 0x41ae1d: CNT_Session+57d 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7f0efc1b29ca: _end+7f0efbb2e202 0x7f0efbf0fcdd: _end+7f0efb88b515 sp = 0x7f0eed502320 { fd = 33, id = 33, xid = 1001242949, client = 10.20.100.8 14484, step = STP_RECV, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7f0eebe94158 { id = "req", {s,f,r,e} = {0x7f0eebe95730,+168,(nil),+59632}, }, http[req] = { ws = 0x7f0eebe94158[req] "GET", "/1/5/6/7/0/4.html", "HTTP/1.1", "User-Agent: httperf/0.9.0", "Host: 10.20.100.12", }, worker = 0x7f0eec096c60 { ws = 0x7f0eec096e20 { id = "wrk", {s,f,r,e} = {0x7f0eec096450,0x7f0eec096450,(nil),+2048}, }, -- Assert error in VCL_recv_method(), ../../include/tbl/vcl_returns.h line 27: Condition((req->sp) != NULL) not true. thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x43a3b5: VCL_recv_method+1a5 0x4164c5: cnt_recv+1e5 0x41ae1d: CNT_Session+57d 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7fd78135a9ca: _end+7fd780cd6202 0x7fd7810b7cdd: _end+7fd780a33515 sp = 0x7fd772602c20 { fd = 18, id = 18, xid = 194284409, client = 10.20.100.9 15511, step = STP_RECV, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7fd772386158 { id = "req", {s,f,r,e} = {0x7fd772387730,+248,(nil),+59632}, }, http[req] = { ws = 0x7fd772386158[req] "GET", "/1/7/5/0/3/5.html", "HTTP/1.1", "User-Agent: httperf/0.9.0", "Host: 10.20.100.12", }, worker = 0x7fd771bb6c60 { ws = 0x7fd771bb6e20 { id = "wrk", {s,f,r,e} = {0x7fd771bb6450,0x7fd771bb6450,(nil),+2048}, }, -- Assert error in VCL_recv_method(), ../../include/tbl/vcl_returns.h line 27: Condition((req->sp) != NULL) not true. 
thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x43a3b5: VCL_recv_method+1a5 0x4164c5: cnt_recv+1e5 0x41ae1d: CNT_Session+57d 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7f73ea2c89ca: _end+7f73e9c44202 0x7f73ea025cdd: _end+7f73e99a1515 sp = 0x7f73db405820 { fd = 14, id = 14, xid = 105368243, client = 10.20.100.8 7900, step = STP_RECV, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7f73dbfc2158 { id = "req", {s,f,r,e} = {0x7f73dbfc3730,+232,(nil),+59632}, }, http[req] = { ws = 0x7f73dbfc2158[req] "GET", "/3/0/1.html", "HTTP/1.1", "User-Agent: httperf/0.9.0", "Host: 10.20.100.12", }, worker = 0x7f73db5f5c60 { ws = 0x7f73db5f5e20 { id = "wrk", {s,f,r,e} = {0x7f73db5f5450,0x7f73db5f5450,(nil),+2048}, }, -- Assert error in VCL_fetch_method(), ../../include/tbl/vcl_returns.h line 56: Condition((req->sp) != NULL) not true. thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x439875: VCL_fetch_method+1a5 0x4186a9: cnt_fetch+479 0x41accd: CNT_Session+42d 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7f1c44ddf9ca: _end+7f1c4475b202 0x7f1c44b3ccdd: _end+7f1c444b8515 sp = 0x7f1c3ac84220 { fd = 22, id = 22, xid = 1032213154, client = 10.20.100.8 16927, step = STP_FETCH, handling = fetch, err_code = 200, err_reason = (null), restarts = 0, esi_level = 0 busyobj = 0x7f1c35e8a020 { ws = 0x7f1c35e8a070 { id = "bo", {s,f,r,e} = {0x7f1c35e8baa0,+10488,(nil),+58752}, }, do_stream bodystatus = 4 (length), }, http[bereq] = { ws = 0x7f1c35e8a070[bo] "GET", "/1/9/1/3/4/0.html", "HTTP/1.1", "User-Agent: httperf/0.9.0", "Host: 10.20.100.12", "X-Forwarded-For: 10.20.100.8", -- Assert error in cnt_first(), cache/cache_center.c line 943: Condition(req->sp == sp) not true. 
thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x41b021: CNT_Session+781 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7ffa9a6e89ca: _end+7ffa9a064202 0x7ffa9a445cdd: _end+7ffa99dc1515 sp = 0x7ffa8b42ad20 { fd = 21, id = 21, xid = 0, client = , step = STP_FIRST, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7ffa8b09a158 { id = "req", {s,f,r,e} = {0x7ffa8b09b730,0x7ffa8b09b730,(nil),+59632}, }, http[req] = { ws = (nil)[] }, worker = 0x7ffa8b69ec60 { ws = 0x7ffa8b69ee20 { id = "wrk", {s,f,r,e} = {0x7ffa8b69e450,0x7ffa8b69e450,(nil),+2048}, }, }, }, 2012-06-05 20:21:14 WARNING [0, 0]: cold-gzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 209 stat: 68 diff: 141). Did we crash? -- Assert error in VCL_recv_method(), ../../include/tbl/vcl_returns.h line 27: Condition((req->sp) != NULL) not true. thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x43a3b5: VCL_recv_method+1a5 0x4164c5: cnt_recv+1e5 0x41ae1d: CNT_Session+57d 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7ffa9a6e89ca: _end+7ffa9a064202 0x7ffa9a445cdd: _end+7ffa99dc1515 sp = 0x7ffa868f5c20 { fd = 41, id = 41, xid = 116031308, client = 10.20.100.9 2579, step = STP_RECV, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7ffa86e47158 { id = "req", {s,f,r,e} = {0x7ffa86e48730,+256,(nil),+59632}, }, http[req] = { ws = 0x7ffa86e47158[req] "GET", "/0/0/8/1/3/6/2.html", "HTTP/1.1", "User-Agent: httperf/0.9.0", "Host: 10.20.100.12", }, worker = 0x7ffa8b8c2c60 { ws = 0x7ffa8b8c2e20 { id = "wrk", {s,f,r,e} = {0x7ffa8b8c2450,0x7ffa8b8c2450,(nil),+2048}, }, -- Assert error in VCL_recv_method(), ../../include/tbl/vcl_returns.h line 27: Condition((req->sp) != NULL) not true. 
thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x43a3b5: VCL_recv_method+1a5 0x4164c5: cnt_recv+1e5 0x41ae1d: CNT_Session+57d 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7fe4a2b8a9ca: _end+7fe4a2506202 0x7fe4a28e7cdd: _end+7fe4a2263515 sp = 0x7fe493206f20 { fd = 27, id = 27, xid = 1179679296, client = 10.20.100.8 14080, step = STP_RECV, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7fe49360b158 { id = "req", {s,f,r,e} = {0x7fe49360c730,+248,(nil),+59632}, }, http[req] = { ws = 0x7fe49360b158[req] "GET", "/1/5/6/7/5/3.html", "HTTP/1.1", "User-Agent: httperf/0.9.0", "Host: 10.20.100.12", }, worker = 0x7fe492a86c60 { ws = 0x7fe492a86e20 { id = "wrk", {s,f,r,e} = {0x7fe492a86450,0x7fe492a86450,(nil),+2048}, }, -- Assert error in cnt_first(), cache/cache_center.c line 943: Condition(req->sp == sp) not true. thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x41b021: CNT_Session+781 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7fa322f879ca: _end+7fa322903202 0x7fa322ce4cdd: _end+7fa322660515 sp = 0x7fa318cdef20 { fd = 22, id = 22, xid = 0, client = , step = STP_FIRST, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7fa31119d158 { id = "req", {s,f,r,e} = {0x7fa31119e730,0x7fa31119e730,(nil),+59632}, }, http[req] = { ws = (nil)[] }, worker = 0x7fa312faec60 { ws = 0x7fa312faee20 { id = "wrk", {s,f,r,e} = {0x7fa312fae450,0x7fa312fae450,(nil),+2048}, }, }, }, 2012-06-05 20:53:03 WARNING [0, 0]: httperf-lru-stream-nogzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 198 stat: 52 diff: 146). Did we crash? -- Assert error in VCL_recv_method(), ../../include/tbl/vcl_returns.h line 27: Condition((req->sp) != NULL) not true. 
thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x43a3b5: VCL_recv_method+1a5 0x4164c5: cnt_recv+1e5 0x41ae1d: CNT_Session+57d 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7fd7bc7a69ca: _end+7fd7bc122202 0x7fd7bc503cdd: _end+7fd7bbe7f515 sp = 0x7fd7a3147220 { fd = 22, id = 22, xid = 773496949, client = 10.20.100.9 7626, step = STP_RECV, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7fd7a3745158 { id = "req", {s,f,r,e} = {0x7fd7a3746730,+256,(nil),+59632}, }, http[req] = { ws = 0x7fd7a3745158[req] "GET", "/0/3/5/1/1/2/4.html", "HTTP/1.1", "User-Agent: httperf/0.9.0", "Host: 10.20.100.12", }, worker = 0x7fd7ad66ec60 { ws = 0x7fd7ad66ee20 { id = "wrk", {s,f,r,e} = {0x7fd7ad66e450,0x7fd7ad66e450,(nil),+2048}, }, -- Assert error in cnt_first(), cache/cache_center.c line 943: Condition(req->sp == sp) not true. thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x41b021: CNT_Session+781 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7fd7bc7a69ca: _end+7fd7bc122202 0x7fd7bc503cdd: _end+7fd7bbe7f515 sp = 0x7fd7b2519c20 { fd = 26, id = 26, xid = 0, client = , step = STP_FIRST, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7fd7accbd158 { id = "req", {s,f,r,e} = {0x7fd7accbe730,0x7fd7accbe730,(nil),+59632}, }, http[req] = { ws = (nil)[] }, worker = 0x7fd7ae41dc60 { ws = 0x7fd7ae41de20 { id = "wrk", {s,f,r,e} = {0x7fd7ae41d450,0x7fd7ae41d450,(nil),+2048}, }, }, }, 2012-06-05 21:00:26 WARNING [0, 0]: cold-nogzip(httperf): Varnishstat uptime and measured run-time is too large (measured: 402 stat: 96 diff: 306). Did we crash? -- Assert error in VCL_recv_method(), ../../include/tbl/vcl_returns.h line 27: Condition((req->sp) != NULL) not true. 
thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x43a3b5: VCL_recv_method+1a5 0x4164c5: cnt_recv+1e5 0x41ae1d: CNT_Session+57d 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7f7dbd12e9ca: _end+7f7dbcaaa202 0x7f7dbce8bcdd: _end+7f7dbc807515 sp = 0x7f7db0b04e20 { fd = 17, id = 17, xid = 869304432, client = 10.20.100.9 59813, step = STP_RECV, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7f7dade35158 { id = "req", {s,f,r,e} = {0x7f7dade36730,+152,(nil),+59632}, }, http[req] = { ws = 0x7f7dade35158[req] "GET", "/", "HTTP/1.1", "Host: 10.20.100.12:8080", "Accept: */*", "Accept-Encoding: gzip", "User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.66)", "Connection: close", }, worker = 0x7f7dae3f0c60 { ws = 0x7f7dae3f0e20 { -- Assert error in VCL_hit_method(), ../../include/tbl/vcl_returns.h line 51: Condition((req->sp) != NULL) not true. thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x439a55: VCL_hit_method+1a5 0x418037: cnt_hit+1a7 0x41acfd: CNT_Session+45d 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7f5185ab29ca: _end+7f518542e202 0x7f518580fcdd: _end+7f518518b515 sp = 0x7f51765f5520 { fd = 14, id = 14, xid = 1887591876, client = 10.20.100.9 14540, step = STP_HIT, handling = hash, restarts = 0, esi_level = 0 ws = 0x7f517688d158 { id = "req", {s,f,r,e} = {0x7f517688e730,+864,(nil),+59632}, }, http[req] = { ws = 0x7f517688d158[req] "GET", "/1/5/6/1/4/9.html", "HTTP/1.1", "User-Agent: httperf/0.9.0", "Host: 10.20.100.12", "X-Forwarded-For: 10.20.100.9", }, worker = 0x7f51757c6c60 { ws = 0x7f51757c6e20 { id = "wrk", {s,f,r,e} = {0x7f51757c6450,0x7f51757c6450,(nil),+2048}, -- Assert error in VCL_recv_method(), ../../include/tbl/vcl_returns.h line 27: Condition((req->sp) != NULL) not true. 
thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x43a3b5: VCL_recv_method+1a5 0x4164c5: cnt_recv+1e5 0x41ae1d: CNT_Session+57d 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7f3b8958f9ca: _end+7f3b88f0b202 0x7f3b892eccdd: _end+7f3b88c68515 sp = 0x7f3b79f04120 { fd = 23, id = 23, xid = 1596354623, client = 10.20.100.9 7323, step = STP_RECV, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7f3b7a1db158 { id = "req", {s,f,r,e} = {0x7f3b7a1dc730,+224,(nil),+59632}, }, http[req] = { ws = 0x7f3b7a1db158[req] "GET", "/5/8.html", "HTTP/1.1", "User-Agent: httperf/0.9.0", "Host: 10.20.100.12", }, worker = 0x7f3b7a59ec60 { ws = 0x7f3b7a59ee20 { id = "wrk", {s,f,r,e} = {0x7f3b7a59e450,0x7f3b7a59e450,(nil),+2048}, }, -- Assert error in cnt_first(), cache/cache_center.c line 943: Condition(req->sp == sp) not true. thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x41b021: CNT_Session+781 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7f9c5b7d59ca: _end+7f9c5b151202 0x7f9c5b532cdd: _end+7f9c5aeae515 sp = 0x7f9c39abd220 { fd = 30, id = 30, xid = 0, client = , step = STP_FIRST, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7f9c4a402158 { id = "req", {s,f,r,e} = {0x7f9c4a403730,0x7f9c4a403730,(nil),+59632}, }, http[req] = { ws = (nil)[] }, worker = 0x7f9c4d474c60 { ws = 0x7f9c4d474e20 { id = "wrk", {s,f,r,e} = {0x7f9c4d474450,0x7f9c4d474450,(nil),+2048}, }, }, }, 2012-06-05 21:43:52 WARNING [0, 0]: cold-default(httperf): Varnishstat uptime and measured run-time is too large (measured: 201 stat: 100 diff: 101). Did we crash? -- Assert error in cnt_first(), cache/cache_center.c line 943: Condition(req->sp == sp) not true. 
thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x41b021: CNT_Session+781 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7f9c5b7d59ca: _end+7f9c5b151202 0x7f9c5b532cdd: _end+7f9c5aeae515 sp = 0x7f9c4e40c320 { fd = 38, id = 38, xid = 0, client = , step = STP_FIRST, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7f9c4c110158 { id = "req", {s,f,r,e} = {0x7f9c4c111730,0x7f9c4c111730,(nil),+59632}, }, http[req] = { ws = (nil)[] }, worker = 0x7f9c4d93dc60 { ws = 0x7f9c4d93de20 { id = "wrk", {s,f,r,e} = {0x7f9c4d93d450,0x7f9c4d93d450,(nil),+2048}, }, }, }, 2012-06-05 21:47:20 WARNING [0, 0]: cold-default(httperf): Varnishstat uptime and measured run-time is too large (measured: 410 stat: 142 diff: 268). Did we crash? -- Assert error in cnt_first(), cache/cache_center.c line 943: Condition(req->sp == sp) not true. thread = (cache-worker) ident = Linux,2.6.32-38-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431d88: pan_ic+d8 0x41b021: CNT_Session+781 0x43670d: ses_pool_task+fd 0x433592: Pool_Work_Thread+112 0x441128: wrk_thread_real+c8 0x7fa1564f99ca: _end+7fa155e75202 0x7fa156256cdd: _end+7fa155bd2515 sp = 0x7fa155e20620 { fd = 12, id = 12, xid = 0, client = , step = STP_FIRST, handling = deliver, restarts = 0, esi_level = 0 ws = 0x7fa146e02158 { id = "req", {s,f,r,e} = {0x7fa146e03730,0x7fa146e03730,(nil),+59632}, }, http[req] = { ws = (nil)[] }, worker = 0x7fa14773dc60 { ws = 0x7fa14773de20 { id = "wrk", {s,f,r,e} = {0x7fa14773d450,0x7fa14773d450,(nil),+2048}, }, }, }, 2012-06-05 22:12:58 WARNING [0, 0]: purge-fail(httperf): Varnishstat uptime and measured run-time is too large (measured: 250 stat: 37 diff: 213). Did we crash? }}} These seem to fail for just about any type of test at the moment, but not consistently. 
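To see at a glance which assert sites dominate an inconsistent run like the one above, the panic output can be condensed with standard shell tools. A quick sketch: the log path and sample contents below are made up for illustration; point the pipeline at wherever fryer captured the panics instead.

```shell
# Build a small hypothetical sample of captured panic output.
cat > /tmp/panics.log <<'EOF'
Assert error in cnt_first(), cache/cache_center.c line 943: Condition(req->sp == sp) not true.
Assert error in VCL_recv_method(), ../../include/tbl/vcl_returns.h line 27: Condition((req->sp) != NULL) not true.
Assert error in cnt_first(), cache/cache_center.c line 943: Condition(req->sp == sp) not true.
EOF

# Count how often each distinct assertion fired, most frequent first.
grep '^Assert error' /tmp/panics.log | sort | uniq -c | sort -rn
```

Run against the real capture, this turns pages of backtraces into a short frequency table of failing assertions, which makes it easier to see whether one code path accounts for most of the crashes.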
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 6 11:09:22 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 06 Jun 2012 11:09:22 -0000 Subject: [Varnish] #1100: Assert error in stv_alloc(), stevedore.c line 144 In-Reply-To: <042.63d2077ebde3a5ab8b3d16b5f7a5e9eb@varnish-cache.org> References: <042.63d2077ebde3a5ab8b3d16b5f7a5e9eb@varnish-cache.org> Message-ID: <051.8995b3db640c7ee26d869f9dae6e0d53@varnish-cache.org> #1100: Assert error in stv_alloc(), stevedore.c line 144 ---------------------+------------------------------------------------------ Reporter: Roze | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Resolution: fixed | Keywords: ---------------------+------------------------------------------------------ Comment(by Tollef Fog Heen ): (In [82c4b095509e46acdb8a5b24099527a7497084d7]) Don't assert if we fail to get storage in VFP_Begin() Fixes #1100 Conflicts: bin/varnishd/cache_fetch.c bin/varnishd/cache_panic.c -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 6 20:49:07 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 06 Jun 2012 20:49:07 -0000 Subject: [Varnish] #1149: Varnishadm output buffer problems In-Reply-To: <046.fc0bec9370f5be5338c400f8f20e6216@varnish-cache.org> References: <046.fc0bec9370f5be5338c400f8f20e6216@varnish-cache.org> Message-ID: <055.dcfe518c9732afd7ecf3e36b69c18138@varnish-cache.org> #1149: Varnishadm output buffer problems ----------------------+----------------------------------------------------- Reporter: adriansw | Type: defect Status: new | Priority: normal Milestone: | Component: varnishadm Version: 3.0.2 | Severity: major Keywords: | ----------------------+----------------------------------------------------- Comment(by lkarsten): I am not able to reproduce in a quick test on 3.0.2. 
varnishadm over the internet, both debug.health and param.show -l work fine. Three backends. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 6 22:14:10 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 06 Jun 2012 22:14:10 -0000 Subject: [Varnish] #1149: Varnishadm output buffer problems In-Reply-To: <046.fc0bec9370f5be5338c400f8f20e6216@varnish-cache.org> References: <046.fc0bec9370f5be5338c400f8f20e6216@varnish-cache.org> Message-ID: <055.e9b6277d9b890fcfa6000ee69e92b3c1@varnish-cache.org> #1149: Varnishadm output buffer problems ----------------------+----------------------------------------------------- Reporter: adriansw | Type: defect Status: new | Priority: normal Milestone: | Component: varnishadm Version: 3.0.2 | Severity: major Keywords: | ----------------------+----------------------------------------------------- Comment(by adriansw): Hello, thanks for your help. Using 3 backends is not a problem; my test case had 17 backends (that's a lot of output). I am unable to test just how many lines received is the problem (i.e. anything over 6 or 7 or 8... backends) because right now I can't reduce the backend pool that low. I may scrape together a test server with fake backends to see where the breaking point is. 
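One way to probe the breaking point without shrinking the production pool, as suggested in the comment above, is to generate a throwaway VCL with many dummy backends and load it on a scratch varnishd. A sketch under assumptions: the backend addresses, ports, and output path below are placeholders, and the generated file uses Varnish 3 backend declaration syntax.

```shell
# Generate a test VCL with N dummy backends (hypothetical addresses/ports);
# load it on a scratch varnishd, then run debug.health or backend.list via
# varnishadm to see how much output the client handles before breaking.
N=17
for i in $(seq 1 "$N"); do
    printf 'backend be%d { .host = "127.0.0.1"; .port = "81%02d"; }\n' "$i" "$i"
done > /tmp/manybackends.vcl

# One declaration per backend.
wc -l < /tmp/manybackends.vcl
```

Varying N and rerunning the varnishadm commands would narrow down how many backends' worth of output triggers the reported buffer problem.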
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 7 00:54:54 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 07 Jun 2012 00:54:54 -0000 Subject: [Varnish] #1149: Varnishadm output buffer problems In-Reply-To: <046.fc0bec9370f5be5338c400f8f20e6216@varnish-cache.org> References: <046.fc0bec9370f5be5338c400f8f20e6216@varnish-cache.org> Message-ID: <055.d2eead9af938789d4d4f09bf3e15baa7@varnish-cache.org> #1149: Varnishadm output buffer problems ----------------------+----------------------------------------------------- Reporter: adriansw | Type: defect Status: new | Priority: normal Milestone: | Component: varnishadm Version: 3.0.2 | Severity: major Keywords: | ----------------------+----------------------------------------------------- Comment(by adriansw): Hello, it turns out the remote varnishadm client did not match the server version by mistake. The remote was using Debian's default v2 packages instead of the Varnish project's v3 packages. Version 3.0.2 of the client was installed and has no problem receiving a lot of output from the server. I didn't make any further param changes after the initial report, only the upgrade; noting this in case someone finds this as a search result in the future. Thanks. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 7 08:29:32 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 07 Jun 2012 08:29:32 -0000 Subject: [Varnish] #1149: Varnishadm output buffer problems In-Reply-To: <046.fc0bec9370f5be5338c400f8f20e6216@varnish-cache.org> References: <046.fc0bec9370f5be5338c400f8f20e6216@varnish-cache.org> Message-ID: <055.46800ade3b8f18aaf83c7f6d9fbc0fba@varnish-cache.org> #1149: Varnishadm output buffer problems -------------------------+-------------------------------------------------- Reporter: adriansw | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishadm Version: 3.0.2 | Severity: major Resolution: worksforme | Keywords: -------------------------+-------------------------------------------------- Changes (by lkarsten): * status: new => closed * resolution: => worksforme -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 7 11:57:01 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 07 Jun 2012 11:57:01 -0000 Subject: [Varnish] #1150: Fast expiry_sleep causing panics Message-ID: <045.8334b39d0642817ed08d71ad9c27d0d6@varnish-cache.org> #1150: Fast expiry_sleep causing panics --------------------------------+------------------------------------------- Reporter: jjordan | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: critical Keywords: expiry_sleep panic | --------------------------------+------------------------------------------- We were having an issue with SMA.Transient.g_alloc growing unbounded because we were getting stuff into it faster than the expiry thread could remove it. We turned down the expiry_sleep. 
With expiry_sleep at 0.02 it keeps up with the Transient expiring such that it doesn't grow unbounded, but every couple of minutes after the Transient store is empty we get the following panic: varnish> panic.show 200 Last panic at: Sun, 03 Jun 2012 08:43:09 GMT Assert error in oc_getobj(), cache.h line 452: Condition((oc->flags & (1<<1)) == 0) not true. thread = (cache-timeout) ident = Linux,2.6.18-308.4.1.el5,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42c7a6: /usr/sbin/varnishd [0x42c7a6] 0x420a35: /usr/sbin/varnishd [0x420a35] 0x42ebac: /usr/sbin/varnishd [0x42ebac] 0x3a9de0677d: /lib64/libpthread.so.0 [0x3a9de0677d] 0x3a9cad325d: /lib64/libc.so.6(clone+0x6d) [0x3a9cad325d] varnish> param.show expiry_sleep 200 expiry_sleep 0.020000 [seconds] Default is 1 How long the expiry thread sleeps when there is nothing for it to do. varnish> If I set expiry_sleep any higher, it doesn't keep up and the Transient store acts basically as a slow memory leak. We are running 3.0.2 on CentOS 5.8, Kernel 2.6.18-308.4.1.el5, SMP x86_64 GNU/Linux -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 7 14:06:53 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 07 Jun 2012 14:06:53 -0000 Subject: [Varnish] #1151: Varnish to Apache sometimes return error 503 Message-ID: <051.c80535989726826a582cfd70a9799873@varnish-cache.org> #1151: Varnish to Apache sometimes return error 503 -----------------------------------+---------------------------------------- Reporter: rj@? | Type: defect Status: new | Priority: high Milestone: | Component: build Version: 3.0.2 | Severity: major Keywords: 503 http error apache | -----------------------------------+---------------------------------------- Hey guys. I have a setup where the firewall load balances down to two servers (load balancers). Here Nginx passes HTTP and HTTPS to a Varnish cache on the same server. 
From that Varnish it is sent to the other Varnish (on the other load-balancing server) and from there to a backend server with Apache2. (The setup was discussed with Martin Blix Grydeland during a Varnish course held in Denmark and is working well besides the issue at hand.) Now I have for some time been tracking some strange 503 errors, but I can't seem to find the silver bullet. Currently I am logging the 503 errors through Varnish this way: {{{ sudo varnishlog -c -m TxStatus:503 >> /home/rj/varnishlog503.log }}} and then referring to the Apache access log to see if any 503 requests have been handled. Today I had a health check from the firewall that failed: {{{ 20 SessionOpen c 127.0.0.1 34319 :8081 20 ReqStart c 127.0.0.1 34319 607335635 20 RxRequest c HEAD 20 RxURL c /health-check 20 RxProtocol c HTTP/1.0 20 RxHeader c X-Real-IP: 192.168.3.254 20 RxHeader c Host: 192.168.3.189 20 RxHeader c X-Forwarded-For: 192.168.3.254 20 RxHeader c Connection: close 20 RxHeader c User-Agent: Astaro Service Monitor 0.9 20 RxHeader c Accept: */* 20 VCL_call c recv lookup 20 VCL_call c hash 20 Hash c /health-check 20 VCL_return c hash 20 VCL_call c miss fetch 20 Backend c 33 aurum aurum 20 FetchError c http first read error: -1 11 (No error recorded) 20 VCL_call c error deliver 20 VCL_call c deliver deliver 20 TxProtocol c HTTP/1.1 20 TxStatus c 503 20 TxResponse c Service Unavailable 20 TxHeader c Server: Varnish 20 TxHeader c Content-Type: text/html; charset=utf-8 20 TxHeader c Retry-After: 5 20 TxHeader c Content-Length: 879 20 TxHeader c Accept-Ranges: bytes 20 TxHeader c Date: Wed, 06 Jun 2012 12:35:12 GMT 20 TxHeader c X-Varnish: 607335635 20 TxHeader c Age: 60 20 TxHeader c Via: 1.1 varnish 20 TxHeader c Connection: close 20 Length c 879 20 ReqEnd c 607335635 1338986052.649786949 1338986112.648169994 0.000160217 59.997980356 0.000402689 }}} And here is another example of a 503 error: {{{ 16 SessionOpen c 127.0.0.1 44997 :8081 16 ReqStart c 127.0.0.1 44997 613052041 16 RxRequest c 
POST 16 RxURL c /?id=412 16 RxProtocol c HTTP/1.0 16 RxHeader c X-Real-IP: 80.210.231.31 16 RxHeader c Host: gnisten.sfoweb.dk 16 RxHeader c X-Forwarded-For: 80.210.231.31 16 RxHeader c Connection: close 16 RxHeader c Content-Length: 434 16 RxHeader c Cache-Control: max-age=0 16 RxHeader c Origin: https://gnisten.sfoweb.dk 16 RxHeader c User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5 16 RxHeader c Content-Type: application/x-www-form-urlencoded 16 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 16 RxHeader c Referer: https://gnisten.sfoweb.dk/?id=412 16 RxHeader c Accept-Encoding: gzip,deflate,sdch 16 RxHeader c Accept-Language: da-DK,da;q=0.8,en-US;q=0.6,en;q=0.4 16 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 16 RxHeader c Cookie: PHPSESSID=ovqafgba8q4pvokmu3eso89ct4; fe_typo_user=9ea6d60afb41a72033479726ae4fac89 16 VCL_call c recv pass 16 VCL_call c hash 16 Hash c /?id=412 16 VCL_return c hash 16 VCL_call c pass pass 16 FetchError c no backend connection 16 VCL_call c error deliver 16 TxProtocol c HTTP/1.1 16 TxStatus c 503 16 TxResponse c Service Unavailable 16 TxHeader c Server: Varnish 16 TxHeader c Content-Type: text/html; charset=utf-8 16 TxHeader c Retry-After: 5 16 TxHeader c Content-Length: 879 16 TxHeader c Accept-Ranges: bytes 16 TxHeader c Date: Thu, 07 Jun 2012 13:34:41 GMT 16 TxHeader c X-Varnish: 613052041 16 TxHeader c Age: 1 16 TxHeader c Via: 1.1 varnish 16 TxHeader c Connection: close 16 Length c 879 16 ReqEnd c 613052041 1339076080.814462662 1339076081.515620232 0.000342369 0.701060295 0.000097275 }}} Now the backend server (Apache) does not have any 503 in the access log at this point, so it would seem the request never reached the server. At the time I get the 503 error, traffic is still being sent to the server (I can see requests in the access log - but not the 503 one). 
This happens even in the morning, when the server doesn't seem to be doing anything. A summary of the FetchErrors for the last two days: {{{ 7 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 6 FetchError c http first read error: -1 11 (No error recorded) 24 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 6 FetchError c http first read error: -1 11 (No error recorded) 20 FetchError c http first read error: -1 11 (No error recorded) 20 FetchError c http first read error: -1 11 (No error recorded) 19 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 23 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 18 FetchError c http first read error: -1 11 (No error recorded) 36 FetchError c no backend connection 18 FetchError c http first read error: -1 11 (No error recorded) 24 FetchError c http first read error: -1 11 (No error recorded) 22 FetchError c http first read error: -1 11 (No error recorded) 23 FetchError c no backend connection 33 FetchError c no backend connection 25 FetchError c http first read error: -1 11 (No error recorded) 30 FetchError c http first read error: -1 11 (No error recorded) 36 FetchError c no backend connection 17 FetchError c http first read error: -1 11 (No error recorded) 5 FetchError c http first read error: -1 11 (No error recorded) 17 FetchError c http first read error: -1 11 (No error recorded) 34 FetchError c http first read error: -1 11 (No error recorded) 26 FetchError c http first read error: -1 11 (No error recorded) 20 FetchError c http first read error: -1
11 (No error recorded) 5 FetchError c http first read error: -1 11 (No error recorded) 17 FetchError c http first read error: -1 11 (No error recorded) 19 FetchError c http first read error: -1 11 (No error recorded) 7 FetchError c http first read error: -1 11 (No error recorded) 7 FetchError c http first read error: -1 11 (No error recorded) 18 FetchError c http first read error: -1 11 (No error recorded) 7 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 7 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 17 FetchError c http first read error: -1 11 (No error recorded) 20 FetchError c http first read error: -1 11 (No error recorded) 6 FetchError c http first read error: -1 11 (No error recorded) 19 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 4 FetchError c http first read error: -1 11 (No error recorded) 7 FetchError c http first read error: -1 11 (No error recorded) 7 FetchError c backend write error: 11 (Resource temporarily unavailable) 4 FetchError c http first read error: -1 11 (No error recorded) 32 FetchError c http first read error: -1 11 (No error recorded) 18 FetchError c no backend connection 7 FetchError c no backend connection 4 FetchError c no backend connection }}} I haven't changed the default timeout values for varnish. This is my configuration for the backends and the director. 
{{{ backend radon { .host = "192.168.3.186"; .port = "80"; .probe = { .url = "/health-check/"; .interval = 3s; .window = 5; .threshold = 2; } } backend xenon { .host = "192.168.3.187"; .port = "80"; .probe = { .url = "/health-check/"; .interval = 3s; .window = 5; .threshold = 2; } } backend aurum { .host = "127.0.0.1"; .port = "8085"; .probe = { .url = "/health-check"; .interval = 3s; .window = 5; .threshold = 2; } } backend iridium { .host = "192.168.3.189"; .port = "8081"; .probe = { .url = "/varnish-health"; .interval = 3s; .window = 5; .threshold = 2; } } director balance client { { .backend = radon; .weight = 1; } { .backend = xenon; .weight = 1; } } }}} I have also tried doubling the number of threads apache is allowed to run, but without any result. I still get around 10-20 503 errors a day, and they come from various requests. I'm assuming this is some kind of defect/bug; I haven't been able to find anything about this issue on the Internet. I have set the priority and severity a bit higher than normal since our clients receive 503 errors in delicate operations. Best regards Ronnie -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 7 14:14:48 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 07 Jun 2012 14:14:48 -0000 Subject: [Varnish] #1151: Varnish to Apache sometimes return error 503 In-Reply-To: <051.c80535989726826a582cfd70a9799873@varnish-cache.org> References: <051.c80535989726826a582cfd70a9799873@varnish-cache.org> Message-ID: <060.585548385c84f0d3afccf2391b28ba89@varnish-cache.org> #1151: Varnish to Apache sometimes return error 503 -----------------------------------+---------------------------------------- Reporter: rj@?
| Type: defect Status: new | Priority: high Milestone: | Component: build Version: 3.0.2 | Severity: major Keywords: 503 http error apache | -----------------------------------+---------------------------------------- Comment(by rj@?): I have just run varnishadm debug.health {{{ Backend radon is Healthy Current states good: 5 threshold: 2 window: 5 Average responsetime of good probes: 0.002560 Oldest Newest ================================================================ 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy Backend xenon is Healthy Current states good: 5 threshold: 2 window: 5 Average responsetime of good probes: 0.002760 Oldest Newest ================================================================ 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy Backend iridium is Healthy Current states good: 5 threshold: 2 window: 5 Average responsetime of good probes: 0.000849 Oldest Newest ================================================================ 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy Backend aurum is Healthy Current states good: 5 threshold: 2 window: 5 Average responsetime of good probes: 0.002100 Oldest Newest ================================================================ 
4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH Happy }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jun 8 08:25:38 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 08 Jun 2012 08:25:38 -0000 Subject: [Varnish] #1151: Varnish to Apache sometimes return error 503 In-Reply-To: <051.c80535989726826a582cfd70a9799873@varnish-cache.org> References: <051.c80535989726826a582cfd70a9799873@varnish-cache.org> Message-ID: <060.3d186bd5c3fdf2da5b3983dc1288d6ed@varnish-cache.org> #1151: Varnish to Apache sometimes return error 503 -----------------------------------+---------------------------------------- Reporter: rj@? | Type: defect Status: new | Priority: high Milestone: | Component: build Version: 3.0.2 | Severity: major Keywords: 503 http error apache | -----------------------------------+---------------------------------------- Comment(by rj@?): Some more debug information. I had varnishtop running since I created this ticket. Load 1 {{{ 3224774 3.99 2.61 backend_conn - Backend conn. success 27 0.00 0.00 backend_unhealthy - Backend conn. not attempted 63 0.00 0.00 backend_fail - Backend conn. failures 358798 0.00 0.29 backend_reuse - Backend conn. reuses 21035 0.00 0.02 backend_toolate - Backend conn. was closed 379834 0.00 0.31 backend_recycle - Backend conn. recycles 26 0.00 0.00 backend_retry - Backend conn. retry }}} Load 2 {{{ 3217751 5.99 2.61 backend_conn - Backend conn. success 32 0.00 0.00 backend_fail - Backend conn. failures 364185 0.00 0.30 backend_reuse - Backend conn. reuses 27077 0.00 0.02 backend_toolate - Backend conn. was closed 391263 0.00 0.32 backend_recycle - Backend conn. recycles 36 0.00 0.00 backend_retry - Backend conn. 
retry }}} I have received more 503 errors, but varnish hasn't registered any backend_fail events? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jun 8 13:54:17 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 08 Jun 2012 13:54:17 -0000 Subject: [Varnish] #1152: Varnish 503 Error Message-ID: <046.7377b9a22842df3a192adfdc18007b69@varnish-cache.org> #1152: Varnish 503 Error ----------------------+----------------------------------------------------- Reporter: ernatalo | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.2 | Severity: critical Keywords: | ----------------------+----------------------------------------------------- I have observed that, with 2 Varnish servers commissioned behind a load balancer, the servers start producing 503 errors as client connections increase. The machines are identical and have 2 four-core CPUs at 3 GHz and 4 GB RAM. Initially the cache used malloc storage; then I tried putting it on hdd. That way I avoided occupying all of memory, but without success: memory usage stays below 2 GB. I attach the varnishstat -1 statistics, the default.vcl configuration file, and the varnishd daemon configuration. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 11 10:41:47 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 11 Jun 2012 10:41:47 -0000 Subject: [Varnish] #1151: Varnish to Apache sometimes return error 503 In-Reply-To: <051.c80535989726826a582cfd70a9799873@varnish-cache.org> References: <051.c80535989726826a582cfd70a9799873@varnish-cache.org> Message-ID: <060.3c55c2aa7a51faffc1b5db4839e0c5d3@varnish-cache.org> #1151: Varnish to Apache sometimes return error 503 -----------------------------------+---------------------------------------- Reporter: rj@?
| Type: defect Status: new | Priority: high Milestone: | Component: build Version: 3.0.2 | Severity: major Keywords: 503 http error apache | -----------------------------------+---------------------------------------- Comment(by rj@?): Could you please close this ticket? Recent activity on the server and in the logs indicates that the issue could very well be the apache server. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 12 08:03:15 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 Jun 2012 08:03:15 -0000 Subject: [Varnish] #1148: Fryer: Assert error in VRT_r_req_restarts() In-Reply-To: <046.c570150359d62b5943f39b991a4d1773@varnish-cache.org> References: <046.c570150359d62b5943f39b991a4d1773@varnish-cache.org> Message-ID: <055.34e82279b17c8d050672f4a9d3ba7654@varnish-cache.org> #1148: Fryer: Assert error in VRT_r_req_restarts() ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: duplicate Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => duplicate Comment: Duplicate of #1147 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 12 08:09:58 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 Jun 2012 08:09:58 -0000 Subject: [Varnish] #1153: No privilege seperation for cc-command Message-ID: <046.f2b34a70a6755770aedb54d89f37db15@varnish-cache.org> #1153: No privilege seperation for cc-command ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Keywords: 
----------------------+----------------------------------------------------- In short: {{{ param.set cc_command "id >> /tmp/bad_guy_was_here; exec gcc -std=gnu99 -g -O2 -pthread -fpic -shared -Wl,-x -o %o %s " }}} leads to: {{{ root at vac-agent:/etc# cat /tmp/bad_guy_was_here uid=0(root) gid=0(root) groups=0(root) uid=0(root) gid=0(root) groups=0(root) uid=0(root) gid=0(root) groups=0(root) uid=0(root) gid=0(root) groups=0(root) }}} The issue being that it's run as root, not that it works. Not confirmed on master yet. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 12 08:12:35 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 Jun 2012 08:12:35 -0000 Subject: [Varnish] #1152: Varnish 503 Error In-Reply-To: <046.7377b9a22842df3a192adfdc18007b69@varnish-cache.org> References: <046.7377b9a22842df3a192adfdc18007b69@varnish-cache.org> Message-ID: <055.7a3bebe67ca2a86f6723ec3331f62cc3@varnish-cache.org> #1152: Varnish 503 Error ----------------------+----------------------------------------------------- Reporter: ernatalo | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.2 | Severity: critical Keywords: | ----------------------+----------------------------------------------------- Comment(by ernatalo): Hello, the attached statistics were taken this morning after the tests I made using siege. This way I can reproduce what happens in reality: past 600/650 client connections the server stops serving and Varnish produces 503 Guru Meditation errors. The varnish server starts serving clients again after a few seconds. The attached varnishstat files refer to the startup of the varnish service and to the moment I saw the first reported 503 error from varnish. I made some changes to the configuration file, which I attached. I can't believe how many people declare that varnish can serve tens of thousands of requests per second with only 1 Varnish server.
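As an aside for anyone triaging dumps like the attached one: the headline cache hit ratio can be computed from `varnishstat -1` output with a one-liner. A minimal sketch, assuming the 3.0.x counter names:

```shell
# Sketch: cache hit ratio from a `varnishstat -1` dump fed on stdin.
# Counter names (cache_hit, cache_miss) are the Varnish 3.0.x ones.
hit_ratio() {
    awk '$1 == "cache_hit"  { hit = $2 }
         $1 == "cache_miss" { miss = $2 }
         END { if (hit + miss > 0) printf "%.1f%%\n", 100 * hit / (hit + miss) }'
}

# Two counter lines in varnishstat -1 format (sample values):
printf 'cache_hit 1649166805 312.25 Cache hits\ncache_miss 157833935 29.88 Cache misses\n' | hit_ratio
```

A low ratio under load would point at the VCL passing or missing most traffic rather than at varnish itself running out of capacity.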
The data we serve with varnish are mainly data-exchange objects: many, but small. The largest object is one HTML page, which is a bit big (a few hundred KB). The server is not even short of memory, as shown by free -m: {{{
             total       used       free     shared    buffers     cached
Mem:          3953       2049       1903          0        128       1689
-/+ buffers/cache:        230       3722
Swap:         7624          0       7624
}}} I can't spend any more resources, so I must decide to return to Squid, which, although single-process, did not give all these problems of configuration interpretation. Thank you again for the help. Regards -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 12 17:54:19 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 Jun 2012 17:54:19 -0000 Subject: [Varnish] #1054: Child not responding to CLI, killing it In-Reply-To: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> References: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> Message-ID: <055.e146900c7bad69d8dd66e544be7ef049@varnish-cache.org> #1054: Child not responding to CLI, killing it -----------------------+---------------------------------------------------- Reporter: scorillo | Type: defect Status: reopened | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Resolution: | Keywords: -----------------------+---------------------------------------------------- Comment(by bstillwell): The source of the problem seems to be the defragmenting done by transparent hugepages.
Once we disabled them with the following command, our problems went away: {{{ echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 13 02:32:05 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 13 Jun 2012 02:32:05 -0000 Subject: [Varnish] #1154: varnish serving a wrong file Message-ID: <048.eefa373bd5e46d246a50a890a7600feb@varnish-cache.org> #1154: varnish serving a wrong file ------------------------+--------------------------------------------------- Reporter: varnish302 | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: | ------------------------+--------------------------------------------------- I have been using varnish for about 2 years. Recently I discovered that it sometimes serves the wrong file. For example: the URL is /foo/a.jpg, but the response content is actually from another file (/bar/b.jpg). I upgraded to varnish-3.0.2, and it still happens sometimes. What are the possible reasons for this problem? 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 13 06:35:17 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 13 Jun 2012 06:35:17 -0000 Subject: [Varnish] #1154: varnish serving a wrong file In-Reply-To: <048.eefa373bd5e46d246a50a890a7600feb@varnish-cache.org> References: <048.eefa373bd5e46d246a50a890a7600feb@varnish-cache.org> Message-ID: <057.8c6808fe25a7bb464d87905fb81933b1@varnish-cache.org> #1154: varnish serving a wrong file -------------------------+-------------------------------------------------- Reporter: varnish302 | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Resolution: worksforme | Keywords: -------------------------+-------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: That can only happen if they have the same hash-key, and that can only happen if you have defined a vcl_hash{} function that does that. If you have evidence that this is a bug in varnish, please reopen this ticket and include at the very minimum your vcl code and preferably varnishlog output. 
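For context, when no vcl_hash is defined, Varnish falls back to the built-in one, which keys only on the URL plus the Host header (or the server IP when there is no Host). This is a sketch of that built-in logic as shipped in 3.0's default.vcl; treat the exact wording as approximate:

{{{
sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (hash);
}
}}}

With this default, two different URLs cannot share a hash key, which is why a collision points at a custom vcl_hash.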
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 13 08:03:50 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 13 Jun 2012 08:03:50 -0000 Subject: [Varnish] #1153: No privilege seperation for cc-command In-Reply-To: <046.f2b34a70a6755770aedb54d89f37db15@varnish-cache.org> References: <046.f2b34a70a6755770aedb54d89f37db15@varnish-cache.org> Message-ID: <055.605f84e77bb68c5a60f6e5be1ef44da7@varnish-cache.org> #1153: No privilege seperation for cc-command ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [b7175b38ad96ae57888e930a12cb88e33005178e]) Priv-sep vcc and cc also. Fixes #1153 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 13 11:03:35 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 13 Jun 2012 11:03:35 -0000 Subject: [Varnish] #1154: varnish serving a wrong file In-Reply-To: <048.eefa373bd5e46d246a50a890a7600feb@varnish-cache.org> References: <048.eefa373bd5e46d246a50a890a7600feb@varnish-cache.org> Message-ID: <057.cdd73bcf7d7f2f04c1f575537af8bcb4@varnish-cache.org> #1154: varnish serving a wrong file -------------------------+-------------------------------------------------- Reporter: varnish302 | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Resolution: worksforme | Keywords: -------------------------+-------------------------------------------------- Comment(by varnish302): I don't define a vcl_hash. 
Here is my vcl code: {{{ backend server1 { .host = "127.0.0.1"; .port = "8000"; } backend server2 { .host = "192.168.1.1"; .port = "8080"; } # # Below is a commented-out copy of the default VCL logic. If you # redefine any of these subroutines, the built-in logic will be # appended to your code. sub vcl_recv { set req.backend = server2; if (req.url ~ "^/foo/" || req.url ~ "^/bar/" || req.url ~ "^/test/") { set req.backend = server1; } if (req.restarts == 0) { if (req.http.x-forwarded-for) { set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip; } else { set req.http.X-Forwarded-For = client.ip; } } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* Non-RFC2616 or CONNECT which is weird. */ return (pipe); } if (req.request == "DELETE" || req.request == "PUT") { ban_url(req.url); } if (req.request != "GET" && req.request != "HEAD") { /* We only deal with GET and HEAD by default */ return (pass); } if (req.http.Authorization) { /* Not cacheable by default */ return (pass); } if (req.url ~ "something=") { return (pass); } return (lookup); } }}} I cannot reproduce it. But it dit happened about every 2 weeks.. Some of varnishstat -1 output: {{{ $ bin/varnishstat -1 client_conn 30472366 5.77 Client connections accepted client_drop 19367 0.00 Connection dropped, no sess/wrk client_req 1824494888 345.45 Client requests received cache_hit 1649166805 312.25 Cache hits cache_hitpass 245355 0.05 Cache hits for pass cache_miss 157833935 29.88 Cache misses backend_conn 33103046 6.27 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 1108 0.00 Backend conn. failures backend_reuse 141864468 26.86 Backend conn. reuses backend_toolate 44250 0.01 Backend conn. was closed backend_recycle 141910016 26.87 Backend conn. 
recycles backend_retry 531 0.00 Backend conn. retry fetch_head 239 0.00 Fetch head fetch_length 173702974 32.89 Fetch with Length fetch_chunked 0 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 2017 0.00 Fetch failed fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 1264520 0.24 Fetch no body (304) n_sess_mem 7331 . N struct sess_mem n_sess 755 . N struct sess n_object 0 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 489 . N struct objectcore n_objecthead 490 . N struct objecthead n_waitinglist 197835 . N struct waitinglist n_vbc 9 . N struct vbc n_wrk 620 . N worker threads n_wrk_create 64180 0.01 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 167490 0.03 N worker threads limited n_wrk_lqueue 0 0.00 work request queue length n_wrk_queued 865748 0.16 N queued work requests n_wrk_drop 60676 0.01 N dropped work requests n_backend 3 . N backends n_expired 157832421 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_moved 391854312 . 
N LRU moved objects losthdr 722 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 1443097030 273.23 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 30453209 5.77 Total Sessions s_req 1824494888 345.45 Total Requests s_pipe 42 0.00 Total pipe s_pass 17136021 3.24 Total pass s_fetch 174965716 33.13 Total fetch s_hdrbytes 482009445815 91262.60 Total header bytes s_bodybytes 51526937574814 9755996.15 Total body bytes sess_closed 8840719 1.67 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 1823095244 345.18 Session Linger sess_herd 789328402 149.45 Session herd shm_records 79140151510 14984.22 SHM records shm_writes 3068111358 580.91 SHM writes shm_flushes 401 0.00 SHM flushes due to overflow shm_cont 3729798 0.71 SHM MTX contention shm_cycles 30880 0.01 SHM cycles through buffer sms_nreq 362276 0.07 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 141085162 . SMS bytes allocated sms_bfree 141085162 . SMS bytes freed backend_req 174967552 33.13 Backend requests made n_vcl 10 0.00 N vcl total n_vcl_avail 10 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_ban 1 . 
N total active bans n_ban_add 921286 0.17 N new bans added n_ban_retire 921285 0.17 N old bans deleted n_ban_obj_test 212986878 40.33 N objects tested n_ban_re_test 861931714 163.20 N regexps tested against n_ban_dups 15252 0.00 N duplicate bans removed hcb_nolock 1807101152 342.15 HCB Lookups without lock hcb_lock 139930675 26.49 HCB Lookups with lock hcb_insert 139930416 26.49 HCB Inserts esi_errors 0 0.00 ESI parse errors (unlock) esi_warnings 0 0.00 ESI parse warnings (unlock) accept_fail 0 0.00 Accept failures client_drop_late 41309 0.01 Connection dropped late uptime 5281566 1.00 Client uptime dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache vmods 0 . Loaded VMODs n_gzip 0 0.00 Gzip operations n_gunzip 0 0.00 Gunzip operations LCK.sms.creat 1 0.00 Created locks LCK.sms.destroy 0 0.00 Destroyed locks LCK.sms.locks 1086828 0.21 Lock Operations LCK.sms.colls 0 0.00 Collisions LCK.smp.creat 0 0.00 Created locks LCK.smp.destroy 0 0.00 Destroyed locks LCK.smp.locks 0 0.00 Lock Operations LCK.smp.colls 0 0.00 Collisions LCK.sma.creat 1 0.00 Created locks LCK.sma.destroy 0 0.00 Destroyed locks LCK.sma.locks 64410834 12.20 Lock Operations LCK.sma.colls 0 0.00 Collisions LCK.smf.creat 1 0.00 Created locks LCK.smf.destroy 0 0.00 Destroyed locks LCK.smf.locks 631804935 119.62 Lock Operations LCK.smf.colls 0 0.00 Collisions LCK.hsl.creat 0 0.00 Created locks LCK.hsl.destroy 0 0.00 Destroyed locks LCK.hsl.locks 0 0.00 Lock Operations LCK.hsl.colls 0 0.00 Collisions LCK.hcb.creat 1 0.00 Created locks LCK.hcb.destroy 0 0.00 Destroyed locks LCK.hcb.locks 279890437 52.99 Lock Operations LCK.hcb.colls 0 0.00 Collisions LCK.hcl.creat 0 0.00 Created locks LCK.hcl.destroy 0 0.00 Destroyed locks LCK.hcl.locks 0 0.00 Lock Operations LCK.hcl.colls 0 0.00 Collisions LCK.vcl.creat 1 0.00 Created locks LCK.vcl.destroy 0 0.00 Destroyed locks 
LCK.vcl.locks 346396 0.07 Lock Operations LCK.vcl.colls 0 0.00 Collisions LCK.stat.creat 1 0.00 Created locks LCK.stat.destroy 0 0.00 Destroyed locks LCK.stat.locks 7331 0.00 Lock Operations LCK.stat.colls 0 0.00 Collisions LCK.sessmem.creat 1 0.00 Created locks LCK.sessmem.destroy 0 0.00 Destroyed locks LCK.sessmem.locks 30493212 5.77 Lock Operations LCK.sessmem.colls 0 0.00 Collisions LCK.wstat.creat 1 0.00 Created locks LCK.wstat.destroy 0 0.00 Destroyed locks LCK.wstat.locks 35308008 6.69 Lock Operations LCK.wstat.colls 0 0.00 Collisions LCK.herder.creat 1 0.00 Created locks LCK.herder.destroy 0 0.00 Destroyed locks LCK.herder.locks 602156 0.11 Lock Operations LCK.herder.colls 0 0.00 Collisions LCK.wq.creat 2 0.00 Created locks LCK.wq.destroy 0 0.00 Destroyed locks LCK.wq.locks 1606680653 304.21 Lock Operations LCK.wq.colls 0 0.00 Collisions LCK.objhdr.creat 139973344 26.50 Created locks LCK.objhdr.destroy 139976181 26.50 Destroyed locks LCK.objhdr.locks 7567485873 1432.81 Lock Operations LCK.objhdr.colls 0 0.00 Collisions LCK.exp.creat 1 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 321015003 60.78 Lock Operations LCK.exp.colls 0 0.00 Collisions LCK.lru.creat 2 0.00 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 157903711 29.90 Lock Operations LCK.lru.colls 0 0.00 Collisions LCK.cli.creat 1 0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 1181598 0.22 Lock Operations LCK.cli.colls 0 0.00 Collisions LCK.ban.creat 1 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 535706998 101.43 Lock Operations LCK.ban.colls 0 0.00 Collisions LCK.vbp.creat 1 0.00 Created locks LCK.vbp.destroy 0 0.00 Destroyed locks LCK.vbp.locks 0 0.00 Lock Operations LCK.vbp.colls 0 0.00 Collisions LCK.vbe.creat 1 0.00 Created locks LCK.vbe.destroy 0 0.00 Destroyed locks LCK.vbe.locks 66209337 12.54 Lock Operations LCK.vbe.colls 0 0.00 Collisions LCK.backend.creat 3 0.00 Created locks 
LCK.backend.destroy 0 0.00 Destroyed locks LCK.backend.locks 422599991 80.01 Lock Operations LCK.backend.colls 0 0.00 Collisions SMF.s0.c_req 315902467 59.81 Allocator requests SMF.s0.c_fail 0 0.00 Allocator failures SMF.s0.c_bytes 4249229312000 804539.66 Bytes allocated SMF.s0.c_freed 4249229312000 804539.66 Bytes freed SMF.s0.g_alloc 0 . Allocations outstanding SMF.s0.g_bytes 0 . Bytes outstanding SMF.s0.g_space 3221225472 . Bytes available SMF.s0.g_smf 4 . N struct smf SMF.s0.g_smf_frag 0 . N small free smf SMF.s0.g_smf_large 4 . N large free smf SMA.Transient.c_req 32205377 6.10 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 1397823198198 264660.75 Bytes allocated SMA.Transient.c_freed 1397819398409 264660.03 Bytes freed SMA.Transient.g_alloc 2 . Allocations outstanding SMA.Transient.g_bytes 3799789 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available .. }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 13 12:15:15 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 13 Jun 2012 12:15:15 -0000 Subject: [Varnish] #1147: Assert error in VRT_l_beresp_do_stream(), cache/cache_vrt_var.c line 200: In-Reply-To: <046.e3699cff47fed762400f3a4ea6e68e2a@varnish-cache.org> References: <046.e3699cff47fed762400f3a4ea6e68e2a@varnish-cache.org> Message-ID: <055.efc63b84807bcd96353c5593505614a3@varnish-cache.org> #1147: Assert error in VRT_l_beresp_do_stream(), cache/cache_vrt_var.c line 200: ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [65fe9ad8a6459d329f4c6b562642d4711793ad9d]) Don't modify req 
after we freed it. Fixes #1147 and #1148 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 13 12:15:15 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 13 Jun 2012 12:15:15 -0000 Subject: [Varnish] #1148: Fryer: Assert error in VRT_r_req_restarts() In-Reply-To: <046.c570150359d62b5943f39b991a4d1773@varnish-cache.org> References: <046.c570150359d62b5943f39b991a4d1773@varnish-cache.org> Message-ID: <055.b6b195c8b4c2b39759454ec7fde20d1c@varnish-cache.org> #1148: Fryer: Assert error in VRT_r_req_restarts() ----------------------+----------------------------------------------------- Reporter: kristian | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Poul-Henning Kamp ): * resolution: duplicate => fixed Comment: (In [65fe9ad8a6459d329f4c6b562642d4711793ad9d]) Don't modify req after we freed it. 
Fixes #1147 and #1148 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 14 14:32:09 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 14 Jun 2012 14:32:09 -0000 Subject: [Varnish] #1045: Ban lurker doesn't work anymore In-Reply-To: <042.4946905ce0b79e243e705a144d3ed1c5@varnish-cache.org> References: <042.4946905ce0b79e243e705a144d3ed1c5@varnish-cache.org> Message-ID: <051.b781eee25f4189b21af9c8a01f884c37@varnish-cache.org> #1045: Ban lurker doesn't work anymore ---------------------+------------------------------------------------------ Reporter: Yvan | Type: defect Status: closed | Priority: high Milestone: | Component: varnishd Version: 3.0.2 | Severity: major Resolution: fixed | Keywords: ban lurker ---------------------+------------------------------------------------------ Comment(by Yvan): Shouldn't this bug, fixed 7 months ago, warrant a new release of varnish? The ban list just keeps increasing in 3.0.2; can I assume it's no problem and go with it anyway? I've just set an alert trigger on list > 20k+ bans, and so far the alert has never gone off. I reverted to 3.0.1, but newly installed servers automatically got 3.0.2 again. 
Thanks for fixing this by the way :-) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 14 18:51:39 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 14 Jun 2012 18:51:39 -0000 Subject: [Varnish] #1155: Bug on debian varnishncsa start/stop script Message-ID: <045.757aa566bd6a112441af3c671e033dcc@varnish-cache.org> #1155: Bug on debian varnishncsa start/stop script ---------------------+------------------------------------------------------ Reporter: Croydon | Type: defect Status: new | Priority: high Milestone: | Component: packaging Version: 3.0.2 | Severity: major Keywords: | ---------------------+------------------------------------------------------ On line 23 in the debian package /etc/init.d/varnishncsa there is a typo. It should say ${PIDFILE} and not $PIDFILE}. This leads to a problem when restarting varnishncsa manually or through logrotate etc., ending up with lots of running processes. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 18 10:02:23 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Jun 2012 10:02:23 -0000 Subject: [Varnish] #1155: Bug on debian varnishncsa start/stop script In-Reply-To: <045.757aa566bd6a112441af3c671e033dcc@varnish-cache.org> References: <045.757aa566bd6a112441af3c671e033dcc@varnish-cache.org> Message-ID: <054.f76a25692f7f38d8dfaf96911833121c@varnish-cache.org> #1155: Bug on debian varnishncsa start/stop script ----------------------+----------------------------------------------------- Reporter: Croydon | Type: defect Status: closed | Priority: high Milestone: | Component: packaging Version: 3.0.2 | Severity: major Resolution: fixed | Keywords: ----------------------+----------------------------------------------------- Changes (by tfheen): * status: new => closed * resolution: => fixed Comment: This is fixed in the 3.0.3 rc 1 packages, so closing. 
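The brace typo reported in #1155 above is easy to demonstrate in a shell. This is a minimal sketch; the pidfile path is an assumed example, not necessarily the value used by the Debian script:

```shell
PIDFILE=/var/run/varnishncsa.pid

# correct form: the braces delimit the variable name
echo "--pidfile ${PIDFILE}"     # --pidfile /var/run/varnishncsa.pid

# the typo: without the opening brace the trailing brace is literal,
# so the daemon helper is pointed at a pidfile path that never exists
echo "--pidfile $PIDFILE}"      # --pidfile /var/run/varnishncsa.pid}
```

With the stray `}` appended to the path, the stop action cannot find the pidfile of the old process, which matches the pile-up of running varnishncsa processes described in the report.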
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 18 10:07:07 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Jun 2012 10:07:07 -0000 Subject: [Varnish] #1054: Child not responding to CLI, killing it In-Reply-To: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> References: <046.c94a70b2cb7314de75dbde5ac039463e@varnish-cache.org> Message-ID: <055.ecfcce837eb16151021e5da48a0c451a@varnish-cache.org> #1054: Child not responding to CLI, killing it ----------------------+----------------------------------------------------- Reporter: scorillo | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Changes (by lkarsten): * owner: => lkarsten * status: reopened => new -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 18 10:10:57 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Jun 2012 10:10:57 -0000 Subject: [Varnish] #1152: Varnish 503 Error In-Reply-To: <046.7377b9a22842df3a192adfdc18007b69@varnish-cache.org> References: <046.7377b9a22842df3a192adfdc18007b69@varnish-cache.org> Message-ID: <055.e2f3b2456676e99d0af894fc32438bc6@varnish-cache.org> #1152: Varnish 503 Error ----------------------+----------------------------------------------------- Reporter: ernatalo | Owner: kristian Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.2 Severity: critical | Keywords: ----------------------+----------------------------------------------------- Changes (by tfheen): * owner: => kristian -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 18 10:11:40 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Jun 2012 10:11:40 -0000 Subject: [Varnish] #1151: Varnish 
to Apache sometimes return error 503 In-Reply-To: <051.c80535989726826a582cfd70a9799873@varnish-cache.org> References: <051.c80535989726826a582cfd70a9799873@varnish-cache.org> Message-ID: <060.5ac8adf64673c588b5fbdf949e143827@varnish-cache.org> #1151: Varnish to Apache sometimes return error 503 ----------------------------+----------------------------------------------- Reporter: rj@? | Type: defect Status: closed | Priority: high Milestone: | Component: build Version: 3.0.2 | Severity: major Resolution: worksforme | Keywords: 503 http error apache ----------------------------+----------------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: closed as requested. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 18 10:13:08 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Jun 2012 10:13:08 -0000 Subject: [Varnish] #1152: Varnish 503 Error In-Reply-To: <046.7377b9a22842df3a192adfdc18007b69@varnish-cache.org> References: <046.7377b9a22842df3a192adfdc18007b69@varnish-cache.org> Message-ID: <055.20b5a8d24a86c87041e60b551f437f6e@varnish-cache.org> #1152: Varnish 503 Error ----------------------+----------------------------------------------------- Reporter: ernatalo | Owner: kristian Type: defect | Status: closed Priority: high | Milestone: Component: varnishd | Version: 3.0.2 Severity: critical | Resolution: invalid Keywords: | ----------------------+----------------------------------------------------- Changes (by kristian): * status: new => closed * resolution: => invalid Comment: This is probably best handled by an e-mail to varnish-misc (See https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc) where you will have a much larger audience (and thus a better response time). You will want to include varnishlog output too, which is probably most central to this specific problem. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 18 10:14:41 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Jun 2012 10:14:41 -0000 Subject: [Varnish] #1150: Fast expiry_sleep causing panics In-Reply-To: <045.8334b39d0642817ed08d71ad9c27d0d6@varnish-cache.org> References: <045.8334b39d0642817ed08d71ad9c27d0d6@varnish-cache.org> Message-ID: <054.d3df4dceccfc12dadb10726547c119ea@varnish-cache.org> #1150: Fast expiry_sleep causing panics --------------------------------+------------------------------------------- Reporter: jjordan | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: critical Keywords: expiry_sleep panic | --------------------------------+------------------------------------------- Description changed by kristian: Old description: > We were having an issue with SMA.Transient.g_alloc growing unbounded > because we were getting stuff into it faster then the expiry thread could > remove it. > We turned down the expiry_sleep. With expiry_sleep at 0.02 it keeps up > with the Transient expiring such that it doesn't grow unbounded, but > every couple minutes after the Transient store is empty we get the > following panic: > > varnish> panic.show > 200 > Last panic at: Sun, 03 Jun 2012 08:43:09 GMT > Assert error in oc_getobj(), cache.h line 452: > Condition((oc->flags & (1<<1)) == 0) not true. > thread = (cache-timeout) > ident = Linux,2.6.18-308.4.1.el5,x86_64,-smalloc,-smalloc,-hcritbit,epoll > Backtrace: > 0x42c7a6: /usr/sbin/varnishd [0x42c7a6] > 0x420a35: /usr/sbin/varnishd [0x420a35] > 0x42ebac: /usr/sbin/varnishd [0x42ebac] > 0x3a9de0677d: /lib64/libpthread.so.0 [0x3a9de0677d] > 0x3a9cad325d: /lib64/libc.so.6(clone+0x6d) [0x3a9cad325d] > > > varnish> param.show expiry_sleep > 200 > expiry_sleep 0.020000 [seconds] > Default is 1 > How long the expiry thread sleeps when there is > nothing for it to do. 
> > varnish> > > If I set expiry_sleep any higher, it doesn't keep up and Transient store > acts basically as a slow memory leak. > > We are running 3.0.2 on Centos 5.8, Kernel 2.6.18-308.4.1.el5, SMP x86_64 > GNU/Linux New description: We were having an issue with SMA.Transient.g_alloc growing unbounded because we were getting stuff into it faster than the expiry thread could remove it. We turned down the expiry_sleep. With expiry_sleep at 0.02 it keeps up with the Transient expiring such that it doesn't grow unbounded, but every couple of minutes after the Transient store is empty we get the following panic: {{{ varnish> panic.show 200 Last panic at: Sun, 03 Jun 2012 08:43:09 GMT Assert error in oc_getobj(), cache.h line 452: Condition((oc->flags & (1<<1)) == 0) not true. thread = (cache-timeout) ident = Linux,2.6.18-308.4.1.el5,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42c7a6: /usr/sbin/varnishd [0x42c7a6] 0x420a35: /usr/sbin/varnishd [0x420a35] 0x42ebac: /usr/sbin/varnishd [0x42ebac] 0x3a9de0677d: /lib64/libpthread.so.0 [0x3a9de0677d] 0x3a9cad325d: /lib64/libc.so.6(clone+0x6d) [0x3a9cad325d] varnish> param.show expiry_sleep 200 expiry_sleep 0.020000 [seconds] Default is 1 How long the expiry thread sleeps when there is nothing for it to do. varnish> }}} If I set expiry_sleep any higher, it doesn't keep up and Transient store acts basically as a slow memory leak. 
We are running 3.0.2 on Centos 5.8, Kernel 2.6.18-308.4.1.el5, SMP x86_64 GNU/Linux -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 18 10:26:47 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Jun 2012 10:26:47 -0000 Subject: [Varnish] #1143: Slackware64-Current segfault In-Reply-To: <045.bdeacc2343bf5d909edbf2b7a1de20e2@varnish-cache.org> References: <045.bdeacc2343bf5d909edbf2b7a1de20e2@varnish-cache.org> Message-ID: <054.4ef2edf52f73c34501c6eb860168b691@varnish-cache.org> #1143: Slackware64-Current segfault -------------------------+-------------------------------------------------- Reporter: nanashi | Type: defect Status: closed | Priority: highest Milestone: | Component: varnishd Version: 3.0.2 | Severity: critical Resolution: worksforme | Keywords: -------------------------+-------------------------------------------------- Changes (by tfheen): * status: new => closed * resolution: => worksforme Comment: No, this is not normal, but it's hard to tell what's going on without a backtrace. I'm closing this bug for now. Please reopen if you can reproduce this with a backtrace. 
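Returning to ticket #1150 above: the reported behaviour is a race between the allocation rate into the Transient store and the reap rate of the expiry thread. The following toy model uses invented rates and is nothing like varnishd's actual scheduling; it only illustrates why lowering expiry_sleep keeps g_alloc bounded:

```python
def outstanding_after(alloc_per_sec, reaped_per_wakeup, expiry_sleep, seconds):
    """Toy model: objects enter Transient at a constant rate; the expiry
    thread wakes every expiry_sleep seconds and reaps at most a fixed
    batch. Returns the number of objects still outstanding afterwards."""
    allocated = alloc_per_sec * seconds
    wakeups = seconds / expiry_sleep
    reaped = min(allocated, wakeups * reaped_per_wakeup)
    return allocated - reaped

# default expiry_sleep=1: 60 wakeups in a minute reap only 600 of 6000 objects
print(outstanding_after(100, 10, 1.0, 60))   # 5400.0 outstanding, grows unbounded
# expiry_sleep=0.02: 3000 wakeups can reap everything that arrived
print(outstanding_after(100, 10, 0.02, 60))  # 0 outstanding, keeps up
```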
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 18 11:07:42 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Jun 2012 11:07:42 -0000 Subject: [Varnish] #1133: Varnishstat uses too narrow columns causing keys/values to bleed together In-Reply-To: <042.ae3feeb8be9fef444412ebceefa7d0b7@varnish-cache.org> References: <042.ae3feeb8be9fef444412ebceefa7d0b7@varnish-cache.org> Message-ID: <051.9af9b821787296514f612daf8ce9a5c2@varnish-cache.org> #1133: Varnishstat uses too narrow columns causing keys/values to bleed together -------------------------+-------------------------------------------------- Reporter: kane | Owner: daghf Type: defect | Status: closed Priority: normal | Milestone: Component: varnishstat | Version: 3.0.2 Severity: normal | Resolution: fixed Keywords: | -------------------------+-------------------------------------------------- Changes (by Dag Haavi Finstad ): * status: new => closed * resolution: => fixed Comment: (In [e9e4aa46214c17eab5839f7305626d85551f5702]) Fix for an off-by-one issue in do_once_cb(). Fixes: #1133 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 18 13:40:36 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Jun 2012 13:40:36 -0000 Subject: [Varnish] #1156: first_byte_timeout is "doubled" on the client side when keep-alive is used Message-ID: <041.241bf6dcc23aa9f0c29f48821fcf593f@varnish-cache.org> #1156: first_byte_timeout is "doubled" on the client side when keep-alive is used -------------------+-------------------------------------------------------- Reporter: tnt | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Keywords: | -------------------+-------------------------------------------------------- I'm experiencing a problem with first_byte_timeout in keepalive connections. 
For the second request, if the backend doesn't respond within the timeout, the backend connection will be closed (normal), but the 503 response will not be sent to the client immediately ... it's gonna take an additional first_byte_timeout sec for 503 error response to be sent to the client ... So if you have a 5s timeout, after 5s the backend connection is closed, but it's gonna take 10s for the 503 response to be sent out ... Confirmed on 3.0.3 rc1 and master -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 18 13:45:37 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Jun 2012 13:45:37 -0000 Subject: [Varnish] #1156: first_byte_timeout is "doubled" on the client side when keep-alive is used In-Reply-To: <041.241bf6dcc23aa9f0c29f48821fcf593f@varnish-cache.org> References: <041.241bf6dcc23aa9f0c29f48821fcf593f@varnish-cache.org> Message-ID: <050.3e7790b176e178ee04cf2eb7b4918ce4@varnish-cache.org> #1156: first_byte_timeout is "doubled" on the client side when keep-alive is used -------------------+-------------------------------------------------------- Reporter: tnt | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Comment(by tnt): The log was generated with a timeout set to 7 sec. The call was to a test api that sleep(). The first request is immediate sleep(0) and the second is a sleep(15). You can see it took 14 sec (i.e. 2*first_byte_timeout) to return ... time curl -v http://127.0.0.1:8080/wait/0/0 http://127.0.0.1:8080/wait/15/15 * About to connect() to 127.0.0.1 port 8080 (#0) * Trying 127.0.0.1... 
* connected * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0) > GET /wait/0/0 HTTP/1.1 > User-Agent: curl/7.24.0 (x86_64-pc-linux-gnu) libcurl/7.24.0 OpenSSL/1.0.0j zlib/1.2.5.1 > Host: 127.0.0.1:8080 > Accept: */* > < HTTP/1.1 200 OK < Content-Type: text/html; charset=utf-8 < Transfer-Encoding: chunked < Date: Mon, 18 Jun 2012 13:42:46 GMT < Age: 0 < Connection: keep-alive < Server: nydb < 0 EIxmOPybgttZFnLuUaqFejykKywwTXchVHuTgprbuqDglSNzMjxhHtORqMzzGnGvCgPZOBUgSnNmihaIKajtayotYYyiJWBGSUySDLSqTdscyUhnJambtvOhUOLhqSvaJpNcpKZsMjyJfVtRmsyMvczrmAaZPYTurREWNNtRbQjbBxnYUlqMnAkhFJpxiPxMNxTTuoegjjzChhbEhCAiZsAHQChvZfcjniiyPTUXbOokCDRZtupkGFfxHuJDfTcLaoOehTGoXgsiUWOpCUVMAXrcNAQGfTvMxsgznKbWbKnKIajApEfOrnJvGNQluYwxCJsQSZaPAczVISxnRlCnWrtrdgeGiYIwpRwIrVEiOUsKTAeWPVaexIQproTtTjhBuBBsTrRnByrQOkZdsuRzFwZLviEVvlmwwoGVJtNbrIHBbpvBGXXxjeCJamfeuGcDddWxpxhQOQZlYtUfeDkmWFJuEUOzzooCazNVghMbLyXZoiCGjcouSKRJTPAGZUFNaxvtYeiiXseDdjxxormLXDzxVmMbwIKXjPyzZOqBuutCbfdhYOcvQsKcmFwJbfzjQrhlpbMpBiUiGHdCTOISvzaIPlwKSnezGDvPZNTStShuepzTNwsQOjfguHRwGIyVhrIsKpUwnOOXjSaplMwnsFvOueUKDcVzGASaFfoYltrLSiKKxCbhYKVISGyrRRLAsEreRPOPFMRJzqKwZdyjvLVYockohBucvppEqTWJYqmuOvTRXJTwcAsHlVmDXsUYrNQFohZxCuoAmJEmKJvYwMJaCAhiuSIbhowygpOHqjKeEaHUxcupEbxRGLGXLnScUUHLcxOCKzpvwddOfkgkMUjydCrLJiggTffPMQQLIkqmyaLnKAehyFTjdiuKldxcsIuIaTMELcaJKsBcdXEGqmLVGKDraPTwSaKjwFMFcWCYxIGSZfrrxZVWOChxANTFUTDFofUyjZytpwrZdjoHAOknSKeNVhLEdqdUIktcMdhHKyWQukZfqQrUNptmqHJL 0 * Connection #0 to host 127.0.0.1 left intact * Re-using existing connection! (#0) with host (nil) * Connected to (nil) (127.0.0.1) port 8080 (#0) > GET /wait/15/15 HTTP/1.1 > User-Agent: curl/7.24.0 (x86_64-pc-linux-gnu) libcurl/7.24.0 OpenSSL/1.0.0j zlib/1.2.5.1 > Host: 127.0.0.1:8080 > Accept: */* > < HTTP/1.1 503 Service Unavailable < Content-Type: text/html; charset=utf-8 < Retry-After: 5 < Content-Length: 419 < Accept-Ranges: bytes < Date: Mon, 18 Jun 2012 13:43:00 GMT < Age: 14 < Connection: close < Server: nydb < 503 Service Unavailable

Error 503 Service Unavailable

Service Unavailable

Guru Meditation:

XID: 1754534937


Varnish cache server

* Closing connection #0 real 0m14.075s user 0m0.000s sys 0m0.004s -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 18 13:52:20 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Jun 2012 13:52:20 -0000 Subject: [Varnish] #1156: first_byte_timeout is "doubled" on the client side when keep-alive is used In-Reply-To: <041.241bf6dcc23aa9f0c29f48821fcf593f@varnish-cache.org> References: <041.241bf6dcc23aa9f0c29f48821fcf593f@varnish-cache.org> Message-ID: <050.2f318fde37bd30cbca11dd2f7da07c97@varnish-cache.org> #1156: first_byte_timeout is "doubled" on the client side when keep-alive is used -------------------+-------------------------------------------------------- Reporter: tnt | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Comment(by kristian): Block: {{{ time curl -v http://127.0.0.1:8080/wait/0/0 http://127.0.0.1:8080/wait/15/15 * About to connect() to 127.0.0.1 port 8080 (#0) * Trying 127.0.0.1... 
* connected * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0) > GET /wait/0/0 HTTP/1.1 > User-Agent: curl/7.24.0 (x86_64-pc-linux-gnu) libcurl/7.24.0 OpenSSL/1.0.0j zlib/1.2.5.1 > Host: 127.0.0.1:8080 > Accept: */* > < HTTP/1.1 200 OK < Content-Type: text/html; charset=utf-8 < Transfer-Encoding: chunked < Date: Mon, 18 Jun 2012 13:42:46 GMT < Age: 0 < Connection: keep-alive < Server: nydb < 0 EIxmOPybgttZFnLuUaqFejykKywwTXchVHuTgprbuqDglSNzMjxhHtORqMzzGnGvCgPZOBUgSnNmihaIKajtayotYYyiJWBGSUySDLSqTdscyUhnJambtvOhUOLhqSvaJpNcpKZsMjyJfVtRmsyMvczrmAaZPYTurREWNNtRbQjbBxnYUlqMnAkhFJpxiPxMNxTTuoegjjzChhbEhCAiZsAHQChvZfcjniiyPTUXbOokCDRZtupkGFfxHuJDfTcLaoOehTGoXgsiUWOpCUVMAXrcNAQGfTvMxsgznKbWbKnKIajApEfOrnJvGNQluYwxCJsQSZaPAczVISxnRlCnWrtrdgeGiYIwpRwIrVEiOUsKTAeWPVaexIQproTtTjhBuBBsTrRnByrQOkZdsuRzFwZLviEVvlmwwoGVJtNbrIHBbpvBGXXxjeCJamfeuGcDddWxpxhQOQZlYtUfeDkmWFJuEUOzzooCazNVghMbLyXZoiCGjcouSKRJTPAGZUFNaxvtYeiiXseDdjxxormLXDzxVmMbwIKXjPyzZOqBuutCbfdhYOcvQsKcmFwJbfzjQrhlpbMpBiUiGHdCTOISvzaIPlwKSnezGDvPZNTStShuepzTNwsQOjfguHRwGIyVhrIsKpUwnOOXjSaplMwnsFvOueUKDcVzGASaFfoYltrLSiKKxCbhYKVISGyrRRLAsEreRPOPFMRJzqKwZdyjvLVYockohBucvppEqTWJYqmuOvTRXJTwcAsHlVmDXsUYrNQFohZxCuoAmJEmKJvYwMJaCAhiuSIbhowygpOHqjKeEaHUxcupEbxRGLGXLnScUUHLcxOCKzpvwddOfkgkMUjydCrLJiggTffPMQQLIkqmyaLnKAehyFTjdiuKldxcsIuIaTMELcaJKsBcdXEGqmLVGKDraPTwSaKjwFMFcWCYxIGSZfrrxZVWOChxANTFUTDFofUyjZytpwrZdjoHAOknSKeNVhLEdqdUIktcMdhHKyWQukZfqQrUNptmqHJL 0 * Connection #0 to host 127.0.0.1 left intact * Re-using existing connection! (#0) with host (nil) * Connected to (nil) (127.0.0.1) port 8080 (#0) > GET /wait/15/15 HTTP/1.1 > User-Agent: curl/7.24.0 (x86_64-pc-linux-gnu) libcurl/7.24.0 OpenSSL/1.0.0j zlib/1.2.5.1 > Host: 127.0.0.1:8080 > Accept: */* > < HTTP/1.1 503 Service Unavailable < Content-Type: text/html; charset=utf-8 < Retry-After: 5 < Content-Length: 419 < Accept-Ranges: bytes < Date: Mon, 18 Jun 2012 13:43:00 GMT < Age: 14 < Connection: close < Server: nydb < 503 Service Unavailable

Error 503 Service Unavailable

Service Unavailable

Guru Meditation:

XID: 1754534937


Varnish cache server

* Closing connection #0 real 0m14.075s user 0m0.000s sys 0m0.004s }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 18 14:28:01 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Jun 2012 14:28:01 -0000 Subject: [Varnish] #1156: first_byte_timeout is "doubled" on the client side when keep-alive is used In-Reply-To: <041.241bf6dcc23aa9f0c29f48821fcf593f@varnish-cache.org> References: <041.241bf6dcc23aa9f0c29f48821fcf593f@varnish-cache.org> Message-ID: <050.545fcd3b981bc96a5662f33a192a01c2@varnish-cache.org> #1156: first_byte_timeout is "doubled" on the client side when keep-alive is used -------------------+-------------------------------------------------------- Reporter: tnt | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Comment(by tnt): A bit more investigation turned up some more info: - The request is actually retried on a new backend connection, which is why it takes twice as much time - It doesn't happen on the first connection because there is no 'retry' for fresh new backend connections. What happens is that the code that retries a request to a backend in the keep-alive case (e.g. if the backend closes the connection) decides to retry the request ... But that's not an acceptable behavior for a 'timeout' error; it is only acceptable for things like a connection reset, or if the request couldn't be sent at all, not for a timeout on the response. To check if this was really the source of the problem I tried this hack: {{{ diff --git a/bin/varnishd/cache/cache_fetch.c b/bin/varnishd/cache/cache_fetch.c index 7ccba0a..f71cd45 100644 --- a/bin/varnishd/cache/cache_fetch.c +++ b/bin/varnishd/cache/cache_fetch.c @@ -510,7 +510,7 @@ FetchHdr(struct sess *sp, int need_host_hdr, int sendbody) VDI_CloseFd(&bo->vbc); /* XXX: other cleanup ? 
*/ /* Retryable if we never received anything */ - return (i == -1 ? retry : -1); + return -1; } VTCP_set_read_timeout(vc->fd, vc->between_bytes_timeout); }}} (It's obviously not a 'fix' because it disables retry in all cases but that was just to confirm the source of the issue). Ideally we'd need to know if HTC_Rx timed out or had a real error. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 18 14:32:38 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Jun 2012 14:32:38 -0000 Subject: [Varnish] #1157: Returning pass in vcl_recv when doing streaming causes backend keepalive to fail ( "Stream Error" ) Message-ID: <041.c1e4fcdee4d20bbca7f36ca26433f595@varnish-cache.org> #1157: Returning pass in vcl_recv when doing streaming causes backend keepalive to fail ( "Stream Error" ) -------------------+-------------------------------------------------------- Reporter: tnt | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.2 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- If in a vcl_recv you return pass and you make use of streaming, the varnishlog will report a "Stream Error". I traced the issue to cnt_streambody that considers the lack of sp->obj->objcore to be a "Stream Error" which is not the case. Fix is attached (tested for > 1 week in prod. It fixed the issue and didn't turn up any bad effects) This doesn't happen in the git master because of the stream code rewrite, but it does happen in 3.0.3 rc1 and I think it'd be useful to be fixed for 3.0.3. 
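The retry policy tnt argues for in #1156 above can be summed up as a single predicate. This is hypothetical pseudologic with invented parameter names, not the actual FetchHdr()/HTC_Rx code:

```python
def should_retry_fetch(reused_connection, bytes_received, timed_out):
    """Retrying on a fresh connection only makes sense when a recycled
    keep-alive connection failed before delivering any response bytes,
    and the failure was not a timeout: a timeout would simply be paid a
    second time, doubling the client-visible delay."""
    return reused_connection and bytes_received == 0 and not timed_out

# backend silently dropped an idle keep-alive connection: safe to retry
print(should_retry_fetch(True, 0, False))  # True
# first_byte_timeout expired: do not retry, report the 503 immediately
print(should_retry_fetch(True, 0, True))   # False
```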
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 19 10:10:13 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 19 Jun 2012 10:10:13 -0000 Subject: [Varnish] #1003: Fix libedit (libreadline) support for FreeBSD In-Reply-To: <044.97ea0cb79c8172ef6b88dc23843a837c@varnish-cache.org> References: <044.97ea0cb79c8172ef6b88dc23843a837c@varnish-cache.org> Message-ID: <053.782282252798151bdbbad77e3812df7c@varnish-cache.org> #1003: Fix libedit (libreadline) support for FreeBSD -------------------------+-------------------------------------------------- Reporter: anders | Owner: tfheen Type: enhancement | Status: reopened Priority: normal | Milestone: Component: build | Version: 3.0.1 Severity: normal | Resolution: Keywords: | -------------------------+-------------------------------------------------- Changes (by anders): * status: closed => reopened * resolution: fixed => Comment: This does not work well in FreeBSD, varnishadm fails to build with libedit: {{{ libtool: link: gcc -std=gnu99 -g -O2 -D_THREAD_SAFE -pthread -g -fno- inline -DDIAGNOSTICS -Wextra -Wno-missing-field-initializers -Wno-sign- compare -o .libs/varnishadm varnishadm-varnishadm.o varnishadm-assert.o varnishadm-tcp.o varnishadm-vss.o ../../lib/libvarnishapi/.libs/libvarnishapi.so -L/usr/local/lib /usr/local/lib/libpcre.so ../../lib/libvarnishcompat/.libs/libvarnishcompat.so -ledit -lcurses -lm -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,-rpath -Wl,/usr/local/lib/varnish varnishadm-varnishadm.o(.text+0xa3): In function `cli_write': /root/varnish-cache/bin/varnishadm/varnishadm.c:82: undefined reference to `rl_callback_handler_remove' varnishadm-varnishadm.o(.text+0x697): In function `pass': /root/varnish-cache/bin/varnishadm/varnishadm.c:211: undefined reference to `rl_already_prompted' varnishadm-varnishadm.o(.text+0x6b7):/root/varnish- cache/bin/varnishadm/varnishadm.c:213: undefined reference to `rl_callback_handler_install' 
varnishadm-varnishadm.o(.text+0x816):/root/varnish- cache/bin/varnishadm/varnishadm.c:248: undefined reference to `rl_forced_update_display' varnishadm-varnishadm.o(.text+0x829):/root/varnish- cache/bin/varnishadm/varnishadm.c:253: undefined reference to `rl_callback_read_char' varnishadm-varnishadm.o(.text+0x871):/root/varnish- cache/bin/varnishadm/varnishadm.c:233: undefined reference to `rl_callback_handler_remove' varnishadm-varnishadm.o(.text+0x887):/root/varnish- cache/bin/varnishadm/varnishadm.c:215: undefined reference to `rl_callback_handler_install' varnishadm-varnishadm.o(.text+0x8b1):/root/varnish- cache/bin/varnishadm/varnishadm.c:236: undefined reference to `rl_callback_handler_remove' varnishadm-varnishadm.o(.text+0xa03): In function `send_line': /root/varnish-cache/bin/varnishadm/varnishadm.c:188: undefined reference to `rl_callback_handler_remove' varnishadm-varnishadm.o(.text+0x9fe):/root/varnish- cache/bin/varnishadm/varnishadm.c:186: undefined reference to `add_history' *** Error code 1 Stop in /root/varnish-cache/bin/varnishadm. *** Error code 1 Stop in /root/varnish-cache/bin. *** Error code 1 Stop in /root/varnish-cache. *** Error code 1 Stop in /root/varnish-cache. }}} If we build with libreadline (which is installed by default) it will work. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 19 15:54:47 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 19 Jun 2012 15:54:47 -0000 Subject: [Varnish] #1158: graced objects serve very old objects Message-ID: <046.2c701f0eda5d50250b65a0c0701b2378@varnish-cache.org> #1158: graced objects serve very old objects ----------------------+----------------------------------------------------- Reporter: nicholas | Type: enhancement Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: grace | ----------------------+----------------------------------------------------- Hello, this is probably not a bug, just surprising behaviour: We had problems this morning, and served a fair amount of errors for a while. The problem got solved, but error pages still got served at random intervals. We had vcl_recv { set req.grace = 8h ...} vcl_fetch { set beresp.grace = 8h ...} and it looks like it picked very old objects every time it graced simple requests while the backend is healthy. In normal operation we grace quite a lot of concurrent objects. We would hope it picked the most recent available object, ordered by time; it looked like it picked them in reverse time order instead. We have now done the "if (req.backend.healthy)" checks indicated in https://www.varnish-cache.org/trac/wiki/VCLExampleGrace We took those out some years back when it was obvious that we didn't know which backend was in use at the beginning of the vcl. We now do the "if (req.backend.healthy)" at the end of vcl_recv. Is this the recommended way? 
Nicholas -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 20 11:59:50 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 20 Jun 2012 11:59:50 -0000 Subject: [Varnish] #1157: Returning pass in vcl_recv when doing streaming causes backend keepalive to fail ( "Stream Error" ) In-Reply-To: <041.c1e4fcdee4d20bbca7f36ca26433f595@varnish-cache.org> References: <041.c1e4fcdee4d20bbca7f36ca26433f595@varnish-cache.org> Message-ID: <050.5a2e7b44027321e3d4d3fa7a06ef2499@varnish-cache.org> #1157: Returning pass in vcl_recv when doing streaming causes backend keepalive to fail ( "Stream Error" ) ---------------------+------------------------------------------------------ Reporter: tnt | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.2 | Severity: normal Resolution: fixed | Keywords: ---------------------+------------------------------------------------------ Changes (by martin): * status: new => closed * resolution: => fixed Comment: Commit 8c821b77b3ecda5ced37a35ab64bddd1b7f1caf2 fixes this issue. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 26 10:55:29 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 26 Jun 2012 10:55:29 -0000 Subject: [Varnish] #1158: graced objects serve very old objects In-Reply-To: <046.2c701f0eda5d50250b65a0c0701b2378@varnish-cache.org> References: <046.2c701f0eda5d50250b65a0c0701b2378@varnish-cache.org> Message-ID: <055.e26dbe36d161a5aa39e1e06bd32956d4@varnish-cache.org> #1158: graced objects serve very old objects ----------------------+----------------------------------------------------- Reporter: nicholas | Type: enhancement Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: grace | ----------------------+----------------------------------------------------- Comment(by nicholas): Hi! 
Looks like we've figured out what's going on: our ban regime bans on Cache-Control headers, while error messages for specific URLs don't have Cache-Control headers, and so the errors are the only objects available to us. vcl looks like this: ban("obj.http.cache-control ~ group=" + {"""} + req.url + {"""}); Incoming "purge" requests look like "PURGE http://localhost/art6123639" for article 6123639 in all its varieties on all kinds of pages. Any way to combine ban + keeping objects for grace? Or redoing the logic to use purge() against Cache-Control headers? All parts of the purge regime are self-made, so hints on doing it over to get it right are also appreciated. Greetings Nicholas -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 27 20:19:26 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 27 Jun 2012 20:19:26 -0000 Subject: [Varnish] #1159: bad IP address in test Message-ID: <045.3364ad2809a6f81e737fc49a2174112a@varnish-cache.org> #1159: bad IP address in test ---------------------+------------------------------------------------------ Reporter: jdrusch | Type: defect Status: new | Priority: normal Milestone: | Component: varnishtest Version: 3.0.2 | Severity: minor Keywords: | ---------------------+------------------------------------------------------ My varnish build was failing the c00003 test (it was returning a 200 instead of the expected 300). I finally realized it was because the bad IP address was the real broadcast address on my build machine, 10.255.255.255. As I believe that to be a fairly common broadcast address, I was wondering if we could get the bad IP used for the test changed to something a little less common? Even just changing it to 10.255.255.254, which is what I did. I feel weird suggesting this, but a real public IP would be the least common address - say the IP address of one of the root DNS servers should be pretty safe :) Anyway, just some suggestions. 
Just in case it's relevant, I'm using CentOS 6.2.

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Wed Jun 27 20:48:38 2012
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 27 Jun 2012 20:48:38 -0000
Subject: [Varnish] #1159: bad IP address in test
In-Reply-To: <045.3364ad2809a6f81e737fc49a2174112a@varnish-cache.org>
References: <045.3364ad2809a6f81e737fc49a2174112a@varnish-cache.org>
Message-ID: <054.2edf91ed167d22729707275281d9e889@varnish-cache.org>

#1159: bad IP address in test
---------------------+------------------------------------------------------
 Reporter:  jdrusch  |       Type:  defect
   Status:  new      |   Priority:  normal
Milestone:           |  Component:  varnishtest
  Version:  3.0.2    |   Severity:  minor
 Keywords:           |
---------------------+------------------------------------------------------
Comment(by bz):

How about stopping the use of 10/8 for tests completely? There are documentation prefixes, test-net[123] prefixes, ... defined in RFC 5737.

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Wed Jun 27 21:01:09 2012
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 27 Jun 2012 21:01:09 -0000
Subject: [Varnish] #1159: bad IP address in test
In-Reply-To: <045.3364ad2809a6f81e737fc49a2174112a@varnish-cache.org>
References: <045.3364ad2809a6f81e737fc49a2174112a@varnish-cache.org>
Message-ID: <054.910ef4bd840e89d6b2dc81f09809e830@varnish-cache.org>

#1159: bad IP address in test
---------------------+------------------------------------------------------
 Reporter:  jdrusch  |       Type:  defect
   Status:  new      |   Priority:  normal
Milestone:           |  Component:  varnishtest
  Version:  3.0.2    |   Severity:  minor
 Keywords:           |
---------------------+------------------------------------------------------
Comment(by jdrusch):

Thanks for the info, I'll read up on it. I wonder if there's a Red Hat quirk (or a mistake on my part) at play here, as 10.255.255.255 shouldn't be my broadcast now that I think about it...
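That suspicion is easy to check arithmetically: the broadcast address is the interface address with all host bits (the complement of the netmask) set to one, so 10.70.50.40 with netmask 255.255.0.0 should indeed broadcast to 10.70.255.255, not 10.255.255.255 - pointing at the interface configuration rather than at Varnish. A small illustrative C check (not part of the ticket):

```c
#include <arpa/inet.h>
#include <stdint.h>

/* Broadcast = interface address OR'ed with the complement of the
 * netmask, i.e. every host bit set to one. Addresses are handled in
 * host byte order internally. */
static uint32_t
broadcast_of(const char *addr, const char *mask)
{
	uint32_t a = ntohl(inet_addr(addr));
	uint32_t m = ntohl(inet_addr(mask));

	return (a | ~m);
}
```

For the interface shown above, broadcast_of("10.70.50.40", "255.255.0.0") yields 0x0A46FFFF, which inet_ntoa() renders as 10.70.255.255.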
inet addr:10.70.50.40 Mask:255.255.0.0 Bcast:10.255.255.255 I would think my broadcast should be 10.70.255.255 eth0 Link encap:Ethernet HWaddr 00:0C:29:65:D9:A0 inet addr:10.70.50.40 Bcast:10.255.255.255 Mask:255.255.0.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1166241 errors:0 dropped:0 overruns:0 frame:0 TX packets:829055 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:816951175 (779.1 MiB) TX bytes:568382021 (542.0 MiB) I guess maybe this is a non-issue then. Weird. Here is the error I was receiving before I changed the IP: # top TEST ./tests/c00002.vtc passed (1.170) **** top 0.0 macro def varnishd=../varnishd/varnishd **** top 0.0 macro def pwd=/home/build/vtest/varnish-3.0.2/bin/varnishtest **** top 0.0 macro def topbuild=/home/build/vtest/varnish-3.0.2/bin/varnishtest/../.. **** top 0.0 macro def bad_ip=10.255.255.255 **** top 0.0 macro def tmpdir=/tmp/vtc.31176.64226ac0 * top 0.0 TEST ./tests/c00003.vtc starting *** top 0.0 varnishtest * top 0.0 TEST Check that we start if at least one listen address works *** top 0.0 server ** s1 0.0 Starting server **** s1 0.0 macro def s1_addr=127.0.0.1 **** s1 0.0 macro def s1_port=57693 **** s1 0.0 macro def s1_sock=127.0.0.1 57693 * s1 0.0 Listen on 127.0.0.1 57693 *** top 0.0 varnish ** s1 0.0 Started on 127.0.0.1 57693 ** v1 0.0 Launch *** v1 0.0 CMD: cd ${pwd} && ${varnishd} -d -d -n /tmp/vtc.31176.64226ac0/v1 -l 10m,1m,- -p auto_restart=off -p syslog_cli_traffic=off -a '127.0.0.1:0' -S /tmp/vtc.31176.64226ac0/v1/_S -M '127.0.0.1 35745' -P /tmp/vtc.31176.64226ac0/v1/varnishd.pid -sfile,/tmp/vtc.31176.64226ac0/v1,10M *** v1 0.0 CMD: cd /home/build/vtest/varnish-3.0.2/bin/varnishtest && ../varnishd/varnishd -d -d -n /tmp/vtc.31176.64226ac0/v1 -l 10m,1m,- -p auto_restart=off -p syslog_cli_traffic=off -a '127.0.0.1:0' -S /tmp/vtc.31176.64226ac0/v1/_S -M '127.0.0.1 35745' -P /tmp/vtc.31176.64226ac0/v1/varnishd.pid -sfile,/tmp/vtc.31176.64226ac0/v1,10M *** v1 0.0 PID: 760 
*** v1 0.0 debug| Platform: Linux,2.6.32-71.29.1.el6.x86_64,x86_64,-sfile,-smalloc,-hcritbit\n *** v1 0.0 debug| 200 251 \n *** v1 0.0 debug| -----------------------------\n *** v1 0.0 debug| Varnish Cache CLI 1.0\n *** v1 0.0 debug| -----------------------------\n *** v1 0.0 debug| Linux,2.6.32-71.29.1.el6.x86_64,x86_64,-sfile,-smalloc,-hcritbit\n *** v1 0.0 debug| \n *** v1 0.0 debug| Type 'help' for command list.\n *** v1 0.0 debug| Type 'quit' to close CLI session.\n *** v1 0.0 debug| Type 'start' to launch worker process.\n *** v1 0.0 debug| \n **** v1 0.1 CLIPOLL 1 0x1 0x0 *** v1 0.1 CLI connection fd = 9 *** v1 0.1 CLI RX 107 **** v1 0.1 CLI RX| jhmnlckxgedfedrgcevdhlhxkrntqchz\n **** v1 0.1 CLI RX| \n **** v1 0.1 CLI RX| Authentication required.\n **** v1 0.1 CLI TX| auth ba8a9f2b02f46a1e24996a2823d4165649ac5ead5698010ff779a24ad576e526\n *** v1 0.1 CLI RX 200 **** v1 0.1 CLI RX| -----------------------------\n **** v1 0.1 CLI RX| Varnish Cache CLI 1.0\n **** v1 0.1 CLI RX| -----------------------------\n **** v1 0.1 CLI RX| Linux,2.6.32-71.29.1.el6.x86_64,x86_64,-sfile,-smalloc,-hcritbit\n **** v1 0.1 CLI RX| \n **** v1 0.1 CLI RX| Type 'help' for command list.\n **** v1 0.1 CLI RX| Type 'quit' to close CLI session.\n **** v1 0.1 CLI RX| Type 'start' to launch worker process.\n **** v1 0.1 CLI TX| param.set listen_address 10.255.255.255:0 *** v1 0.2 CLI RX 200 ** v1 0.2 CLI 200 *** top 0.2 varnish **** v1 0.2 CLI TX| vcl.inline vcl1 << %XJEIFLH|)Xspa8P\n **** v1 0.2 CLI TX| backend s1 { .host = "127.0.0.1"; .port = "57693"; }\n **** v1 0.2 CLI TX| \n **** v1 0.2 CLI TX| \n **** v1 0.2 CLI TX| %XJEIFLH|)Xspa8P\n *** v1 0.2 CLI RX 200 **** v1 0.2 CLI RX| VCL compiled. 
**** v1 0.2 CLI TX| vcl.use vcl1 *** v1 0.3 CLI RX 200 **** v1 0.3 CLI TX| start *** v1 0.3 debug| child (808) Started\n **** v1 0.3 vsl| 0 CLI - Rd vcl.load "vcl1" ./vcl.chJ7EvFM.so **** v1 0.3 vsl| 0 CLI - Wr 200 36 Loaded "./vcl.chJ7EvFM.so" as "vcl1" **** v1 0.3 vsl| 0 CLI - Rd vcl.use "vcl1" **** v1 0.3 vsl| 0 CLI - Wr 200 0 **** v1 0.3 vsl| 0 WorkThread - 0x7fe3e05fca80 start **** v1 0.3 vsl| 0 CLI - Rd start *** v1 0.3 debug| Child (808) said Not running as root, no priv-sep\n *** v1 0.3 debug| Child (808) said Child starts\n *** v1 0.3 debug| Child (808) said SMF.s0 mmap'ed 10485760 bytes of 10485760\n *** v1 0.3 CLI RX 200 ** v1 0.3 CLI 200 ---- v1 0.3 FAIL CLI response 200 expected 300 * top 0.3 RESETTING after ./tests/c00003.vtc ** s1 0.3 Waiting for server **** v1 0.3 vsl| 0 Debug - Acceptor is epoll **** v1 0.3 vsl| 0 CLI - Wr 200 0 **** s1 0.3 macro undef s1_addr **** s1 0.3 macro undef s1_port **** s1 0.3 macro undef s1_sock **** v1 0.3 vsl| 0 WorkThread - 0x7fe3dc2fba80 start **** v1 0.3 vsl| 0 WorkThread - 0x7fe3db8faa80 start **** v1 0.3 vsl| 0 WorkThread - 0x7fe3daef9a80 start **** v1 0.3 vsl| 0 WorkThread - 0x7fe3da4f8a80 start **** v1 0.3 vsl| 0 WorkThread - 0x7fe3d9af7a80 start **** v1 0.3 vsl| 0 WorkThread - 0x7fe3d90f6a80 start **** v1 0.3 vsl| 0 WorkThread - 0x7fe3d86f5a80 start **** v1 0.3 vsl| 0 WorkThread - 0x7fe3d7cf4a80 start **** v1 0.3 vsl| 0 WorkThread - 0x7fe3d72f3a80 start ** v1 1.3 Wait ** v1 1.3 R 760 Status: 0000 * top 1.3 TEST ./tests/c00003.vtc FAILED # top TEST ./tests/c00003.vtc FAILED (1.336) exit=1 make[2]: *** [check] Error 2 make[2]: Leaving directory `/home/build/vtest/varnish-3.0.2/bin/varnishtest' make[1]: *** [check-recursive] Error 1 make[1]: Leaving directory `/home/build/vtest/varnish-3.0.2/bin' make: *** [check-recursive] Error 1 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 27 21:09:58 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 27 Jun 
2012 21:09:58 -0000 Subject: [Varnish] #1159: bad IP address in test In-Reply-To: <045.3364ad2809a6f81e737fc49a2174112a@varnish-cache.org> References: <045.3364ad2809a6f81e737fc49a2174112a@varnish-cache.org> Message-ID: <054.c39bf3e577d98e1928e07f9969428f88@varnish-cache.org> #1159: bad IP address in test ---------------------+------------------------------------------------------ Reporter: jdrusch | Type: defect Status: new | Priority: normal Milestone: | Component: varnishtest Version: 3.0.2 | Severity: minor Keywords: | ---------------------+------------------------------------------------------ Comment(by jdrusch): Sorry about the mess of that shell output, I'll pop it into an attachment if someone thinks it'd be useful to read -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jun 29 11:31:28 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 29 Jun 2012 11:31:28 -0000 Subject: [Varnish] #1160: [PATCH] varnishadm crashes when run with -n and server without -T and -S Message-ID: <046.06913db15d2f5e7306db8049b18783f7@varnish-cache.org> #1160: [PATCH] varnishadm crashes when run with -n and server without -T and -S ----------------------+----------------------------------------------------- Reporter: lkundrak | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | ----------------------+----------------------------------------------------- {{{ ? varnishd -F -a 0.0.0.0:668 -b 127.0.0.1:80 -n kokot child (29843) Started Child (29843) said Child starts Child (29843) said SMF.s0 mmap'ed 104857600 bytes of 104857600 ^Z [1]+ Stopped varnishd -F -a 0.0.0.0:6668 -b 127.0.0.1:80 -n kokot ? bg [1]+ varnishd -F -a 0.0.0.0:6668 -b 127.0.0.1:80 -n kokot & ? gdb varnishadm GNU gdb (GDB) Red Hat Enterprise Linux (7.2-50.el6) Copyright (C) 2010 Free Software Foundation, Inc. 
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see: ...
Reading symbols from /home/lkundrak/src/varnish-cache/bin/varnishadm/.libs/lt-varnishadm...done.
(gdb) run -n kokot
Starting program: /home/lkundrak/src/varnish-cache/bin/varnishadm/.libs/lt-varnishadm -n kokot
[Thread debugging using libthread_db enabled]

Program received signal SIGSEGV, Segmentation fault.
__strlen_sse2 () at ../sysdeps/x86_64/strlen.S:32
32              movdqu (%rdi), %xmm1
(gdb) bt
#0  __strlen_sse2 () at ../sysdeps/x86_64/strlen.S:32
#1  0x0000003a8127f836 in __strdup (s=0x0) at strdup.c:42
#2  0x00000000004019ff in n_arg_sock (n_arg=) at varnishadm.c:306
#3  0x0000000000401eee in main (argc=0, argv=0x7fffffffdfb0) at varnishadm.c:364
(gdb) up
#1  0x0000003a8127f836 in __strdup (s=0x0) at strdup.c:42
42        size_t len = strlen (s) + 1;
(gdb)
#2  0x00000000004019ff in n_arg_sock (n_arg=) at varnishadm.c:306
306             T_start = T_arg = strdup(vt.b);
(gdb) print vt
$1 = {chunk = 0x0, b = 0x0, e = 0x0, priv = 0}
(gdb)
}}}

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From JEREMIAH.JORDAN at morningstar.com Sun Jun 3 08:51:50 2012
From: JEREMIAH.JORDAN at morningstar.com (Jeremiah Jordan)
Date: Sun, 03 Jun 2012 08:51:50 -0000
Subject: Fast expiry_sleep causing panics
Message-ID:

We were having an issue with SMA.Transient.g_alloc growing unbounded because we were getting stuff into it faster than the expiry thread could remove it, so we turned down the expiry_sleep.
With expiry_sleep at 0.02 it keeps up with expiring Transient objects such that the store doesn't grow unbounded, but every 10 minutes or so we get the following panic:

varnish> panic.show
200
Last panic at: Sun, 03 Jun 2012 08:43:09 GMT
Assert error in oc_getobj(), cache.h line 452:
  Condition((oc->flags & (1<<1)) == 0) not true.
thread = (cache-timeout)
ident = Linux,2.6.18-308.4.1.el5,x86_64,-smalloc,-smalloc,-hcritbit,epoll
Backtrace:
  0x42c7a6: /usr/sbin/varnishd [0x42c7a6]
  0x420a35: /usr/sbin/varnishd [0x420a35]
  0x42ebac: /usr/sbin/varnishd [0x42ebac]
  0x3a9de0677d: /lib64/libpthread.so.0 [0x3a9de0677d]
  0x3a9cad325d: /lib64/libc.so.6(clone+0x6d) [0x3a9cad325d]

varnish> param.show expiry_sleep
200
expiry_sleep               0.020000 [seconds]
                           Default is 1
                           How long the expiry thread sleeps when there
                           is nothing for it to do.

varnish>

If I set expiry_sleep any higher, it doesn't keep up and the Transient store acts basically as a slow memory leak.

-Jeremiah Jordan