From varnish-bugs at varnish-cache.org Wed Jun 1 10:10:26 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 01 Jun 2011 10:10:26 -0000 Subject: [Varnish] #929: Child panic (Assert error) using streaming on a recent git checkout Message-ID: <048.5a2cf98ad04fbe10d66d9ac9cd7b6ef0@varnish-cache.org> #929: Child panic (Assert error) using streaming on a recent git checkout -------------------------+-------------------------------------------------- Reporter: andreacampi | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | -------------------------+-------------------------------------------------- {{{ Child (95485) Panic message: Assert error in res_WriteDirObj(), cache_response.c line 330: Condition(u == sp->obj->len) not true. thread = (cache-worker) ident = Darwin,10.7.0,i386,-smalloc,-smalloc,-hcritbit,poll Backtrace: 0x1000251c5: 01 0000 BNSYM+c5 0x1000288ce: 01 0000 BNSYM+2fe 0x10000d554: 01 0000 BNSYM+4a4 0x1000272c7: 01 0000 BNSYM+e7 0x10002779f: 01 0000 BNSYM+44f 0x7fff88ed94f6: _vsm_end+7ffe88e6ceb6 0x7fff88ed93a9: _vsm_end+7ffe88e6cd69 sp = 0x10081fc08 { fd = 7, id = 7, xid = 1692407524, client = 127.0.0.1 61076, step = STP_DELIVER, handling = deliver, restarts = 0, esi_level = 0 ws = 0x10081fc80 { id = "sess", {s,f,r,e} = {0x1008208e0,+224,0x0,+65536}, }, http[req] = { ws = 0x10081fc80[sess] "GET", "/", "HTTP/1.1", "User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3", "Host: 0.0.0.0:8881", "Accept: */*", "X-Forwarded-For: 127.0.0.1", }, worker = 0x106786bc0 { ws = 0x106786d60 { id = "wrk", {s,f,r,e} = {0x106774b40,+120,0x0,+65536}, }, http[resp] = { ws = 0x106786d60[wrk] "HTTP/1.1", "200", "OK", "Status: 200 OK", "Cache-Control: no-cache", "Content-Type: text/html; charset=utf-8", "X-UA-Compatible: IE=Edge", "X-Runtime: 0.036742", "Content-Length: 402", "Accept-Ranges: bytes", "Date: Wed, 01 Jun 2011 10:02:09 GMT", "X-Varnish: 
1692407524 1692407523", "Age: 8", "Via: 1.1 varnish", "Connection: keep-alive", }, }, vcl = { srcname = { "input", "Default", }, }, obj = 0x107b00320 { xid = 1692407523, ws = 0x107b00338 { id = "obj", {s,f,r,e} = {0x107b00540,+224,0x0,+256}, }, http[obj] = { ws = 0x107b00338[obj] "HTTP/1.1", "OK", "Date: Wed, 01 Jun 2011 10:02:01 GMT", "Status: 200 OK", "Cache-Control: no-cache", "Content-Type: text/html; charset=utf-8", "X-UA-Compatible: IE=Edge", "X-Runtime: 0.036742", "Content-Length: 402", }, len = 402, store = { }, }, }, }}} This is the complete VCL: {{{ backend default { .host = "127.0.0.1"; .port = "3000"; } sub vcl_fetch { set beresp.do_stream = true; } }}} Unpatched sources as of b2c617d7be6745d8ba0d96b190a1e99fdf1eafd1 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 1 13:44:44 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 01 Jun 2011 13:44:44 -0000 Subject: [Varnish] #929: Child panic (Assert error) using streaming on a recent git checkout In-Reply-To: <048.5a2cf98ad04fbe10d66d9ac9cd7b6ef0@varnish-cache.org> References: <048.5a2cf98ad04fbe10d66d9ac9cd7b6ef0@varnish-cache.org> Message-ID: <057.4f78cfba0dc20497cc608e79c815b10a@varnish-cache.org> #929: Child panic (Assert error) using streaming on a recent git checkout --------------------------+------------------------------------------------- Reporter: andreacampi | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Resolution: fixed | Keywords: --------------------------+------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [23de733b2e4e25cec3e24962b2dfe40bb040ea14]) We used a wrong test to detect if a streaming pass could delete object data, and therefore also deleted when we should not. 
Fixes #929 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 1 19:07:37 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 01 Jun 2011 19:07:37 -0000 Subject: [Varnish] #876: Can't start varnish: "SHMFILE owned by running varnishd master" In-Reply-To: <042.b152ffb1624d0390243a17893e133b5d@varnish-cache.org> References: <042.b152ffb1624d0390243a17893e133b5d@varnish-cache.org> Message-ID: <051.2adf27d5a4d5c49011c820a233341d56@varnish-cache.org> #876: Can't start varnish: "SHMFILE owned by running varnishd master" -------------------+-------------------------------------------------------- Reporter: wijet | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 2.1.3 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Comment(by kirrus): Was hit by this one as well, after a kernel-upgrade driven reboot. Ubuntu Lucid, Varnish 2.1.0-2ubuntu0.1 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 2 05:59:43 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 02 Jun 2011 05:59:43 -0000 Subject: [Varnish] #928: strace attached to child causes child to die In-Reply-To: <048.d8685bc72e2f98a74463b3c5eab42307@varnish-cache.org> References: <048.d8685bc72e2f98a74463b3c5eab42307@varnish-cache.org> Message-ID: <057.9a291f4cfd8ef8d7f0e1ff661172acb3@varnish-cache.org> #928: strace attached to child causes child to die --------------------------+------------------------------------------------- Reporter: David Busby | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: 2.1.5 | Severity: normal Resolution: invalid | Keywords: strace,child,die --------------------------+------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => invalid Comment: Check your logs to see if this is the manager 
process growing impatient and killing the child. If it is, tinker with the ping_interval and cli_timeout parameters to make it more tolerant. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 2 06:16:59 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 02 Jun 2011 06:16:59 -0000 Subject: [Varnish] #927: Assert error in stv_alloc(), stevedore.c in varnish_32e40a6ececf4a2ea65830e723c770d1ce261898 In-Reply-To: <043.c2db84effe9fcd8808d104eea97f97f5@varnish-cache.org> References: <043.c2db84effe9fcd8808d104eea97f97f5@varnish-cache.org> Message-ID: <052.404d9ceafab18c3fdb0f21c5889b0169@varnish-cache.org> #927: Assert error in stv_alloc(), stevedore.c in varnish_32e40a6ececf4a2ea65830e723c770d1ce261898 ---------------------+------------------------------------------------------ Reporter: kdajka | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Resolution: fixed | Keywords: ---------------------+------------------------------------------------------ Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [31a65a99d96e061dea7cd689d7b494fe6364e209]) Storage allocation failures happen, let other code deal with it. 
Fixes #927 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jun 3 02:37:29 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 03 Jun 2011 02:37:29 -0000 Subject: [Varnish] #930: Caching rules didn't work after upgrading to 2.1.5 Message-ID: <041.ec38446941555ede73ce64c0a7745e21@varnish-cache.org> #930: Caching rules didn't work after upgrading to 2.1.5 ---------------------------------+------------------------------------------ Reporter: pdah | Type: defect Status: new | Priority: high Milestone: Varnish 2.1 release | Component: varnishd Version: 2.1.5 | Severity: major Keywords: | ---------------------------------+------------------------------------------ After upgrading Varnish from 2.1.0 to 2.1.5 on Ubuntu Lucid (10.04), some of my rules didn't work anymore. This is the configuration that worked with Varnish 2.1.0: {{{ backend glassfish { .host = "localhost"; .port = "8080"; } sub vcl_recv { set req.backend = glassfish; if ( (req.request == "GET") && (req.url ~ "\.jsp") ) { return(lookup); } return(pass); } sub vcl_fetch { if ( (req.request == "GET") && (req.url ~ "\.jsp") ) { return(deliver); } return(pass); } }}} After upgrading to Varnish 2.1.5, requests to jsp files always go to backend (I checked it using varnishtop -i txurl) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jun 3 15:45:44 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 03 Jun 2011 15:45:44 -0000 Subject: [Varnish] #931: compile failure on isfinite(int) Message-ID: <042.6304e57cc6074725a2374f3e7b9da1be@varnish-cache.org> #931: compile failure on isfinite(int) -----------------------------+---------------------------------------------- Reporter: geoff | Type: defect Status: new | Priority: normal Milestone: Varnish 3.0 dev | Component: build Version: trunk | Severity: normal Keywords: | -----------------------------+---------------------------------------------- 
Building from trunk: {{{ commit a39be7ab6a44ee0ecc5bf512a8a4b654ae5ca5ff Date: Thu Jun 2 22:59:43 2011 +0000 }}} I get the following error: {{{ vmod_std_conversions.c: In function 'vmod_integer': vmod_std_conversions.c:112:2: error: non-floating-point argument in call to function '__builtin_isnan' vmod_std_conversions.c:112:2: error: non-floating-point argument in call to function '__builtin_isinf' }}} Built on OpenSolaris using gcc 4.5.1: {{{ $ uname -a SunOS gsimmons 5.11 snv_134 i86pc i386 i86pc Solaris $ gcc --version gcc (Blastwave.org Inc. Mon Aug 23 11:16:32 GMT 2010) 4.5.1 }}} I didn't get the error building on Debian 5.0.5. Line 112 in vmod_std_conversions.c includes the macro isfinite(r) for an int r, and my setup apparently won't allow the int. isfinite() expands in /usr/include/iso/math_c99.h to use the "builtin" calls shown above, and they evidently require a floating point type. (The macro expansion is different for gcc < 4, but that one also calls a "builtin" function which requires a floating point type, so the compile fails for the same reason.) The workaround is simple, just cast r to double. The compile succeeds with isfinite((double) r). 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Jun 3 17:20:59 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 03 Jun 2011 17:20:59 -0000 Subject: [Varnish] #931: compile failure on isfinite(int) In-Reply-To: <042.6304e57cc6074725a2374f3e7b9da1be@varnish-cache.org> References: <042.6304e57cc6074725a2374f3e7b9da1be@varnish-cache.org> Message-ID: <051.1de9f43a2613d8249d5ce2425ec40c11@varnish-cache.org> #931: compile failure on isfinite(int) -----------------------------+---------------------------------------------- Reporter: geoff | Type: defect Status: new | Priority: normal Milestone: Varnish 3.0 dev | Component: build Version: trunk | Severity: normal Keywords: | -----------------------------+---------------------------------------------- Comment(by geoff): Come to think of it, isfinite() for an int is unnecessary -- ints are always finite. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 6 07:26:18 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jun 2011 07:26:18 -0000 Subject: [Varnish] #930: Caching rules didn't work after upgrading to 2.1.5 In-Reply-To: <041.ec38446941555ede73ce64c0a7745e21@varnish-cache.org> References: <041.ec38446941555ede73ce64c0a7745e21@varnish-cache.org> Message-ID: <050.4c720ab2a9917bba4c1122c919f6d03b@varnish-cache.org> #930: Caching rules didn't work after upgrading to 2.1.5 ---------------------------------+------------------------------------------ Reporter: pdah | Type: defect Status: new | Priority: high Milestone: Varnish 2.1 release | Component: varnishd Version: 2.1.5 | Severity: major Keywords: | ---------------------------------+------------------------------------------ Comment(by tfheen): Please provide varnishlog showing the transaction. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 6 08:16:03 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jun 2011 08:16:03 -0000 Subject: [Varnish] #931: compile failure on isfinite(int) In-Reply-To: <042.6304e57cc6074725a2374f3e7b9da1be@varnish-cache.org> References: <042.6304e57cc6074725a2374f3e7b9da1be@varnish-cache.org> Message-ID: <051.5e20bfff3fbef956c4923ea7a1012b34@varnish-cache.org> #931: compile failure on isfinite(int) ------------------------------+--------------------------------------------- Reporter: geoff | Type: defect Status: closed | Priority: normal Milestone: Varnish 3.0 dev | Component: build Version: trunk | Severity: normal Resolution: fixed | Keywords: ------------------------------+--------------------------------------------- Changes (by Tollef Fog Heen ): * status: new => closed * resolution: => fixed Comment: (In [d223152d5e233493ac2fdb461c309a9418d871ac]) Drop isfinite for integer conversion Checking for isfinite does not make any sense for an integer conversion and causes compilation problems on Solaris. 
Fixes: #931 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 6 10:08:38 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jun 2011 10:08:38 -0000 Subject: [Varnish] #919: 503 error from varnish while apache returns 200 In-Reply-To: <042.ce36521e27a423cd056033b77372ee69@varnish-cache.org> References: <042.ce36521e27a423cd056033b77372ee69@varnish-cache.org> Message-ID: <051.8838c9ddb169d34fd71ba6e4126a1f91@varnish-cache.org> #919: 503 error from varnish while apache returns 200 ----------------------+----------------------------------------------------- Reporter: damol | Owner: kristian Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: 503 centos ----------------------+----------------------------------------------------- Changes (by kristian): * component: build => varnishd Comment: Do you have any more information on this? We will probably need the entire varnishlog of a request that fails and a good chunk of requests that _don't_ fail. Also: which version is this? This can easily be an issue related to backend communication failures, but we won't know until we have more data. It can also easily be influenced by the in-line C, which looks somewhat hazardous. Can you check if syslog has any information from varnishd? 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 6 10:30:36 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jun 2011 10:30:36 -0000 Subject: [Varnish] #889: opensolaris build issue In-Reply-To: <040.8a1771e91d8351a83c002cbc106b3054@varnish-cache.org> References: <040.8a1771e91d8351a83c002cbc106b3054@varnish-cache.org> Message-ID: <049.0191f7042906a1fbd580642bc5469f1b@varnish-cache.org> #889: opensolaris build issue --------------------------+------------------------------------------------- Reporter: phk | Owner: slink Type: defect | Status: new Priority: normal | Milestone: Component: port:solaris | Version: trunk Severity: normal | Keywords: --------------------------+------------------------------------------------- Comment(by geoff): FWIW, varnishstat_curses.c will compile without warning if it has {{{ #include }}} ... in place of ... {{{ #include }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 6 10:32:51 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jun 2011 10:32:51 -0000 Subject: [Varnish] #876: Can't start varnish: "SHMFILE owned by running varnishd master" In-Reply-To: <042.b152ffb1624d0390243a17893e133b5d@varnish-cache.org> References: <042.b152ffb1624d0390243a17893e133b5d@varnish-cache.org> Message-ID: <051.111e464702fd91363f2d68e5c0cc3646@varnish-cache.org> #876: Can't start varnish: "SHMFILE owned by running varnishd master" ----------------------+----------------------------------------------------- Reporter: wijet | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.1.3 Severity: normal | Keywords: ----------------------+----------------------------------------------------- Changes (by martin): * owner: => martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 6 10:39:00 2011 From: varnish-bugs at 
varnish-cache.org (Varnish) Date: Mon, 06 Jun 2011 10:39:00 -0000 Subject: [Varnish] #919: 503 error from varish while apache returns 200 In-Reply-To: <042.ce36521e27a423cd056033b77372ee69@varnish-cache.org> References: <042.ce36521e27a423cd056033b77372ee69@varnish-cache.org> Message-ID: <051.1c401bd8fe1d03b126f2da00a209e28f@varnish-cache.org> #919: 503 error from varish while apache returns 200 ----------------------+----------------------------------------------------- Reporter: damol | Owner: kristian Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: 503 centos ----------------------+----------------------------------------------------- Comment(by damol): My varnish version: varnish-2.0.6 I looked in the syslog but couldn't find something relevant. I am running a varnish -w on the server now waiting for another error and will post that log when an error occurs. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 6 11:33:28 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jun 2011 11:33:28 -0000 Subject: [Varnish] #932: varnishreplay.c: compile fails on Solaris for 64 bit with -Werror Message-ID: <042.8339a531efd0d68bd310ce19402b8285@varnish-cache.org> #932: varnishreplay.c: compile fails on Solaris for 64 bit with -Werror -----------------------------+---------------------------------------------- Reporter: geoff | Type: defect Status: new | Priority: normal Milestone: Varnish 3.0 dev | Component: build Version: trunk | Severity: normal Keywords: | -----------------------------+---------------------------------------------- Building from current trunk: {{{ commit 25ef24962e12d57b12c3daa29623be60cc77b738 Date: Mon Jun 6 09:04:48 2011 +0000 }}} Compile on Solaris for 64 bit (CFLAGS=-m64) results in these warnings for varnishreplay.c (and failure with -Werror): {{{ varnishreplay.c: In function `thread_log': varnishreplay.c:178: 
warning: cast to pointer from integer of different size varnishreplay.c: In function `thread_get': varnishreplay.c:272: warning: cast to pointer from integer of different size varnishreplay.c: In function `thread_close': varnishreplay.c:297: warning: cast to pointer from integer of different size }}} All of these involve a cast of pthread_t to (void *). pthread_t is defined in as uint_t, unsigned 32 bit integer, but a pointer is 64 bit for the 64 bit build. The warnings do not appear for a 32 bit build (CFLAGS=-m32). -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 6 12:06:52 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jun 2011 12:06:52 -0000 Subject: [Varnish] #933: vcc_vmod harcodes shared library suffix to .so Message-ID: <048.6bc04f61c402559fa63ee3a4afbb2f20@varnish-cache.org> #933: vcc_vmod harcodes shared library suffix to .so -------------------------+-------------------------------------------------- Reporter: andreacampi | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | -------------------------+-------------------------------------------------- vcc_vmod hardcodes the shlib extension to .so, but that's wrong e.g. on Mac OS X. configure should autodetect the correct extension, and vcc_vmod should use it to build the filename. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 6 20:50:30 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jun 2011 20:50:30 -0000 Subject: [Varnish] #934: Memory leak in varnish_32e40a6ececf4a2ea65830e723c770d1ce261898 Message-ID: <043.d74ebc3e11e35ac4f435f670a0217608@varnish-cache.org> #934: Memory leak in varnish_32e40a6ececf4a2ea65830e723c770d1ce261898 --------------------+------------------------------------------------------- Reporter: kdajka | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Keywords: | --------------------+------------------------------------------------------- I'm having problems with memory leak in varnish trunk 32e40a6ececf4a2ea65830e723c770d1ce261898. I've been trying to use this version in production since 30 May. I run varnish on machine with 12GB of RAM, with 2.1.5 I use malloc 9G. With trunk I'm using only 1G. Nonetheless, varnish from trunk eats all available memory and swap. My system: Linux varnishic06 2.6.26-2-amd64 #1 SMP Thu Nov 25 04:30:55 UTC 2010 x86_64 GNU/Linux {{{ /usr/local/inp/varnish/sbin/varnishd -P /var/tmp/foo.bar.pl_varnishd.pid -a 123.123.123.123:8084 -i foo.bar_varnishic06 -n foo.bar_varnishic06 -f /tmp/foo.bar.pl.vcl -T 123.123.123.123:2084 -h classic,20011 -p thread_pools=4 -p ban_lurker_sleep=0.1 -w 200,4000,2 -t 0 -s malloc,1G -d }}} {{{ Child (14378) said INFLATE=-3 (incorrect header check) [above probably not related, happened few hours earlier, I think timestamps would be a nice feature in CLI] Child (14378) not responding to CLI, killing it. Child (14378) not responding to CLI, killing it. Child (14378) not responding to CLI, killing it. Child (14378) not responding to CLI, killing it. Child (14378) not responding to CLI, killing it. Child (14378) not responding to CLI, killing it. Child (14378) not responding to CLI, killing it. 
Child (14378) not responding to CLI, killing it. Child (14378) not responding to CLI, killing it. Child (14378) not responding to CLI, killing it. Child (14378) not responding to CLI, killing it. Child (14378) not responding to CLI, killing it. Child (14378) not responding to CLI, killing it. [about 10 times more CLI tries to kill 14378] }}} Sadly this time varnish couldn't be killed. In past few days even though varnish ate all memory it oomkiller hadn't been used. An excerpt from varnishstat: {{{ 0+14:17:24 foo.bar_varnishic06 Hitrate ratio: 10 100 1000 Hitrate avg: 0.9455 0.9519 0.9568 5907834 0.00 114.84 client_conn - Client connections accepted 11851631 0.00 230.38 client_req - Client requests received 8536708 0.00 165.94 cache_hit - Cache hits 667843 0.00 12.98 cache_miss - Cache misses 3270479 0.00 63.57 backend_conn - Backend conn. success 2064 0.00 0.04 fetch_head - Fetch head 2099794 0.00 40.82 fetch_length - Fetch with Length 1108491 0.00 21.55 fetch_chunked - Fetch chunked 59114 0.00 1.15 fetch_close - Fetch wanted close 1 0.00 0.00 fetch_failed - Fetch failed 493 0.00 0.01 fetch_304 - Fetch no body (304) 1213 . . n_sess_mem - N struct sess_mem 1118 . . n_sess - N struct sess 315918 . . n_object - N struct object 316309 . . n_objectcore - N struct objectcore 313676 . . n_objecthead - N struct objecthead 829 . . n_waitinglist - N struct waitinglist 60 . . n_vbc - N struct vbc 800 . . n_wrk - N worker threads 800 0.00 0.02 n_wrk_create - N worker threads created 69 0.00 0.00 n_wrk_max - N worker threads limited 1 0.00 0.00 n_wrk_queued - N queued work requests 4 . . n_backend - N backends 14117 . . n_expired - N expired objects 334145 . . n_lru_nuked - N LRU nuked objects 5799488 . . 
n_lru_moved - N LRU moved objects 99 0.00 0.00 losthdr - HTTP header overflows 8908643 0.00 173.17 n_objwrite - Objects sent with write 5907366 0.00 114.83 s_sess - Total Sessions 11851631 0.00 230.38 s_req - Total Requests 2645843 0.00 51.43 s_pass - Total pass 3269955 0.00 63.56 s_fetch - Total fetch 4292029827 0.00 83431.11 s_hdrbytes - Total header bytes 191349157693 0.00 3719562.20 s_bodybytes - Total body bytes 676208 0.00 13.14 sess_closed - Session Closed 54239 0.00 1.05 sess_pipeline - Session Pipeline 24901 0.00 0.48 sess_readahead - Session Read Ahead 11374083 0.00 221.10 sess_linger - Session Linger 10415564 0.00 202.46 sess_herd - Session herd 686757636 0.00 13349.62 shm_records - SHM records 47062742 0.00 914.83 shm_writes - SHM writes 60835 0.00 1.18 shm_cont - SHM MTX contention 300 0.00 0.01 shm_cycles - SHM cycles through buffer 45019 0.00 0.88 sms_nreq - SMS allocator requests 0 . . sms_nobj - SMS outstanding allocations 0 . . sms_nbytes - SMS outstanding bytes 347573810 . . sms_balloc - SMS bytes allocated 347573810 . . sms_bfree - SMS bytes freed 3270357 0.00 63.57 backend_req - Backend requests made 1 0.00 0.00 n_vcl - N vcl total 1 0.00 0.00 n_vcl_avail - N vcl available 3 . . 
n_ban - N total active bans 10 0.00 0.00 n_ban_add - N new bans added 7 0.00 0.00 n_ban_retire - N old bans deleted 821981 0.00 15.98 n_ban_obj_test - N objects tested 1527363 0.00 29.69 n_ban_re_test - N regexps tested against 51444 0.00 1.00 uptime - Client uptime 2899745 0.00 56.37 n_gunzip - Gunzip operations 1 0.00 0.00 LCK.sms.creat - Created locks 135057 0.00 2.63 LCK.sms.locks - Lock Operations 2 0.00 0.00 LCK.sma.creat - Created locks 16298412 0.00 316.82 LCK.sma.locks - Lock Operations 20011 0.00 0.39 LCK.hcl.creat - Created locks 18091894 0.00 351.68 LCK.hcl.locks - Lock Operations 1 0.00 0.00 LCK.vcl.creat - Created locks 1341 0.00 0.03 LCK.vcl.locks - Lock Operations 1 0.00 0.00 LCK.stat.creat - Created locks 1213 0.00 0.02 LCK.stat.locks - Lock Operations 1 0.00 0.00 LCK.sessmem.creat - Created locks }}} {{{ top - 21:54:05 up 168 days, 12:26, 4 users, load average: 952.29, 871.34, 750.66 Tasks: 112 total, 1 running, 111 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 41.9%sy, 0.0%ni, 9.0%id, 48.9%wa, 0.0%hi, 0.1%si, 0.0%st Mem: 12332260k total, 12269208k used, 63052k free, 496k buffers Swap: 6291448k total, 6291448k used, 0k free, 5480k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 14378 webadm 20 0 26.0g 11g 564 D 46 97.9 52:08.10 varnishd 26320 root 20 0 48868 1844 1344 D 46 0.0 0:12.44 sshd 220 root 15 -5 0 0 0 D 34 0.0 70:15.74 kswapd0 }}} Last captured dmesg in attachment (jpg), I had seen few blocked pids in last days: {{{ [17434416.864353] INFO: task varnishd:27364 blocked for more than 120 seconds. [17434416.864385] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
[17434416.864430] varnishd D 0000000000000003 0 27364 16201 [17434416.865945] ffff810300fe3c18 0000000000000082 ffff810300fe3bd8 ffffffff80229020 [17434416.865996] ffff810177574f20 ffff81025e082fa0 ffff8101775751a8 0000000101045880 [17434416.866045] 0031e12ab42a464e 00000000ffffffff 0000000000000000 ffffffff8022efec [17434416.866080] Call Trace: [17434416.866127] [] hrtick_start_fair+0xfb/0x144 [17434416.866157] [] hrtick_set+0x9e/0xf7 [17434416.866186] [] __down_write_nested+0x87/0xa1 [17434416.866215] [] do_coredump+0x99/0x7c6 [17434416.866243] [] futex_wait+0x2e9/0x394 [17434416.866271] [] current_fs_time+0x1e/0x24 [17434416.866298] [] pipe_write+0x3e4/0x42d [17434416.866327] [] __dequeue_signal+0xff/0x15a [17434416.866355] [] recalc_sigpending+0xe/0x38 [17434416.866383] [] get_signal_to_deliver+0x2fb/0x324 [17434416.866413] [] do_notify_resume+0xaf/0x7fc [17434416.869429] [] do_futex+0x81/0x78a [17434416.869429] [] autoremove_wake_function+0x0/0x2e [17434416.869429] [] sys_futex+0xfe/0x11c [17434416.869429] [] vfs_read+0x11e/0x152 [17434416.869429] [] sysret_signal+0x2b/0x45 [17434416.869429] [] ptregscall_common+0x67/0xb0 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 6 22:00:07 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Jun 2011 22:00:07 -0000 Subject: [Varnish] #934: Memory leak in varnish_32e40a6ececf4a2ea65830e723c770d1ce261898 In-Reply-To: <043.d74ebc3e11e35ac4f435f670a0217608@varnish-cache.org> References: <043.d74ebc3e11e35ac4f435f670a0217608@varnish-cache.org> Message-ID: <052.d5cc218b7240e08a3245390a93ed8337@varnish-cache.org> #934: Memory leak in varnish_32e40a6ececf4a2ea65830e723c770d1ce261898 --------------------+------------------------------------------------------- Reporter: kdajka | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Keywords: | 
--------------------+------------------------------------------------------- Comment(by kb): With 800 workers, you'll be allocating 6.25GB of RAM just for stack space. Try using thread_pool_stack to reduce the stack to 256KB and see if you get better luck. Note that this isn't a bug, just actual usage. The docs cover some of why setting one buffer size doesn't control any of the many others. FWIW, -- kb -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 7 07:42:13 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 07 Jun 2011 07:42:13 -0000 Subject: [Varnish] #903: Segmentation fault in varnish 3b4859455803b606107c07b25b784372d5665a1f In-Reply-To: <043.04455f1c56c4d22970fbcd4e327fc60b@varnish-cache.org> References: <043.04455f1c56c4d22970fbcd4e327fc60b@varnish-cache.org> Message-ID: <052.8ec0806a5864f6e55d334930c464de62@varnish-cache.org> #903: Segmentation fault in varnish 3b4859455803b606107c07b25b784372d5665a1f ----------------------+----------------------------------------------------- Reporter: kdajka | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: blocker | Keywords: ----------------------+----------------------------------------------------- Comment(by kdajka): I haven't run across segfaults in recent trunk. 
So I suppose it should be closed -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 7 08:16:33 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 07 Jun 2011 08:16:33 -0000 Subject: [Varnish] #876: Can't start varnish: "SHMFILE owned by running varnishd master" In-Reply-To: <042.b152ffb1624d0390243a17893e133b5d@varnish-cache.org> References: <042.b152ffb1624d0390243a17893e133b5d@varnish-cache.org> Message-ID: <051.906183a36edb62d3a3f965026a931a7e@varnish-cache.org> #876: Can't start varnish: "SHMFILE owned by running varnishd master" ----------------------+----------------------------------------------------- Reporter: wijet | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.1.3 Severity: normal | Keywords: ----------------------+----------------------------------------------------- Comment(by martin): Partly fixed by commit e2ef91e9587b4662d2f982aed9cc69f1fc236b4c, varnishd will now clear the master pid from the shmlog on normal exit. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 7 09:25:34 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 07 Jun 2011 09:25:34 -0000 Subject: [Varnish] #933: vcc_vmod hardcodes shared library suffix to .so In-Reply-To: <048.6bc04f61c402559fa63ee3a4afbb2f20@varnish-cache.org> References: <048.6bc04f61c402559fa63ee3a4afbb2f20@varnish-cache.org> Message-ID: <057.6a420ff21b79488db732ed94722de615@varnish-cache.org> #933: vcc_vmod hardcodes shared library suffix to .so -------------------------+-------------------------------------------------- Reporter: andreacampi | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | -------------------------+-------------------------------------------------- Comment(by andreacampi): Fixed in a better way by edd0d11803810d30566c883a9b6d1fcbdc12b69f -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 7 10:37:03 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 07 Jun 2011 10:37:03 -0000 Subject: [Varnish] #933: vcc_vmod hardcodes shared library suffix to .so In-Reply-To: <048.6bc04f61c402559fa63ee3a4afbb2f20@varnish-cache.org> References: <048.6bc04f61c402559fa63ee3a4afbb2f20@varnish-cache.org> Message-ID: <057.a003bfb285b1101c33eb7c25255e6101@varnish-cache.org> #933: vcc_vmod hardcodes shared library suffix to .so --------------------------+------------------------------------------------- Reporter: andreacampi | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Resolution: fixed | Keywords: --------------------------+------------------------------------------------- Changes (by tfheen): * status: new => closed * resolution: => fixed -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 7 14:59:20 2011 From: varnish-bugs at 
varnish-cache.org (Varnish) Date: Tue, 07 Jun 2011 14:59:20 -0000 Subject: [Varnish] #934: Memory leak in varnish_32e40a6ececf4a2ea65830e723c770d1ce261898 In-Reply-To: <043.d74ebc3e11e35ac4f435f670a0217608@varnish-cache.org> References: <043.d74ebc3e11e35ac4f435f670a0217608@varnish-cache.org> Message-ID: <052.a7cdfcf5813f9879b258ed2f972a2de9@varnish-cache.org> #934: Memory leak in varnish_32e40a6ececf4a2ea65830e723c770d1ce261898 --------------------+------------------------------------------------------- Reporter: kdajka | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Keywords: | --------------------+------------------------------------------------------- Comment(by kdajka): @kb Could you enlighten me how to calculate stack space? I reduced thread_pool_stack to 256KB with varnish_edd0d11803810d30566c883a9b6d1fcbdc12b69f, without any success I think that there could be problem in memory allocation in trunk. While using 2.1.5 with -s malloc,9G I'm having 174799 objects, with trunk -s malloc,1G I had 315918. Here is varnishstats for another instance running varnish 2.1.5 {{{ 74+03:36:50 foo.bar_varnishis06 Hitrate ratio: 4 4 4 Hitrate avg: 0.9465 0.9465 0.9465 538232179 106.00 84.01 Client connections accepted 1114797510 188.00 174.01 Client requests received 793644749 122.00 123.88 Cache hits 63018054 10.00 9.84 Cache misses 320780448 64.00 50.07 Backend conn. success 1755 0.00 0.00 Backend conn. failures 723142 1.00 0.11 Fetch head 207716493 40.00 32.42 Fetch with Length 107544864 25.00 16.79 Fetch chunked 4748592 0.00 0.74 Fetch wanted close 6451 . . N struct sess_mem 4800 . . N struct sess 174799 . . N struct object 175340 . . N struct objectcore 168412 . . N struct objecthead 6 . . N struct vbe_conn 800 . . N worker threads 800 0.00 0.00 N worker threads created 1054 0.00 0.00 N overflowed work requests 4 . . N backends 1095240 . . N expired objects 60986108 . . N LRU nuked objects 556985450 . . 
N LRU moved objects 423 0.00 0.00 HTTP header overflows 846522080 160.00 132.13 Objects sent with write 538232150 98.00 84.01 Total Sessions 1114797510 188.00 174.01 Total Requests 257961885 56.00 40.26 Total pass 320772085 66.00 50.07 Total fetch 394613512589 68681.08 61594.75 Total header bytes 18397971826970 3207204.82 2871717.15 Total body bytes 68888160 15.00 10.75 Session Closed 5643000 6.00 0.88 Session Pipeline 3441334 0.00 0.54 Session Read Ahead 1064588356 179.00 166.17 Session Linger 977736112 168.00 152.61 Session herd 62541672735 11194.01 9762.05 SHM records 4362898502 812.00 681.00 SHM writes 7902888 1.00 1.23 SHM MTX contention 26774 0.00 0.00 SHM cycles through buffer 448824662 79.00 70.06 SMA allocator requests 366074 . . SMA outstanding allocations 9663510529 . . SMA outstanding bytes 16615975252318 . . SMA bytes allocated 16606311741789 . . SMA bytes free 384052 0.00 0.06 SMS allocator requests 243664829 . . SMS bytes allocated 243664829 . . SMS bytes freed 320774869 64.00 50.07 Backend requests made 4 0.00 0.00 N vcl total 4 0.00 0.00 N vcl available 5 . . 
N total active purges 9433 0.00 0.00 N new purges added 9428 0.00 0.00 N old purges deleted 64630941 5.00 10.09 N objects tested 629033817 14.00 98.19 N regexps tested against 61 0.00 0.00 N duplicate purges removed 6406610 1.00 1.00 Client uptime 38994 0.00 0.01 Fetch no body (304) }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 8 14:51:42 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 08 Jun 2011 14:51:42 -0000 Subject: [Varnish] #935: "varnishadm: free(): invalid pointer" when varnishadm does not have read access to secret Message-ID: <042.993b16e8549a63811fbce5cc748e32bf@varnish-cache.org> #935: "varnishadm: free(): invalid pointer" when varnishadm does not have read access to secret -----------------------------+---------------------------------------------- Reporter: scoof | Type: defect Status: new | Priority: lowest Milestone: Varnish 3.0 dev | Component: varnishadm Version: trunk | Severity: trivial Keywords: | -----------------------------+---------------------------------------------- This occurs only when I run as a user without rights for the secret file: {{{ Cannot open "/etc/varnish/secret": Permission denied *** glibc detected *** varnishadm: free(): invalid pointer: 0x089330df *** ======= Backtrace: ========= /lib/i686/cmov/libc.so.6(+0x6b281)[0xb75c8281] /lib/i686/cmov/libc.so.6(+0x6cad8)[0xb75c9ad8] /lib/i686/cmov/libc.so.6(cfree+0x6d)[0xb75ccbbd] varnishadm[0x80497bb] varnishadm[0x8049bc4] /lib/i686/cmov/libc.so.6(__libc_start_main+0xe6)[0xb7573c76] varnishadm[0x8049111] ======= Memory map: ======== 08048000-0804c000 r-xp 00000000 08:01 1022240 /usr/bin/varnishadm 0804c000-0804d000 rw-p 00003000 08:01 1022240 /usr/bin/varnishadm 08933000-08954000 rw-p 00000000 00:00 0 [heap] b22e2000-b22ff000 r-xp 00000000 08:01 4416206 /lib/libgcc_s.so.1 b22ff000-b2300000 rw-p 0001c000 08:01 4416206 /lib/libgcc_s.so.1 b2300000-b2321000 rw-p 00000000 00:00 0 b2321000-b2400000 ---p 00000000 
00:00 0 b2410000-b7510000 r--s 00000000 08:01 2861966 /var/lib/varnish/trillian/_.vsm b7510000-b7512000 rw-p 00000000 00:00 0 b7512000-b7514000 r-xp 00000000 08:01 4415792 /lib/i686/cmov/libdl-2.11.2.so b7514000-b7515000 r--p 00001000 08:01 4415792 /lib/i686/cmov/libdl-2.11.2.so b7515000-b7516000 rw-p 00002000 08:01 4415792 /lib/i686/cmov/libdl-2.11.2.so b7516000-b751f000 r-xp 00000000 08:01 4415120 /lib/libbsd.so.0.2.0 b751f000-b7520000 rw-p 00008000 08:01 4415120 /lib/libbsd.so.0.2.0 b7520000-b755c000 r-xp 00000000 08:01 4415659 /lib/libpcre.so.3.12.1 b755c000-b755d000 rw-p 0003b000 08:01 4415659 /lib/libpcre.so.3.12.1 b755d000-b769d000 r-xp 00000000 08:01 4416106 /lib/i686/cmov/libc-2.11.2.so b769d000-b769f000 r--p 0013f000 08:01 4416106 /lib/i686/cmov/libc-2.11.2.so b769f000-b76a0000 rw-p 00141000 08:01 4416106 /lib/i686/cmov/libc-2.11.2.so b76a0000-b76a4000 rw-p 00000000 00:00 0 b76a4000-b76b9000 r-xp 00000000 08:01 4415779 /lib/i686/cmov/libpthread-2.11.2.so b76b9000-b76ba000 r--p 00014000 08:01 4415779 /lib/i686/cmov/libpthread-2.11.2.so b76ba000-b76bb000 rw-p 00015000 08:01 4415779 /lib/i686/cmov/libpthread-2.11.2.so b76bb000-b76bd000 rw-p 00000000 00:00 0 b76bd000-b76e1000 r-xp 00000000 08:01 4415532 /lib/i686/cmov/libm-2.11.2.so b76e1000-b76e2000 r--p 00023000 08:01 4415532 /lib/i686/cmov/libm-2.11.2.so b76e2000-b76e3000 rw-p 00024000 08:01 4415532 /lib/i686/cmov/libm-2.11.2.so b76e3000-b771a000 r-xp 00000000 08:01 4415247 /lib/libncurses.so.5.7 b771a000-b771d000 rw-p 00036000 08:01 4415247 /lib/libncurses.so.5.7 b771d000-b7738000 r-xp 00000000 08:01 1016461 /usr/lib/libedit.so.2.11 b7738000-b773a000 rw-p 0001b000 08:01 1016461 /usr/lib/libedit.so.2.11 b773a000-b773b000 rw-p 00000000 00:00 0 b773b000-b774e000 r-xp 00000000 08:01 4416100 /lib/i686/cmov/libnsl-2.11.2.so b774e000-b774f000 r--p 00012000 08:01 4416100 /lib/i686/cmov/libnsl-2.11.2.so b774f000-b7750000 rw-p 00013000 08:01 4416100 /lib/i686/cmov/libnsl-2.11.2.so b7750000-b7753000 rw-p 00000000 
00:00 0 b7753000-b7754000 r-xp 00000000 08:01 3761035 /usr/lib/varnish/libvarnishcompat.so b7754000-b7755000 rw-p 00000000 08:01 3761035 /usr/lib/varnish/libvarnishcompat.so b7755000-b7765000 r-xp 00000000 08:01 1014340 /usr/lib/libvarnishapi.so.1.0.0 b7765000-b7766000 rw-p 00010000 08:01 1014340 /usr/lib/libvarnishapi.so.1.0.0 b7785000-b7787000 rw-p 00000000 00:00 0 b7787000-b7788000 r-xp 00000000 00:00 0 [vdso] b7788000-b77a3000 r-xp 00000000 08:01 4416907 /lib/ld-2.11.2.so b77a3000-b77a4000 r--p 0001a000 08:01 4416907 /lib/ld-2.11.2.so b77a4000-b77a5000 rw-p 0001b000 08:01 4416907 /lib/ld-2.11.2.so bf9a7000-bf9c8000 rw-p 00000000 00:00 0 [stack] Aborted }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From renato at luren.com.br Wed Jun 8 15:22:38 2011 From: renato at luren.com.br (Renato Farias) Date: Wed, 8 Jun 2011 11:22:38 -0400 Subject: [Varnish] #935: "varnishadm: free(): invalid pointer" when varnishadm does not have read access to secret In-Reply-To: <042.993b16e8549a63811fbce5cc748e32bf@varnish-cache.org> References: <042.993b16e8549a63811fbce5cc748e32bf@varnish-cache.org> Message-ID: Hi scoof. Could you send the command line that you are running. 
[]'s

On Wed, Jun 8, 2011 at 10:51 AM, Varnish wrote:
> #935: "varnishadm: free(): invalid pointer" when varnishadm does not have
> read access to secret
>
> [full text of ticket #935, quoted above, elided]

-- Att, Renato Farias -------------- next part -------------- An HTML attachment was scrubbed... URL:

From apj at mutt.dk Wed Jun 8 17:49:57 2011
From: apj at mutt.dk (Andreas Plesner Jacobsen)
Date: Wed, 8 Jun 2011 19:49:57 +0200
Subject: [Varnish] #935: "varnishadm: free(): invalid pointer" when varnishadm does not have read access to secret
In-Reply-To:
References: <042.993b16e8549a63811fbce5cc748e32bf@varnish-cache.org>
Message-ID: <20110608174956.GI960@nerd.dk>

On Wed, Jun 08, 2011 at 11:22:38AM -0400, Renato Farias wrote:
> Hi scoof.
>
> Could you send the command line that you are running.

No params, just plain varnishadm. Reproduced with the debian package and a source install of latest snapshot.
> On Wed, Jun 8, 2011 at 10:51 AM, Varnish wrote:
> > #935: "varnishadm: free(): invalid pointer" when varnishadm does not have
> > read access to secret
> >
> > [full text of ticket #935, quoted above, elided]

From varnish-bugs at varnish-cache.org Wed Jun 8 21:46:01 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 08 Jun 2011 21:46:01 -0000
Subject: [Varnish] #936: SIGABRT in VRT_synth_page
Message-ID: <040.acfdfa916d19cc14d750ad61e9c150e5@varnish-cache.org>

#936: SIGABRT in VRT_synth_page
-------------------+--------------------------------------------------------
Reporter: kwy | Type: defect
Status: new | Priority: normal
Milestone: | Component: varnishd
Version: trunk | Severity: normal
Keywords: |
-------------------+--------------------------------------------------------
Hi, the following VCL, possibly erroneous, compiles:

{{{
sub vcl_recv {
    synthetic "HELLOO";
    return (error);
}
}}}

and given a request, fails with SIGABRT. If I were to guess, it is likely due to the synthetic in vcl_recv having no object to attach itself to. But I am just guessing. Backtrace follows; this is reproducible on demand.

0x00007f32d43c9f93 in poll () from /lib/libc.so.6
(gdb) c
Continuing.
Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7f32457cd700 (LWP 31169)] 0x00007f32d4323a75 in raise () from /lib/libc.so.6 (gdb) where #0 0x00007f32d4323a75 in raise () from /lib/libc.so.6 #1 0x00007f32d43275c0 in abort () from /lib/libc.so.6 #2 0x0000000000436b9c in pan_ic (func=0x475247 "VRT_synth_page", file=0x474fd5 "cache_vrt.c", line=403, cond=0x47501d "(sp->obj) != NULL", err=0, xxx=0) at cache_panic.c:358 #3 0x00000000004413aa in VRT_synth_page (sp=0x7f32cd6a4008, flags=0, str=0x7f324b7db886 "HELLOO") at cache_vrt.c:403 #4 0x00007f324b7db224 in ?? () #5 0x0000000000000000 in ?? () -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 9 06:53:16 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 09 Jun 2011 06:53:16 -0000 Subject: [Varnish] #935: "varnishadm: free(): invalid pointer" when varnishadm does not have read access to secret In-Reply-To: <042.993b16e8549a63811fbce5cc748e32bf@varnish-cache.org> References: <042.993b16e8549a63811fbce5cc748e32bf@varnish-cache.org> Message-ID: <051.a410632d630a9655473628e511d1c29d@varnish-cache.org> #935: "varnishadm: free(): invalid pointer" when varnishadm does not have read access to secret ------------------------------+--------------------------------------------- Reporter: scoof | Type: defect Status: closed | Priority: lowest Milestone: Varnish 3.0 dev | Component: varnishadm Version: trunk | Severity: trivial Resolution: fixed | Keywords: ------------------------------+--------------------------------------------- Changes (by Tollef Fog Heen ): * status: new => closed * resolution: => fixed Comment: (In [48392ad84205195f8d624e281f3b93d7d736ce46]) Fix crash in varnishadm when secret file is unreadable Record the original strdup-ed value so we can free it afterwards. 
Fixes: #935

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Thu Jun 9 07:06:35 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Thu, 09 Jun 2011 07:06:35 -0000
Subject: [Varnish] #936: SIGABRT in VRT_synth_page
In-Reply-To: <040.acfdfa916d19cc14d750ad61e9c150e5@varnish-cache.org>
References: <040.acfdfa916d19cc14d750ad61e9c150e5@varnish-cache.org>
Message-ID: <049.4117f77fab5d34881f34d801ac794d04@varnish-cache.org>

#936: SIGABRT in VRT_synth_page
---------------------+------------------------------------------------------
Reporter: kwy | Type: defect
Status: closed | Priority: normal
Milestone: | Component: varnishd
Version: trunk | Severity: normal
Resolution: fixed | Keywords:
---------------------+------------------------------------------------------
Changes (by Tollef Fog Heen ):
* status: new => closed
* resolution: => fixed

Comment: (In [7077261e1b1cb38ab152119742a6a001748e8901])
Mark synthetic as only available in vcl_error

Eventually, we want to be able to do synthetic everywhere, but for now, mark it as just available in vcl_error.

Fixes: #936

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Thu Jun 9 17:13:27 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Thu, 09 Jun 2011 17:13:27 -0000
Subject: [Varnish] #937: missing files in varnish-libs-devel rpm package
Message-ID: <042.97d6feca399e2fd34aa751adac9af458@varnish-cache.org>

#937: missing files in varnish-libs-devel rpm package
-----------------------------+----------------------------------------------
Reporter: fr3nd | Type: defect
Status: new | Priority: normal
Milestone: Varnish 3.0 dev | Component: packaging
Version: trunk | Severity: normal
Keywords: rpm vmod |
-----------------------------+----------------------------------------------
To compile any vmod, the varnish source files are needed.
The "varnish-libs-devel" rpm package is supposed to have all those files, but some of them are missing.

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Fri Jun 10 20:50:45 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 10 Jun 2011 20:50:45 -0000
Subject: [Varnish] #938: Doc issue: max-age should not contain spaces.
Message-ID: <048.f27903afc4a231c74ad26964061ac26a@varnish-cache.org>

#938: Doc issue: max-age should not contain spaces.
-------------------------+--------------------------------------------------
Reporter: frankfarmer | Type: documentation
Status: new | Priority: lowest
Milestone: | Component: documentation
Version: trunk | Severity: trivial
Keywords: |
-------------------------+--------------------------------------------------
This doc specifies headers as "cache-control: max-age = 900": http://www.varnish-cache.org/trac/wiki/VCLExampleLongerCaching

Apparently, it'd be more correct to use:

set beresp.http.Cache-Control = "max-age=900";

Specifically, we've found in our testing that:

1. RFC 2616 does not appear to allow spaces in this context:
{{{
"max-age" "=" delta-seconds ; Section 14.9.3, 14.9.4
delta-seconds = 1*DIGIT
}}}
1. YSlow and Firebug seem to ignore max-age if there are spaces, which is probably related to the next point:
1. Firefox, unlike other major browsers, does not appear to tolerate spaces in max-age.
See http://mxr.mozilla.org/mozilla2.0/source/netwerk/protocol/http/nsHttpResponseHead.cpp line 533:

{{{
533 const char *p = PL_strcasestr(val, "max-age=");
}}}

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Fri Jun 10 22:36:03 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 10 Jun 2011 22:36:03 -0000
Subject: [Varnish] #934: Memory leak in varnish_32e40a6ececf4a2ea65830e723c770d1ce261898
In-Reply-To: <043.d74ebc3e11e35ac4f435f670a0217608@varnish-cache.org>
References: <043.d74ebc3e11e35ac4f435f670a0217608@varnish-cache.org>
Message-ID: <052.4a8fdcbcd45fbc44938d38d5a1d9869c@varnish-cache.org>

#934: Memory leak in varnish_32e40a6ececf4a2ea65830e723c770d1ce261898
--------------------+-------------------------------------------------------
Reporter: kdajka | Type: defect
Status: new | Priority: normal
Milestone: | Component: varnishd
Version: trunk | Severity: normal
Keywords: |
--------------------+-------------------------------------------------------
Comment(by kb):

varnishstat is confirming that Varnish is using exactly the 9GB you specified. By "without any success" do you mean the memory usage remained unchanged? Run pmap on your child process. 256K allocations will be thread stacks (you can verify the option you're using is working) and the 1024K allocations should be jemalloc blocks, and both will have matching single-page allocations (4K). Then look for anything anomalous (and/or post it). Assuming invariant object size, one cause of object bloat could be that you support Accept-Encoding:/Vary:, which will create multiple versions of objects. Do you do lots and lots of bans? That's the other common memory user. Any inline C in your VCL? 2.1.5 has been in production here since April 12 (example daemon still running with ~400 hits/s). I haven't seen any leakage or bloat with 2.1.5. I expect that if this were endemic to 2.1.5, a lot more people would be seeing this issue.
Do you have any inline C in your VCL?

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Sat Jun 11 07:44:04 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Sat, 11 Jun 2011 07:44:04 -0000
Subject: [Varnish] #934: Memory leak in varnish_32e40a6ececf4a2ea65830e723c770d1ce261898
In-Reply-To: <043.d74ebc3e11e35ac4f435f670a0217608@varnish-cache.org>
References: <043.d74ebc3e11e35ac4f435f670a0217608@varnish-cache.org>
Message-ID: <052.4d8ac34028854cf47942f796a16edb88@varnish-cache.org>

#934: Memory leak in varnish_32e40a6ececf4a2ea65830e723c770d1ce261898
--------------------+-------------------------------------------------------
Reporter: kdajka | Type: defect
Status: new | Priority: normal
Milestone: | Component: varnishd
Version: trunk | Severity: normal
Keywords: |
--------------------+-------------------------------------------------------
Comment(by kdajka):

Unfortunately I won't be in my office next week, so I won't be able to help debug the memleak issue. I would like to clarify that I'm having problems with the trunk version; Varnish 2.1.5 is rock solid for me. The second varnishstat listing is a snapshot from 2.1.5, the first one is from trunk. Could you tell me what the counterpart of "SMA bytes allocated" from 2.1 is called in trunk?

As for the other questions:
- I don't use inline C.
- I strip Vary; only gzip or deflate are allowed in encoding. My backend does compression, and because of that I have 2 objects per file.
- I don't know if 2 bans per hour is much; I suppose not. On average I have 30 active bans, some of them duplicates. I'm mostly banning objects by setting their ttl=0.

I'll post my VCL next week.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 14 09:59:19 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 14 Jun 2011 09:59:19 -0000 Subject: [Varnish] #903: Segmentation fault in varnish 3b4859455803b606107c07b25b784372d5665a1f In-Reply-To: <043.04455f1c56c4d22970fbcd4e327fc60b@varnish-cache.org> References: <043.04455f1c56c4d22970fbcd4e327fc60b@varnish-cache.org> Message-ID: <052.28597d988463c139277c550755f41d05@varnish-cache.org> #903: Segmentation fault in varnish 3b4859455803b606107c07b25b784372d5665a1f ----------------------+----------------------------------------------------- Reporter: kdajka | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: blocker | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: Yes, This was fixed in 94fe1956 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 14 09:59:28 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 14 Jun 2011 09:59:28 -0000 Subject: [Varnish] #938: Doc issue: max-age should not contain spaces. In-Reply-To: <048.f27903afc4a231c74ad26964061ac26a@varnish-cache.org> References: <048.f27903afc4a231c74ad26964061ac26a@varnish-cache.org> Message-ID: <057.5af6c4a85b110f349deb55d1a875ecc8@varnish-cache.org> #938: Doc issue: max-age should not contain spaces. --------------------------+------------------------------------------------- Reporter: frankfarmer | Type: documentation Status: closed | Priority: lowest Milestone: | Component: documentation Version: trunk | Severity: trivial Resolution: fixed | Keywords: --------------------------+------------------------------------------------- Changes (by kristian): * status: new => closed * resolution: => fixed Comment: Fixed. 
(It's a wiki, though, so user-submitted, but still nice to Get It Right)

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Tue Jun 14 10:16:42 2011
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Tue, 14 Jun 2011 10:16:42 -0000
Subject: [Varnish] #937: missing files in varnish-libs-devel rpm package
In-Reply-To: <042.97d6feca399e2fd34aa751adac9af458@varnish-cache.org>
References: <042.97d6feca399e2fd34aa751adac9af458@varnish-cache.org>
Message-ID: <051.2a8e5c3b1e8f78465729ecc7cff1949e@varnish-cache.org>

#937: missing files in varnish-libs-devel rpm package
-----------------------+----------------------------------------------------
Reporter: fr3nd | Owner: kristian
Type: defect | Status: new
Priority: normal | Milestone: Varnish 3.0 dev
Component: packaging | Version: trunk
Severity: normal | Keywords: rpm vmod
-----------------------+----------------------------------------------------
Changes (by tfheen):
* owner: => kristian

Comment:
We do not offer a stable API or ABI for VMODs yet, so for now you need a full source tree. This needs a bit more documentation, so I'm leaving the bug open until that's been written. To write vmods, we recommend you take a look at https://github.com/varnish/libvmod-example for an example.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 14 10:18:43 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 14 Jun 2011 10:18:43 -0000 Subject: [Varnish] #932: varnishreplay.c: compile fails on Solaris for 64 bit with -Werror In-Reply-To: <042.8339a531efd0d68bd310ce19402b8285@varnish-cache.org> References: <042.8339a531efd0d68bd310ce19402b8285@varnish-cache.org> Message-ID: <051.d49f71868fdadb5534a2aebe530cc6cc@varnish-cache.org> #932: varnishreplay.c: compile fails on Solaris for 64 bit with -Werror --------------------------+------------------------------------------------- Reporter: geoff | Owner: slink Type: defect | Status: new Priority: normal | Milestone: Varnish 3.0 dev Component: port:solaris | Version: trunk Severity: normal | Keywords: --------------------------+------------------------------------------------- Changes (by phk): * owner: => slink * component: build => port:solaris -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 14 10:20:47 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 14 Jun 2011 10:20:47 -0000 Subject: [Varnish] #919: 503 error from varish while apache returns 200 In-Reply-To: <042.ce36521e27a423cd056033b77372ee69@varnish-cache.org> References: <042.ce36521e27a423cd056033b77372ee69@varnish-cache.org> Message-ID: <051.44f3a19a8f4f0bb89db8caa62f147b53@varnish-cache.org> #919: 503 error from varish while apache returns 200 ------------------------+--------------------------------------------------- Reporter: damol | Owner: kristian Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: invalid Keywords: 503 centos | ------------------------+--------------------------------------------------- Changes (by kristian): * status: new => closed * resolution: => invalid Comment: You need to test this on 2.1.5, or now 3.0.0 since that'll 
be out before you get any relevant data. If you can reproduce this on 3.0.0, feel free to re-open the ticket. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 14 10:26:17 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 14 Jun 2011 10:26:17 -0000 Subject: [Varnish] #897: sess_mem "leak" on hyper-threaded cpu In-Reply-To: <045.14a2eaf653169bfcd2936fa771c688f0@varnish-cache.org> References: <045.14a2eaf653169bfcd2936fa771c688f0@varnish-cache.org> Message-ID: <054.1bcd01b542f7f7561e0a48991eedeea3@varnish-cache.org> #897: sess_mem "leak" on hyper-threaded cpu ----------------------+----------------------------------------------------- Reporter: askalski | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | Keywords: sess_mem leak n_sess race condition ----------------------+----------------------------------------------------- Changes (by phk): * owner: => phk * component: build => varnishd -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 15 22:00:48 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 15 Jun 2011 22:00:48 -0000 Subject: [Varnish] #939: Error 400 if a single header exceeds 2048 characters Message-ID: <042.040f5d1e4f9b5d1da3859fd4e9ea4778@varnish-cache.org> #939: Error 400 if a single header exceeds 2048 characters -------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Hello, After upgrading to Varnish 3.0.0-beta2 from 2.1.5 users began to experience many Error 400 Bad Request pages. We found that this was happening because many of our links were referred to by URLs that are close to or exceed 2048 characters. 
The client then passes along a Referrer header that Varnish cannot handle. This is a snippet from varnishlog: 3 RxHeader c User-Agent: curl/7.21.2 (x86_64-apple-darwin10.6.0) libcurl/7.21.2 OpenSSL/1.0.0c zlib/1.2.5 libidn/1.19 3 RxHeader c Host: bil1-varn01 3 RxHeader c Accept: */* 3 LostHeader c Referer: http://www.site.com/update.php?subject=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 3 HttpGarbage c HEAD 3 VCL_call c error deliver 3 VCL_call c deliver deliver 3 TxProtocol c HTTP/1.1 3 TxStatus c 400 3 TxResponse c Bad Request 3 TxHeader c Server: Varnish My expectation is that Varnish will discard any headers that it cannot deal with. Is there a way to change this behavior now? I am removing long headers before they get to Varnish to work-around the problem now. Thanks! -david -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 15 22:47:12 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 15 Jun 2011 22:47:12 -0000 Subject: [Varnish] #939: Error 400 if a single header exceeds 2048 characters In-Reply-To: <042.040f5d1e4f9b5d1da3859fd4e9ea4778@varnish-cache.org> References: <042.040f5d1e4f9b5d1da3859fd4e9ea4778@varnish-cache.org> Message-ID: <051.97d36dbafe8a264bcdf9559fe3a6f047@varnish-cache.org> #939: Error 400 if a single header exceeds 2048 characters -------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Comment(by scoof): Not a bug. 
You should check these parameters: http_req_hdr_len 2048 [bytes] http_req_size 32768 [bytes] -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 15 22:48:43 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 15 Jun 2011 22:48:43 -0000 Subject: [Varnish] #939: Error 400 if a single header exceeds 2048 characters In-Reply-To: <042.040f5d1e4f9b5d1da3859fd4e9ea4778@varnish-cache.org> References: <042.040f5d1e4f9b5d1da3859fd4e9ea4778@varnish-cache.org> Message-ID: <051.f8d139a21c7204f474650dc9643fd955@varnish-cache.org> #939: Error 400 if a single header exceeds 2048 characters -------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Comment(by david): Thank you for that pointer. I could not find any mention of those parameters in the docs. This is great! 
Regards, -david -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 15 23:05:33 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 15 Jun 2011 23:05:33 -0000 Subject: [Varnish] #939: Error 400 if a single header exceeds 2048 characters In-Reply-To: <042.040f5d1e4f9b5d1da3859fd4e9ea4778@varnish-cache.org> References: <042.040f5d1e4f9b5d1da3859fd4e9ea4778@varnish-cache.org> Message-ID: <051.e10fe5e5909c750c37a59ed12fe88f2a@varnish-cache.org> #939: Error 400 if a single header exceeds 2048 characters -------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Comment(by david): Additionally, it was rather difficult to track down what was causing this. Changing "Bad Request" to something less generic like "header $X exceeded maximum length" would be extremely helpful. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Jun 18 09:47:57 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 18 Jun 2011 09:47:57 -0000 Subject: [Varnish] #939: Error 400 if a single header exceeds 2048 characters In-Reply-To: <042.040f5d1e4f9b5d1da3859fd4e9ea4778@varnish-cache.org> References: <042.040f5d1e4f9b5d1da3859fd4e9ea4778@varnish-cache.org> Message-ID: <051.9ba4ff66ec7cec06718bbbb0c3ec0752@varnish-cache.org> #939: Error 400 if a single header exceeds 2048 characters -------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Comment(by phk): Take the X-Forwarded-For header as an example: You append to that whenever you go through a proxy. Imagine you have a load-balancer sitting in front of your varnish which does that, and that you need the X-F-F header for something important. If Varnish just drops headers that are too long, you have now made it possible for an adversary to send an X-F-F: header which is 2046 chars long, your balancer adds the IP to it and your varnish throws it away. That sort of scenario makes my security-alarm tingle faintly. Your points about documentation and diagnostics are taken, so the ticket stays open as a reminder for now. 
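For reference, the limits involved in this thread are runtime parameters, so the immediate workaround is to raise them when starting varnishd. A hedged sketch (values illustrative; the per-header limit must stay well below the total request size):

```shell
# Raise the per-header and total-request limits (defaults in 3.0
# are 2048 and 32768 bytes respectively).
varnishd -a :80 -f /etc/varnish/default.vcl \
    -p http_req_hdr_len=8192 \
    -p http_req_size=65536
```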
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Jun 19 16:05:35 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 19 Jun 2011 16:05:35 -0000 Subject: [Varnish] #936: SIGABRT in VRT_synth_page In-Reply-To: <040.acfdfa916d19cc14d750ad61e9c150e5@varnish-cache.org> References: <040.acfdfa916d19cc14d750ad61e9c150e5@varnish-cache.org> Message-ID: <049.390eaf0112ebe650ce90696f4a87393a@varnish-cache.org> #936: SIGABRT in VRT_synth_page ---------------------+------------------------------------------------------ Reporter: kwy | Type: defect Status: closed | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Resolution: fixed | Keywords: ---------------------+------------------------------------------------------ Comment(by Geoff Simmons ): (In [3ad76c3b3566dc19f392a70a27d68417c6d756e0]) Mark synthetic as only available in vcl_error Eventually, we want to be able to do synthetic everywhere, but for now, mark it as just available in vcl_error. 
Fixes: #936 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Jun 19 16:05:38 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 19 Jun 2011 16:05:38 -0000 Subject: [Varnish] #935: "varnishadm: free(): invalid pointer" when varnishadm does not have read access to secret In-Reply-To: <042.993b16e8549a63811fbce5cc748e32bf@varnish-cache.org> References: <042.993b16e8549a63811fbce5cc748e32bf@varnish-cache.org> Message-ID: <051.cd3ec79540759fd30ede8e957fc611f8@varnish-cache.org> #935: "varnishadm: free(): invalid pointer" when varnishadm does not have read access to secret ------------------------------+--------------------------------------------- Reporter: scoof | Type: defect Status: closed | Priority: lowest Milestone: Varnish 3.0 dev | Component: varnishadm Version: trunk | Severity: trivial Resolution: fixed | Keywords: ------------------------------+--------------------------------------------- Comment(by Geoff Simmons ): (In [eb5c9a2870b493b4c914bf56b159615a2d567896]) Fix crash in varnishadm when secret file is unreadable Record the original strdup-ed value so we can free it afterwards. 
Fixes: #935 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 20 08:22:59 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 20 Jun 2011 08:22:59 -0000 Subject: [Varnish] #876: Can't start varnish: "SHMFILE owned by running varnishd master" In-Reply-To: <042.b152ffb1624d0390243a17893e133b5d@varnish-cache.org> References: <042.b152ffb1624d0390243a17893e133b5d@varnish-cache.org> Message-ID: <051.b22e1e42716e85ad13cc57fcb68a29d5@varnish-cache.org> #876: Can't start varnish: "SHMFILE owned by running varnishd master" ----------------------+----------------------------------------------------- Reporter: wijet | Owner: martin Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 2.1.3 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by Martin Blix Grydeland ): * status: new => closed * resolution: => fixed Comment: (In [78df00d176c2cc323b4a66024aac2087a09a9e0b]) Use file locking on SHMFILE to indicate this file is currently in use Fixes: #876 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 20 16:47:17 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 20 Jun 2011 16:47:17 -0000 Subject: [Varnish] #918: Segfault in varnishncsa (3.0.0-beta1) In-Reply-To: <041.335050a3b64438d5a5f6c71c03dac3d4@varnish-cache.org> References: <041.335050a3b64438d5a5f6c71c03dac3d4@varnish-cache.org> Message-ID: <050.3a51ef08bec1063c08240d78826d48db@varnish-cache.org> #918: Segfault in varnishncsa (3.0.0-beta1) -------------------------+-------------------------------------------------- Reporter: olli | Owner: tfheen Type: defect | Status: reopened Priority: normal | Milestone: Varnish 3.0 dev Component: varnishncsa | Version: Severity: normal | Resolution: Keywords: | -------------------------+-------------------------------------------------- 
Changes (by olli): * status: closed => reopened * resolution: fixed => Comment: Hi, i still get segfaults after PURGE-Requests. I start varnish like this: varnishncsa -F '%h %l %u %t "%r" %s %b "%{Referer}i" "%{User-agent}i" %{Varnish:hitmiss}x' I use 3.0.0. Thanks Oliver Joa -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 20 19:08:44 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 20 Jun 2011 19:08:44 -0000 Subject: [Varnish] #940: ETag for gzip'd variant identical to ETag of ungzipped variant. Message-ID: <042.bc99dca97e165d223cfd474527b5eda1@varnish-cache.org> #940: ETag for gzip'd variant identical to ETag of ungzipped variant. -------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: high Milestone: | Component: build Version: 3.0.0 | Severity: critical Keywords: | -------------------+-------------------------------------------------------- Hello, With the introduction of gzip support in Varnish 3, we have run into a problem where ETags are incorrectly duplicated between two different objects. I'm not sure I need to provide much explanation here except to remind the devs that ETags are unique, and need to be different for each variant of a page (including a variant caused by gzip). You can read more about it in an old bug report here: https://issues.apache.org/bugzilla/show_bug.cgi?id=39727 The only workaround I could come up with was to revert to storing two copies of a page in cache (2.1.5 behavior) by turning off Varnish's gzip support, and putting the Accept-Encoding normalization code back into vcl_recv. If there is another workaround, please let me know. 
Regards, -david -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 20 19:26:23 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 20 Jun 2011 19:26:23 -0000 Subject: [Varnish] #940: ETag for gzip'd variant identical to ETag of ungzipped variant. In-Reply-To: <042.bc99dca97e165d223cfd474527b5eda1@varnish-cache.org> References: <042.bc99dca97e165d223cfd474527b5eda1@varnish-cache.org> Message-ID: <051.2e01be759c08804051e608061803e915@varnish-cache.org> #940: ETag for gzip'd variant identical to ETag of ungzipped variant. -------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: high Milestone: | Component: build Version: 3.0.0 | Severity: critical Keywords: | -------------------+-------------------------------------------------------- Comment(by phk): Just when I thought HTTP couldn't become more crappy as a protocol to transfer data. As I read the apache discussion there are two possible workarounds and no solutions: 1) twist the etag and make it weak: you can do this in vcl_deliver if there is a transfer-encoding header. 2) Delete the etag entirely. Ditto. I'll need to think more about this, but right now I don't see what else we can do. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 20 19:30:39 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 20 Jun 2011 19:30:39 -0000 Subject: [Varnish] #940: ETag for gzip'd variant identical to ETag of ungzipped variant. In-Reply-To: <042.bc99dca97e165d223cfd474527b5eda1@varnish-cache.org> References: <042.bc99dca97e165d223cfd474527b5eda1@varnish-cache.org> Message-ID: <051.8d3f2dc0db98963cb736a24ff42607b5@varnish-cache.org> #940: ETag for gzip'd variant identical to ETag of ungzipped variant. 
-------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: high Milestone: | Component: build Version: 3.0.0 | Severity: critical Keywords: | -------------------+-------------------------------------------------------- Comment(by david): Hi phk, Thanks for the ideas. Changing the etag for gzipped variants in vcl_deliver was the first thing I tried. It neutered my hit ratio because all of the if-modified-since and if-none-match headers started to not work. My thought, of how you can fix this in Varnish is to store a second etag header for the gunzipped copy - Varnish will probably need to fabricate this tag, perhaps just adding something like "nogz" to the etag you have stored now would be sufficient. The problem comes in how to look it up when you get one of those if-none-match headers. Good luck. Thanks again, -david -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 20 19:52:06 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 20 Jun 2011 19:52:06 -0000 Subject: [Varnish] #940: ETag for gzip'd variant identical to ETag of ungzipped variant. In-Reply-To: <042.bc99dca97e165d223cfd474527b5eda1@varnish-cache.org> References: <042.bc99dca97e165d223cfd474527b5eda1@varnish-cache.org> Message-ID: <051.0c05e0e6dc0037974259a7cf23d4f2c9@varnish-cache.org> #940: ETag for gzip'd variant identical to ETag of ungzipped variant. -------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: high Milestone: | Component: build Version: 3.0.0 | Severity: critical Keywords: | -------------------+-------------------------------------------------------- Comment(by phk): Yes, you'd have to inverse the etags modification should they appear in an If-* header in vcl_recv, otherwise the matching will not work. 
I would just add -gzip and remove it again with a regsub() -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 20 19:58:09 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 20 Jun 2011 19:58:09 -0000 Subject: [Varnish] #940: ETag for gzip'd variant identical to ETag of ungzipped variant. In-Reply-To: <042.bc99dca97e165d223cfd474527b5eda1@varnish-cache.org> References: <042.bc99dca97e165d223cfd474527b5eda1@varnish-cache.org> Message-ID: <051.7ec6162b2107a011dfbb58ad97c94ad3@varnish-cache.org> #940: ETag for gzip'd variant identical to ETag of ungzipped variant. -------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: high Milestone: | Component: build Version: 3.0.0 | Severity: critical Keywords: | -------------------+-------------------------------------------------------- Comment(by david): oh! I didn't even think about changing it back in vcl_recv. I'll try that. You used the word inverse, and I read it as reverse, and thought "huh, that's an interesting idea, reverse the same etag for gzip variant" - then I realized that's not what you were saying. Maybe it is an idea you can think about, to tag a gzip-vs-unzip variant. Just reverse the etag for one of them. 
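A sketch of the workaround phk describes, under the assumption that simply suffixing the ETag (outside its quotes) is acceptable to the clients involved; strictly speaking, the suffix belongs inside the quoted opaque-tag:

```vcl
sub vcl_deliver {
    # Mark the ETag of gzip-encoded variants so it differs from the
    # identity variant's tag.
    if (resp.http.Content-Encoding ~ "gzip" && resp.http.ETag) {
        set resp.http.ETag = regsub(resp.http.ETag, "$", "-gzip");
    }
}

sub vcl_recv {
    # Strip the marker again so If-None-Match matches the stored tag
    # (a no-op for clients that never saw the suffixed tag).
    if (req.http.If-None-Match) {
        set req.http.If-None-Match =
            regsub(req.http.If-None-Match, "-gzip$", "");
    }
}
```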
:) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 20 20:21:00 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 20 Jun 2011 20:21:00 -0000 Subject: [Varnish] #941: beresp.ttl not honred when set in vcl_fetch Message-ID: <042.621b2af1ec17465de36ac30afe14bc71@varnish-cache.org> #941: beresp.ttl not honred when set in vcl_fetch -------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Hello, I opened a forum report on this, but haven't gotten an answer. At this point, I have performed enough troubleshooting to narrow this down to a bug in Varnish. I apologize if this is just a change of behavior from version 2 to 3, but I cannot find anything in the docs about this other than the removing of beresp.cacheable. When I set beresp.ttl to a value, it is simply not used. My testing shows that Varnish calculates an objects TTL based on two things: 1. backend Cache-Control header (max-age parameter in my case). 2. Default TTL passed to varnishd via -t run-time flag. beresp.ttl is not considered. I also attempted to arbitrarily set beresp.http.Cache-Control to something with a different max-age than the backend sent us. Varnish still uses the back-end's response regardless of my setting. If the back-end did not supply a Cache-Control header, then the default TTL is used. Let me know if you need more info than what is seen below. 
Regards, -david Here is my test bed: DAEMON_OPTS="-a :80 \ -T :6082 \ -f /etc/varnish/varnish.conf \ -u varnish -g varnish \ -s file,/var/nish/02/varnish.cache,22G \ -s file,/var/nish/03/varnish.cache,22G \ -s file,/var/nish/04/varnish.cache,22G \ -s file,/var/nish/05/varnish.cache,22G \ -s file,/var/nish/06/varnish.cache,22G \ -h classic,800009 \ -p session_linger=100 \ -p ban_lurker_sleep=0.000001 \ -p thread_pool_add_delay=1 \ -p thread_pool_max=4000 \ -p thread_pool_min=300 \ -p thread_pools=4 \ -p listen_depth=2048 \ -p sess_workspace=65536 \ -p shm_workspace=16384 \ -p ping_interval=8 \ -p cli_timeout=20 \ -p lru_interval=30 \ -p http_gzip_support=off \ -p sess_timeout=5 \ -p http_req_hdr_len=4096 \ -p syslog_cli_traffic=off \ -t 30000" This is my entire VCL: backend default { .host = "172.21.4.125"; .port = "80"; .connect_timeout = 10s; .first_byte_timeout = 35s; .between_bytes_timeout = 5s; } sub vcl_fetch { set beresp.ttl = 1000s; set beresp.http.X-WeAreHere = "here we are"; return(deliver); } This is the curl command I used: time curl -x bil1-varn01:80 http://www.livejournal.com/ This is the snippet from varnishlog that is this request (notice the TTL of 30000): 16 StatSess c 172.21.4.18 53230 0 1 0 0 0 0 0 0 17 BackendClose - prodtest 17 BackendOpen b prodtest 172.21.4.182 33060 172.21.4.125 80 17 TxRequest b GET 17 TxURL b http://www.livejournal.com/ 17 TxProtocol b HTTP/1.1 17 TxHeader b User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 17 TxHeader b Host: www.livejournal.com 17 TxHeader b Pragma: no-cache 17 TxHeader b Accept: */* 17 TxHeader b Proxy-Connection: Keep-Alive 17 TxHeader b X-Forwarded-For: 172.21.0.32 17 TxHeader b X-Varnish: 1205010972 17 RxProtocol b HTTP/1.1 17 RxStatus b 200 17 RxResponse b OK 17 RxHeader b Date: Mon, 20 Jun 2011 20:15:57 GMT 17 RxHeader b Server: Apache/2.2.3 (CentOS) 17 RxHeader b X-AWS-Id: ws50 17 RxHeader b Cache-Control: private, proxy-revalidate 17 RxHeader b 
ETag: "da94603f920ddeddbba618fa22ade7ad" 17 RxHeader b Content-length: 47178 17 RxHeader b Content-Type: text/html; charset=utf-8 17 RxHeader b Content-Language: en 17 Fetch_Body b 4 0 1 17 Length b 47178 17 BackendReuse b prodtest 16 SessionOpen c 172.21.0.32 53590 :80 16 ReqStart c 172.21.0.32 53590 1205010972 16 RxRequest c GET 16 RxURL c http://www.livejournal.com/ 16 RxProtocol c HTTP/1.1 16 RxHeader c User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 16 RxHeader c Host: www.livejournal.com 16 RxHeader c Pragma: no-cache 16 RxHeader c Accept: */* 16 RxHeader c Proxy-Connection: Keep-Alive 16 VCL_call c recv lookup 16 VCL_call c hash 16 Hash c http://www.livejournal.com/ 16 Hash c www.livejournal.com 16 VCL_return c hash 16 VCL_call c miss fetch 16 Backend c 17 prodtest prodtest 16 TTL c 1205010972 RFC 30000 1308600958 0 0 0 0 16 VCL_call c fetch deliver 16 ObjProtocol c HTTP/1.1 16 ObjResponse c OK 16 ObjHeader c Date: Mon, 20 Jun 2011 20:15:57 GMT 16 ObjHeader c Server: Apache/2.2.3 (CentOS) 16 ObjHeader c X-AWS-Id: ws50 16 ObjHeader c Cache-Control: private, proxy-revalidate 16 ObjHeader c ETag: "da94603f920ddeddbba618fa22ade7ad" 16 ObjHeader c Content-Type: text/html; charset=utf-8 16 ObjHeader c Content-Language: en 16 ObjHeader c X-WeAreHere: here we are 16 VCL_call c deliver deliver 16 TxProtocol c HTTP/1.1 16 TxStatus c 200 16 TxResponse c OK 16 TxHeader c Server: Apache/2.2.3 (CentOS) 16 TxHeader c X-AWS-Id: ws50 16 TxHeader c Cache-Control: private, proxy-revalidate 16 TxHeader c ETag: "da94603f920ddeddbba618fa22ade7ad" 16 TxHeader c Content-Type: text/html; charset=utf-8 16 TxHeader c Content-Language: en 16 TxHeader c X-WeAreHere: here we are 16 TxHeader c Content-Length: 47178 16 TxHeader c Accept-Ranges: bytes 16 TxHeader c Date: Mon, 20 Jun 2011 20:15:57 GMT 16 TxHeader c X-Varnish: 1205010972 16 TxHeader c Age: 0 16 TxHeader c Via: 1.1 varnish 16 TxHeader c Connection: keep-alive 16 Length c 
47178 16 ReqEnd c 1205010972 1308600957.437629938 1308600957.725367069 0.000030994 0.286793947 0.000943184 16 Debug c herding 16 SessionClose c no request -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 20 20:22:35 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 20 Jun 2011 20:22:35 -0000 Subject: [Varnish] #941: beresp.ttl not honred when set in vcl_fetch In-Reply-To: <042.621b2af1ec17465de36ac30afe14bc71@varnish-cache.org> References: <042.621b2af1ec17465de36ac30afe14bc71@varnish-cache.org> Message-ID: <051.4e8b1dd2316800c21eaada2e13bd38fa@varnish-cache.org> #941: beresp.ttl not honred when set in vcl_fetch -------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Comment(by david): That doesn't look good. try this: http://pastie.org/2097765 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 22 06:59:21 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 22 Jun 2011 06:59:21 -0000 Subject: [Varnish] #942: Varnish stalls receiving a "weird" response from a backend Message-ID: <048.2b99f488280e4547651595dbf7791ba6@varnish-cache.org> #942: Varnish stalls receiving a "weird" response from a backend -------------------------+-------------------------------------------------- Reporter: andreacampi | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: | -------------------------+-------------------------------------------------- Varnish in front of an Apache+PHP running Horde, which is replying with Content-Encoding: gzip, sending some gzip data and then some plain text. It's also using chunked encoding. 
The response never gets sent back to the client; other requests are handled normally. I sent phk a tcpdump of the conversation between Varnish and the backend. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 22 07:24:27 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 22 Jun 2011 07:24:27 -0000 Subject: [Varnish] #942: Varnish stalls receiving a "weird" response from a backend In-Reply-To: <048.2b99f488280e4547651595dbf7791ba6@varnish-cache.org> References: <048.2b99f488280e4547651595dbf7791ba6@varnish-cache.org> Message-ID: <057.3df29c8bb8762d9b411f83f32abf2428@varnish-cache.org> #942: Varnish stalls receiving a "weird" response from a backend -------------------------+-------------------------------------------------- Reporter: andreacampi | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: | -------------------------+-------------------------------------------------- Comment(by andreacampi): In case somebody else has this problem: a good workaround is to return (pipe) for any affected URL. 
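The workaround above amounts to something like the following, where the URL pattern is a placeholder for whatever matches the affected Horde pages:

```vcl
sub vcl_recv {
    # Hand the affected requests straight through to the backend,
    # bypassing Varnish's body handling entirely.
    if (req.url ~ "^/horde/") {
        return (pipe);
    }
}
```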
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 22 17:29:17 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 22 Jun 2011 17:29:17 -0000 Subject: [Varnish] #943: Support the PROXY protocol Message-ID: <047.27b95e45fc833324b117fb33fae2d984@varnish-cache.org> #943: Support the PROXY protocol ------------------------+--------------------------------------------------- Reporter: justincase | Type: enhancement Status: new | Priority: normal Milestone: | Component: build Version: | Severity: normal Keywords: | ------------------------+--------------------------------------------------- HAProxy uses something called the "PROXY protocol" which is basically a simple (non-http) header that can replace X-Forwarded-For, and provides more information: http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt Varnish could potentially make use of this on both sides of the connection. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 22 19:47:16 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 22 Jun 2011 19:47:16 -0000 Subject: [Varnish] #943: Support the PROXY protocol In-Reply-To: <047.27b95e45fc833324b117fb33fae2d984@varnish-cache.org> References: <047.27b95e45fc833324b117fb33fae2d984@varnish-cache.org> Message-ID: <056.2bdceb2ca9eea8cce8bb647c71d7c5de@varnish-cache.org> #943: Support the PROXY protocol -------------------------+-------------------------------------------------- Reporter: justincase | Type: enhancement Status: closed | Priority: normal Milestone: | Component: build Version: | Severity: normal Resolution: invalid | Keywords: -------------------------+-------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => invalid Comment: I've moved this to the Future_Protocols wiki page, we don't track change requests with tickets, only bugs. 
I read the document quickly and that should be doable. I wonder how big a security hole it is that you can connect to the backend and claim to be a proxy this way. Apart from the obvious spoofing, what if the backend makes use of this proxy protocol to mean that HTTPS was used? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 23 08:54:33 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 23 Jun 2011 08:54:33 -0000 Subject: [Varnish] #944: varnishncsa crash Message-ID: <043.9d9198dcc891ca943c00cd33918f2967@varnish-cache.org> #944: varnishncsa crash -------------------------------+-------------------------------------------- Reporter: 191919 | Type: defect Status: new | Priority: high Milestone: | Component: varnishncsa Version: 3.0.0 | Severity: critical Keywords: varnishncsa crash | -------------------------------+-------------------------------------------- varnishncsa crashes when logging hitmiss information (gdb) r -F "%h %l %u %t "%r" %s %b \"%{Referer}i\" \"%{User-agent}i\" %{Varnish:hitmiss}x %{Varnish:handling}x" -w 111 Starting program: /root/varnish-3.0.0/bin/varnishncsa/.libs/varnishncsa -F "%h %l %u %t "%r" %s %b \"%{Referer}i\" \"%{User-agent}i\" %{Varnish:hitmiss}x %{Varnish:handling}x" -w 111 warning: no loadable sections found in added symbol-file system-supplied DSO at 0x2aaaaaaab000 [Thread debugging using libthread_db enabled] Program received signal SIGSEGV, Segmentation fault. 
0x0000003eddc79b80 in strlen () from /lib64/libc.so.6 (gdb) where #0 0x0000003eddc79b80 in strlen () from /lib64/libc.so.6 #1 0x0000003eddc61bce in fputs () from /lib64/libc.so.6 #2 0x0000000000403075 in h_ncsa (priv=0x606be0, tag=SLT_ReqEnd, fd=66, len=88, spec=1, ptr=0x2aaaad408c00 "1160660892 1308816831.106726885 1308816831.106859922 0.761338949 0.000108004 0.000025034\005", bitmap=0) at varnishncsa.c:657 #3 0x00002aaaaaccf241 in VSL_Dispatch () from /usr/lib64/libvarnishapi.so.1 #4 0x0000000000403577 in main (argc=5, argv=0x7fffffffe938) at varnishncsa.c:817 (gdb) print lp No symbol "lp" in current context. (gdb) frame 2 #2 0x0000000000403075 in h_ncsa (priv=0x606be0, tag=SLT_ReqEnd, fd=66, len=88, spec=1, ptr=0x2aaaad408c00 "omid=178993267cn\b", bitmap=0) at varnishncsa.c:657 657 fprintf(fo, "%s", lp->df_hitmiss); (gdb) print lp $1 = (struct logline *) 0x6090b0 (gdb) print lp->df_hitmiss $2 = 0x0 lp->df_hitmiss is initialized but not filled. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Jun 25 17:44:13 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 25 Jun 2011 17:44:13 -0000 Subject: [Varnish] #945: segfault at vlc.discard Message-ID: <043.dab354f7b9a42cd11731107d8850bc60@varnish-cache.org> #945: segfault at vlc.discard --------------------+------------------------------------------------------- Reporter: elurin | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: | --------------------+------------------------------------------------------- Varnish 3.0 crash at vcl.discard operation: vcl.load testname /etc/varnish/beta-maps.vcl 200 13 VCL compiled. vcl.discard testname Child (19143) not responding to CLI, killing it. 400 29 CLI communication error (hdr) Child (19143) died signal=6 Child (19143) Panic message: Assert error in VBP_Stop(), cache_backend_poll.c line 543: Condition((vcl) != 0) not true. 
thread = (cache-main) ident = Linux,2.6.33-5-server,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42c247: /usr/sbin/varnishd [0x42c247] 0x411c04: /usr/sbin/varnishd(VBP_Stop+0x144) [0x411c04] 0x40fae8: /usr/sbin/varnishd [0x40fae8] 0x4109af: /usr/sbin/varnishd(VRT_fini_dir+0x6f) [0x4109af] 0x7fb9c62fb0aa: ./vcl.2ejNwRGh.so [0x7fb9c62fb0aa] 0x43206f: /usr/sbin/varnishd [0x43206f] 0x7fb9fc21bf38: /usr/lib/varnish/libvarnish.so [0x7fb9fc21bf38] 0x7fb9fc21c2af: /usr/lib/varnish/libvarnish.so [0x7fb9fc21c2af] 0x7fb9fc21ef07: /usr/lib/varnish/libvarnish.so [0x7fb9fc21ef07] 0x7fb9fc21b0c9: /usr/lib/varnish/libvarnish.so(VCLS_Poll+0x179) [0x7fb9fc21b0c9] Child cleanup complete child (19612) Started Child (19612) said Child starts I have some directors in my config, like: director test random { { .backend = { .host = "192.168.0.1"; .port = "80"; .connect_timeout = 5s; .probe = { .request = "GET /tiles?query HTTP/1.1" "Host: mytest.ru" "Connection: close"; .timeout = 10s; .window = 1; .threshold = 1; .interval = 1s; } } .weight = 10; } { .backend = { .host = "192.168.0.2"; .port = "80"; .connect_timeout = 5s; .probe = { .request = "GET /tiles?query HTTP/1.1" "Host: mytest.ru" "Connection: close"; .timeout = 10s; .window = 1; .threshold = 1; .interval = 1s; } } .weight = 10; } } -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Jun 25 17:47:19 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 25 Jun 2011 17:47:19 -0000 Subject: [Varnish] #945: segfault at vlc.discard In-Reply-To: <043.dab354f7b9a42cd11731107d8850bc60@varnish-cache.org> References: <043.dab354f7b9a42cd11731107d8850bc60@varnish-cache.org> Message-ID: <052.7ac2e61dd80b91b8f75a44473ae7c3dc@varnish-cache.org> #945: segfault at vlc.discard --------------------+------------------------------------------------------- Reporter: elurin | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal 
Keywords: | --------------------+------------------------------------------------------- Comment(by elurin): Sorry for plain text in the ticket :-( {{{ vcl.load testname /etc/varnish/beta-maps.vcl 200 13 VCL compiled. vcl.discard testname Child (19143) not responding to CLI, killing it. 400 29 CLI communication error (hdr) Child (19143) died signal=6 Child (19143) Panic message: Assert error in VBP_Stop(), cache_backend_poll.c line 543: Condition((vcl) != 0) not true. thread = (cache-main) ident = Linux,2.6.33-5-server,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42c247: /usr/sbin/varnishd [0x42c247] 0x411c04: /usr/sbin/varnishd(VBP_Stop+0x144) [0x411c04] 0x40fae8: /usr/sbin/varnishd [0x40fae8] 0x4109af: /usr/sbin/varnishd(VRT_fini_dir+0x6f) [0x4109af] 0x7fb9c62fb0aa: ./vcl.2ejNwRGh.so [0x7fb9c62fb0aa] 0x43206f: /usr/sbin/varnishd [0x43206f] 0x7fb9fc21bf38: /usr/lib/varnish/libvarnish.so [0x7fb9fc21bf38] 0x7fb9fc21c2af: /usr/lib/varnish/libvarnish.so [0x7fb9fc21c2af] 0x7fb9fc21ef07: /usr/lib/varnish/libvarnish.so [0x7fb9fc21ef07] 0x7fb9fc21b0c9: /usr/lib/varnish/libvarnish.so(VCLS_Poll+0x179) [0x7fb9fc21b0c9] Child cleanup complete child (19612) Started Child (19612) said Child starts ^CManager got SIGINT }}} {{{ director myt random { { .backend = { .host = "95.108.231.122"; .port = "80"; .connect_timeout = 5s; .probe = { .request = "GET /tiles?l=sat&v=1.22.0&x=9583&y=5532&z=14&g=G HTTP/1.1" "Host: vec.maps.yandex.ru" "Connection: close"; .timeout = 10s; .window = 1; .threshold = 1; .interval = 1s; } } .weight = 10; } { .backend = { .host = "95.108.231.123"; .port = "80"; .connect_timeout = 5s; .probe = { .request = "GET /tiles?l=sat&v=1.22.0&x=9583&y=5532&z=14&g=G HTTP/1.1" "Host: vec.maps.yandex.ru" "Connection: close"; .timeout = 10s; .window = 1; .threshold = 1; .interval = 1s; } } .weight = 10; } } }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 27 10:09:59 2011 From: varnish-bugs at 
varnish-cache.org (Varnish) Date: Mon, 27 Jun 2011 10:09:59 -0000 Subject: [Varnish] #944: varnishncsa crash In-Reply-To: <043.9d9198dcc891ca943c00cd33918f2967@varnish-cache.org> References: <043.9d9198dcc891ca943c00cd33918f2967@varnish-cache.org> Message-ID: <052.ea8955c42ad1fbb17ecc74a417dba59c@varnish-cache.org> #944: varnishncsa crash -------------------------+-------------------------------------------------- Reporter: 191919 | Owner: tfheen Type: defect | Status: new Priority: high | Milestone: Component: varnishncsa | Version: 3.0.0 Severity: critical | Keywords: varnishncsa crash -------------------------+-------------------------------------------------- Changes (by tfheen): * owner: => tfheen -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 27 10:12:34 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 27 Jun 2011 10:12:34 -0000 Subject: [Varnish] #941: beresp.ttl not honred when set in vcl_fetch In-Reply-To: <042.621b2af1ec17465de36ac30afe14bc71@varnish-cache.org> References: <042.621b2af1ec17465de36ac30afe14bc71@varnish-cache.org> Message-ID: <051.03f88ce3b57ac3167360ed9f70d95d10@varnish-cache.org> #941: beresp.ttl not honred when set in vcl_fetch -------------------+-------------------------------------------------------- Reporter: david | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Description changed by phk: Old description: > Hello, > > I opened a forum report on this, but haven't gotten an answer. At this > point, I have performed enough troubleshooting to narrow this down to a > bug in Varnish. I apologize if this is just a change of behavior from > version 2 to 3, but I cannot find anything in the docs about this other > than the removing of beresp.cacheable. > > When I set beresp.ttl to a value, it is simply not used. 
My testing shows > that Varnish calculates an objects TTL based on two things: > 1. backend Cache-Control header (max-age parameter in my case). > 2. Default TTL passed to varnishd via -t run-time flag. > > beresp.ttl is not considered. I also attempted to arbitrarily set > beresp.http.Cache-Control to something with a different max-age than the > backend sent us. Varnish still uses the back-end's response regardless of > my setting. If the back-end did not supply a Cache-Control header, then > the default TTL is used. > > Let me know if you need more info than what is seen below. > > Regards, > -david > > Here is my test bed: > > DAEMON_OPTS="-a :80 \ > -T :6082 \ > -f /etc/varnish/varnish.conf \ > -u varnish -g varnish \ > -s file,/var/nish/02/varnish.cache,22G \ > -s file,/var/nish/03/varnish.cache,22G \ > -s file,/var/nish/04/varnish.cache,22G \ > -s file,/var/nish/05/varnish.cache,22G \ > -s file,/var/nish/06/varnish.cache,22G \ > -h classic,800009 \ > -p session_linger=100 \ > -p ban_lurker_sleep=0.000001 \ > -p thread_pool_add_delay=1 \ > -p thread_pool_max=4000 \ > -p thread_pool_min=300 \ > -p thread_pools=4 \ > -p listen_depth=2048 \ > -p sess_workspace=65536 \ > -p shm_workspace=16384 \ > -p ping_interval=8 \ > -p cli_timeout=20 \ > -p lru_interval=30 \ > -p http_gzip_support=off \ > -p sess_timeout=5 \ > -p http_req_hdr_len=4096 \ > -p syslog_cli_traffic=off \ > -t 30000" > > This is my entire VCL: > backend default { > .host = "172.21.4.125"; > .port = "80"; > .connect_timeout = 10s; > .first_byte_timeout = 35s; > .between_bytes_timeout = 5s; > } > > sub vcl_fetch { > set beresp.ttl = 1000s; > set beresp.http.X-WeAreHere = "here we are"; > return(deliver); > } > > This is the curl command I used: > time curl -x bil1-varn01:80 http://www.livejournal.com/ > > This is the snippet from varnishlog that is this request (notice the TTL > of 30000): > 16 StatSess c 172.21.4.18 53230 0 1 0 0 0 0 0 0 > 17 BackendClose - prodtest > 17 BackendOpen b prodtest 
172.21.4.182 33060 172.21.4.125 80 > 17 TxRequest b GET > 17 TxURL b http://www.livejournal.com/ > 17 TxProtocol b HTTP/1.1 > 17 TxHeader b User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) > libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 > 17 TxHeader b Host: www.livejournal.com > 17 TxHeader b Pragma: no-cache > 17 TxHeader b Accept: */* > 17 TxHeader b Proxy-Connection: Keep-Alive > 17 TxHeader b X-Forwarded-For: 172.21.0.32 > 17 TxHeader b X-Varnish: 1205010972 > 17 RxProtocol b HTTP/1.1 > 17 RxStatus b 200 > 17 RxResponse b OK > 17 RxHeader b Date: Mon, 20 Jun 2011 20:15:57 GMT > 17 RxHeader b Server: Apache/2.2.3 (CentOS) > 17 RxHeader b X-AWS-Id: ws50 > 17 RxHeader b Cache-Control: private, proxy-revalidate > 17 RxHeader b ETag: "da94603f920ddeddbba618fa22ade7ad" > 17 RxHeader b Content-length: 47178 > 17 RxHeader b Content-Type: text/html; charset=utf-8 > 17 RxHeader b Content-Language: en > 17 Fetch_Body b 4 0 1 > 17 Length b 47178 > 17 BackendReuse b prodtest > 16 SessionOpen c 172.21.0.32 53590 :80 > 16 ReqStart c 172.21.0.32 53590 1205010972 > 16 RxRequest c GET > 16 RxURL c http://www.livejournal.com/ > 16 RxProtocol c HTTP/1.1 > 16 RxHeader c User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) > libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 > 16 RxHeader c Host: www.livejournal.com > 16 RxHeader c Pragma: no-cache > 16 RxHeader c Accept: */* > 16 RxHeader c Proxy-Connection: Keep-Alive > 16 VCL_call c recv lookup > 16 VCL_call c hash > 16 Hash c http://www.livejournal.com/ > 16 Hash c www.livejournal.com > 16 VCL_return c hash > 16 VCL_call c miss fetch > 16 Backend c 17 prodtest prodtest > 16 TTL c 1205010972 RFC 30000 1308600958 0 0 0 0 > 16 VCL_call c fetch deliver > 16 ObjProtocol c HTTP/1.1 > 16 ObjResponse c OK > 16 ObjHeader c Date: Mon, 20 Jun 2011 20:15:57 GMT > 16 ObjHeader c Server: Apache/2.2.3 (CentOS) > 16 ObjHeader c X-AWS-Id: ws50 > 16 ObjHeader c Cache-Control: private, proxy-revalidate > 16 ObjHeader c ETag: 
"da94603f920ddeddbba618fa22ade7ad" > 16 ObjHeader c Content-Type: text/html; charset=utf-8 > 16 ObjHeader c Content-Language: en > 16 ObjHeader c X-WeAreHere: here we are > 16 VCL_call c deliver deliver > 16 TxProtocol c HTTP/1.1 > 16 TxStatus c 200 > 16 TxResponse c OK > 16 TxHeader c Server: Apache/2.2.3 (CentOS) > 16 TxHeader c X-AWS-Id: ws50 > 16 TxHeader c Cache-Control: private, proxy-revalidate > 16 TxHeader c ETag: "da94603f920ddeddbba618fa22ade7ad" > 16 TxHeader c Content-Type: text/html; charset=utf-8 > 16 TxHeader c Content-Language: en > 16 TxHeader c X-WeAreHere: here we are > 16 TxHeader c Content-Length: 47178 > 16 TxHeader c Accept-Ranges: bytes > 16 TxHeader c Date: Mon, 20 Jun 2011 20:15:57 GMT > 16 TxHeader c X-Varnish: 1205010972 > 16 TxHeader c Age: 0 > 16 TxHeader c Via: 1.1 varnish > 16 TxHeader c Connection: keep-alive > 16 Length c 47178 > 16 ReqEnd c 1205010972 1308600957.437629938 1308600957.725367069 > 0.000030994 0.286793947 0.000943184 > 16 Debug c herding > 16 SessionClose c no request New description: Hello, I opened a forum report on this, but haven't gotten an answer. At this point, I have performed enough troubleshooting to narrow this down to a bug in Varnish. I apologize if this is just a change of behavior from version 2 to 3, but I cannot find anything in the docs about this other than the removing of beresp.cacheable. When I set beresp.ttl to a value, it is simply not used. My testing shows that Varnish calculates an objects TTL based on two things: 1. backend Cache-Control header (max-age parameter in my case). 2. Default TTL passed to varnishd via -t run-time flag. beresp.ttl is not considered. I also attempted to arbitrarily set beresp.http.Cache-Control to something with a different max-age than the backend sent us. Varnish still uses the back-end's response regardless of my setting. If the back-end did not supply a Cache-Control header, then the default TTL is used. 
Let me know if you need more info than what is seen below. Regards, -david Here is my test bed: {{{ DAEMON_OPTS="-a :80 \ -T :6082 \ -f /etc/varnish/varnish.conf \ -u varnish -g varnish \ -s file,/var/nish/02/varnish.cache,22G \ -s file,/var/nish/03/varnish.cache,22G \ -s file,/var/nish/04/varnish.cache,22G \ -s file,/var/nish/05/varnish.cache,22G \ -s file,/var/nish/06/varnish.cache,22G \ -h classic,800009 \ -p session_linger=100 \ -p ban_lurker_sleep=0.000001 \ -p thread_pool_add_delay=1 \ -p thread_pool_max=4000 \ -p thread_pool_min=300 \ -p thread_pools=4 \ -p listen_depth=2048 \ -p sess_workspace=65536 \ -p shm_workspace=16384 \ -p ping_interval=8 \ -p cli_timeout=20 \ -p lru_interval=30 \ -p http_gzip_support=off \ -p sess_timeout=5 \ -p http_req_hdr_len=4096 \ -p syslog_cli_traffic=off \ -t 30000" This is my entire VCL: backend default { .host = "172.21.4.125"; .port = "80"; .connect_timeout = 10s; .first_byte_timeout = 35s; .between_bytes_timeout = 5s; } sub vcl_fetch { set beresp.ttl = 1000s; set beresp.http.X-WeAreHere = "here we are"; return(deliver); } This is the curl command I used: time curl -x bil1-varn01:80 http://www.livejournal.com/ This is the snippet from varnishlog that is this request (notice the TTL of 30000): 16 StatSess c 172.21.4.18 53230 0 1 0 0 0 0 0 0 17 BackendClose - prodtest 17 BackendOpen b prodtest 172.21.4.182 33060 172.21.4.125 80 17 TxRequest b GET 17 TxURL b http://www.livejournal.com/ 17 TxProtocol b HTTP/1.1 17 TxHeader b User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 17 TxHeader b Host: www.livejournal.com 17 TxHeader b Pragma: no-cache 17 TxHeader b Accept: */* 17 TxHeader b Proxy-Connection: Keep-Alive 17 TxHeader b X-Forwarded-For: 172.21.0.32 17 TxHeader b X-Varnish: 1205010972 17 RxProtocol b HTTP/1.1 17 RxStatus b 200 17 RxResponse b OK 17 RxHeader b Date: Mon, 20 Jun 2011 20:15:57 GMT 17 RxHeader b Server: Apache/2.2.3 (CentOS) 17 RxHeader b X-AWS-Id: ws50 17 
RxHeader b Cache-Control: private, proxy-revalidate 17 RxHeader b ETag: "da94603f920ddeddbba618fa22ade7ad" 17 RxHeader b Content-length: 47178 17 RxHeader b Content-Type: text/html; charset=utf-8 17 RxHeader b Content-Language: en 17 Fetch_Body b 4 0 1 17 Length b 47178 17 BackendReuse b prodtest 16 SessionOpen c 172.21.0.32 53590 :80 16 ReqStart c 172.21.0.32 53590 1205010972 16 RxRequest c GET 16 RxURL c http://www.livejournal.com/ 16 RxProtocol c HTTP/1.1 16 RxHeader c User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 16 RxHeader c Host: www.livejournal.com 16 RxHeader c Pragma: no-cache 16 RxHeader c Accept: */* 16 RxHeader c Proxy-Connection: Keep-Alive 16 VCL_call c recv lookup 16 VCL_call c hash 16 Hash c http://www.livejournal.com/ 16 Hash c www.livejournal.com 16 VCL_return c hash 16 VCL_call c miss fetch 16 Backend c 17 prodtest prodtest 16 TTL c 1205010972 RFC 30000 1308600958 0 0 0 0 16 VCL_call c fetch deliver 16 ObjProtocol c HTTP/1.1 16 ObjResponse c OK 16 ObjHeader c Date: Mon, 20 Jun 2011 20:15:57 GMT 16 ObjHeader c Server: Apache/2.2.3 (CentOS) 16 ObjHeader c X-AWS-Id: ws50 16 ObjHeader c Cache-Control: private, proxy-revalidate 16 ObjHeader c ETag: "da94603f920ddeddbba618fa22ade7ad" 16 ObjHeader c Content-Type: text/html; charset=utf-8 16 ObjHeader c Content-Language: en 16 ObjHeader c X-WeAreHere: here we are 16 VCL_call c deliver deliver 16 TxProtocol c HTTP/1.1 16 TxStatus c 200 16 TxResponse c OK 16 TxHeader c Server: Apache/2.2.3 (CentOS) 16 TxHeader c X-AWS-Id: ws50 16 TxHeader c Cache-Control: private, proxy-revalidate 16 TxHeader c ETag: "da94603f920ddeddbba618fa22ade7ad" 16 TxHeader c Content-Type: text/html; charset=utf-8 16 TxHeader c Content-Language: en 16 TxHeader c X-WeAreHere: here we are 16 TxHeader c Content-Length: 47178 16 TxHeader c Accept-Ranges: bytes 16 TxHeader c Date: Mon, 20 Jun 2011 20:15:57 GMT 16 TxHeader c X-Varnish: 1205010972 16 TxHeader c Age: 0 16 TxHeader c 
Via: 1.1 varnish 16 TxHeader c Connection: keep-alive 16 Length c 47178 16 ReqEnd c 1205010972 1308600957.437629938 1308600957.725367069 0.000030994 0.286793947 0.000943184 16 Debug c herding 16 SessionClose c no request }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 27 10:24:32 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 27 Jun 2011 10:24:32 -0000 Subject: [Varnish] #941: beresp.ttl not honred when set in vcl_fetch In-Reply-To: <042.621b2af1ec17465de36ac30afe14bc71@varnish-cache.org> References: <042.621b2af1ec17465de36ac30afe14bc71@varnish-cache.org> Message-ID: <051.6f4548ed80d1907ea825651bad9788e4@varnish-cache.org> #941: beresp.ttl not honred when set in vcl_fetch --------------------+------------------------------------------------------- Reporter: david | Owner: kristian Type: defect | Status: assigned Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Keywords: --------------------+------------------------------------------------------- Changes (by kristian): * owner: => kristian * status: new => assigned Comment: Looking into this. I've been able to reproduce it by just using www.livejournal.com as backend, so that makes it somewhat easier to test/track down. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 27 10:27:38 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 27 Jun 2011 10:27:38 -0000 Subject: [Varnish] #941: beresp.ttl not honred when set in vcl_fetch In-Reply-To: <042.621b2af1ec17465de36ac30afe14bc71@varnish-cache.org> References: <042.621b2af1ec17465de36ac30afe14bc71@varnish-cache.org> Message-ID: <051.1928512780aa08df1f90ef99031a1a62@varnish-cache.org> #941: beresp.ttl not honred when set in vcl_fetch --------------------+------------------------------------------------------- Reporter: david | Owner: kristian Type: defect | Status: assigned Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Keywords: --------------------+------------------------------------------------------- Comment(by kristian): Ok, so this is just logging. Setting the default ttl to 5 seconds using your VCL, the content is definitely cached longer than 5 seconds, even if the log is incorrect. In other words: The log is lying and your VCL is working. Still a bug, fix coming up, though. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 27 10:33:02 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 27 Jun 2011 10:33:02 -0000 Subject: [Varnish] #941: beresp.ttl not honred when set in vcl_fetch In-Reply-To: <042.621b2af1ec17465de36ac30afe14bc71@varnish-cache.org> References: <042.621b2af1ec17465de36ac30afe14bc71@varnish-cache.org> Message-ID: <051.9fbc561bb6430ddc3c6fb952b209f869@varnish-cache.org> #941: beresp.ttl not honred when set in vcl_fetch --------------------+------------------------------------------------------- Reporter: david | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Keywords: --------------------+------------------------------------------------------- Changes (by martin): * owner: kristian => martin * status: assigned => new -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 27 10:33:05 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 27 Jun 2011 10:33:05 -0000 Subject: [Varnish] #942: Varnish stalls receiving a "weird" response from a backend In-Reply-To: <048.2b99f488280e4547651595dbf7791ba6@varnish-cache.org> References: <048.2b99f488280e4547651595dbf7791ba6@varnish-cache.org> Message-ID: <057.3bf73809f8250c4295b419daed0497b3@varnish-cache.org> #942: Varnish stalls receiving a "weird" response from a backend -------------------------+-------------------------------------------------- Reporter: andreacampi | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Keywords: -------------------------+-------------------------------------------------- Changes (by kristian): * owner: => martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 27 10:33:58 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: 
Mon, 27 Jun 2011 10:33:58 -0000 Subject: [Varnish] #942: Varnish stalls receiving a "weird" response from a backend In-Reply-To: <048.2b99f488280e4547651595dbf7791ba6@varnish-cache.org> References: <048.2b99f488280e4547651595dbf7791ba6@varnish-cache.org> Message-ID: <057.6ee00f728dbb5ec84b41e9536f932602@varnish-cache.org> #942: Varnish stalls receiving a "weird" response from a backend -------------------------+-------------------------------------------------- Reporter: andreacampi | Owner: kristian Type: defect | Status: assigned Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Keywords: -------------------------+-------------------------------------------------- Changes (by kristian): * owner: martin => kristian * status: new => assigned -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 27 10:34:07 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 27 Jun 2011 10:34:07 -0000 Subject: [Varnish] #942: Varnish stalls receiving a "weird" response from a backend In-Reply-To: <048.2b99f488280e4547651595dbf7791ba6@varnish-cache.org> References: <048.2b99f488280e4547651595dbf7791ba6@varnish-cache.org> Message-ID: <057.a3798d45cc0075dd4393168c7219efd0@varnish-cache.org> #942: Varnish stalls receiving a "weird" response from a backend -------------------------+-------------------------------------------------- Reporter: andreacampi | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Keywords: -------------------------+-------------------------------------------------- Changes (by kristian): * status: assigned => new * owner: kristian => -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Jun 27 11:22:15 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 27 Jun 2011 11:22:15 -0000 Subject: [Varnish] #944: varnishncsa crash In-Reply-To: 
<043.9d9198dcc891ca943c00cd33918f2967@varnish-cache.org> References: <043.9d9198dcc891ca943c00cd33918f2967@varnish-cache.org> Message-ID: <052.43c83185f4633f15c7337111e657c2e4@varnish-cache.org> #944: varnishncsa crash -------------------------+-------------------------------------------------- Reporter: 191919 | Owner: tfheen Type: defect | Status: new Priority: high | Milestone: Component: varnishncsa | Version: 3.0.0 Severity: critical | Keywords: varnishncsa crash -------------------------+-------------------------------------------------- Description changed by tfheen: Old description: > varnishncsa crashes when logging hitmiss information > > (gdb) r -F "%h %l %u %t "%r" %s %b \"%{Referer}i\" \"%{User-agent}i\" > %{Varnish:hitmiss}x %{Varnish:handling}x" -w 111 > Starting program: /root/varnish-3.0.0/bin/varnishncsa/.libs/varnishncsa > -F "%h %l %u %t "%r" %s %b \"%{Referer}i\" \"%{User-agent}i\" > %{Varnish:hitmiss}x %{Varnish:handling}x" -w 111 > warning: no loadable sections found in added symbol-file system-supplied > DSO at 0x2aaaaaaab000 > [Thread debugging using libthread_db enabled] > > Program received signal SIGSEGV, Segmentation fault. > 0x0000003eddc79b80 in strlen () from /lib64/libc.so.6 > (gdb) where > #0 0x0000003eddc79b80 in strlen () from /lib64/libc.so.6 > #1 0x0000003eddc61bce in fputs () from /lib64/libc.so.6 > #2 0x0000000000403075 in h_ncsa (priv=0x606be0, tag=SLT_ReqEnd, fd=66, > len=88, spec=1, > ptr=0x2aaaad408c00 "1160660892 1308816831.106726885 > 1308816831.106859922 0.761338949 0.000108004 0.000025034\005", bitmap=0) > at varnishncsa.c:657 > #3 0x00002aaaaaccf241 in VSL_Dispatch () from > /usr/lib64/libvarnishapi.so.1 > #4 0x0000000000403577 in main (argc=5, argv=0x7fffffffe938) at > varnishncsa.c:817 > (gdb) print lp > No symbol "lp" in current context. 
> (gdb) frame 2 > #2 0x0000000000403075 in h_ncsa (priv=0x606be0, tag=SLT_ReqEnd, fd=66, > len=88, spec=1, ptr=0x2aaaad408c00 "omid=178993267cn\b", bitmap=0) at > varnishncsa.c:657 > 657 fprintf(fo, "%s", > lp->df_hitmiss); > (gdb) print lp > $1 = (struct logline *) 0x6090b0 > (gdb) print lp->df_hitmiss > $2 = 0x0 > > lp->df_hitmiss is initialized but not filled. New description: varnishncsa crashes when logging hitmiss information {{{ (gdb) r -F "%h %l %u %t "%r" %s %b \"%{Referer}i\" \"%{User-agent}i\" %{Varnish:hitmiss}x %{Varnish:handling}x" -w 111 Starting program: /root/varnish-3.0.0/bin/varnishncsa/.libs/varnishncsa -F "%h %l %u %t "%r" %s %b \"%{Referer}i\" \"%{User-agent}i\" %{Varnish:hitmiss}x %{Varnish:handling}x" -w 111 warning: no loadable sections found in added symbol-file system-supplied DSO at 0x2aaaaaaab000 [Thread debugging using libthread_db enabled] Program received signal SIGSEGV, Segmentation fault. 0x0000003eddc79b80 in strlen () from /lib64/libc.so.6 (gdb) where #0 0x0000003eddc79b80 in strlen () from /lib64/libc.so.6 #1 0x0000003eddc61bce in fputs () from /lib64/libc.so.6 #2 0x0000000000403075 in h_ncsa (priv=0x606be0, tag=SLT_ReqEnd, fd=66, len=88, spec=1, ptr=0x2aaaad408c00 "1160660892 1308816831.106726885 1308816831.106859922 0.761338949 0.000108004 0.000025034\005", bitmap=0) at varnishncsa.c:657 #3 0x00002aaaaaccf241 in VSL_Dispatch () from /usr/lib64/libvarnishapi.so.1 #4 0x0000000000403577 in main (argc=5, argv=0x7fffffffe938) at varnishncsa.c:817 (gdb) print lp No symbol "lp" in current context. (gdb) frame 2 #2 0x0000000000403075 in h_ncsa (priv=0x606be0, tag=SLT_ReqEnd, fd=66, len=88, spec=1, ptr=0x2aaaad408c00 "omid=178993267cn\b", bitmap=0) at varnishncsa.c:657 657 fprintf(fo, "%s", lp->df_hitmiss); (gdb) print lp $1 = (struct logline *) 0x6090b0 (gdb) print lp->df_hitmiss $2 = 0x0 }}} lp->df_hitmiss is initialized but not filled. 
-- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 28 10:40:06 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 28 Jun 2011 10:40:06 -0000 Subject: [Varnish] #946: ExpKill disappeared from exp_timer in 3.0 Message-ID: <042.177cc31fd16288dfa6d14ef04c0e001c@varnish-cache.org> #946: ExpKill disappeared from exp_timer in 3.0 -------------------+-------------------------------------------------------- Reporter: scoof | Type: defect Status: new | Priority: low Milestone: | Component: build Version: 3.0.0 | Severity: trivial Keywords: | -------------------+-------------------------------------------------------- The ExpKill log could be used to see when a particular item was expired from the cache, but it's disappeared in 3.0. Could we please have it back? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Jun 28 10:48:35 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 28 Jun 2011 10:48:35 -0000 Subject: [Varnish] #947: varnishncsa dies on SIGHUP Message-ID: <042.abc7c4a4622a2fd6353b2c785d04e210@varnish-cache.org> #947: varnishncsa dies on SIGHUP -------------------+-------------------------------------------------------- Reporter: ljorg | Type: defect Status: new | Priority: normal Milestone: | Component: varnishncsa Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Sending a "kill -HUP" to varnishncsa makes it die with this: Assert error in vsl_nextlog(), vsl.c line 187: Condition((usleep((50*1000))) == 0) not true. 
errno = 4 (Interrupted system call) Aborted (core dumped) Varnishncsa is started with varnishncsa -P /var/run/varnishncsa.pid -a -w /var/log/httpd/access_log -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 29 10:41:40 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 29 Jun 2011 10:41:40 -0000 Subject: [Varnish] #848: varnishlog -r seems broken In-Reply-To: <042.f4d125bf7db6e317d56c6e53ae887681@varnish-cache.org> References: <042.f4d125bf7db6e317d56c6e53ae887681@varnish-cache.org> Message-ID: <051.344dada5834e0970a63cb6c2edba8713@varnish-cache.org> #848: varnishlog -r seems broken ------------------------+--------------------------------------------------- Reporter: perbu | Owner: kristian Type: defect | Status: new Priority: normal | Milestone: Component: varnishlog | Version: trunk Severity: normal | Keywords: varnishlog ------------------------+--------------------------------------------------- Changes (by kristian): * milestone: Varnish 3.0 dev => Comment: I was finally able to re-create this, after Mortsa pinged me about it. It seems -r is rather broken, and still tries to read the actual shmlog (?) or some combination, instead of the real file... This is a mental note for self, more or less, but here's some examples: (the foo.vlog gets some data printed to it that matches the size): {{{ kristian at freud:~$ varnishlog -w foo.vlog -n /home/kristian/tmp/asf/ -O Assert error in vsl_nextlog(), vsl.c line 187: Condition((usleep((50*1000))) == 0) not true. 
errno = 4 (Interrupted system call) Aborted kristian at freud:~$ varnishlog -r foo.vlog -n /home/kristian/tmp/asf/ -d kristian at freud:~$ varnishlog -r foo.vlog Cannot open /usr/local/var/varnish/freud/_.vsm: No such file or directory kristian at freud:~$ varnishlog -r foo.vlog -d Cannot open /usr/local/var/varnish/freud/_.vsm: No such file or directory kristian at freud:~$ varnishlog -r foo.vlog Cannot open /usr/local/var/varnish/freud/_.vsm: No such file or directory kristian at freud:~$ varnishlog -O -r foo.vlog Cannot open /usr/local/var/varnish/freud/_.vsm: No such file or directory kristian at freud:~$ ls -lh foo.vlog -rw-r--r-- 1 kristian kristian 26K 2011-06-29 12:36 foo.vlog kristian at freud:~$ strings foo.vlog | head -n2 Rd ping Wr 200 19 PONG 1309343755 1.0 kristian at freud:~$ strings foo.vlog | head -n5 Rd ping Wr 200 19 PONG 1309343755 1.0 Rd ping Wr 200 19 PONG 1309343758 1.0 127.0.0.1 56886 :8000 kristian at freud:~$ }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 29 16:50:18 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 29 Jun 2011 16:50:18 -0000 Subject: [Varnish] #948: Using multiple client.ip calls results in duplicate match_acl_anon_* definitions Message-ID: <046.6c654bc18e4929c25efda08958dd6bef@varnish-cache.org> #948: Using multiple client.ip calls results in duplicate match_acl_anon_* definitions -----------------------+---------------------------------------------------- Reporter: mademedia | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: client.ip | -----------------------+---------------------------------------------------- Attempts to use client.ip more than once result in multiple declarations of match_acl_anon_* VCL: {{{ # Backend definition backend default { .host = "127.0.0.1"; .port = "8080"; } sub vcl_recv { if(client.ip != "1.2.3.4" && client.ip != "5.6.7.8"){ error 403 "Website 
unavailable"; } } }}} Output from debug compile: {{{ Message from C-compiler: ./vcl.pIxaTHp7.c:456: error: redefinition of ‘match_acl_anon_1’ ./vcl.pIxaTHp7.c:425: error: previous definition of ‘match_acl_anon_1’ was here Running C-compiler failed, exit 1 }}} Submitted at the request of PHK in IRC. Version installed was "varnishd (varnish-3.0.0 revision 3bd5997)" installed from RHEL repo. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Jun 29 22:10:58 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 29 Jun 2011 22:10:58 -0000 Subject: [Varnish] #948: Using multiple client.ip calls results in duplicate match_acl_anon_* definitions In-Reply-To: <046.6c654bc18e4929c25efda08958dd6bef@varnish-cache.org> References: <046.6c654bc18e4929c25efda08958dd6bef@varnish-cache.org> Message-ID: <055.a23bc5b9d1e2d27aeb0f43995b25166c@varnish-cache.org> #948: Using multiple client.ip calls results in duplicate match_acl_anon_* definitions -----------------------+---------------------------------------------------- Reporter: mademedia | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: client.ip | -----------------------+---------------------------------------------------- Comment(by drwilco): All anon acls within a single compound expression would use the same ID when generating both the acl function and the function call; my patch adds a separate counter for anon acls, which is incremented for each anon acl. 
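Until a fix for the anonymous-ACL numbering lands, one sketch of a workaround for the reporter's VCL (assuming Varnish 3 syntax; the backend and IP addresses are the reporter's own examples) is to match against a single named ACL, which should sidestep the inline comparisons that generate the clashing `match_acl_anon_*` functions:

```vcl
# Workaround sketch (assumed, untested against the reporter's setup):
# a named ACL instead of two inline client.ip comparisons.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

acl trusted {
    "1.2.3.4";
    "5.6.7.8";
}

sub vcl_recv {
    if (client.ip !~ trusted) {
        error 403 "Website unavailable";
    }
}
```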
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 30 08:32:46 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 30 Jun 2011 08:32:46 -0000 Subject: [Varnish] #948: Using multiple client.ip calls results in duplicate match_acl_anon_* definitions In-Reply-To: <046.6c654bc18e4929c25efda08958dd6bef@varnish-cache.org> References: <046.6c654bc18e4929c25efda08958dd6bef@varnish-cache.org> Message-ID: <055.4dd84a506820368683edfe2b81019b82@varnish-cache.org> #948: Using multiple client.ip calls results in duplicate match_acl_anon_* definitions ------------------------+--------------------------------------------------- Reporter: mademedia | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Resolution: fixed | Keywords: client.ip ------------------------+--------------------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [01d9c22319c04940aff6743a9ca9e9a474cd194c]) Fix a compiler fault if two IP comparisons were on the same line of source code. This is substantially DocWilco's fix, but instead of adding yet another unique numbering variable, I have collapsed the ones we have into a single one.
Fixes #948 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 30 08:53:33 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 30 Jun 2011 08:53:33 -0000 Subject: [Varnish] #949: %b format gives wrong output for zero-size content Message-ID: <042.210d1826f0c6be0ec5841b5f6325035b@varnish-cache.org> #949: %b format gives wrong output for zero-size content -------------------+-------------------------------------------------------- Reporter: ljorg | Type: defect Status: new | Priority: normal Milestone: | Component: varnishncsa Version: 3.0.0 | Severity: normal Keywords: | -------------------+-------------------------------------------------------- The manual page for varnishncsa says that %b format gives "Size of response in bytes, excluding HTTP headers. In CLF format, i.e. a '-' rather than a 0 when no bytes are sent." But %b does give a 0, not a '-' -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 30 09:11:19 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 30 Jun 2011 09:11:19 -0000 Subject: [Varnish] #941: beresp.ttl not honored when set in vcl_fetch In-Reply-To: <042.621b2af1ec17465de36ac30afe14bc71@varnish-cache.org> References: <042.621b2af1ec17465de36ac30afe14bc71@varnish-cache.org> Message-ID: <051.468234e1a8b795329a8bd3f1597f5b76@varnish-cache.org> #941: beresp.ttl not honored when set in vcl_fetch --------------------+------------------------------------------------------- Reporter: david | Owner: martin Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | --------------------+------------------------------------------------------- Changes (by Martin Blix Grydeland ): * status: new => closed * resolution: => fixed Comment: (In [90b7074b4cad090743f5a9797007ab448fa95a39]) Reintroduce TTL VCL logging that was lost in commit
a21746d23d4047ce209a0c283e12ff684f478b72. Fixes: #941 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 30 09:51:18 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 30 Jun 2011 09:51:18 -0000 Subject: [Varnish] #945: segfault at vcl.discard In-Reply-To: <043.dab354f7b9a42cd11731107d8850bc60@varnish-cache.org> References: <043.dab354f7b9a42cd11731107d8850bc60@varnish-cache.org> Message-ID: <052.bd220428848b688f486052deda001f66@varnish-cache.org> #945: segfault at vcl.discard --------------------+------------------------------------------------------- Reporter: elurin | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: | --------------------+------------------------------------------------------- Description changed by phk: Old description: > Varnish 3.0 crash at vcl.discard operation: > > vcl.load testname /etc/varnish/beta-maps.vcl > 200 13 > VCL compiled. > vcl.discard testname > Child (19143) not responding to CLI, killing it. > 400 29 > CLI communication error (hdr) > Child (19143) died signal=6 > Child (19143) Panic message: Assert error in VBP_Stop(), > cache_backend_poll.c line 543: > Condition((vcl) != 0) not true.
> thread = (cache-main) > ident = Linux,2.6.33-5-server,x86_64,-smalloc,-smalloc,-hcritbit,epoll > Backtrace: > 0x42c247: /usr/sbin/varnishd [0x42c247] > 0x411c04: /usr/sbin/varnishd(VBP_Stop+0x144) [0x411c04] > 0x40fae8: /usr/sbin/varnishd [0x40fae8] > 0x4109af: /usr/sbin/varnishd(VRT_fini_dir+0x6f) [0x4109af] > 0x7fb9c62fb0aa: ./vcl.2ejNwRGh.so [0x7fb9c62fb0aa] > 0x43206f: /usr/sbin/varnishd [0x43206f] > 0x7fb9fc21bf38: /usr/lib/varnish/libvarnish.so [0x7fb9fc21bf38] > 0x7fb9fc21c2af: /usr/lib/varnish/libvarnish.so [0x7fb9fc21c2af] > 0x7fb9fc21ef07: /usr/lib/varnish/libvarnish.so [0x7fb9fc21ef07] > 0x7fb9fc21b0c9: /usr/lib/varnish/libvarnish.so(VCLS_Poll+0x179) > [0x7fb9fc21b0c9] > > Child cleanup complete > child (19612) Started > Child (19612) said Child starts > > I have some directors in my config, like: > director test random { > { > .backend = { > .host = "192.168.0.1"; > .port = "80"; > .connect_timeout = 5s; > .probe = { > .request = > "GET /tiles?query HTTP/1.1" > "Host: mytest.ru" > "Connection: close"; > .timeout = 10s; > .window = 1; > .threshold = 1; > .interval = 1s; > } > } > .weight = 10; > } > { > .backend = { > .host = "192.168.0.2"; > .port = "80"; > .connect_timeout = 5s; > .probe = { > .request = > "GET /tiles?query HTTP/1.1" > "Host: mytest.ru" > "Connection: close"; > .timeout = 10s; > .window = 1; > .threshold = 1; > .interval = 1s; > } > } > .weight = 10; > } > } New description: Varnish 3.0 crash at vcl.discard operation: {{{ vcl.load testname /etc/varnish/beta-maps.vcl 200 13 VCL compiled. vcl.discard testname Child (19143) not responding to CLI, killing it. 400 29 CLI communication error (hdr) Child (19143) died signal=6 Child (19143) Panic message: Assert error in VBP_Stop(), cache_backend_poll.c line 543: Condition((vcl) != 0) not true.
thread = (cache-main) ident = Linux,2.6.33-5-server,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42c247: /usr/sbin/varnishd [0x42c247] 0x411c04: /usr/sbin/varnishd(VBP_Stop+0x144) [0x411c04] 0x40fae8: /usr/sbin/varnishd [0x40fae8] 0x4109af: /usr/sbin/varnishd(VRT_fini_dir+0x6f) [0x4109af] 0x7fb9c62fb0aa: ./vcl.2ejNwRGh.so [0x7fb9c62fb0aa] 0x43206f: /usr/sbin/varnishd [0x43206f] 0x7fb9fc21bf38: /usr/lib/varnish/libvarnish.so [0x7fb9fc21bf38] 0x7fb9fc21c2af: /usr/lib/varnish/libvarnish.so [0x7fb9fc21c2af] 0x7fb9fc21ef07: /usr/lib/varnish/libvarnish.so [0x7fb9fc21ef07] 0x7fb9fc21b0c9: /usr/lib/varnish/libvarnish.so(VCLS_Poll+0x179) [0x7fb9fc21b0c9] Child cleanup complete child (19612) Started Child (19612) said Child starts I have some directors in my config, like: director test random { { .backend = { .host = "192.168.0.1"; .port = "80"; .connect_timeout = 5s; .probe = { .request = "GET /tiles?query HTTP/1.1" "Host: mytest.ru" "Connection: close"; .timeout = 10s; .window = 1; .threshold = 1; .interval = 1s; } } .weight = 10; } { .backend = { .host = "192.168.0.2"; .port = "80"; .connect_timeout = 5s; .probe = { .request = "GET /tiles?query HTTP/1.1" "Host: mytest.ru" "Connection: close"; .timeout = 10s; .window = 1; .threshold = 1; .interval = 1s; } } .weight = 10; } } }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 30 11:02:08 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 30 Jun 2011 11:02:08 -0000 Subject: [Varnish] #946: ExpKill disappeared from exp_timer in 3.0 In-Reply-To: <042.177cc31fd16288dfa6d14ef04c0e001c@varnish-cache.org> References: <042.177cc31fd16288dfa6d14ef04c0e001c@varnish-cache.org> Message-ID: <051.e3e598ff15399a5d6ec455a769f12b17@varnish-cache.org> #946: ExpKill disappeared from exp_timer in 3.0 -------------------+-------------------------------------------------------- Reporter: scoof | Type: defect Status: new | Priority: low Milestone: | Component: build
Version: 3.0.0 | Severity: trivial Keywords: | -------------------+-------------------------------------------------------- Comment(by martin): Commit bc2f3f06764bec1aca49a0d21929838646bbcb3a seems to have removed these log lines -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Jun 30 11:07:42 2011 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 30 Jun 2011 11:07:42 -0000 Subject: [Varnish] #945: segfault at vcl.discard In-Reply-To: <043.dab354f7b9a42cd11731107d8850bc60@varnish-cache.org> References: <043.dab354f7b9a42cd11731107d8850bc60@varnish-cache.org> Message-ID: <052.1d6387d6d3e821ac9638b9cf6eb5502f@varnish-cache.org> #945: segfault at vcl.discard ---------------------+------------------------------------------------------ Reporter: elurin | Type: defect Status: closed | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Resolution: fixed | Keywords: ---------------------+------------------------------------------------------ Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [7209a66e764e7afd9d8ca179494b1656f0ec0c9b]) Split registration and selection of backend poll functions into two different functions. I have not managed to write a vtc case for this one, but I am pretty sure this: Fixes: #945 -- Ticket URL: Varnish The Varnish HTTP Accelerator