From varnish-bugs at varnish-cache.org Wed Aug 1 00:03:07 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 01 Aug 2012 00:03:07 -0000 Subject: [Varnish] #1180: Restart in vcl_miss increases miss count even for hits and passes. In-Reply-To: <043.71c1a3fa3d2e6ef741b0d3c2e99e38f1@varnish-cache.org> References: <043.71c1a3fa3d2e6ef741b0d3c2e99e38f1@varnish-cache.org> Message-ID: <058.03bcdb6266269cdebe42f601ca6c04c1@varnish-cache.org> #1180: Restart in vcl_miss increases miss count even for hits and passes. --------------------+---------------------- Reporter: david | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.2 Severity: normal | Resolution: invalid Keywords: | --------------------+---------------------- Comment (by david): This behavior makes it basically impossible to determine your actual request cache hit/miss ratio. I understand the logic behind the behavior, but I think it's nearly useless in any real-world scenario. I cannot implement the VCL I provided here because of the "invalid" hit ratio numbers I get from varnishstat. The whole point of this VCL is to increase my hit ratio by serving banned users a cached "you're banned" page. Instead of increasing, the hit ratio dropped to 1/10th of what it was. If you intend to keep this behavior, I'm done complaining, but I at least think changing this is worth considering and discussing. If an upcoming version has a way to check two hashes in one request without restarting, then I can certainly wait for that. Regards, -david -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 1 17:07:42 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 01 Aug 2012 17:07:42 -0000 Subject: [Varnish] #1181: ..... Message-ID: <053.303bf82319c80341325b1521fafc1450@varnish-cache.org> #1181: ..... 
-----------------------------+-------------------- Reporter: trogacopcas1973 | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.0 | Severity: normal Keywords: key1,key2 | -----------------------------+-------------------- _, Epson Care V300 Photo is truly a broadband internet piece of content reader. It would possibly check out shoot, pix yet 3-d targets in astonishing purity details 1 ) you see , the value-priced Epson Efficiency V300 Shot makes it much simpler than ever before with the help of 4800 dpi eye settlement and also organizer most typically associated with family- friendly aspects. And thus, reveal create the many faded relatives portraits back again, require Epson Faultlessness V300 Pic to simply rebuild the type. And, it standard created then one-touch checking. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 1 17:25:17 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 01 Aug 2012 17:25:17 -0000 Subject: [Varnish] #1181: ..... In-Reply-To: <053.303bf82319c80341325b1521fafc1450@varnish-cache.org> References: <053.303bf82319c80341325b1521fafc1450@varnish-cache.org> Message-ID: <068.580e8b9123056cfb7e7e0eda3cb837d0@varnish-cache.org> #1181: ..... -----------------------------+---------------------- Reporter: trogacopcas1973 | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: invalid Keywords: key1,key2 | -----------------------------+---------------------- Changes (by daghf): * status: new => closed * resolution: => invalid Old description: > _, Epson Care V300 Photo is truly a broadband internet piece of content > reader. 
It would possibly check out shoot, pix yet 3-d targets in > astonishing purity details 1 ) you see , the value-priced Epson > Efficiency V300 Shot makes it much simpler than ever before with the help > of 4800 dpi eye settlement and also organizer most typically associated > with family-friendly aspects. And thus, reveal create the many faded > relatives portraits back again, require Epson Faultlessness V300 Pic to > simply rebuild the type. And, it standard created then one-touch > checking. New description: Spam -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 2 12:45:51 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 02 Aug 2012 12:45:51 -0000 Subject: [Varnish] #1166: Varnishadm throws an assert In-Reply-To: <051.5814c781387d31eb173c4ed622e0ca4c@varnish-cache.org> References: <051.5814c781387d31eb173c4ed622e0ca4c@varnish-cache.org> Message-ID: <066.6e55e3bbef3a0efc3a3e32a548415b3c@varnish-cache.org> #1166: Varnishadm throws an assert ------------------------+-------------------- Reporter: rj@? | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishadm | Version: 3.0.2 Severity: normal | Resolution: Keywords: varnishadm | ------------------------+-------------------- Comment (by lfboulanger): Got the same error, although I left the terminal for several hours, so I am not sure how soon it happened: {{{ root at chablis:/etc/varnish# varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret 200 ----------------------------- Varnish Cache CLI 1.0 ----------------------------- Linux,3.4.2-x86_64-linode25,x86_64,-sfile,-smalloc,-hcritbit Type 'help' for command list. Type 'quit' to close CLI session. varnish> vcl.load bonlook5 /etc/varnish/bonlook.vcl 200 VCL compiled. varnish> vcl.use bonlook5 200 varnish> Assert error in pass(), varnishadm.c line 219: Condition(i > 0) not true. 
errno = 4 (Interrupted system call) Aborted (core dumped) root at chablis:/etc/varnish# }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 2 17:59:35 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 02 Aug 2012 17:59:35 -0000 Subject: [Varnish] #1182: Unknown requests Message-ID: <049.7ec7d629c23aa79f9c7ed245eaaa521a@varnish-cache.org> #1182: Unknown requests ------------------------------+--------------------------- Reporter: raymondjiii | Type: documentation Status: new | Priority: normal Milestone: Later | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: Unknown requests | ------------------------------+--------------------------- I am currently getting about a 10% hit rate and a 10% cache rate, so what are the other 80% of requests? I would have thought that varnishstat's output of hits and misses would be equal to the number of requests unless some serious error occurred? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 2 18:14:33 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 02 Aug 2012 18:14:33 -0000 Subject: [Varnish] #1182: Unknown requests In-Reply-To: <049.7ec7d629c23aa79f9c7ed245eaaa521a@varnish-cache.org> References: <049.7ec7d629c23aa79f9c7ed245eaaa521a@varnish-cache.org> Message-ID: <064.e978c8069ab9d4dbdebdc4791aeb65a3@varnish-cache.org> #1182: Unknown requests ------------------------------+---------------------- Reporter: raymondjiii | Owner: Type: documentation | Status: closed Priority: normal | Milestone: Later Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: invalid Keywords: Unknown requests | ------------------------------+---------------------- Changes (by scoof): * status: new => closed * resolution: => invalid Comment: Hits and misses are for lookups in the cache, and don't necessarily map to a request. 
Please only use the bug tracker for bugs, and use the mailing lists for questions. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 6 10:51:11 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 06 Aug 2012 10:51:11 -0000 Subject: [Varnish] #1166: Varnishadm throws an assert In-Reply-To: <051.5814c781387d31eb173c4ed622e0ca4c@varnish-cache.org> References: <051.5814c781387d31eb173c4ed622e0ca4c@varnish-cache.org> Message-ID: <066.81771fc89f7215cce9f49a6d260423ea@varnish-cache.org> #1166: Varnishadm throws an assert ------------------------+------------------------- Reporter: rj@? | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishadm | Version: 3.0.2 Severity: normal | Resolution: worksforme Keywords: varnishadm | ------------------------+------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: With a negative 3rd argument, the return value from poll(2) should never be zero, so we must assume it was -1 and that EINTR is correct. So I think somebody or something sent your varnishadm a signal, and there really isn't much we can do about that... If you can reproduce it, try to run varnishadm with ktrace/truss/strace to see what the signal is and possibly who sent it. 
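phk's diagnosis above is that poll(2) returned -1 with errno EINTR because a signal interrupted the call. The usual defensive pattern is to retry the interrupted call rather than treat it as fatal; a minimal sketch of that pattern in C (not the actual varnishadm code, which asserts on the condition instead):

```c
#include <errno.h>
#include <poll.h>

/* Retry poll() when a signal interrupts it, instead of treating
 * EINTR as a hard error.  Returns poll()'s result once the call
 * completes without being interrupted. */
static int
poll_retry(struct pollfd *fds, nfds_t nfds, int timeout)
{
	int i;

	do {
		i = poll(fds, nfds, timeout);
	} while (i < 0 && errno == EINTR);
	return (i);
}
```

With this shape, a stray signal delivered to the process merely restarts the wait instead of tripping an assert in the CLI loop.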
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 7 10:24:19 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 07 Aug 2012 10:24:19 -0000 Subject: [Varnish] #1183: n_wrk_max has a random initial value Message-ID: <042.46243fb87ad234c1708669d6634165fa@varnish-cache.org> #1183: n_wrk_max has a random initial value -------------------+---------------------- Reporter: dlec | Type: defect Status: new | Priority: low Milestone: | Component: varnishd Version: 3.0.2 | Severity: minor Keywords: | -------------------+---------------------- I seem to get a random initial value for n_wrk_max with varnishstat, while I expect a value of zero. # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 349675 17483.75 N worker threads limited # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 314399 314399.00 N worker threads limited # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 2384 inf N worker threads limited # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 2146 357.67 N worker threads limited # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 281910 inf N worker threads limited # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 254622 inf N worker threads limited # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 354177 inf N worker threads limited # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 0 0.00 N worker threads limited command: /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/varnish.vcl -T 127.0.0.1:6082 -t 120 -w 1250,1500,120 -p thread_pool_add_delay 1 -p thread_pools 4 -p sess_timeout 5 -u varnish -g varnish -s Transient=malloc,200M -s malloc,8G source: http://repo.varnish- cache.org/source/varnish-3.0.2-streaming.tar.gz os: CentOS 5.8 x86_64 hw: vm (VMware ESX 4.1), 10 GB, 4 vCPU -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs 
at varnish-cache.org Tue Aug 7 21:09:33 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 07 Aug 2012 21:09:33 -0000 Subject: [Varnish] #1184: Assert error in vfp_esi_bytes_gg Message-ID: <050.846db86104780f64b0a0b8e845cc697e@varnish-cache.org> #1184: Assert error in vfp_esi_bytes_gg -----------------------------+---------------------- Reporter: dbakerflight | Type: defect Status: new | Priority: high Milestone: Varnish 3.0 dev | Component: build Version: trunk | Severity: critical Keywords: | -----------------------------+---------------------- We have been experiencing frequent crashes of varnishd on FreeBSD 9 with this assert error. The problem remains in 3.0.3rc1 Aug 6 01:17:04 rokit varnishd[59107]: Child (57104) Panic message: Assert error in vfp_esi_bytes_gg(), cache_esi_fetch.c line 274: Condition(i >= VGZ_OK) not true. thread = (cache-worker) ident = FreeBSD,9.0-RELEASE,amd64,-smalloc,-smalloc,-hclassic,kqueue Aug 6 20:48:03 rokit varnishd[19405]: Child (19406) Panic message: Assert error in vfp_esi_bytes_gg(), cache_esi_fetch.c line 274: Condition(i >= VGZ_OK) not true. thread = (cache-worker) ident = FreeBSD,9.0-RELEASE,amd64,-smalloc,-smalloc,-hclassic,kqueue Aug 7 20:50:33 rokit varnishd[19405]: Child (30531) Panic message: Assert error in vfp_esi_bytes_gg(), cache_esi_fetch.c line 274: Condition(i >= VGZ_OK) not true. 
thread = (cache-worker) ident = FreeBSD,9.0-RELEASE,amd64,-smalloc,-smalloc,-hclassic,kqueue -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 7 21:27:42 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 07 Aug 2012 21:27:42 -0000 Subject: [Varnish] #1184: Assert error in vfp_esi_bytes_gg In-Reply-To: <050.846db86104780f64b0a0b8e845cc697e@varnish-cache.org> References: <050.846db86104780f64b0a0b8e845cc697e@varnish-cache.org> Message-ID: <065.f64c7092daa9a16d75eb35f9f66968e5@varnish-cache.org> #1184: Assert error in vfp_esi_bytes_gg --------------------------+------------------------------ Reporter: dbakerflight | Owner: Type: defect | Status: new Priority: high | Milestone: Varnish 3.0 dev Component: build | Version: trunk Severity: critical | Resolution: Keywords: | --------------------------+------------------------------ Comment (by dbakerflight): Relevant code: {{{ VGZ_Obuf(sp->wrk->vgz_rx, ibuf2, sizeof ibuf2); i = VGZ_Gunzip(sp->wrk->vgz_rx, &dp, &dl); /* XXX: check i */ assert(i >= VGZ_OK); }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 8 06:54:55 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 08 Aug 2012 06:54:55 -0000 Subject: [Varnish] #1183: n_wrk_max has a random initial value In-Reply-To: <042.46243fb87ad234c1708669d6634165fa@varnish-cache.org> References: <042.46243fb87ad234c1708669d6634165fa@varnish-cache.org> Message-ID: <057.88435aefb346881ae8f9b8e75fcad69a@varnish-cache.org> #1183: n_wrk_max has a random initial value ----------------------+-------------------- Reporter: dlec | Owner: Type: defect | Status: new Priority: low | Milestone: Component: varnishd | Version: 3.0.2 Severity: minor | Resolution: Keywords: | ----------------------+-------------------- Description changed by phk: Old description: > I seem to get a random initial value for n_wrk_max with varnishstat, > while I expect a value of zero. 
> > # /etc/init.d/varnish restart > # varnishstat -1 | grep max > n_wrk_max 349675 17483.75 N worker threads limited > # /etc/init.d/varnish restart > # varnishstat -1 | grep max > n_wrk_max 314399 314399.00 N worker threads limited > # /etc/init.d/varnish restart > # varnishstat -1 | grep max > n_wrk_max 2384 inf N worker threads limited > # /etc/init.d/varnish restart > # varnishstat -1 | grep max > n_wrk_max 2146 357.67 N worker threads limited > # /etc/init.d/varnish restart > # varnishstat -1 | grep max > n_wrk_max 281910 inf N worker threads limited > # /etc/init.d/varnish restart > # varnishstat -1 | grep max > n_wrk_max 254622 inf N worker threads limited > # /etc/init.d/varnish restart > # varnishstat -1 | grep max > n_wrk_max 354177 inf N worker threads limited > # /etc/init.d/varnish restart > # varnishstat -1 | grep max > n_wrk_max 0 0.00 N worker threads limited > > command: /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f > /etc/varnish/varnish.vcl -T 127.0.0.1:6082 -t 120 -w 1250,1500,120 -p > thread_pool_add_delay 1 -p thread_pools 4 -p sess_timeout 5 -u varnish -g > varnish -s Transient=malloc,200M -s malloc,8G > > source: http://repo.varnish- > cache.org/source/varnish-3.0.2-streaming.tar.gz > > os: CentOS 5.8 x86_64 > > hw: vm (VMware ESX 4.1), 10 GB, 4 vCPU New description: I seem to get a random initial value for n_wrk_max with varnishstat, while I expect a value of zero. 
{{{ # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 349675 17483.75 N worker threads limited # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 314399 314399.00 N worker threads limited # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 2384 inf N worker threads limited # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 2146 357.67 N worker threads limited # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 281910 inf N worker threads limited # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 254622 inf N worker threads limited # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 354177 inf N worker threads limited # /etc/init.d/varnish restart # varnishstat -1 | grep max n_wrk_max 0 0.00 N worker threads limited }}} command: /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/varnish.vcl -T 127.0.0.1:6082 -t 120 -w 1250,1500,120 -p thread_pool_add_delay 1 -p thread_pools 4 -p sess_timeout 5 -u varnish -g varnish -s Transient=malloc,200M -s malloc,8G source: http://repo.varnish- cache.org/source/varnish-3.0.2-streaming.tar.gz os: CentOS 5.8 x86_64 hw: vm (VMware ESX 4.1), 10 GB, 4 vCPU -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 8 06:55:25 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 08 Aug 2012 06:55:25 -0000 Subject: [Varnish] #1184: Assert error in vfp_esi_bytes_gg In-Reply-To: <050.846db86104780f64b0a0b8e845cc697e@varnish-cache.org> References: <050.846db86104780f64b0a0b8e845cc697e@varnish-cache.org> Message-ID: <065.090e7ee3ba7c64707f862c0579d8f0d8@varnish-cache.org> #1184: Assert error in vfp_esi_bytes_gg --------------------------+------------------------------ Reporter: dbakerflight | Owner: Type: defect | Status: new Priority: high | Milestone: Varnish 3.0 dev Component: build | Version: trunk Severity: critical | Resolution: Keywords: | 
--------------------------+------------------------------ Description changed by phk: Old description: > We have been experiencing frequent crashes of varnishd on FreeBSD 9 with > this assert error. The problem remains in 3.0.3rc1 > > Aug 6 01:17:04 rokit varnishd[59107]: Child (57104) Panic message: > Assert error in vfp_esi_bytes_gg(), cache_esi_fetch.c line 274: > Condition(i >= VGZ_OK) not true. thread = (cache-worker) ident = > FreeBSD,9.0-RELEASE,amd64,-smalloc,-smalloc,-hclassic,kqueue > Aug 6 20:48:03 rokit varnishd[19405]: Child (19406) Panic message: > Assert error in vfp_esi_bytes_gg(), cache_esi_fetch.c line 274: > Condition(i >= VGZ_OK) not true. thread = (cache-worker) ident = > FreeBSD,9.0-RELEASE,amd64,-smalloc,-smalloc,-hclassic,kqueue > Aug 7 20:50:33 rokit varnishd[19405]: Child (30531) Panic message: > Assert error in vfp_esi_bytes_gg(), cache_esi_fetch.c line 274: > Condition(i >= VGZ_OK) not true. thread = (cache-worker) ident = > FreeBSD,9.0-RELEASE,amd64,-smalloc,-smalloc,-hclassic,kqueue New description: We have been experiencing frequent crashes of varnishd on FreeBSD 9 with this assert error. The problem remains in 3.0.3rc1 {{{ Aug 6 01:17:04 rokit varnishd[59107]: Child (57104) Panic message: Assert error in vfp_esi_bytes_gg(), cache_esi_fetch.c line 274: Condition(i >= VGZ_OK) not true. thread = (cache-worker) ident = FreeBSD,9.0-RELEASE,amd64,-smalloc,-smalloc,-hclassic,kqueue Aug 6 20:48:03 rokit varnishd[19405]: Child (19406) Panic message: Assert error in vfp_esi_bytes_gg(), cache_esi_fetch.c line 274: Condition(i >= VGZ_OK) not true. thread = (cache-worker) ident = FreeBSD,9.0-RELEASE,amd64,-smalloc,-smalloc,-hclassic,kqueue Aug 7 20:50:33 rokit varnishd[19405]: Child (30531) Panic message: Assert error in vfp_esi_bytes_gg(), cache_esi_fetch.c line 274: Condition(i >= VGZ_OK) not true. 
thread = (cache-worker) ident = FreeBSD,9.0-RELEASE,amd64,-smalloc,-smalloc,-hclassic,kqueue }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 8 10:06:26 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 08 Aug 2012 10:06:26 -0000 Subject: [Varnish] #1168: rollback breaks ESI In-Reply-To: <043.c4c3c5288e64565202f3ef2286a684c8@varnish-cache.org> References: <043.c4c3c5288e64565202f3ef2286a684c8@varnish-cache.org> Message-ID: <058.15b24302824e6a72a2b4a14bc3416a96@varnish-cache.org> #1168: rollback breaks ESI ----------------------+-------------------- Reporter: scoof | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by Poul-Henning Kamp ): In [d9a9ecd999f78123ac6ef5dfa8fd4aac38130c26]: {{{ #!CommitTicketReference repository="" revision="d9a9ecd999f78123ac6ef5dfa8fd4aac38130c26" Doing rollback in a esi:include request would roll back to the original ESI processed request, rather than to the included requests. Fix by allocating/cloning a new request for the esi:include transactions. 
Testcase by: scoof Fixes #1168 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 8 10:06:27 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 08 Aug 2012 10:06:27 -0000 Subject: [Varnish] #1168: rollback breaks ESI In-Reply-To: <043.c4c3c5288e64565202f3ef2286a684c8@varnish-cache.org> References: <043.c4c3c5288e64565202f3ef2286a684c8@varnish-cache.org> Message-ID: <058.cb7f378b74a1b44324db98f795087c33@varnish-cache.org> #1168: rollback breaks ESI ----------------------+--------------------- Reporter: scoof | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | ----------------------+--------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [d9a9ecd999f78123ac6ef5dfa8fd4aac38130c26]) Doing rollback in a esi:include request would roll back to the original ESI processed request, rather than to the included requests. Fix by allocating/cloning a new request for the esi:include transactions. Testcase by: scoof Fixes #1168 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 9 07:57:20 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 09 Aug 2012 07:57:20 -0000 Subject: [Varnish] #1185: Assert error in VRT_IP_string(), cache/cache_vrt.c line 313 Message-ID: <046.e5c96159b15b686ce58184b16ef1fb92@varnish-cache.org> #1185: Assert error in VRT_IP_string(), cache/cache_vrt.c line 313 ----------------------+------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Keywords: ----------------------+------------------- Running trunk (as of aug 9, 2012) with the following settings: {{{ $ varnishd -d -n /tmp/trunkTMP -a :8081 -b kly.no }}} I consistently get assert errors when I throw traffic at it. 
This only happens when I throw some moderate amount of requests over the same connection. The current example uses 502 GET requests and the assert is: {{{ Child (8080) Panic message: Assert error in VRT_IP_string(), cache/cache_vrt.c line 313: Condition((p = WS_Alloc(req->http->ws, len)) != 0) not true. thread = (cache-worker) ident = Linux,2.6.38-15-generic-pae,i686,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x80762d2: varnishd() [0x80762d2] 0x8085ca0: varnishd(VRT_IP_string+0x1f0) [0x8085ca0] 0xa41e3c2c: ./vcl.Qs1KMLyU.so(+0xc2c) [0xa41e3c2c] 0x808349f: varnishd(VCL_recv_method+0xef) [0x808349f] 0x807bed5: varnishd(CNT_Request+0x1ba5) [0x807bed5] 0x8072974: varnishd(HTTP1_Session+0x724) [0x8072974] 0x807e9e0: varnishd() [0x807e9e0] 0x8080035: varnishd(SES_pool_accept_task+0x315) [0x8080035] 0x8078b24: varnishd(Pool_Work_Thread+0x194) [0x8078b24] 0x808d540: varnishd() [0x808d540] req = 0xa302e018 { sp = 0xa2f02418, xid = 1864913370, step = R_STP_RECV, handling = deliver, restarts = 0, esi_level = 0 sp = 0xa2f02418 { fd = 14, id = 14, client = 127.0.0.1 45386, step = S_STP_WORKING, }, worker = 0xa32c11bc { ws = 0xa32c1348 { id = "wrk", {s,f,r,e} = {0xa32c0990,0xa32c0990,(nil),+2048}, }, }, ws = 0xa302e114 { overflow id = "req", {s,f,r,e} = {0xa302f0b4,+12132,(nil),+12132}, }, http[req] = { ws = 0xa302e114[req] "GET", "/tomfil", "HTTP/1.1", "Host: localhost:8081", }, vcl = { srcname = { "input", "Default", }, }, }, }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 9 08:07:43 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 09 Aug 2012 08:07:43 -0000 Subject: [Varnish] #1185: Assert error in VRT_IP_string(), cache/cache_vrt.c line 313 In-Reply-To: <046.e5c96159b15b686ce58184b16ef1fb92@varnish-cache.org> References: <046.e5c96159b15b686ce58184b16ef1fb92@varnish-cache.org> Message-ID: <061.b103463056f290149ae5d5e32e1a959c@varnish-cache.org> #1185: Assert error in VRT_IP_string(), cache/cache_vrt.c 
line 313 ----------------------+-------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by kristian): I needed several hundred (300+) requests over a single connection to reproduce. 100 were not enough. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 9 11:07:17 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 09 Aug 2012 11:07:17 -0000 Subject: [Varnish] #1185: Assert error in VRT_IP_string(), cache/cache_vrt.c line 313 In-Reply-To: <046.e5c96159b15b686ce58184b16ef1fb92@varnish-cache.org> References: <046.e5c96159b15b686ce58184b16ef1fb92@varnish-cache.org> Message-ID: <061.80a1be5c4455bc59ea374a3cf73f7f3a@varnish-cache.org> #1185: Assert error in VRT_IP_string(), cache/cache_vrt.c line 313 ----------------------+-------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by kristian): I did a few more tests, mostly related to timeout_linger. The results are ... mixed. X=Fail, O=OK. Same tests iterated. {{{ default linger. 500 txreq OXXXXXXXXOXXXXOXXXXOXXXXXOXXOXXXOOXOXOOOXXXXXOXXXOXXXOXXXOXXXXXOXXOXOOXOXXOOXXXOXOXXXOOXXXXXOXXXXXXO 29 of 100 tests succeeded. 0.05 linger. 500 txreq XXXXXXXOOXXXOXOXXOXOXXXXOXXOXXXXXXXOXXXXOXXXXXXXXOXXOXXXXXXXXXXXOXXXXOOXXXXXXXXXOXXXXXXXXXXXOXXXOOXX 19 of 100 tests succeeded. 2.00 linger. 500 txreq XXXOOOXXXXXXXXXOXOXXXOXXOXOXXXXXXXXXXXXXXXXXXXXXXXOXXXXXXOXXOXOXXXXOXXXXXXXXXXXXOXOXXXOXXXXXXXXXXXXX 16 of 100 tests succeeded. 20.00 linger. 500 txreq OXXXOXXXXOXXXXXXXXXXXXXXXOXXXXXOXXXXXXXOXXXXOXXXXXXOXXXXXXXOXXXOXXXXXXXXXXXOXXXXXOXXXXXXXOOXXXXXXXXX 14 of 100 tests succeeded. 0.00 linger. 
500 txreq OXXXXXXXXOXOXOXXXXXXXXOOOXXXXXXXXXXXXXOXXXXXXOXXXOXXXXXXXOOXXOXXXXXXOOXXXXXXXXXOOOXXXXOXXXXXXOXOXXXX 21 of 100 tests succeeded. }}} I'm almost ready to rule out timeout_linger as a factor. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 9 16:00:43 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 09 Aug 2012 16:00:43 -0000 Subject: [Varnish] #1184: Assert error in vfp_esi_bytes_gg In-Reply-To: <050.846db86104780f64b0a0b8e845cc697e@varnish-cache.org> References: <050.846db86104780f64b0a0b8e845cc697e@varnish-cache.org> Message-ID: <065.d80145b53624f4871b5bd6f46947d056@varnish-cache.org> #1184: Assert error in vfp_esi_bytes_gg --------------------------+------------------------------ Reporter: dbakerflight | Owner: Type: defect | Status: new Priority: high | Milestone: Varnish 3.0 dev Component: build | Version: trunk Severity: critical | Resolution: Keywords: | --------------------------+------------------------------ Comment (by dbakerflight): We applied this patch and are not seeing any more crashes: {{{ --- cache_esi_fetch.c.orig 2012-08-07 21:28:47.724109347 +0000 +++ cache_esi_fetch.c 2012-08-07 21:39:32.301109679 +0000 @@ -271,6 +271,9 @@ VGZ_Obuf(sp->wrk->vgz_rx, ibuf2, sizeof ibuf2); i = VGZ_Gunzip(sp->wrk->vgz_rx, &dp, &dl); /* XXX: check i */ +if (i < VGZ_OK) { + return -1; +} assert(i >= VGZ_OK); vef->bufp = ibuf2; if (dl > 0) }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Aug 10 19:12:15 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 10 Aug 2012 19:12:15 -0000 Subject: [Varnish] #1178: Varnish should add Vary: Accept-Encoding when compressing content In-Reply-To: <043.f8e3add5fe2d9de34f59397c4cedbe58@varnish-cache.org> References: <043.f8e3add5fe2d9de34f59397c4cedbe58@varnish-cache.org> Message-ID: <058.48508ee7c4040205be61a53183e64984@varnish-cache.org> #1178: Varnish should add Vary: Accept-Encoding when 
compressing content --------------------+-------------------- Reporter: scoof | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+-------------------- Comment (by RuddO): I would like to be kept posted about this. Thanks. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 13 10:16:21 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Aug 2012 10:16:21 -0000 Subject: [Varnish] #1176: varnishd segfaults if no storage backend has been defined In-Reply-To: <044.359689bc726e4977b127817ced20225c@varnish-cache.org> References: <044.359689bc726e4977b127817ced20225c@varnish-cache.org> Message-ID: <059.d595ba381a33e7ef93eacf9c8ec9e69c@varnish-cache.org> #1176: varnishd segfaults if no storage backend has been defined ----------------------+-------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by martin): The attached test case triggers the segfault. However, the test case will still fail after the patch, so it is not suitable as a regression test. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 13 10:46:02 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Aug 2012 10:46:02 -0000 Subject: [Varnish] #1184: Assert error in vfp_esi_bytes_gg In-Reply-To: <050.846db86104780f64b0a0b8e845cc697e@varnish-cache.org> References: <050.846db86104780f64b0a0b8e845cc697e@varnish-cache.org> Message-ID: <065.952b5ef5d93a865970f7f0f45990a35b@varnish-cache.org> #1184: Assert error in vfp_esi_bytes_gg --------------------------+------------------------------ Reporter: dbakerflight | Owner: phk Type: defect | Status: new Priority: high | Milestone: Varnish 3.0 dev Component: build | Version: trunk Severity: critical | Resolution: Keywords: | --------------------------+------------------------------ Changes (by phk): * owner: => phk -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 13 10:52:36 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Aug 2012 10:52:36 -0000 Subject: [Varnish] #1185: Assert error in VRT_IP_string(), cache/cache_vrt.c line 313 In-Reply-To: <046.e5c96159b15b686ce58184b16ef1fb92@varnish-cache.org> References: <046.e5c96159b15b686ce58184b16ef1fb92@varnish-cache.org> Message-ID: <061.9cd55896979c379e84bbb6d3cb6e31ec@varnish-cache.org> #1185: Assert error in VRT_IP_string(), cache/cache_vrt.c line 313 ----------------------+-------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by kristian): To reproduce you simply need a large set of requests. 
Varnishtest has a limited buffer size, so I first applied this (please ignore the numbers - they were pulled out randomly): {{{
diff --git a/bin/varnishtest/vtc_main.c b/bin/varnishtest/vtc_main.c
index fb1c42e..b5f5992 100644
--- a/bin/varnishtest/vtc_main.c
+++ b/bin/varnishtest/vtc_main.c
@@ -260,7 +260,7 @@ start_test(void)
 ALLOC_OBJ(jp, JOB_MAGIC);
 AN(jp);
- jp->bufsiz = 256*1024; /* XXX */
+ jp->bufsiz = 168740*1024; /* XXX */
 jp->buf = mmap(NULL, jp->bufsiz, PROT_READ|PROT_WRITE, MAP_ANON | MAP_SHARED, -1, 0);
}}} Then I used the attached test case. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 13 10:54:57 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Aug 2012 10:54:57 -0000 Subject: [Varnish] #1185: Assert error in VRT_IP_string(), cache/cache_vrt.c line 313 In-Reply-To: <046.e5c96159b15b686ce58184b16ef1fb92@varnish-cache.org> References: <046.e5c96159b15b686ce58184b16ef1fb92@varnish-cache.org> Message-ID: <061.3c18d7e61e4bc46f6c396aabbcd37f6d@varnish-cache.org> #1185: Assert error in VRT_IP_string(), cache/cache_vrt.c line 313 ----------------------+-------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by kristian): The test case will not break consistently, unfortunately. I suspect it's timing-related. If you instead run varnishd manually and use nc to throw a large number of requests at it, it will hit the assert error pretty much every time. 
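The assert in #1185 fires when a workspace allocation returns NULL because the per-request workspace is exhausted under many pipelined requests; the defensive alternative is to treat allocation failure as a soft error and return a fallback. A minimal sketch of that pattern, using a toy allocator with hypothetical names rather than Varnish's real workspace API:

```c
#include <stddef.h>
#include <string.h>

/* Toy fixed-size workspace: hands out slices of one buffer and
 * reports exhaustion instead of aborting the process. */
struct toy_ws {
	char	buf[64];
	size_t	used;
};

/* Returns a pointer into the workspace, or NULL when the request
 * would not fit -- the caller must handle NULL gracefully. */
static char *
toy_ws_alloc(struct toy_ws *ws, size_t len)
{
	char *p;

	if (ws->used + len > sizeof ws->buf)
		return (NULL);		/* soft failure, no assert */
	p = ws->buf + ws->used;
	ws->used += len;
	return (p);
}

/* Copy an address string into workspace memory, falling back to a
 * static marker on exhaustion rather than panicking. */
static const char *
toy_ip_string(struct toy_ws *ws, const char *addr)
{
	size_t len = strlen(addr) + 1;
	char *p = toy_ws_alloc(ws, len);

	if (p == NULL)
		return ("(ws-overflow)");
	memcpy(p, addr, len);
	return (p);
}
```

The sketch only illustrates the NULL-check discipline; whether the real fix should degrade the string or fail the request with a 5xx is a separate design decision.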
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 13 10:55:28 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Aug 2012 10:55:28 -0000 Subject: [Varnish] #1183: n_wrk_max has a random initial value In-Reply-To: <042.46243fb87ad234c1708669d6634165fa@varnish-cache.org> References: <042.46243fb87ad234c1708669d6634165fa@varnish-cache.org> Message-ID: <057.795d7502560d5e9a0a6168b56341e384@varnish-cache.org> #1183: n_wrk_max has a random initial value ----------------------+--------------------- Reporter: dlec | Owner: martin Type: defect | Status: new Priority: low | Milestone: Component: varnishd | Version: 3.0.2 Severity: minor | Resolution: Keywords: | ----------------------+--------------------- Changes (by martin): * owner: => martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 13 11:06:24 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Aug 2012 11:06:24 -0000 Subject: [Varnish] #1176: varnishd segfaults if no storage backend has been defined In-Reply-To: <044.359689bc726e4977b127817ced20225c@varnish-cache.org> References: <044.359689bc726e4977b127817ced20225c@varnish-cache.org> Message-ID: <059.8fd9b7ce707af61c80b4a579b21497ca@varnish-cache.org> #1176: varnishd segfaults if no storage backend has been defined ----------------------+-------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by Poul-Henning Kamp ): In [58596b2780fdfdfeea4c000fe3e3912196369df2]: {{{ #!CommitTicketReference repository="" revision="58596b2780fdfdfeea4c000fe3e3912196369df2" Sigh, this one is the working part of the fix: If people only specify Transient, run only on Transient. 
Fixes #1176 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 13 11:06:23 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Aug 2012 11:06:23 -0000 Subject: [Varnish] #1176: varnishd segfaults if no storage backend has been defined In-Reply-To: <044.359689bc726e4977b127817ced20225c@varnish-cache.org> References: <044.359689bc726e4977b127817ced20225c@varnish-cache.org> Message-ID: <059.9304e5b1b0d79cb383017993969efa12@varnish-cache.org> #1176: varnishd segfaults if no storage backend has been defined ----------------------+-------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by Poul-Henning Kamp ): In [dc7913071e7bc19ddc7139b0bc9e13ca99357bda]: {{{ #!CommitTicketReference repository="" revision="dc7913071e7bc19ddc7139b0bc9e13ca99357bda" If people only specify the Transient storage, only run on the Transient storage. 
Fixes #1176 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 13 11:06:25 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Aug 2012 11:06:25 -0000 Subject: [Varnish] #1176: varnishd segfaults if no storage backend has been defined In-Reply-To: <044.359689bc726e4977b127817ced20225c@varnish-cache.org> References: <044.359689bc726e4977b127817ced20225c@varnish-cache.org> Message-ID: <059.cc5a64b9ff77f4a57e5c8b1ee3022389@varnish-cache.org> #1176: varnishd segfaults if no storage backend has been defined ----------------------+--------------------- Reporter: martin | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: fixed Keywords: | ----------------------+--------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [58596b2780fdfdfeea4c000fe3e3912196369df2]) Sigh, this one is the working part of the fix: If people only specify Transient, run only on Transient. 
Fixes #1176 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 13 11:06:27 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 13 Aug 2012 11:06:27 -0000 Subject: [Varnish] #1176: varnishd segfaults if no storage backend has been defined In-Reply-To: <044.359689bc726e4977b127817ced20225c@varnish-cache.org> References: <044.359689bc726e4977b127817ced20225c@varnish-cache.org> Message-ID: <059.74f58b18d5ab89a987793347067cbad4@varnish-cache.org> #1176: varnishd segfaults if no storage backend has been defined ----------------------+--------------------- Reporter: martin | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: fixed Keywords: | ----------------------+--------------------- Comment (by Poul-Henning Kamp ): (In [dc7913071e7bc19ddc7139b0bc9e13ca99357bda]) If people only specify the Transient storage, only run on the Transient storage. Fixes #1176 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 15 11:51:08 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 15 Aug 2012 11:51:08 -0000 Subject: [Varnish] #1183: n_wrk_max has a random initial value In-Reply-To: <042.46243fb87ad234c1708669d6634165fa@varnish-cache.org> References: <042.46243fb87ad234c1708669d6634165fa@varnish-cache.org> Message-ID: <057.7a920cff9e2aac3d264c3d08f4438c91@varnish-cache.org> #1183: n_wrk_max has a random initial value ----------------------+--------------------- Reporter: dlec | Owner: martin Type: defect | Status: new Priority: low | Milestone: Component: varnishd | Version: 3.0.2 Severity: minor | Resolution: Keywords: | ----------------------+--------------------- Comment (by martin): Verified this on 3.0 Varnish, so not specific to streaming branch. Caused by a race on startup on the global nthr_max variable between wrk_herdtimer_thread and wrk_herder_thread. 
It happens if the wrk_herder_thread wins the race and tries to create threads before wrk_herdtimer_thread has updated the nthr_max variable from the current parameters. Not applicable to current trunk. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 15 12:06:38 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 15 Aug 2012 12:06:38 -0000 Subject: [Varnish] #1183: n_wrk_max has a random initial value In-Reply-To: <042.46243fb87ad234c1708669d6634165fa@varnish-cache.org> References: <042.46243fb87ad234c1708669d6634165fa@varnish-cache.org> Message-ID: <057.821d01aa99f1b6641bfbfe5fce30bcda@varnish-cache.org> #1183: n_wrk_max has a random initial value ----------------------+--------------------- Reporter: dlec | Owner: martin Type: defect | Status: closed Priority: low | Milestone: Component: varnishd | Version: 3.0.2 Severity: minor | Resolution: fixed Keywords: | ----------------------+--------------------- Changes (by martin): * status: new => closed * resolution: => fixed Comment: Commit 3993225fd998ebdf8e03d18105a7da65c3a83d8a fixes this issue. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Aug 17 11:27:37 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 17 Aug 2012 11:27:37 -0000 Subject: [Varnish] #1186: Varnish crashing with "Assert error in hcb_insert(), hash_critbit.c line 217" Message-ID: <043.61629f460ae9ccde797d449c5dc85940@varnish-cache.org> #1186: Varnish crashing with "Assert error in hcb_insert(), hash_critbit.c line 217" -------------------+-------------------- Reporter: flies | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 2.1.5 | Severity: normal Keywords: | -------------------+-------------------- Thank you in advance for your help. Will provide VCL if needed. 
Varnish version: {{{ # /usr/local/sbin/varnishd -V varnishd (varnish-2.1.5 SVN 0843d7a) Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS }}} Syslog output: {{{ Aug 17 02:59:26 wpf1 varnishd[5451]: Child (4548) Panic message: Assert error in hcb_insert(), hash_critbit.c line 217: Condition((y)->magic == 0x125c4bd2) not true. errno = 115 (Operation now in progress) thread = (cache-worker) ident = Linux,2.6.36-g5-po,x86_64,-sfile,-hcritbit,epoll Backtrace: 0x424608: /usr/sbin/varnishd() [0x424608] 0x430d25: /usr/sbin/varnishd() [0x430d25] 0x43123b: /usr/sbin/varnishd() [0x43123b] 0x41e689: /usr/sbin/varnishd(HSH_Lookup+0x6a9) [0x41e689] 0x412c9b: /usr/sbin/varnishd() [0x412c9b] 0x4151ad: /usr/sbin/varnishd(CNT_Session+0x38d) [0x4151ad] 0x426a48: /usr/sbin/varnishd() [0x426a48] 0x425d23: /usr/sbin/varnishd() [0x425d23] 0x7fd8ca4ca8ca: /lib/libpthread.so.0(+0x68ca) [0x7fd8ca4ca8ca] 0x7fd8c9d97b6d: /lib/libc.so.6(clone+0x6d) [0x7fd8c9d97b6d] sp = 0x7fd7a33ac008 { fd = 37, id = 37, xid = 753325169, client = 89.72.18.179:57412, step = STP_LOOKUP, handling = hash, restarts = 0, esis = 0 ws = 0x7fd7a33ac078 { id = "sess", {s,f,r,e} = {0x7fd7a33accd0,+2776,(nil),+65536}, }, http[req] = { ws = 0x7fd7a33ac078[sess] "GET", "/", "HTTP/1.1", "User-Agent: Opera/9.80 (Windows NT 6.1; U; pl) Presto/2.10.289 Version/12.00", "Host: margarytka-kulinarnie.blog.onet.pl", "Accept: text/html, application/xml;q=0.9, application/xhtml+xml, image/png, image/webp, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1", "Accept-Language: pl-PL,pl;q=0.9,en;q=0.8", "Referer: http://www.google.pl/url?sa=t&rct=j&q=blogi%20kulinarne&source=web&cd=9&ved=0CFsQFjAI&url=http%3A%2F %2Fmargarytka- kulinarnie.blog.onet.pl%2F&ei=25YtUKPmE63N4QTfh4Fw&usg=AFQjCNGCyrSeixJ2_OapEckx2fOga0AV2A", "Connection: Keep-Alive", "X-Forwarded-For: 89.72.18.179", "X-User-Host: margarytka-kulinarnie.blog.onet.pl", "cookie: onet_ubi=HIDDEN; onetzuo_ticket=HIDDEN }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From 
varnish-bugs at varnish-cache.org Fri Aug 17 20:44:52 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 17 Aug 2012 20:44:52 -0000 Subject: [Varnish] #1187: Document restrictions on range requests Message-ID: <043.3a5b79d566e4d093cdee9d9c0342bc65@varnish-cache.org> #1187: Document restrictions on range requests -------------------+--------------------------- Reporter: fgsch | Type: documentation Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | -------------------+--------------------------- There are cases, which should be documented, where range requests are not honoured even if http_range_support is enabled. These are: - if the obj is gzip'ed and the client can't receive a gzip'ed response - if esi is enabled and the obj requires esi processing Thanks. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 20 10:46:58 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 20 Aug 2012 10:46:58 -0000 Subject: [Varnish] #1187: Document restrictions on range requests In-Reply-To: <043.3a5b79d566e4d093cdee9d9c0342bc65@varnish-cache.org> References: <043.3a5b79d566e4d093cdee9d9c0342bc65@varnish-cache.org> Message-ID: <058.0d12671b7d26c00d2c6b44caf0ac0ded@varnish-cache.org> #1187: Document restrictions on range requests ---------------------------+-------------------- Reporter: fgsch | Owner: scoof Type: documentation | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | ---------------------------+-------------------- Changes (by tfheen): * owner: => scoof -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 21 07:42:26 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 21 Aug 2012 07:42:26 -0000 Subject: [Varnish] #1049: Unbalanced {} in varnishncsa init script In-Reply-To: 
<043.b2723c63d08ceda97a3718abd8394083@varnish-cache.org> References: <043.b2723c63d08ceda97a3718abd8394083@varnish-cache.org> Message-ID: <058.de2533b19b6e62dcf419c6a1df83b160@varnish-cache.org> #1049: Unbalanced {} in varnishncsa init script -----------------------+--------------------- Reporter: scoof | Owner: tfheen Type: defect | Status: new Priority: low | Milestone: Component: packaging | Version: 3.0.2 Severity: normal | Resolution: Keywords: | -----------------------+--------------------- Changes (by tfheen): * owner: => tfheen Comment: This has been fixed in the 3.0.3 packages. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 21 08:20:43 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 21 Aug 2012 08:20:43 -0000 Subject: [Varnish] #1184: Assert error in vfp_esi_bytes_gg In-Reply-To: <050.846db86104780f64b0a0b8e845cc697e@varnish-cache.org> References: <050.846db86104780f64b0a0b8e845cc697e@varnish-cache.org> Message-ID: <065.f59ca8f3f6d1ad48f8baf200c19262a7@varnish-cache.org> #1184: Assert error in vfp_esi_bytes_gg --------------------------+------------------------------ Reporter: dbakerflight | Owner: phk Type: defect | Status: new Priority: high | Milestone: Varnish 3.0 dev Component: build | Version: trunk Severity: critical | Resolution: Keywords: | --------------------------+------------------------------ Comment (by Poul-Henning Kamp ): In [7c784d5c9d2dd959a5ea1a1bea5f7bbb4173437d]: {{{ #!CommitTicketReference repository="" revision="7c784d5c9d2dd959a5ea1a1bea5f7bbb4173437d" Add long time missing error handling of gunzip'ing fetched objects for ESI processing. Polish the VGZ code a bit while here anyway. 
Fixes #1184 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 21 08:20:45 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 21 Aug 2012 08:20:45 -0000 Subject: [Varnish] #1184: Assert error in vfp_esi_bytes_gg In-Reply-To: <050.846db86104780f64b0a0b8e845cc697e@varnish-cache.org> References: <050.846db86104780f64b0a0b8e845cc697e@varnish-cache.org> Message-ID: <065.35dc78a4e47518855e055417fd8d37cd@varnish-cache.org> #1184: Assert error in vfp_esi_bytes_gg --------------------------+------------------------------ Reporter: dbakerflight | Owner: phk Type: defect | Status: closed Priority: high | Milestone: Varnish 3.0 dev Component: build | Version: trunk Severity: critical | Resolution: fixed Keywords: | --------------------------+------------------------------ Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: (In [7c784d5c9d2dd959a5ea1a1bea5f7bbb4173437d]) Add long time missing error handling of gunzip'ing fetched objects for ESI processing. Polish the VGZ code a bit while here anyway. 
Fixes #1184 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Aug 21 08:46:40 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 21 Aug 2012 08:46:40 -0000 Subject: [Varnish] #1085: -w should not allow thread_pool_min to be 1 In-Reply-To: <043.f13738db6d7f57b1b7ef0ff3ebed7ffe@varnish-cache.org> References: <043.f13738db6d7f57b1b7ef0ff3ebed7ffe@varnish-cache.org> Message-ID: <058.a3b46cd52f900142048b00a8c8a36575@varnish-cache.org> #1085: -w should not allow thread_pool_min to be 1 ----------------------+--------------------- Reporter: scoof | Owner: scoof Type: defect | Status: closed Priority: lowest | Milestone: Component: varnishd | Version: 3.0.2 Severity: trivial | Resolution: fixed Keywords: | ----------------------+--------------------- Changes (by scoof): * status: assigned => closed * resolution: => fixed Comment: Fixed in master. Won't merge to 3.0 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 22 10:14:27 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 22 Aug 2012 10:14:27 -0000 Subject: [Varnish] #1026: varnishd immediate segfault on armv7, seemingly strict-aliasing violations In-Reply-To: <041.58951bc1bacb408becb240d0e7491fac@varnish-cache.org> References: <041.58951bc1bacb408becb240d0e7491fac@varnish-cache.org> Message-ID: <056.fcd4df20a66c2aec731a7c97b435abb7@varnish-cache.org> #1026: varnishd immediate segfault on armv7, seemingly strict-aliasing violations --------------------------------------+------------------------- Reporter: hno | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.0 Severity: normal | Resolution: worksforme Keywords: strict-aliasing segfault | --------------------------------------+------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: time out this ticket... 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 23 05:43:26 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 23 Aug 2012 05:43:26 -0000 Subject: [Varnish] #1188: Assert error in hcb_insert(), hash_critbit.c line 212: Condition((y)->magic == 0x125c4bd2) not true. Message-ID: <043.ed04adf9eca4049815e8098fa06e0c6c@varnish-cache.org> #1188: Assert error in hcb_insert(), hash_critbit.c line 212: Condition((y)->magic == 0x125c4bd2) not true. -------------------+---------------------- Reporter: flies | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.3 | Severity: normal Keywords: | -------------------+---------------------- Hi, we have been able to reproduce this error in version 3.0.3 (we previously had it on 2.1.3, 2.1.5). It happens very often if the traffic is high. If the traffic is medium it crashes once every day or two. We are using the credis library in our varnish VCL. Here's the output from screen: {{{ Aug 23 07:13:28 blog-dev varnishd[19541]: Child (19553) Panic message: Assert error in hcb_insert(), hash_critbit.c line 212: Condition((y)->magic == 0x125c4bd2) not true. 
errno = 115 (Operation now in progress) thread = (cache-worker) ident = Linux,2.6.32-5-amd64,x86_64,-sfile,-smalloc,-hcritbit,epoll Backtrace: 0x43cca8: pan_backtrace+28 0x43cfa0: pan_ic+1bc 0x452989: hcb_insert+ef 0x453a4b: hcb_lookup+191 0x432d71: HSH_Lookup+3ac 0x41de2f: cnt_lookup+27e 0x41fe14: CNT_Session+6be 0x43e75d: wrk_thread_real+9c5 0x43ec08: wrk_thread+139 0x7f9bf7a968ca: _end+7f9bf73f8272 sp = 0x7f9adf2e0008 { fd = 249, id = 249, xid = 1121929969, client = 180.76.5.107 20988, step = STP_LOOKUP, handling = hash, restarts = 0, esi_level = 0 flags = bodystatus = 4 ws = 0x7f9adf2e0080 { id = "sess", {s,f,r,e} = {0x7f9adf2e0c78,+432,+65536,+65536}, }, http[req] = { ws = 0x7f9adf2e0080[sess] "GET", "/PATH_OF_THE_URL", "HTTP/1.1", "Host: HOST.XXX.PL", "Connection: close", "User-Agent: Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)", "Accept-Language: en-US", "Accept: */*", "X-Forwarded-For: XXX.XXX.XXX.XXX", "X-User-Host: HOST.XXX.PL", }, worker = 0x7f9accef5a90 { ws = 0x7f9accef5cc8 { id = "wrk", {s,f,r,e} = {0x7f9accee39e0,0x7f9accee39e0,(nil),+65536}, }, }, vcl = { srcname = { "input", "Default", }, }, }, }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 27 07:16:15 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 27 Aug 2012 07:16:15 -0000 Subject: [Varnish] #1188: Assert error in hcb_insert(), hash_critbit.c line 212: Condition((y)->magic == 0x125c4bd2) not true. In-Reply-To: <043.ed04adf9eca4049815e8098fa06e0c6c@varnish-cache.org> References: <043.ed04adf9eca4049815e8098fa06e0c6c@varnish-cache.org> Message-ID: <058.90eb841283f57cacd8a5922ae1b83d6e@varnish-cache.org> #1188: Assert error in hcb_insert(), hash_critbit.c line 212: Condition((y)->magic == 0x125c4bd2) not true. 
----------------------+-------------------- Reporter: flies | Owner: Type: defect | Status: new Priority: lowest | Milestone: Component: varnishd | Version: 3.0.3 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Changes (by phk): * priority: normal => lowest Comment: As always I'm not ruling out that there is a bug, but given how much beating up the critbit code gets on all the varnish installations out there, I find it very improbable that your precise site can trigger an unknown and obscure bug this often. I spent some time staring at the relevant code, and the one thing that keeps popping up in my mind is that the trouble you see is _exactly_ what you would expect to see if there is a "use after free" mistake somewhere else in the program (which in varnish's case includes the compiled VCL, any vmods and any libraries they use). I have taken a quick look at the vmod_redis on github, and I don't spot any obvious use-after-free bugs, but I think that vmod leaks memory big time: all the strdup() returns never get freed, and I don't see where the redisContexts get freed either. I'm going to leave this ticket open, but I will assign it "lowest" priority, because there is no way I can debug this other than a total source-code inspection. What I would suggest is that you build varnish from source, and put a check before the assert which will record what the value actually is when it does not match the expected magic marker. Sometimes that value may give a clue to what code overwrote the value that should have been there. The other thing you can try is to use the "classic" hash instead of critbit; it will not make the bug go away, but it will cause it to have a different impact, which again could offer clues to what/where the bug actually is. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 27 08:10:38 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 27 Aug 2012 08:10:38 -0000 Subject: [Varnish] #1189: configure script didn't check rst2man binary Message-ID: <050.727cc84c45ed73d4e5a4d5e3ce0441e5@varnish-cache.org> #1189: configure script didn't check rst2man binary --------------------------+-------------------- Reporter: JonathanHuot | Type: defect Status: new | Priority: low Milestone: | Component: build Version: trunk | Severity: minor Keywords: | --------------------------+-------------------- '''Steps to reproduce:''' * uninstall the python-docutils package or the rst2man binary * clone the varnish sources * launch ./configure * launch make '''Results:''' Make failed with the error below: {{{ make[3]: Entering directory `/home/jo/git/varnish/lib/libvmod_std' ======================================== You need rst2man installed to make dist ======================================== make[3]: *** [vmod_std.3] Error 1 }}} The correct behavior would be for the configure script to check whether the rst2man binary (or the python-docutils package) is installed. 
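A configure-time check along these lines would surface the missing tool before make runs (a sketch for configure.ac; the variable name and exact program list are illustrative, not Varnish's actual configure code):

```m4
# Look for rst2man (sometimes installed as rst2man.py) and fail
# configure instead of failing halfway through make.
AC_CHECK_PROGS([RST2MAN], [rst2man rst2man.py], [no])
AS_IF([test "$RST2MAN" = no],
    [AC_MSG_ERROR([rst2man is required to build the manual pages (install python-docutils)])])
```

A softer variant would emit `AC_MSG_WARN` and skip the manual-page targets instead of aborting.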
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 27 08:11:54 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 27 Aug 2012 08:11:54 -0000 Subject: [Varnish] #1185: Assert error in VRT_IP_string(), cache/cache_vrt.c line 313 In-Reply-To: <046.e5c96159b15b686ce58184b16ef1fb92@varnish-cache.org> References: <046.e5c96159b15b686ce58184b16ef1fb92@varnish-cache.org> Message-ID: <061.4f4035c670f5def96ea503ab92a27abc@varnish-cache.org> #1185: Assert error in VRT_IP_string(), cache/cache_vrt.c line 313 ----------------------+-------------------- Reporter: kristian | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by phk): I have tried to reproduce with this slightly more compact vtc: {{{ varnishtest "Test multiple requests over a single connection" server s1 { rxreq txresp -status 200 } -start varnish v1 -vcl+backend { } -start client c1 { loop 3000 { txreq -req GET -hdr "Host: localhost" } loop 3000 { rxresp } } -run varnish v1 -cliok "debug.sizeof" varnish v1 -expect sess_pipeline > 2000 varnish v1 -expect sess_readahead > 0 }}} But it stubbornly insists on working... 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 27 08:40:23 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 27 Aug 2012 08:40:23 -0000 Subject: [Varnish] #1190: varnishd doesn't handle vcl_dir when starting Message-ID: <050.bdb51f26d41684d1ca780f605afee79a@varnish-cache.org> #1190: varnishd doesn't handle vcl_dir when starting --------------------------+---------------------- Reporter: JonathanHuot | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: trunk | Severity: normal Keywords: | --------------------------+---------------------- If several VCL files that include each other are in an exotic directory, varnishd cannot be started. Example: '''Steps to reproduce''' * create a new directory layout /tmp/vcls * create a new vcl file "config.vcl" with the content below: {{{ include "recv.vcl" }}} * launch varnishd with the args below: {{{ varnish-trunk $ ./varnishd -T :1234 -a :4321 -p vcl_dir=/tmp/vcls/ -f /tmp/vcls/config.vcl -n /tmp/vcls Message from VCC-compiler: Syntax error at ('input' Line 1 Pos 9) include 'recv.vcl' --------#---------------- Running VCC-compiler failed, exit 1 VCL compilation failed }}} '''Steps to avoid the error''' * create a blank file or a file with a fake backend "empty.vcl" * launch varnishd with empty.vcl * launch varnishadm and load config.vcl, since varnishd is running and has taken the vcl_dir parameter into account. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 27 08:50:31 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 27 Aug 2012 08:50:31 -0000 Subject: [Varnish] #1190: varnishd doesn't handle vcl_dir when starting In-Reply-To: <050.bdb51f26d41684d1ca780f605afee79a@varnish-cache.org> References: <050.bdb51f26d41684d1ca780f605afee79a@varnish-cache.org> Message-ID: <065.f43ff7afd7af2ef7a12829d07a79ec56@varnish-cache.org> #1190: varnishd doesn't handle vcl_dir when starting --------------------------+------------------------- Reporter: JonathanHuot | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: worksforme Keywords: | --------------------------+------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: The error you get above is from using ' instead of " around the filename in the include statement. Also notice that you need a ; after the {{{include "recv.vcl";}}} I tried various versions of the scenario you posit, and could not provoke any errors. Feel free to reopen ticket if there is something I overlooked. 
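For reference, a version of the reporter's config.vcl that compiles would read as follows (straight double quotes and a trailing semicolon; file names as in the ticket):

```vcl
# /tmp/vcls/config.vcl -- with vcl_dir=/tmp/vcls/ the relative
# include resolves to /tmp/vcls/recv.vcl
include "recv.vcl";
```

The VCC error in the original report ("Syntax error at ... Pos 9") points at the quote character, which matches the single-quote diagnosis above.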
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 27 10:22:49 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 27 Aug 2012 10:22:49 -0000 Subject: [Varnish] #1051: child process died In-Reply-To: <045.f22d848aeef4ec1e301fd6d2603e1e88@varnish-cache.org> References: <045.f22d848aeef4ec1e301fd6d2603e1e88@varnish-cache.org> Message-ID: <060.c5f2825e61d993e2728a8b495fd84bf7@varnish-cache.org> #1051: child process died ----------------------+------------------------- Reporter: sreniaw | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: worksforme Keywords: | ----------------------+------------------------- Changes (by martin): * status: reopened => closed * resolution: => worksforme Comment: The behavior of Varnish on the persistent stevedore when the silo space is exhausted is something we are working on as an extended feature patch. For now this is the expected behavior. Closing ticket. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 27 10:53:47 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 27 Aug 2012 10:53:47 -0000 Subject: [Varnish] #1033: purge; in vcl_pass In-Reply-To: <046.5c6ed7d7ed0c2d768626ff3e3184eaef@varnish-cache.org> References: <046.5c6ed7d7ed0c2d768626ff3e3184eaef@varnish-cache.org> Message-ID: <061.36288d2f81169b15501b7e8b8c960232@varnish-cache.org> #1033: purge; in vcl_pass ----------------------+-------------------- Reporter: kristian | Owner: scn Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Changes (by martin): * owner: => scn Comment: During bugwash today the semantics for this were discussed. The proposal is to have a 'do_purge' flag available for setting in 'vcl_recv'. 
This flag will cause any lookups done on this request to become purges instead. This way hit_pass objects would also be affected. This work should perhaps be done together with the proposed soft-purge work, with the soft-purge functionality following a similar structure. Handing to scn as he has been working on the soft-purge functionality. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Aug 27 10:57:06 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 27 Aug 2012 10:57:06 -0000 Subject: [Varnish] #1033: purge; in vcl_pass In-Reply-To: <046.5c6ed7d7ed0c2d768626ff3e3184eaef@varnish-cache.org> References: <046.5c6ed7d7ed0c2d768626ff3e3184eaef@varnish-cache.org> Message-ID: <061.89fdca2899eb93780664d299a240dc15@varnish-cache.org> #1033: purge; in vcl_pass ----------------------+----------------------- Reporter: kristian | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Changes (by martin): * owner: scn => lkarsten -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Aug 29 07:20:32 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 29 Aug 2012 07:20:32 -0000 Subject: [Varnish] #1191: pcre jit does not work on i386 Message-ID: <044.1dde28fc30c5b94e77c2c0f12d0415da@varnish-cache.org> #1191: pcre jit does not work on i386 --------------------+------------------- Reporter: ingvar | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Keywords: --------------------+------------------- varnish-3.0.3 on fedora 17, 18 and rawhide: On i386 (and on 32bit ppc), pcre jit makes varnishd segfault, crashing tests/b00028.vtc. A gdb backtrace is available here: http://pastebin.com/x1feajek Switching off pcre jit makes it compile and run through the complete 
test suite. Workaround: Turn off pcre jit for i386; see attached patch. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 30 17:26:36 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 30 Aug 2012 17:26:36 -0000 Subject: [Varnish] #1192: RHEL6: Init-script not giving correct startup Message-ID: <044.184ff331e3ed28f8eba95a1c3e7eeaf7@varnish-cache.org> #1192: RHEL6: Init-script not giving correct startup --------------------+---------------------- Reporter: Ueland | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: | --------------------+---------------------- Running Varnish 3.0.2 from RPM on a RHEL6 box, after a crash, varnish would not start anymore, even though the init script says that it has started. I have not yet figured out why varnish does not start. But the main issue here is that the init script says "Ok" when it should say "failed". [root at dev ~]# rpm -qa|grep varnish varnish-3.0.2-1.el5.x86_64 varnish-libs-3.0.2-1.el5.x86_64 (PS: Running the RHEL5-package on RHEL6, not sure if this is the problem itself.) 
[root at dev ~]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 6.3 (Santiago) [root at dev ~]# service varnish start Starting Varnish Cache: [ OK ] [root at dev ~]# ps -ef|grep varnish root 7718 4694 0 19:25 pts/0 00:00:00 grep varnish (nothing running from Varnish) dmesg: Aug 30 19:25:22 dev varnishd[7715]: Platform: Linux,2.6.32-279.el6.x86_64,x86_64,-sfile,-smalloc,-hcritbit Aug 30 19:25:22 dev varnishd[7715]: Child start failed: could not open sockets -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Aug 30 17:28:31 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 30 Aug 2012 17:28:31 -0000 Subject: [Varnish] #1192: RHEL6: Init-script not giving correct startup In-Reply-To: <044.184ff331e3ed28f8eba95a1c3e7eeaf7@varnish-cache.org> References: <044.184ff331e3ed28f8eba95a1c3e7eeaf7@varnish-cache.org> Message-ID: <059.ee3e5a0d7f22695d0c932d80d5c5cdab@varnish-cache.org> #1192: RHEL6: Init-script not giving correct startup ----------------------+-------------------- Reporter: Ueland | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by Ueland): Trac ate the formatting, pastebin-version: http://pastebin.com/975Axsdm -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Aug 31 13:02:31 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 31 Aug 2012 13:02:31 -0000 Subject: [Varnish] #1193: varnishstat displays values for wrong attribute Message-ID: <046.b8e3d0b37e65980902d5b23b5c58a637@varnish-cache.org> #1193: varnishstat displays values for wrong attribute ----------------------+------------------------- Reporter: macquist | Type: defect Status: new | Priority: normal Milestone: | Component: varnishstat Version: 3.0.3 | Severity: normal Keywords: | 
----------------------+-------------------------
 After upgrading from version 3.0.2 to 3.0.3 we saw remarkable changes in
 our munin statistics (example attached). To me it looks like varnishstat
 displays the values on the wrong output line. Some examples of the
 mix-ups:

 * vmods is displayed as n_gzip
 * uptime is displayed as dir_dns_lookups
 * hcb_nolock is displayed as hcb_lock (see attachment)

 Most remarkably, hcb_lock is displayed twice: once as hcb_insert and
 once as esi_errors. For the latter the value is always lessened by 2.
 For many values it looks like the 2nd row of the output is shifted down
 by one line somewhere after n_vcl_discard.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Fri Aug 31 14:16:40 2012
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 31 Aug 2012 14:16:40 -0000
Subject: [Varnish] #1193: varnishstat displays values for wrong attribute
In-Reply-To: <046.b8e3d0b37e65980902d5b23b5c58a637@varnish-cache.org>
References: <046.b8e3d0b37e65980902d5b23b5c58a637@varnish-cache.org>
Message-ID: <061.961a0fc12203378b114059632c4fdcd0@varnish-cache.org>

#1193: varnishstat displays values for wrong attribute
-------------------------+--------------------
 Reporter:  macquist     |      Owner:
     Type:  defect       |     Status:  new
 Priority:  normal       |  Milestone:
Component:  varnishstat  |    Version:  3.0.3
 Severity:  normal       | Resolution:
 Keywords:               |
-------------------------+--------------------

Comment (by macquist):

 Replying to [ticket:1193 macquist]:
 >
 > and the most remarkable hcb_lock is displayed twice once as hcb_insert
 > and once as esi_errors. For the later the value is always lessend by 2.

 It seems I was wrong on this point: hcb_lock is not displayed twice. On
 our other servers (version 3.0.2) the values for hcb_lock and hcb_insert
 are equal or almost equal. It seems both values are also shifted down by
 one row.
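A quick way to test the shifted-row hypothesis is to compare the counter-name columns of non-interactive `varnishstat -1` output from a 3.0.2 host and a 3.0.3 host. The sketch below demonstrates the comparison step on two tiny fabricated dumps (the file contents are placeholders; on real hosts each stats file would come from `varnishstat -1` on that machine):

```shell
# Fabricated sample dumps standing in for `varnishstat -1` output
# (fields: counter name, value, per-second rate). Real files would be
# captured on each host with something like: varnishstat -1 > stats.txt
printf 'client_conn 10 0.10\nhcb_lock 5 0.00\nhcb_insert 5 0.00\n' > stats-3.0.2.txt
printf 'client_conn 10 0.10\nhcb_nolock 5 0.00\nhcb_lock 5 0.00\n' > stats-3.0.3.txt

# Compare only the counter-name columns: a shifted or renamed row shows
# up in the diff, while identical layouts produce no output.
awk '{print $1}' stats-3.0.2.txt > names-3.0.2.txt
awk '{print $1}' stats-3.0.3.txt > names-3.0.3.txt
diff names-3.0.2.txt names-3.0.3.txt || echo "counter layout differs"
```

If the names line up but the values look shifted, that points at a display bug rather than renamed counters.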
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Fri Aug 31 16:25:50 2012
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 31 Aug 2012 16:25:50 -0000
Subject: [Varnish] #1119: Varnish 3.0.2 freezed, Pushing vcls failed:#012CLI communication error (hdr)
In-Reply-To: <047.bcc0e94d7c2ef8aaaa3912631b629b9e@varnish-cache.org>
References: <047.bcc0e94d7c2ef8aaaa3912631b629b9e@varnish-cache.org>
Message-ID: <062.0e40b2b66d4fe40822733cbf1cd868b7@varnish-cache.org>

#1119: Varnish 3.0.2 freezed, Pushing vcls failed:#012CLI communication error (hdr)
-------------------------------------------------+-------------------------
 Reporter:  campisano                            |      Owner:
     Type:  defect                               |     Status:  closed
 Priority:  normal                               |  Milestone:
Component:  build                                |    Version:  3.0.2
 Severity:  blocker                              | Resolution:  worksforme
 Keywords:  child died pushing vcls failed       |
  #012CLI communication error (hdr)              |
-------------------------------------------------+-------------------------

Comment (by lampe):

 Hi. Replying to [comment:7 phk]:
 > The connect_timeout has nothing to do with starting the child process.
 >
 > As I said, you can try to increase cli_timeout, if the problem is
 > disk-i/o pileups.

 While I can understand the root cause for the timeouts in master/client
 communication, and fixed it by moving the shm log to tmpfs, I still
 think there is a bug, or at least unexpected behaviour, here. When the
 master fails to push the initial VCL to the child, it kills the child
 but does not try to restart it. The master process is left hanging,
 useless, without a child, and requires a stop/start cycle.
 I can reproduce the problem with the shm log on an HDD and heavy I/O,
 e.g.:

 while [ ! "$(pidof varnishd | tr ' ' '\n' | wc -l)" -eq 2 ]; do
     dd if=/dev/zero of=test bs=1M count=20480 conv=fdatasync
 done

 (varnish 3.0.3)

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Fri Aug 31 17:15:31 2012
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 31 Aug 2012 17:15:31 -0000
Subject: [Varnish] #1119: Varnish 3.0.2 freezed, Pushing vcls failed:#012CLI communication error (hdr)
In-Reply-To: <047.bcc0e94d7c2ef8aaaa3912631b629b9e@varnish-cache.org>
References: <047.bcc0e94d7c2ef8aaaa3912631b629b9e@varnish-cache.org>
Message-ID: <062.2c6d801ca25caf785d854d13e1964042@varnish-cache.org>

#1119: Varnish 3.0.2 freezed, Pushing vcls failed:#012CLI communication error (hdr)
-------------------------------------------------+-------------------------
 Reporter:  campisano                            |      Owner:
     Type:  defect                               |     Status:  closed
 Priority:  normal                               |  Milestone:
Component:  build                                |    Version:  3.0.2
 Severity:  blocker                              | Resolution:  worksforme
 Keywords:  child died pushing vcls failed       |
  #012CLI communication error (hdr)              |
-------------------------------------------------+-------------------------

Comment (by campisano):

 Replying to [comment:10 lampe]:
 > While I can understand the root cause for timeouts in master/client
 > communications and fixed it by moving the shm log to tmpfs, I still
 > think there's a bug or at least unexpected behaviour here.

 Hi lampe, is "the root" the master process that handles the 'varnish
 child workers'? Can reducing the shm log size on disk resolve the
 problem? Thanks.
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org  Fri Aug 31 18:56:03 2012
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 31 Aug 2012 18:56:03 -0000
Subject: [Varnish] #1119: Varnish 3.0.2 freezed, Pushing vcls failed:#012CLI communication error (hdr)
In-Reply-To: <047.bcc0e94d7c2ef8aaaa3912631b629b9e@varnish-cache.org>
References: <047.bcc0e94d7c2ef8aaaa3912631b629b9e@varnish-cache.org>
Message-ID: <062.84a71c0a4fd6f150b55d5bab37bc9cb7@varnish-cache.org>

#1119: Varnish 3.0.2 freezed, Pushing vcls failed:#012CLI communication error (hdr)
-------------------------------------------------+-------------------------
 Reporter:  campisano                            |      Owner:
     Type:  defect                               |     Status:  closed
 Priority:  normal                               |  Milestone:
Component:  build                                |    Version:  3.0.2
 Severity:  blocker                              | Resolution:  worksforme
 Keywords:  child died pushing vcls failed       |
  #012CLI communication error (hdr)              |
-------------------------------------------------+-------------------------

Comment (by lampe):

 Replying to [comment:11 campisano]:
 > is the root the master process that handle the 'varnish childs
 > workers' ?

 Not quite. As I understand it (I haven't actually checked the code yet),
 the child blocks on a write to the shared memory file when cached disk
 writes exceed a certain amount and the disk is busy. With large RAM, the
 disk sync can take several seconds. If the child blocks long enough, the
 VCL upload from the master process times out and the child is terminated
 but not restarted.

 > Reducing the shm log size on disk can resolve the problem ??

 I don't think so. Putting the shm log file on tmpfs resolved it for me:
 no physical disk, thus no waiting on the I/O scheduler. The Varnish Book
 explicitly recommends that the log must not cause physical disk I/O.

 I'll also experiment with lowering /proc/sys/vm/dirty_background_bytes
 to <1GB and increasing dirty_ratio to 50%.
 See http://www.kernel.org/doc/Documentation/sysctl/vm.txt for a
 description of these Linux kernel parameters.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator
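The two mitigations discussed in this thread can be sketched as follows. This is a non-authoritative sketch: the mount point /var/lib/varnish (a common default location for the shm log) and the sizes are assumptions that depend on the distribution and on the configured shm log size, and both steps require root.

```shell
# 1. Keep the shm log off physical disk by mounting the Varnish state
#    directory on tmpfs; persistently via an /etc/fstab entry such as
#      tmpfs  /var/lib/varnish  tmpfs  defaults,size=170m  0 0
#    or once, by hand:
mount -t tmpfs -o size=170m tmpfs /var/lib/varnish

# 2. The kernel writeback tuning proposed above: start background
#    writeback well before 1 GB of dirty pages accumulates, and let
#    writers dirty up to 50% of RAM before they block.
sysctl -w vm.dirty_background_bytes=536870912   # 512 MB
sysctl -w vm.dirty_ratio=50
```

After the mount, varnishd has to be restarted so it recreates the shm log file on the new filesystem.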