From varnish-bugs at varnish-cache.org Mon Nov 4 10:31:20 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 04 Nov 2013 10:31:20 -0000 Subject: [Varnish] #1369: Spinning thread while esi+gzip fetch Message-ID: <046.a959baf0eee18a6849c8587f6527406f@varnish-cache.org> #1369: Spinning thread while esi+gzip fetch ----------------------+------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.4 Severity: normal | Keywords: ----------------------+------------------- We've recently seen that a backend fetch for ESI enabled Gzip-ed content can get stuck and consume 100% CPU. Remaining threads are working fine, but this fetch never finishes. GDB backtrace of the spinning thread: {{{ Thread 102 (Thread 0x7eff6149c700 (LWP 2664)): #0 inflate (strm=0x7effabb049f8, flush=0) at inflate.c:1233 next = 0x7eff614813e8 "[ -- CUT -- ]"... put = 0x0 have = 2199 left = 32768 hold = bits = in = 0 out = 0 copy = 28932 from = 0x7eff614782b0 "[ -- CUT -- ]"... len = 0 ret = 1 hbuf = "?Ha" order = {16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15} #1 0x0000000000424998 in VGZ_Gunzip (vg=0x7effabb049c0, pptr=0x7eff61488300, plen=0x7eff61488308) at cache_gzip.c:290 i = l = before = 0x7eff614782b0 "[ -- CUT -- ]"... 
__func__ = "VGZ_Gunzip" #2 0x000000000041cb4e in vfp_esi_bytes_gg (sp=0x7eff20e02008, htc=0x7eff6149bc90, bytes=6591) at cache_esi_fetch.c:275 w = 6591 vef = 0x7eff210d7040 dl = 0 dp = 0x0 i = __func__ = "vfp_esi_bytes_gg" #3 0x000000000041d64e in vfp_esi_bytes (sp=0x7eff20e02008, htc=0x7eff6149bc90, bytes=6591) at cache_esi_fetch.c:348 i = i = __func__ = "vfp_esi_bytes" #4 0x000000000042329e in fetch_chunked (sp=0x7eff20e02008) at cache_fetch.c:335 __func__ = "fetch_chunked" #5 FetchBody (sp=0x7eff20e02008) at cache_fetch.c:570 cls = 0 st = w = 0x7eff6149ba90 mklen = cl = __func__ = "FetchBody" #6 0x00000000004166b8 in cnt_fetchbody (sp=0x7eff20e02008) at cache_center.c:868 i = hp = 0x7eff61489140 hp2 = b = 0x7eff61489bae "Thu, 31 Oct 2013 03:00:27 +0000" nhttp = 20 l = vary = 0x0 varyl = 0 pass = 1 __func__ = "cnt_fetchbody" #7 0x0000000000418d50 in CNT_Session (sp=0x7eff20e02008) at steps.h:42 done = 0 __func__ = "CNT_Session" }}} Three backtraces are available. The second is similar with the thread at line 1232 in inflate.c, the third follows: {{{ Thread 102 (Thread 0x7eff6149c700 (LWP 2664)): #0 vfp_esi_bytes_gg (sp=0x7eff20e02008, htc=0x7eff6149bc90, bytes=6591) at cache_esi_fetch.c:279 w = 6591 vef = 0x7eff210d7040 dl = 0 dp = 0x7eff614782b0 i = 1 __func__ = "vfp_esi_bytes_gg" #1 0x000000000041d64e in vfp_esi_bytes (sp=0x7eff20e02008, htc=0x7eff6149bc90, bytes=6591) at cache_esi_fetch.c:348 i = __func__ = "vfp_esi_bytes" #2 0x000000000042329e in fetch_chunked (sp=0x7eff20e02008) at cache_fetch.c:335 __func__ = "fetch_chunked" #3 FetchBody (sp=0x7eff20e02008) at cache_fetch.c:570 cls = 0 st = w = 0x7eff6149ba90 mklen = cl = __func__ = "FetchBody" [..] }}} A PCAP of the bitstream is available on request. 
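The backtraces suggest an iteration where the filter makes no progress (note `dl = 0` with `i = 1` in `vfp_esi_bytes_gg`). The following is a hypothetical sketch, not Varnish's actual code: the `vstream`, `pump`, `step_ok`, and `step_stalled` names are invented for illustration. It shows how a decode loop that trusts the codec to always make progress can spin at 100% CPU, and how a defensive loop bails out instead.

```c
#include <stddef.h>

/* Hypothetical sketch (NOT Varnish's code) of the suspected failure mode:
 * an iteration that consumes no input and produces no output, repeated
 * forever. A defensive pump loop detects the stall and fails instead. */

struct vstream {
	size_t in_len;		/* compressed bytes still to consume */
	size_t out_len;		/* decompressed bytes produced so far */
};

/* One decode step; returns bytes produced (0 alone is legal, e.g. while
 * consuming header bytes, as long as input is still being consumed). */
typedef size_t step_f(struct vstream *);

/* Returns 0 when all input is consumed, -1 if the codec stalls. */
static int
pump(struct vstream *vs, step_f *step)
{
	while (vs->in_len > 0) {
		size_t in_before = vs->in_len;
		size_t produced = step(vs);

		if (produced == 0 && vs->in_len == in_before)
			return (-1);	/* no progress: bail out, don't spin */
	}
	return (0);
}

/* Two stand-in codecs for illustration: one consumes its input, one
 * stalls the way the dl == 0 iteration above appears to. */
static size_t
step_ok(struct vstream *vs)
{
	vs->out_len += vs->in_len * 2;	/* pretend 2:1 expansion */
	vs->in_len = 0;
	return (vs->out_len);
}

static size_t
step_stalled(struct vstream *vs)
{
	(void)vs;
	return (0);
}
```

With `step_stalled`, `pump()` returns -1 on the first iteration rather than looping forever; the unguarded equivalent (`while (vs->in_len > 0) step(vs);`) is the spin seen in the thread above.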
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 11 08:14:49 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 11 Nov 2013 08:14:49 -0000 Subject: [Varnish] #1371: test Message-ID: <044.5f6145f21f13ffd1668ad23c1f7f7b94@varnish-cache.org> #1371: test --------------------+--------------------- Reporter: tfheen | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Keywords: --------------------+--------------------- Just a test ticket to see that email still works as it should. Please ignore. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 11 08:15:55 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 11 Nov 2013 08:15:55 -0000 Subject: [Varnish] #1371: test In-Reply-To: <044.5f6145f21f13ffd1668ad23c1f7f7b94@varnish-cache.org> References: <044.5f6145f21f13ffd1668ad23c1f7f7b94@varnish-cache.org> Message-ID: <059.fd222a388bc9f10ab03c4860a92df3da@varnish-cache.org> #1371: test --------------------+---------------------- Reporter: tfheen | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: invalid Keywords: | --------------------+---------------------- Changes (by tfheen): * status: new => closed * resolution: => invalid Comment: It does, great, closing. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 11 13:31:10 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 11 Nov 2013 13:31:10 -0000 Subject: [Varnish] #1370: backtrace() is in libexecinfo on FreeBSD In-Reply-To: <041.1b0ac6324a321974c9110be90c4a320f@varnish-cache.org> References: <041.1b0ac6324a321974c9110be90c4a320f@varnish-cache.org> Message-ID: <056.6ae6349f9282855605654a0d18da4aae@varnish-cache.org> #1370: backtrace() is in libexecinfo on FreeBSD --------------------+--------------------- Reporter: phk | Owner: tfheen Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | --------------------+--------------------- Changes (by tfheen): * status: new => closed * resolution: => fixed Comment: I just fixed this in: {{{ commit 079e68bae34b303abdff49722d656b12f620a0b3 Author: Tollef Fog Heen Date: Mon Nov 11 14:26:25 2013 +0100 Search for backtrace function in libexecinfo }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Nov 12 12:57:36 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 Nov 2013 12:57:36 -0000 Subject: [Varnish] #1372: Consistent crashes with seg fault error 4 in libc-2.12.so Message-ID: <045.a0d66f4881fdf144e127f5a22a64caca@varnish-cache.org> #1372: Consistent crashes with seg fault error 4 in libc-2.12.so -----------------------------+---------------------- Reporter: tomypro | Type: defect Status: new | Priority: highest Milestone: Varnish 3.0 dev | Component: varnishd Version: 3.0.4 | Severity: blocker Keywords: | -----------------------------+---------------------- On a smaller-equipped cloud server, our Varnish is crashing consistently on CentOS (CentOS release 6.4 (Final)) with the following exception: kernel: varnishd[12102]: segfault at 1f7 ip 00007f43c05110ec sp 00007f423ec94a50 error 4 in libc-2.12.so[7f43c04c9000+18a000]
Varnish Version: varnishd (varnish-3.0.4 revision 9f83e8f) varnishncsa does not show any exceptions on crashes startup parameters: /usr/local/sbin/varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,256M -T 127.0.0.1:2000 -a 0.0.0.0:80 -p thread_pools=4 -p thread_pool_min=100 -p thread_pool_max=1000 -p max_restarts=6 -p listen_depth=2048 -p lru_interval=1800 -w 200,2000 -p sess_workspace=32768 If you need any thread dumps I am happy to provide them - please provide instructions. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Nov 12 14:33:51 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 Nov 2013 14:33:51 -0000 Subject: [Varnish] #1373: VCL includes with relative path names are always relative to /etc/varnish Message-ID: <047.a5d2facac732012e5ad5a32cbd5ccfc4@varnish-cache.org> #1373: VCL includes with relative path names are always relative to /etc/varnish -----------------------+---------------------- Reporter: itamar_hc | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.2 | Severity: normal Keywords: | -----------------------+---------------------- I am running varnishd with a config file loaded from some random directory (I'm writing a functional test). It does an include. I would expect includes that are relative paths, e.g.: {{{ include "something.vcl"; }}} to be relative to the path of the VCL file containing the include. In fact, they are always relative to /etc/varnish/. {{{ $ cd /tmp $ touch something.vcl $ echo 'include "something.vcl";' > default.vcl $ varnishd -C -f default.vcl -n /tmp ...
Cannot read file 'something.vcl': No such file or directory }}} If you use "strace -f" you will see: {{{ [pid 13462] open("//etc/varnish/something.vcl", O_RDONLY) = -1 ENOENT (No such file or directory) }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Nov 12 15:11:50 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 12 Nov 2013 15:11:50 -0000 Subject: [Varnish] #1373: VCL includes with relative path names are always relative to /etc/varnish In-Reply-To: <047.a5d2facac732012e5ad5a32cbd5ccfc4@varnish-cache.org> References: <047.a5d2facac732012e5ad5a32cbd5ccfc4@varnish-cache.org> Message-ID: <062.34d660aaf32c441e2c0ea3c24917c8af@varnish-cache.org> #1373: VCL includes with relative path names are always relative to /etc/varnish -----------------------+------------------------- Reporter: itamar_hc | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: worksforme Keywords: | -----------------------+------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: This is working as designed. 
Included files are always resolved relative to the directory given by the 'vcl_dir' parameter. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 13 10:09:40 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 13 Nov 2013 10:09:40 -0000 Subject: [Varnish] #1359: std.ip is not documented In-Reply-To: <043.75be36d7ab66244002f24c7337373279@varnish-cache.org> References: <043.75be36d7ab66244002f24c7337373279@varnish-cache.org> Message-ID: <058.4852d3743f1e78996ddcbdec57959ffe@varnish-cache.org> #1359: std.ip is not documented ---------------------------+--------------------- Reporter: fgsch | Owner: scoof Type: documentation | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | ---------------------------+--------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * resolution: => fixed Comment: In [c6dd10c32b3d4f559fbaf419d0ea57c4ccda141b]: {{{ #!CommitTicketReference repository="" revision="c6dd10c32b3d4f559fbaf419d0ea57c4ccda141b" Go over the vmod_std reference page. Fixes: #1359 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 07:28:45 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 07:28:45 -0000 Subject: [Varnish] #1374: High SWAP usage Message-ID: <043.86ef40551613a8c85d41afe6583b5c09@varnish-cache.org> #1374: High SWAP usage -------------------+---------------------- Reporter: jammy | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.3 | Severity: critical Keywords: | -------------------+---------------------- We are running Varnish 3.0.3 in production, but recently we have been experiencing high SWAP usage. As you can see from the following information, we configured varnishd to use 4G of malloc storage, but we are not sure why it ate SWAP so heavily. I'm looking forward to hearing from the community. #1.
uname -a Linux ip-10-36-1-238 2.6.32-358.14.1.el6.x86_64 #1 SMP Tue Jul 16 23:51:20 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux #2. /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/yottaa.vcl -T 127.0.0.1:6082 -t 120 -w 500,1000,360 -p thread_pool_add_delay 2 -p listen_depth 65535 -p ban_lurker_sleep 1 -p sess_workspace 204800 -p thread_pool_workspace 307200 -p http_req_size 102400 -p http_resp_size 102400 -p http_req_hdr_len 51200 -p http_resp_hdr_len 51200 -p http_max_hdr 500 -S /etc/varnish/secret -s malloc,4066M #3. cat /proc/14464/status VmPeak: 19214032 kB VmSize: 19211984 kB VmLck: 0 kB VmHWM: 5540916 kB VmRSS: 4494520 kB VmData: 19055864 kB VmStk: 152 kB VmExe: 492 kB VmLib: 5820 kB VmPTE: 33332 kB VmSwap: '''3383040''' kB #4. varnishstat -1 n_sess_mem 919 . N struct sess_mem n_sess 12 . N struct sess n_object 94417 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 94622 . N struct objectcore n_objecthead 95253 . N struct objecthead n_waitinglist 720 . N struct waitinglist n_vbc 23 . N struct vbc n_wrk 1000 . N worker threads n_wrk_create 1002 0.00 N worker threads created n_expired 8899069 . N expired objects n_lru_nuked 227237 . N LRU nuked objects n_lru_moved 4688309 . N LRU moved objects losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 5670271 2.97 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 7342009 3.84 Total Sessions s_req 14921754 7.81 Total Requests s_pipe 1057 0.00 Total pipe s_pass 446621 0.23 Total pass s_fetch 10027602 5.25 Total fetch s_hdrbytes 9425706241 4932.10 Total header bytes s_bodybytes 168048482086 87933.18 Total body bytes n_ban 15133 . N total active bans n_ban_gone 10716 . 
N total gone bans n_ban_add 21766 0.01 N new bans added n_ban_retire 6633 0.00 N old bans deleted n_ban_obj_test 3386265 1.77 N objects tested n_ban_re_test 422921545 221.30 N regexps tested against n_ban_dups 5514 0.00 N duplicate bans removed SMA.s0.c_req 1662338 0.87 Allocator requests SMA.s0.c_fail 33874757152 17725.33 Allocator failures SMA.s0.c_bytes 70233567950 36750.47 Bytes allocated SMA.s0.c_freed 66209262888 34644.71 Bytes freed SMA.s0.g_alloc 195376 . Allocations outstanding SMA.s0.g_bytes 4024305062 . Bytes outstanding SMA.s0.g_space 239204954 . Bytes available SMA.Transient.c_req 106362180 55.66 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 12698410805901 6644580.25 Bytes allocated SMA.Transient.c_freed 12698409165965 6644579.39 Bytes freed SMA.Transient.g_alloc 1139 . Allocations outstanding SMA.Transient.g_bytes 1639936 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 11:09:50 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 11:09:50 -0000 Subject: [Varnish] #1374: High SWAP usage In-Reply-To: <043.86ef40551613a8c85d41afe6583b5c09@varnish-cache.org> References: <043.86ef40551613a8c85d41afe6583b5c09@varnish-cache.org> Message-ID: <058.f702e05d15503bf71c6b7501dcf7b4e0@varnish-cache.org> #1374: High SWAP usage ----------------------+-------------------- Reporter: jammy | Owner: Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.3 Severity: critical | Resolution: Keywords: | ----------------------+-------------------- Description changed by tfheen: Old description: > We are running varnish 3.0.3 in production. but recently, we are > experiencing high SWAP usage recently. As you can see from the following > information, we configured varnished to use 4G malloc storage, but not > sure why it ate SWAP so heavily. 
> > I'm looking forward to hearing from the community. > > #1. uname -a > Linux ip-10-36-1-238 2.6.32-358.14.1.el6.x86_64 #1 SMP Tue Jul 16 > 23:51:20 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux > > #2. > /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f > /etc/varnish/yottaa.vcl -T 127.0.0.1:6082 -t 120 -w 500,1000,360 -p > thread_pool_add_delay 2 -p listen_depth 65535 -p ban_lurker_sleep 1 -p > sess_workspace 204800 -p thread_pool_workspace 307200 -p http_req_size > 102400 -p http_resp_size 102400 -p http_req_hdr_len 51200 -p > http_resp_hdr_len 51200 -p http_max_hdr 500 -S /etc/varnish/secret -s > malloc,4066M > > #3. cat /proc/14464/status > VmPeak: 19214032 kB > VmSize: 19211984 kB > VmLck: 0 kB > VmHWM: 5540916 kB > VmRSS: 4494520 kB > VmData: 19055864 kB > VmStk: 152 kB > VmExe: 492 kB > VmLib: 5820 kB > VmPTE: 33332 kB > VmSwap: '''3383040''' kB > > #4. varnishstat -1 > > n_sess_mem 919 . N struct sess_mem > n_sess 12 . N struct sess > n_object 94417 . N struct object > n_vampireobject 0 . N unresurrected objects > n_objectcore 94622 . N struct objectcore > n_objecthead 95253 . N struct objecthead > n_waitinglist 720 . N struct waitinglist > n_vbc 23 . N struct vbc > n_wrk 1000 . N worker threads > n_wrk_create 1002 0.00 N worker threads created > > n_expired 8899069 . N expired objects > n_lru_nuked 227237 . N LRU nuked objects > n_lru_moved 4688309 . N LRU moved objects > losthdr 0 0.00 HTTP header overflows > n_objsendfile 0 0.00 Objects sent with sendfile > n_objwrite 5670271 2.97 Objects sent with write > n_objoverflow 0 0.00 Objects overflowing workspace > s_sess 7342009 3.84 Total Sessions > s_req 14921754 7.81 Total Requests > s_pipe 1057 0.00 Total pipe > s_pass 446621 0.23 Total pass > s_fetch 10027602 5.25 Total fetch > s_hdrbytes 9425706241 4932.10 Total header bytes > s_bodybytes 168048482086 87933.18 Total body bytes > > n_ban 15133 . N total active bans > n_ban_gone 10716 . 
N total gone bans > n_ban_add 21766 0.01 N new bans added > n_ban_retire 6633 0.00 N old bans deleted > n_ban_obj_test 3386265 1.77 N objects tested > n_ban_re_test 422921545 221.30 N regexps tested against > n_ban_dups 5514 0.00 N duplicate bans removed > > SMA.s0.c_req 1662338 0.87 Allocator requests > SMA.s0.c_fail 33874757152 17725.33 Allocator failures > SMA.s0.c_bytes 70233567950 36750.47 Bytes allocated > SMA.s0.c_freed 66209262888 34644.71 Bytes freed > SMA.s0.g_alloc 195376 . Allocations outstanding > SMA.s0.g_bytes 4024305062 . Bytes outstanding > SMA.s0.g_space 239204954 . Bytes available > SMA.Transient.c_req 106362180 55.66 Allocator requests > SMA.Transient.c_fail 0 0.00 Allocator failures > SMA.Transient.c_bytes 12698410805901 6644580.25 Bytes allocated > SMA.Transient.c_freed 12698409165965 6644579.39 Bytes freed > SMA.Transient.g_alloc 1139 . Allocations outstanding > SMA.Transient.g_bytes 1639936 . Bytes outstanding > SMA.Transient.g_space 0 . Bytes available New description: We are running varnish 3.0.3 in production. but recently, we are experiencing high SWAP usage recently. As you can see from the following information, we configured varnished to use 4G malloc storage, but not sure why it ate SWAP so heavily. I'm looking forward to hearing from the community. {{{ #1. uname -a Linux ip-10-36-1-238 2.6.32-358.14.1.el6.x86_64 #1 SMP Tue Jul 16 23:51:20 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux }}} {{{ #2. /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/yottaa.vcl -T 127.0.0.1:6082 -t 120 -w 500,1000,360 -p thread_pool_add_delay 2 -p listen_depth 65535 -p ban_lurker_sleep 1 -p sess_workspace 204800 -p thread_pool_workspace 307200 -p http_req_size 102400 -p http_resp_size 102400 -p http_req_hdr_len 51200 -p http_resp_hdr_len 51200 -p http_max_hdr 500 -S /etc/varnish/secret -s malloc,4066M }}} {{{ #3. 
cat /proc/14464/status VmPeak: 19214032 kB VmSize: 19211984 kB VmLck: 0 kB VmHWM: 5540916 kB VmRSS: 4494520 kB VmData: 19055864 kB VmStk: 152 kB VmExe: 492 kB VmLib: 5820 kB VmPTE: 33332 kB VmSwap: '''3383040''' kB }}} {{{ #4. varnishstat -1 n_sess_mem 919 . N struct sess_mem n_sess 12 . N struct sess n_object 94417 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 94622 . N struct objectcore n_objecthead 95253 . N struct objecthead n_waitinglist 720 . N struct waitinglist n_vbc 23 . N struct vbc n_wrk 1000 . N worker threads n_wrk_create 1002 0.00 N worker threads created n_expired 8899069 . N expired objects n_lru_nuked 227237 . N LRU nuked objects n_lru_moved 4688309 . N LRU moved objects losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 5670271 2.97 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 7342009 3.84 Total Sessions s_req 14921754 7.81 Total Requests s_pipe 1057 0.00 Total pipe s_pass 446621 0.23 Total pass s_fetch 10027602 5.25 Total fetch s_hdrbytes 9425706241 4932.10 Total header bytes s_bodybytes 168048482086 87933.18 Total body bytes n_ban 15133 . N total active bans n_ban_gone 10716 . N total gone bans n_ban_add 21766 0.01 N new bans added n_ban_retire 6633 0.00 N old bans deleted n_ban_obj_test 3386265 1.77 N objects tested n_ban_re_test 422921545 221.30 N regexps tested against n_ban_dups 5514 0.00 N duplicate bans removed SMA.s0.c_req 1662338 0.87 Allocator requests SMA.s0.c_fail 33874757152 17725.33 Allocator failures SMA.s0.c_bytes 70233567950 36750.47 Bytes allocated SMA.s0.c_freed 66209262888 34644.71 Bytes freed SMA.s0.g_alloc 195376 . Allocations outstanding SMA.s0.g_bytes 4024305062 . Bytes outstanding SMA.s0.g_space 239204954 . 
Bytes available SMA.Transient.c_req 106362180 55.66 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 12698410805901 6644580.25 Bytes allocated SMA.Transient.c_freed 12698409165965 6644579.39 Bytes freed SMA.Transient.g_alloc 1139 . Allocations outstanding SMA.Transient.g_bytes 1639936 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 11:18:02 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 11:18:02 -0000 Subject: [Varnish] #1070: varnishlog -k requires -O In-Reply-To: <046.5aa0fd0a659518b6d1c4b1f4337ca280@varnish-cache.org> References: <046.5aa0fd0a659518b6d1c4b1f4337ca280@varnish-cache.org> Message-ID: <061.5e965c2ef62c8fd960a96e20934336ce@varnish-cache.org> #1070: varnishlog -k requires -O ------------------------+--------------------- Reporter: kristian | Owner: martin Type: defect | Status: closed Priority: normal | Milestone: Component: varnishlog | Version: 3.0.2 Severity: normal | Resolution: fixed Keywords: | ------------------------+--------------------- Changes (by martin): * status: new => closed * resolution: => fixed Comment: With the new Varnish 4.0 logging API this isn't an issue any more. Closing ticket. 
Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 11:23:41 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 11:23:41 -0000 Subject: [Varnish] #937: missing files in varnish-libs-devel rpm package In-Reply-To: <043.9bed421a56403203c823b6e9dcc79ebf@varnish-cache.org> References: <043.9bed421a56403203c823b6e9dcc79ebf@varnish-cache.org> Message-ID: <058.1be7493e3c28d6657059755c3e25f5c0@varnish-cache.org> #937: missing files in varnish-libs-devel rpm package -----------------------+------------------------------ Reporter: fr3nd | Owner: tfheen Type: defect | Status: closed Priority: normal | Milestone: Varnish 3.0 dev Component: packaging | Version: trunk Severity: normal | Resolution: fixed Keywords: rpm vmod | -----------------------+------------------------------ Changes (by tfheen): * status: new => closed * resolution: => fixed Comment: We now ship enough to compile vmods in master, so closing this. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 11:28:50 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 11:28:50 -0000 Subject: [Varnish] #1247: yum installation error "[Errno -1] Header is not complete." In-Reply-To: <044.e3db3b22021d9bf26d4bc79f8b887423@varnish-cache.org> References: <044.e3db3b22021d9bf26d4bc79f8b887423@varnish-cache.org> Message-ID: <059.adfbbf2d4edf9713795e06c17a6fc3fd@varnish-cache.org> #1247: yum installation error "[Errno -1] Header is not complete." 
--------------------+------------------------- Reporter: Damien | Owner: tfheen Type: defect | Status: closed Priority: high | Milestone: Component: build | Version: trunk Severity: major | Resolution: worksforme Keywords: | --------------------+------------------------- Changes (by tfheen): * status: new => closed * resolution: => worksforme Comment: I've not heard any other reports about this, and everything looks fine here, so closing. If you download manually, you need to download the libvarnishapi package as well. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 11:30:33 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 11:30:33 -0000 Subject: [Varnish] #1267: Setup stack side through sysconfig varnish configuration In-Reply-To: <054.c672f506174fbd32b2a1f8b9713528f1@varnish-cache.org> References: <054.c672f506174fbd32b2a1f8b9713528f1@varnish-cache.org> Message-ID: <069.e7c6a2342a45db32cb25ac1a5c7e4950@varnish-cache.org> #1267: Setup stack side through sysconfig varnish configuration ------------------------------+------------------------------ Reporter: andrii.grytsenko | Owner: Type: enhancement | Status: closed Priority: normal | Milestone: Varnish 3.0 dev Component: packaging | Version: 3.0.3 Severity: normal | Resolution: wontfix Keywords: | ------------------------------+------------------------------ Changes (by tfheen): * status: new => closed * resolution: => wontfix Comment: I would rather not add more and more settings for special-requirement OSes. Just put it in your sysconfig yourself and it should work fine.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 11:32:51 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 11:32:51 -0000 Subject: [Varnish] #1268: 'shortlived' does not consider grace/keep In-Reply-To: <043.b6f261f0eca41d964db078b4b18d6e88@varnish-cache.org> References: <043.b6f261f0eca41d964db078b4b18d6e88@varnish-cache.org> Message-ID: <058.0ccc46847c24c5d0dd7dcf127c87080a@varnish-cache.org> #1268: 'shortlived' does not consider grace/keep --------------------+-------------------- Reporter: daghf | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+-------------------- Changes (by phk): * owner: daghf => phk -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 11:38:23 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 11:38:23 -0000 Subject: [Varnish] #1278: missing counter - s_error In-Reply-To: <043.97e9443c31428c68989de969a620ae43@varnish-cache.org> References: <043.97e9443c31428c68989de969a620ae43@varnish-cache.org> Message-ID: <058.d3bf422222f56ed73e192cec79b3c23c@varnish-cache.org> #1278: missing counter - s_error -------------------------+--------------------- Reporter: perbu | Owner: martin Type: enhancement | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | -------------------------+--------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: We have this counter now. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 11:40:49 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 11:40:49 -0000 Subject: [Varnish] #1207: varnishd child segfaults when using dns director In-Reply-To: <046.cbd6b84dea250737a025adc3c0105555@varnish-cache.org> References: <046.cbd6b84dea250737a025adc3c0105555@varnish-cache.org> Message-ID: <061.5d739d2fc3bf489856385a6b8a1ee07f@varnish-cache.org> #1207: varnishd child segfaults when using dns director --------------------------+---------------------- Reporter: econnell | Owner: drwilco Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: invalid Keywords: dns director | --------------------------+---------------------- Changes (by phk): * status: new => closed * resolution: => invalid Comment: DNS director lives in VMOD now. Closing this ticket. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 11:42:27 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 11:42:27 -0000 Subject: [Varnish] #1292: Varnish restarts itself, when processing response from backend In-Reply-To: <042.4b2b91c80836415cc2d25b9977d04c7c@varnish-cache.org> References: <042.4b2b91c80836415cc2d25b9977d04c7c@varnish-cache.org> Message-ID: <057.850baa100f2676ce0d99e5f946ce4bd9@varnish-cache.org> #1292: Varnish restarts itself, when processing response from backend ----------------------+------------------------------ Reporter: ixos | Owner: Type: defect | Status: closed Priority: normal | Milestone: Varnish 3.0 dev Component: varnishd | Version: 3.0.3 Severity: major | Resolution: worksforme Keywords: | ----------------------+------------------------------ Changes (by martin): * status: new => closed * resolution: => worksforme Comment: Closing as timeout after private email correspondence. 
Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 11:45:47 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 11:45:47 -0000 Subject: [Varnish] #1294: Varnish diagrams don't mention hash_always_miss In-Reply-To: <044.6c7c8f95fb47baa3eda72dedbb11b31f@varnish-cache.org> References: <044.6c7c8f95fb47baa3eda72dedbb11b31f@varnish-cache.org> Message-ID: <059.808a79fb864e549f6f84ef3e492c6fd0@varnish-cache.org> #1294: Varnish diagrams don't mention hash_always_miss ---------------------------+------------------------- Reporter: allanc | Owner: perbu Type: documentation | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: worksforme Keywords: | ---------------------------+------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: I'm timing this ticket out. Varnish 4 changes so much that it loses its relevance. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 11:48:25 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 11:48:25 -0000 Subject: [Varnish] #1327: Crash if http_max_hdr is not multiple of 4 In-Reply-To: <043.582a578349c23acd06cc444ba23f147c@varnish-cache.org> References: <043.582a578349c23acd06cc444ba23f147c@varnish-cache.org> Message-ID: <058.89516e5109fae72c829ae20d041b6100@varnish-cache.org> #1327: Crash if http_max_hdr is not multiple of 4 ----------------------+------------------------- Reporter: Dan42 | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: worksforme Keywords: | ----------------------+------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: Timing out.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 11:53:56 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 11:53:56 -0000 Subject: [Varnish] #1316: req.grace = 0s does not disable grace In-Reply-To: <043.7d816f9073ee4793116f3b2e66ed25f1@varnish-cache.org> References: <043.7d816f9073ee4793116f3b2e66ed25f1@varnish-cache.org> Message-ID: <058.f21960a99a4ae9de4aabd88ae1a94545@varnish-cache.org> #1316: req.grace = 0s does not disable grace --------------------+---------------------- Reporter: daghf | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: invalid Keywords: | --------------------+---------------------- Changes (by daghf): * status: new => closed * resolution: => invalid Comment: Doesn't apply any more since req.grace is gone. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 12:28:57 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 12:28:57 -0000 Subject: [Varnish] #1372: Consistent crashes with seg fault error 4 in libc-2.12.so In-Reply-To: <045.a0d66f4881fdf144e127f5a22a64caca@varnish-cache.org> References: <045.a0d66f4881fdf144e127f5a22a64caca@varnish-cache.org> Message-ID: <060.8b1606fbc2b1fa6ef8652de6514400fc@varnish-cache.org> #1372: Consistent crashes with seg fault error 4 in libc-2.12.so ----------------------+------------------------------ Reporter: tomypro | Owner: Type: defect | Status: new Priority: highest | Milestone: Varnish 3.0 dev Component: varnishd | Version: 3.0.4 Severity: blocker | Resolution: Keywords: | ----------------------+------------------------------ Comment (by martin): Hi, Could you attach your VCL configuration, and also please enumerate any VMODs you are using. Could you please also install the debug symbols package, and do a GDB backtrace of a crash? 
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 12:38:53 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 12:38:53 -0000 Subject: [Varnish] #1229: Varnish Banlurker patch In-Reply-To: <043.574a4f7db89dadc5fc35d65d2f26f366@varnish-cache.org> References: <043.574a4f7db89dadc5fc35d65d2f26f366@varnish-cache.org> Message-ID: <058.ed8d334760bada91642ff42a5153ad68@varnish-cache.org> #1229: Varnish Banlurker patch ------------------------------------------------+-------------------- Reporter: celly | Owner: Type: enhancement | Status: new Priority: high | Milestone: Later Component: varnishd | Version: 3.0.3 Severity: critical | Resolution: Keywords: banlist, huge, performance, lurker | ------------------------------------------------+-------------------- Comment (by martin): Another ban related patch that should be looked at in conjunction with this: https://www.varnish-cache.org/patchwork/patch/97/ Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 18 14:18:27 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 18 Nov 2013 14:18:27 -0000 Subject: [Varnish] #957: Bug in varnishstat related to the -f option In-Reply-To: <045.f672148c0581799d40cc68509dc3b9da@varnish-cache.org> References: <045.f672148c0581799d40cc68509dc3b9da@varnish-cache.org> Message-ID: <060.5d46ea2b7e97134908cd6452b060bc6e@varnish-cache.org> #957: Bug in varnishstat related to the -f option -------------------------+--------------------- Reporter: leed25d | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishstat | Version: 3.0.0 Severity: major | Resolution: Keywords: | -------------------------+--------------------- Comment (by martin): I suggest the following to resolve this issue: - Drop ',' separated filters. 
This fixes specifying names containing ',' - Multiple filters need to be specified using multiple -f options - Specifying '.' characters needs to be done using escapes (the alternative is to use some other character for delimiting VSC fields, which I think makes for a much bigger change) Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Nov 19 14:02:34 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 19 Nov 2013 14:02:34 -0000 Subject: [Varnish] #1375: Varnish performance appears to be impacted by the presence of many vary headers Message-ID: <046.e6aab9a8880d23ef3a990bd54017648f@varnish-cache.org> #1375: Varnish performance appears to be impacted by the presence of many vary headers ----------------------+---------------------- Reporter: closer01 | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.4 | Severity: normal Keywords: | ----------------------+---------------------- Varnish performance appears to be significantly impacted by the presence of many vary headers == Background == We use Varnish as our primary caching layer on a large platform. We've seen unpredictable behaviour in Varnish in front of web applications, characterised by long response times. Having investigated the response times of our backends and having run a number of loadtests, we believe we have isolated the problem to Varnish's caching behaviour around vary headers. Our own applications vary on a number of headers, each with a number of variations. We believe that Varnish demonstrates a significant performance decrease when responses vary on a high number of headers, and a substantial performance decrease when varying on a moderate number of headers with many variations. == Investigation == We've been able to reproduce this problem outside of our platform environment on both Varnish 2 and Varnish 3 when running vanilla VCL.
The scenarios detailed below show: - A page that varies on a high number of headers but only one variation per header - A page that varies on one header but with a high number of variations - A page that varies on a number of headers which have a number of variations. We ran our scenarios against both Varnish 2.0.1 and Varnish 3.0.4 for comparison. We've included data against Varnish 2.0.1 where we feel it's of interest, but are raising this as an issue in Varnish 3.0.4. === Testing Environment === We used AWS as our testing environment; further information on our instance sizes is documented here: http://aws.amazon.com/ec2/instance-types/instance-details/ ==== Varnish setup ==== - 1 64-bit, 'General Purpose' m1.medium instance - RHEL 5.10 - A simple demo Node.js application as a backend running locally ==== Loadtest setup ==== - 1 64-bit, 'General Purpose' m1.medium instance - Load generated by JMeter against a single endpoint with immediate ramp up - Test duration: 3 minutes - "Concurrent users": 800 Both our Varnish VM and loadtesting VM ran in the same availability zone and should not have been subject to high network latency. === Scenarios === ==== Many Headers, 1 Variation per Header (Ref: Horizontal) ==== - Page varies on 400 headers - Each header has only one value - Each request supplies one randomly chosen header out of the 400 ==== Many Variations, 1 Header (Ref: Vertical) ==== - Page varies on 1 header - This 1 header has 400 variations - Each request supplies a random value between 1-400 for this single header ==== Many Variations, Many Headers (Ref: Diagonal) ==== - Page varies on 20 headers - Each of those headers varies on 20 values - Each request supplies a random value between 1-20 for a randomly chosen header from the 20. === Results === Attached are graphs showing response times over time for each Varnish 3.0.4 scenario.
==== Response Times ==== v2 - Varnish 2.0.1, v3 - Varnish 3.0.4
|| Varnish || Scenario || Min (ms) || Mean (ms) || Max (ms) || Standard Deviation (ms) || Successful Requests (%) || Throughput (req/sec) ||
|| v2 || Horizontal || 2 || 3948 || 19763 || 3635.8 || 56.68 || 174.2 ||
|| v3 || Horizontal || 3 || 14546 || 139447 || 21897.94 || 95.49 || 28.7 ||
|| || || || || || || || ||
|| v2 || Diagonal || 1 || 385 || 1877 || 220.17 || 100 || 219.0 ||
|| v3 || Diagonal || 1 || 374 || 2520 || 214.65 || 100 || 221.0 ||
|| || || || || || || || ||
|| v2 || Vertical || 1 || 288 || 1347 || 170.06 || 100 || 297.6 ||
|| v3 || Vertical || 1 || 282 || 1409 || 168.48 || 100 || 298.2 ||
==== Varnish Stats ==== During 'horizontal' testing we observed that both versions of Varnish seemed to restart themselves frequently due to segfaults. Data recorded from varnishstat is therefore incomplete for those situations.
|| Varnish || Scenario || Hits || Misses || Percentage Hits (%) ||
|| v2 || Diagonal || 36772 || 6565 || 84.9 ||
|| v3 || Diagonal || 37094 || 6583 || 84.9 ||
|| || || || || ||
|| v2 || Vertical || 51808 || 6824 || 88.4 ||
|| v3 || Vertical || 52114 || 6823 || 88.4 ||
=== Conclusions === Our 'horizontal' tests exhibit behaviour in Varnish that results in very long response times. Our 'diagonal' tests appear to exhibit a 25% lower throughput (relative to the 'vertical' tests), a higher average response time and higher peaks in response time. Whilst the 'horizontal' scenario doesn't correspond to a realistic application, we believe this demonstrates the extremity of a problem within Varnish. We believe we're experiencing a more extreme variant of the 'diagonal' behaviour on our own platform and were able to reproduce this in our initial load tests against that platform. Hence, we have reason to believe that Varnish's implementation of caching variations is the root cause.
To add context, we noticed and became interested in this behaviour as some of our applications vary on up to 10 headers and were failing to respond under moderate load. We were noticing our backends responding quickly, but when looking through our Varnish logs, requests appeared to take a very long time within Varnish itself. ==== Other Observations ==== In the scenarios above, an equal number of variations should be present in each scenario and we'd expect to see reasonably consistent behaviour across those three scenarios. For the available 'diagonal' scenario, our high hit ratio suggests our requests to the backend should have been low, and given our application was running locally, we are unable to attribute the lower throughput to network latency. With respect to the 'horizontal' scenario, the presence of segfaulting may also suggest Varnish struggles to cope with pages that vary on many headers. == A Solution == We're not able to present a solution or patch to improve this behaviour, although we have observed changes made in this area of the code previously: https://github.com/varnish/Varnish-Cache/commit/7bc0068d8f422c917042e35867e00a19f8956f46 === Attachments === Graphs for Varnish 3.0.4 test results: - Horizontal.jpg - Vertical.jpg - Diagonal.jpg -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Nov 19 14:13:00 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 19 Nov 2013 14:13:00 -0000 Subject: [Varnish] #1375: Varnish performance appears to be impacted by the presence of many vary headers In-Reply-To: <046.e6aab9a8880d23ef3a990bd54017648f@varnish-cache.org> References: <046.e6aab9a8880d23ef3a990bd54017648f@varnish-cache.org> Message-ID: <061.ec06621f7cac0c5e0ca561c51003f650@varnish-cache.org> #1375: Varnish performance appears to be impacted by the presence of many vary headers ----------------------+-------------------- Reporter: closer01 | Owner: Type: defect | Status: new Priority: normal |
Milestone: Component: varnishd | Version: 3.0.4 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by closer01): There should only be three png attachments; duplicates were added incorrectly. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Nov 19 16:38:00 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 19 Nov 2013 16:38:00 -0000 Subject: [Varnish] #957: Bug in varnishstat related to the -f option In-Reply-To: <045.f672148c0581799d40cc68509dc3b9da@varnish-cache.org> References: <045.f672148c0581799d40cc68509dc3b9da@varnish-cache.org> Message-ID: <060.603c9989620f60ebdd29d718445e2e93@varnish-cache.org> #957: Bug in varnishstat related to the -f option -------------------------+--------------------- Reporter: leed25d | Owner: martin Type: defect | Status: closed Priority: normal | Milestone: Component: varnishstat | Version: 3.0.0 Severity: major | Resolution: fixed Keywords: | -------------------------+--------------------- Changes (by Martin Blix Grydeland ): * status: new => closed * resolution: => fixed Comment: In [154aea3248aa7465878f8261ef3b528c111d882e]: {{{ #!CommitTicketReference repository="" revision="154aea3248aa7465878f8261ef3b528c111d882e" Fix parsing of -f arguments in varnishstat (and vsc) Removed the comma-separated delimiting of this option parsing, as that conflicted with comma characters often used in backend names. Use multiple arguments to have several filters. Parser honors backslash escapes. This makes it possible to list names containing '.' (also common with backend names).
Fixes: #957 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 20 07:54:23 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 20 Nov 2013 07:54:23 -0000 Subject: [Varnish] #1374: High SWAP usage In-Reply-To: <043.86ef40551613a8c85d41afe6583b5c09@varnish-cache.org> References: <043.86ef40551613a8c85d41afe6583b5c09@varnish-cache.org> Message-ID: <058.4df31524f660e3a47d44138af50308e1@varnish-cache.org> #1374: High SWAP usage ----------------------+-------------------- Reporter: jammy | Owner: Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.3 Severity: critical | Resolution: Keywords: | ----------------------+-------------------- Comment (by jammy): @tfheen thanks for formatting the ticket ;) Any thoughts on the issue? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 20 08:36:07 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 20 Nov 2013 08:36:07 -0000 Subject: [Varnish] #1376: Child crash due to a bad file descriptor / socket with esi_level > 0 Message-ID: <044.394989a6021a73d46aa2372d90281428@varnish-cache.org> #1376: Child crash due to a bad file descriptor / socket with esi_level > 0 ---------------------------------------+---------------------- Reporter: centur | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 3.0.4 | Severity: normal Keywords: esi, panic, child restart | ---------------------------------------+---------------------- Since we updated from varnish 3.0.3 to 3.0.4, the varnish child crashes every 1-3 days in our production system with the following panic message: Panic message: Assert error in VRT_r_server_ip(), cache_vrt_var.c line 473: Condition(VTCP_Check(i)) not true. errno = 9 (Bad file descriptor). The panic occurs only with esi_level > 0. We cannot reproduce the problem under controlled conditions; it appears randomly.
I had a look in the varnish source files; the crash occurs in the function getsockname(int sockfd, struct sockaddr *addr, socklen_t *addrlen), obviously failing due to a bad file descriptor. Our OS: Linux version 3.2.0-4-amd64 (debian-kernel at lists.debian.org) (gcc version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian 3.2.51-1 (Linux Wheezy) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 20 13:24:55 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 20 Nov 2013 13:24:55 -0000 Subject: [Varnish] #1376: Child crash due to a bad file descriptor / socket with esi_level > 0 In-Reply-To: <044.394989a6021a73d46aa2372d90281428@varnish-cache.org> References: <044.394989a6021a73d46aa2372d90281428@varnish-cache.org> Message-ID: <059.141b172e8b7581075722ea3305934336@varnish-cache.org> #1376: Child crash due to a bad file descriptor / socket with esi_level > 0 ---------------------------------------+-------------------- Reporter: centur | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.4 Severity: normal | Resolution: Keywords: esi, panic, child restart | ---------------------------------------+-------------------- Comment (by martin): Set param log_local_addr to true to work around this problem.
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 20 15:57:51 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 20 Nov 2013 15:57:51 -0000 Subject: [Varnish] #1377: varnishlog -I does not play well with -i or -m Message-ID: <043.a4846ba07e044099acaa5a616f79536a@varnish-cache.org> #1377: varnishlog -I does not play well with -i or -m -------------------+--------------------------- Reporter: frisi | Type: documentation Status: new | Priority: normal Milestone: | Component: varnishlog Version: 3.0.4 | Severity: normal Keywords: | -------------------+--------------------------- I want varnishlog to only show entries for a certain URL (using the -m switch), which works fine: {{{ varnishlog -c -m 'RxURL:^/my/folder.*$' }}} After that I'd like to limit the output to certain tags/headers I'm interested in: {{{ varnishlog -c -m 'RxURL:^/my/folder.*$' -i RxRequest -i RxURL -i TxStatus }}} In addition I'd like to see the Host header. I could add another {{{-i RxHeader}}}, but this would give me too much information. So I try {{{ varnishlog -c -m 'RxURL:^/my/folder.*$' -i RxRequest -i RxURL -i TxStatus -I 'Host:' }}} which results in absolutely no output. The same happens when I skip all -i options: {{{ varnishlog -c -m 'RxURL:^/my/folder.*$' -I 'Host:' }}} It seems -I can't be combined with the -i and -m options. The only thing that works is {{{ varnishlog -I 'Host:' }}} If I understand correctly, I should be able to add certain tags via {{{-i}}} and additionally print certain log entries that match my regex(es) provided by {{{-I}}} {{{ manpage excerpt: -I regex Include log entries which match the specified regular expression. If neither -I nor -i is specified, all log entries are included. -i tag Include log entries with the specified tag. If neither -I nor -i is specified, all log entries are included.
}}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 22 09:51:26 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 22 Nov 2013 09:51:26 -0000 Subject: [Varnish] #1376: Child crash due to a bad file descriptor / socket with esi_level > 0 In-Reply-To: <044.394989a6021a73d46aa2372d90281428@varnish-cache.org> References: <044.394989a6021a73d46aa2372d90281428@varnish-cache.org> Message-ID: <059.9e0fa80aeb5d4158ccc2773f2ffb5950@varnish-cache.org> #1376: Child crash due to a bad file descriptor / socket with esi_level > 0 -------------------------------------+------------------------------------- Reporter: centur | Owner: Poul-Henning Kamp Type: defect | Priority: normal | Status: closed Component: varnishd | Milestone: Severity: normal | Version: 3.0.4 Keywords: esi, panic, child | Resolution: fixed restart | -------------------------------------+------------------------------------- Changes (by Poul-Henning Kamp ): * status: new => closed * owner: => Poul-Henning Kamp * resolution: => fixed Comment: In [180e785d8aaaac7abb1f47645ae4691b6ac8357a]: {{{ #!CommitTicketReference repository="" revision="180e785d8aaaac7abb1f47645ae4691b6ac8357a" Always pull the local address of the socket out right away and log it. Parallel use of sockets would force us to add locking and that's simply not worth it. 
Also: Fixes #1376 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 22 11:54:59 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 22 Nov 2013 11:54:59 -0000 Subject: [Varnish] #1376: Child crash due to a bad file descriptor / socket with esi_level > 0 In-Reply-To: <044.394989a6021a73d46aa2372d90281428@varnish-cache.org> References: <044.394989a6021a73d46aa2372d90281428@varnish-cache.org> Message-ID: <059.038491455f4511dbd1adcab7b8b32f54@varnish-cache.org> #1376: Child crash due to a bad file descriptor / socket with esi_level > 0 -------------------------------------+------------------------------------- Reporter: centur | Owner: Poul-Henning Kamp Type: defect | Priority: normal | Status: closed Component: varnishd | Milestone: Severity: normal | Version: 3.0.4 Keywords: esi, panic, child | Resolution: fixed restart | -------------------------------------+------------------------------------- Comment (by Martin Blix Grydeland ): In [6d74c0e41d01d58d38550ce1652c257268719eb3]: {{{ #!CommitTicketReference repository="" revision="6d74c0e41d01d58d38550ce1652c257268719eb3" Always pull the local address of the socket out right away. 
Fixes: #1376 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Nov 23 06:02:21 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 23 Nov 2013 06:02:21 -0000 Subject: [Varnish] #1372: Consistent crashes with seg fault error 4 in libc-2.12.so In-Reply-To: <045.a0d66f4881fdf144e127f5a22a64caca@varnish-cache.org> References: <045.a0d66f4881fdf144e127f5a22a64caca@varnish-cache.org> Message-ID: <060.514e2c85948681841e577004967b2610@varnish-cache.org> #1372: Consistent crashes with seg fault error 4 in libc-2.12.so ----------------------+------------------------------ Reporter: tomypro | Owner: Type: defect | Status: new Priority: highest | Milestone: Varnish 3.0 dev Component: varnishd | Version: 3.0.4 Severity: blocker | Resolution: Keywords: | ----------------------+------------------------------ Comment (by tomypro): Hey Martin, I was able to get a stacktrace when attaching gdb to a running instance (see below). Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 0x7fee0cbf9700 (LWP 5280)] 0x00007fee1a34a0ec in vfprintf () from /lib64/libc.so.6 (gdb) Hope this helps. Happy to provide further instructions. We are experiencing this issue consistently on CentOS 6 cloud servers.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Nov 25 16:30:11 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 25 Nov 2013 16:30:11 -0000 Subject: [Varnish] #1376: Child crash due to a bad file descriptor / socket with esi_level > 0 In-Reply-To: <044.394989a6021a73d46aa2372d90281428@varnish-cache.org> References: <044.394989a6021a73d46aa2372d90281428@varnish-cache.org> Message-ID: <059.296c417722dcb81f1bfa6193985f66f2@varnish-cache.org> #1376: Child crash due to a bad file descriptor / socket with esi_level > 0 -------------------------------------+------------------------------------- Reporter: centur | Owner: Poul-Henning Kamp Type: defect | Priority: normal | Status: closed Component: varnishd | Milestone: Severity: normal | Version: 3.0.4 Keywords: esi, panic, child | Resolution: fixed restart | -------------------------------------+------------------------------------- Comment (by centur): Workaround solved our problem, no more panic crashes. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Nov 26 18:51:14 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 26 Nov 2013 18:51:14 -0000 Subject: [Varnish] #942: Varnish stalls receiving a "weird" response from a backend In-Reply-To: <049.b4cbf0e011acc3443ab27e603b59956b@varnish-cache.org> References: <049.b4cbf0e011acc3443ab27e603b59956b@varnish-cache.org> Message-ID: <064.c8e392dbafacaf24227aaa89270b64b3@varnish-cache.org> #942: Varnish stalls receiving a "weird" response from a backend -------------------------+---------------------------------------- Reporter: andreacampi | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | -------------------------+---------------------------------------- Changes (by Poul-Henning Kamp ): * owner: => Poul-Henning Kamp Comment: In [41f7a356e2be38f03428589710d163bd4110d9fd]: {{{ #!CommitTicketReference repository="" revision="41f7a356e2be38f03428589710d163bd4110d9fd" Fix an oversight when we closed #942: The exact same condition can happen if we gunzip on fetch. 
}}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Nov 27 13:20:07 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 27 Nov 2013 13:20:07 -0000 Subject: [Varnish] #416: Segfault In-Reply-To: <041.42f511fe181f99ea7e2c7534cd763727@varnish-cache.org> References: <041.42f511fe181f99ea7e2c7534cd763727@varnish-cache.org> Message-ID: <056.76abbcca2ed5b96f531be390e19b128d@varnish-cache.org> #416: Segfault --------------------+----------------------- Reporter: sky | Owner: sky Type: defect | Status: reopened Priority: normal | Milestone: Component: build | Version: 2.0 Severity: normal | Resolution: Keywords: | --------------------+----------------------- Changes (by phk): * status: closed => reopened * resolution: fixed => Old description: > {{{ > #0 0x00007ff58d931095 in raise () from /lib/libc.so.6 > #1 0x00007ff58d932af0 in abort () from /lib/libc.so.6 > #2 0x000000000042111a in pan_ic (func=0x450cae "Tcheck", file=0x450cc0 > "cache.h", line=674, cond=0x450cb5 "(t.b) != 0", err=0, > xxx=0) at cache_panic.c:325 > #3 0x000000000041c929 in Tcheck (t={b = 0x0, e = 0x0}) at cache.h:674 > #4 0x000000000041c9e0 in http_findhdr (hp=0x7ff2e3c5e0b8, l=13, > hdr=0x665f11 "Cache-Control:") at cache_http.c:194 > #5 0x000000000041cb4f in http_GetHdr (hp=0x7ff2e3c5e0b8, hdr=0x665f11 > "Cache-Control:", ptr=0x7fd25d1dd9d8) at cache_http.c:216 > #6 0x000000000041cc34 in http_GetHdrField (hp=0x7ff2e3c5e0b8, > hdr=0x665f10 "\016Cache-Control:", field=0x45a722 "s-maxage", > ptr=0x7fd25d1dda98) at cache_http.c:244 > #7 0x0000000000439714 in RFC2616_Ttl (sp=0x7fd146553008, > hp=0x7ff2e3c5e0b8, obj=0x7ff2e3c5e000) at rfc2616.c:95 > #8 0x0000000000439ba6 in RFC2616_cache_policy (sp=0x7fd146553008, > hp=0x7ff2e3c5e0b8) at rfc2616.c:199 > #9 0x00000000004122cf in cnt_fetch (sp=0x7fd146553008) at > cache_center.c:406 > #10 0x00000000004142d3 in CNT_Session (sp=0x7fd146553008) at steps.h:41 > #11 0x0000000000422c89 in wrk_do_cnt_sess 
(w=0x7fd25d1e5c30, > priv=0x7fd146553008) at cache_pool.c:362 > #12 0x0000000000422320 in wrk_thread (priv=0x7ff58d543320) at > cache_pool.c:276 > #13 0x00007ff58e1013f7 in start_thread () from /lib/libpthread.so.0 > #14 0x00007ff58d9d6b3d in clone () from /lib/libc.so.6 > #15 0x0000000000000000 in ?? () > }}} New description: {{{ #0 0x00007ff58d931095 in raise () from /lib/libc.so.6 #1 0x00007ff58d932af0 in abort () from /lib/libc.so.6 #2 0x000000000042111a in pan_ic (func=0x450cae "Tcheck", file=0x450cc0 "cache.h", line=674, cond=0x450cb5 "(t.b) != 0", err=0, xxx=0) at cache_panic.c:325 #3 0x000000000041c929 in Tcheck (t={b = 0x0, e = 0x0}) at cache.h:674 #4 0x000000000041c9e0 in http_findhdr (hp=0x7ff2e3c5e0b8, l=13, hdr=0x665f11 "Cache-Control:") at cache_http.c:194 #5 0x000000000041cb4f in http_GetHdr (hp=0x7ff2e3c5e0b8, hdr=0x665f11 "Cache-Control:", ptr=0x7fd25d1dd9d8) at cache_http.c:216 #6 0x000000000041cc34 in http_GetHdrField (hp=0x7ff2e3c5e0b8, hdr=0x665f10 "\016Cache-Control:", field=0x45a722 "s-maxage", ptr=0x7fd25d1dda98) at cache_http.c:244 #7 0x0000000000439714 in RFC2616_Ttl (sp=0x7fd146553008, hp=0x7ff2e3c5e0b8, obj=0x7ff2e3c5e000) at rfc2616.c:95 #8 0x0000000000439ba6 in RFC2616_cache_policy (sp=0x7fd146553008, hp=0x7ff2e3c5e0b8) at rfc2616.c:199 #9 0x00000000004122cf in cnt_fetch (sp=0x7fd146553008) at cache_center.c:406 #10 0x00000000004142d3 in CNT_Session (sp=0x7fd146553008) at steps.h:41 #11 0x0000000000422c89 in wrk_do_cnt_sess (w=0x7fd25d1e5c30, priv=0x7fd146553008) at cache_pool.c:362 #12 0x0000000000422320 in wrk_thread (priv=0x7ff58d543320) at cache_pool.c:276 #13 0x00007ff58e1013f7 in start_thread () from /lib/libpthread.so.0 #14 0x00007ff58d9d6b3d in clone () from /lib/libc.so.6 #15 0x0000000000000000 in ?? () }}} -- Comment: Reopening, see vtc case, we don't properly handle beresp with too many headers. 
(also, trying to get beresp.body with no vbc) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 29 13:23:35 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 29 Nov 2013 13:23:35 -0000 Subject: [Varnish] #1330: VSL_Dispatch parameters 'fd' and 'spec' intermittently called with uninitialized values In-Reply-To: <043.aa626a75c4fe2e02d466c0020d05a41b@varnish-cache.org> References: <043.aa626a75c4fe2e02d466c0020d05a41b@varnish-cache.org> Message-ID: <058.ec54f698f0c2d41a3878e5ca657c13be@varnish-cache.org> #1330: VSL_Dispatch parameters 'fd' and 'spec' intermittently called with uninitialized values ------------------------+-------------------- Reporter: geoff | Owner: Type: defect | Status: new Priority: high | Milestone: Component: varnishlog | Version: 3.0.3 Severity: major | Resolution: Keywords: | ------------------------+-------------------- Comment (by geoff): Just spoke to Martin (at VUG8). Both of these phenomena (uninitialized fd and spec in the VSL_Dispatch call) are known issues in the Varnish 3 log API. The file descriptor may have been closed in client connections, and Martin mentioned that there are certain situations where ESI requests don't have a proper fd (don't remember the details). So by the time log entries for client transactions make it to the log, the fd may be -1. That fits with my experience that this only ever seems to happen for client logging. The spec field might not be set because the Varnish 3 API (in VSL_NextLog) attempts to guess whether a vsm is associated with a client or backend by looking for tags that can only come from one or the other: ReqStart and SessionOpen for clients, BackendOpen and BackendXID for backends. In certain situations, none of those tags might have been seen (and BackendXID is never used). All of these issues are solved in the Varnish 4 API. 
So there really isn't any solution for this in Varnish 3, short of looking for fixes in the logging API (which are probably not trivial). An application using the API may decide that if fd == (unsigned) -1, then the client has been disconnected, so the data doesn't need to be logged. If spec is not set, an app can try harder to guess whether it's a client or backend based on the tag, because there are more than just those four tags that are strictly for backends or clients (my apps do that). Or migrate to Varnish 4. %^) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Nov 29 14:23:08 2013 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 29 Nov 2013 14:23:08 -0000 Subject: [Varnish] #1378: varnishncsa does not escape log items Message-ID: <046.c965e13022034e658e261f1b7ddb7f49@varnish-cache.org> #1378: varnishncsa does not escape log items ----------------------+------------------------- Reporter: simonvik | Type: defect Status: new | Priority: low Milestone: | Component: varnishncsa Version: 3.0.4 | Severity: minor Keywords: | ----------------------+------------------------- Varnishncsa logs the actual character where Apache escapes it, for example "\x01" Tested with:[[BR]] {{{ echo -e "GET / HTTP/1.1\r\nHost: HOST\r\nUser-Agent:a\x01\r\nReferer:\x01\r\n\r\n" | nc -v HOST 80 }}} Apache log: [[BR]] {{{ 127.0.0.1 - - [29/Nov/2013:14:37:33 +0100] "GET / HTTP/1.1" 200 427 "\x01" "a\x01" }}}