From varnish-bugs at varnish-cache.org Tue Feb 3 07:02:25 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 03 Feb 2015 07:02:25 -0000 Subject: [Varnish] #1506: Make better use of Content-Length information: Avoid chunked responses, more control over Range handling In-Reply-To: <050.7446d258f6b1af112a619a4b721885a7@varnish-cache.org> References: <050.7446d258f6b1af112a619a4b721885a7@varnish-cache.org> Message-ID: <065.a5c0c2ea4643c973d7a41fb44c642d2c@varnish-cache.org> #1506: Make better use of Content-Length information: Avoid chunked responses, more control over Range handling --------------------------+---------------------------------- Reporter: DonMacAskill | Owner: phk Type: defect | Status: reopened Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.0 Severity: critical | Resolution: Keywords: | --------------------------+---------------------------------- Changes (by slink): * status: closed => reopened * resolution: fixed => Comment: We have not yet addressed the case where clients _need_ C-L as for Range requests Replying to [comment:4 slink]: > * On `Range` - from #1500: > * trust C-L whenever we know it > * add `std.wait_body` for use in `vcl_response` to prefer to wait until we know a C-L for the cases where we don't have one before reading the response (read: response sent chunked) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Feb 3 09:31:28 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 03 Feb 2015 09:31:28 -0000 Subject: [Varnish] #1665: Wrong behavior of timeout_req In-Reply-To: <047.e6f799ed399a9059522e2ae83022e4c0@varnish-cache.org> References: <047.e6f799ed399a9059522e2ae83022e4c0@varnish-cache.org> Message-ID: <062.17b2c5bf0b75f236cd3ad166c78fc91c@varnish-cache.org> #1665: Wrong behavior of timeout_req -----------------------+--------------------- Reporter: sorinescu | Owner: aondio Type: defect | Status: new Priority: normal | 
Milestone: Component: varnishd | Version: 4.0.2 Severity: normal | Resolution: Keywords: | -----------------------+--------------------- Changes (by aondio): * owner: => aondio -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Feb 3 10:29:35 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 03 Feb 2015 10:29:35 -0000 Subject: [Varnish] #1667: std.cache_req_body() does not respect size limit for chunked transfer encoding request bodies. Message-ID: <043.04919218c8eeaea07623549965d6da5a@varnish-cache.org> #1667: std.cache_req_body() does not respect size limit for chunked transfer encoding request bodies. ----------------------+------------------- Reporter: daghf | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+------------------- For chunked transfer encoding, std.cache_req_body() will happily cache request bodies larger than the provided size limitation. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Feb 3 10:31:25 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 03 Feb 2015 10:31:25 -0000 Subject: [Varnish] #1668: Assertion error in vmod_ip() Message-ID: <061.1776aa0b6c594b5ac118847b77fc3d1e@varnish-cache.org> #1668: Assertion error in vmod_ip() -----------------------------+---------------------- Reporter: lisachenko.it@? 
| Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 4.0.2 | Severity: normal Keywords: | -----------------------------+---------------------- We switched to the new version of Varnish on production and noticed that sometimes connection from nginx to varnish is refused: {{{ 2015/02/03 10:21:45 [error] 18100#0: *267573 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: xx.yy.ru, request: "GET /xxxx HTTP/1.1", upstream: "http://127.0.0.1:8080/xxxx/", host: "xx.yy.ru" }}} panic.show contains an error about assert violation {{{ Assert error in vmod_ip(), vmod_std_conversions.c line 144: Condition((p) != 0) not true. thread = (cache-worker) ident = Linux,3.2.0-4-amd64,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x434225: /usr/sbin/varnishd() [0x434225] 0x7fbdcdfea053: /usr/lib/varnish/vmods/libvmod_std.so(vmod_ip+0x143) [0x7fbdcdfea053] 0x7fbdcf1f2cbb: ./vcl.bDW3Ts2n.so(VGC_function_vcl_deliver+0x8b) [0x7fbdcf1f2cbb] 0x440cec: /usr/sbin/varnishd() [0x440cec] 0x441bd8: /usr/sbin/varnishd(VCL_deliver_method+0x48) [0x441bd8] 0x43806e: /usr/sbin/varnishd() [0x43806e] 0x4387d1: /usr/sbin/varnishd(CNT_Request+0x111) [0x4387d1] 0x41a778: /usr/sbin/varnishd(ESI_Deliver+0x7e8) [0x41a778] 0x43b5a8: /usr/sbin/varnishd(V1D_Deliver+0x368) [0x43b5a8] 0x438136: /usr/sbin/varnishd() [0x438136] req = 0x7fbdb85ac020 { sp = 0x7fbdc4810860, vxid = 1076822325, step = R_STP_DELIVER, req_body = R_BODY_NONE, restarts = 0, esi_level = 1 sp = 0x7fbdc4810860 { fd = 17, vxid = 3080488, client = 127.0.0.1 35703, step = S_STP_WORKING, }, worker = 0x7fbddc926c50 { ws = 0x7fbddc926e68 { id = "wrk", {s,f,r,e} = {0x7fbddc926450,0x7fbddc926450,(nil),+2048}, }, VCL::method = DELIVER, VCL::return = abandon, }, ws = 0x7fbdb85ac1b8 { id = "req", {s,f,r,e} = {0x7fbdb85ae010,+40,(nil),+57360}, }, http[req] = { ws = 0x7fbdb84771b8[2eq] "GET", "/some/url/here", "HTTP/1.1", "Connection: close", " some headers here " 
"Cache-Control: max-age=259200", "X-Forwarded-For: 178.255.200.61", "X-ESI-Level: 1", }, http[resp] = { ws = 0x7fbdb85ac1b8[req] "HTTP/1.1", "200", "OK", "Server: nginx/1.6.2", "Content-Type: text/html; charset=UTF-8", "cache-control: public, s-maxage=60", "date: Tue, 03 Feb 2015 08:25:27 GMT", "X-UA-Compatible: IE=edge", "Content-Encoding: gzip", "X-Varnish: 3080501 2162830", "Age: 34", "Via: 1.1 varnish-v4", }, vcl = { srcname = { "input", "Builtin", "acl.vcl", "probes.vcl", "backends.vcl", }, }, obj (REQ) = 0x7fbdba03b380 { vxid = 2149646478, http[obj] = { ws = (nil)[] "HTTP/1.1", "200", "OK", "Server: nginx/1.6.2", "Content-Type: text/html; charset=UTF-8", "cache-control: public, s-maxage=60", "date: Tue, 03 Feb 2015 08:25:27 GMT", "X-UA-Compatible: IE=edge", "Content-Encoding: gzip", }, len = 639, store = { 639 { 1f 8b 08 00 00 00 00 00 00 03 bd 95 5d 6e 13 31 |............]n.1| 10 c7 af 62 f6 7d 63 41 03 55 ab 4d b8 01 07 e0 |...b.}cA.U.M....| 65 e5 6c 9c c4 92 e3 8d bc de 06 fa 14 82 f8 90 |e.l.............| 52 09 11 55 e2 89 0f 89 0b 84 42 20 d0 26 5c 61 |R..U......B .&\a| [575 more] }, }, }, }, }}}

From the assertion name and trace, I found possible root causes in my VCL file, in the vcl_deliver routine:

First line:
{{{
# Enable debug headers only for developers
if (!std.ip(regsub(req.http.X-Forwarded-For, "[, ].*$", ""), client.ip) ~ developers) {
    // add headers to the response
}
}}}

Second line:
{{{
std.log("Developer IP: " + std.ip(regsub(req.http.X-Forwarded-For, "[, ].*$", ""), client.ip) + " will see additional headers from Varnish");
}}}

This assertion is not triggered every time, but occurs periodically.
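Worth noting: both snippets run the regsub()/std.ip() conversion independently, and each std.ip() call allocates the converted address from the client workspace. A sketch of a rework that converts once and reuses the result — the X-Dev-IP header name is invented here, and the developers ACL is assumed from the report:

{{{
import std;

sub vcl_deliver {
    # Convert X-Forwarded-For once and reuse the result for both the
    # ACL check and the log line; this halves the std.ip() workspace
    # pressure per delivery, though it does not remove it.
    set req.http.X-Dev-IP = regsub(req.http.X-Forwarded-For, "[, ].*$", "");
    if (std.ip(req.http.X-Dev-IP, client.ip) ~ developers) {
        std.log("Developer IP: " + req.http.X-Dev-IP +
                " will see additional headers from Varnish");
        # add debug headers to the response here
    }
}
}}}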
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Feb 3 10:50:58 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 03 Feb 2015 10:50:58 -0000 Subject: [Varnish] #1668: Assertion error in vmod_ip() In-Reply-To: <061.1776aa0b6c594b5ac118847b77fc3d1e@varnish-cache.org> References: <061.1776aa0b6c594b5ac118847b77fc3d1e@varnish-cache.org> Message-ID: <076.0af9051d7bc97ba157a67a70915ab144@varnish-cache.org> #1668: Assertion error in vmod_ip() -----------------------------+-------------------- Reporter: lisachenko.it@? | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 4.0.2 Severity: normal | Resolution: Keywords: | -----------------------------+-------------------- Comment (by daghf): Hi This looks like workspace exhaustion. Could you try to increase the parameter 'workspace_client', and see if the problem goes away? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Feb 3 11:46:11 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 03 Feb 2015 11:46:11 -0000 Subject: [Varnish] #1668: Assertion error in vmod_ip() In-Reply-To: <061.1776aa0b6c594b5ac118847b77fc3d1e@varnish-cache.org> References: <061.1776aa0b6c594b5ac118847b77fc3d1e@varnish-cache.org> Message-ID: <076.042a897dbd7aa3753c810f512839a2a7@varnish-cache.org> #1668: Assertion error in vmod_ip() -----------------------------+-------------------- Reporter: lisachenko.it@? | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 4.0.2 Severity: normal | Resolution: Keywords: | -----------------------------+-------------------- Comment (by lisachenko.it@?): Hi! We have changed workspace_client to 128k, looks much better now. Thanks! We will wait a little to check that everything is ok. What is a recommended value for this parameter or how can we determine it according to our load? 
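For reference, workspace_client can be raised either at runtime through the management CLI or persistently with a -p startup option; a sketch (the 128k figure is just the value tried in this thread, not a general recommendation):

{{{
# runtime, via the management CLI (not persisted across restarts);
# param.show also prints the parameter's built-in documentation:
varnishadm param.set workspace_client 128k
varnishadm param.show workspace_client

# persistent, as a startup option to varnishd:
varnishd ... -p workspace_client=128k
}}}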
Are there any docs about workspace_xxx parameters?

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Tue Feb 3 12:00:45 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Tue, 03 Feb 2015 12:00:45 -0000
Subject: [Varnish] #1668: Assertion error in vmod_ip()
In-Reply-To: <061.1776aa0b6c594b5ac118847b77fc3d1e@varnish-cache.org>
References: <061.1776aa0b6c594b5ac118847b77fc3d1e@varnish-cache.org>
Message-ID: <076.bf9801cd3dc3f1496fd2e7cf8ac5a7e5@varnish-cache.org>

#1668: Assertion error in vmod_ip()
-----------------------------+--------------------
Reporter: lisachenko.it@?   | Owner:
Type: defect                | Status: new
Priority: normal            | Milestone:
Component: varnishd         | Version: 4.0.2
Severity: normal            | Resolution:
Keywords:                   |
-----------------------------+--------------------

Comment (by lisachenko.it@?):

Still have errors, but more rarely; increased to 256k. No idea which value will suit our needs :)

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Tue Feb 3 12:59:56 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Tue, 03 Feb 2015 12:59:56 -0000
Subject: [Varnish] #1669: Assert error in http_Write()
Message-ID: <044.389fdcfeb2bc441950de35f87e6b49da@varnish-cache.org>

#1669: Assert error in http_Write()
--------------------+----------------------
Reporter: yarivh   | Type: defect
Status: new        | Priority: normal
Milestone:         | Component: varnishd
Version: 3.0.6     | Severity: normal
Keywords:          |
--------------------+----------------------

A few times a day varnish panics with the following:

{{{
Feb 3 14:12:15 XXXXXXX varnishd[15432]: Child (19229) Panic message:
Assert error in http_Write(), cache_http.c line 1110:
  Condition((hp->hd[HTTP_HDR_STATUS].b ) != 0) not true.
thread = (cache-worker)
ident = Linux,2.6.32-431.29.2.el6.x86_64,x86_64,-smalloc,-smalloc,-hcritbit,epoll
Backtrace:
  0x42e1c6: /usr/sbin/varnishd() [0x42e1c6]
  0x428f53: /usr/sbin/varnishd(http_Write+0x293) [0x428f53]
  0x43136d: /usr/sbin/varnishd(RES_WriteObj+0x33d) [0x43136d]
  0x418cdc: /usr/sbin/varnishd(CNT_Session+0x7ac) [0x418cdc]
  0x42fc56: /usr/sbin/varnishd() [0x42fc56]
  0x7f750e94a9d1: /lib64/libpthread.so.0(+0x79d1) [0x7f750e94a9d1]
  0x7f750e69786d: /lib64/libc.so.6(clone+0x6d) [0x7f750e69786d]
sp = 0x7f74c3f25008 {
  fd = 195, id = 195, xid = 1321300019,
  client = 2.22.50.84 45769,
  step = STP_DELIVER,
  handling = deliver,
  err_code = 200, err_reason = (null),
  restarts = 0, esi_level = 0
  flags = is_gunzip
  bodystatus = 4
  ws = 0x7f74c3f25080 {
    id = "sess",
    {s,f,r,e} = {0x7f74c3f275f8,+1008,(nil),+131072},
  },
  http[req] = {
    ws = 0x7f74c3f25080[sess]
    "GET",
    "/home/1,7340,L-1335-17979-31594109,00.html",
    "HTTP/1.1",
    "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "From: googlebot(at)googlebot.com",
    "User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "X-Akamai-Edgescape: georegion=288,count
}}}

OS and varnish config and version:

#### varnish version
varnish-3.0.6-1.el5.centos.x86_64
###

### OS
centos 6.5
####

### Kernel
2.6.32-431.29.2.el6.x86_64 #1 SMP Tue Sep 9 21:36:05 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
###

{{{
NFILES=131072
MEMLOCK=82000
RELOAD_VCL=1
VARNISH_VCL_CONF=/etc/varnish/default.vcl
VARNISH_LISTEN_PORT=80
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=6082
VARNISH_SECRET_FILE=/etc/varnish/secret
VARNISH_MIN_THREADS=50
VARNISH_MAX_THREADS=1000
VARNISH_THREAD_TIMEOUT=120
VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
VARNISH_STORAGE_SIZE=7G
VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}"
VARNISH_TTL=86400
DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
             -f ${VARNISH_VCL_CONF} \
             -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
             -t ${VARNISH_TTL} \
             -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
             -u varnish -g varnish \
             -S ${VARNISH_SECRET_FILE} \
             -s ${VARNISH_STORAGE} \
             -p sess_timeout=305 \
             -p http_req_hdr_len=16384 \
             -p http_req_size=65536 \
             -p http_resp_hdr_len=16384 \
             -p http_resp_size=65536 \
             -p http_gzip_support=off \
             -p pipe_timeout=120 \
             -p sess_workspace=131072 \
             -p http_max_hdr=256"
}}}

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Wed Feb 4 10:18:57 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 04 Feb 2015 10:18:57 -0000
Subject: [Varnish] #1665: Wrong behavior of timeout_req
In-Reply-To: <047.e6f799ed399a9059522e2ae83022e4c0@varnish-cache.org>
References: <047.e6f799ed399a9059522e2ae83022e4c0@varnish-cache.org>
Message-ID: <062.7d3ecc3ac9d32bf0c89b3291adcbafc4@varnish-cache.org>

#1665: Wrong behavior of timeout_req
-----------------------+---------------------
Reporter: sorinescu   | Owner: aondio
Type: defect          | Status: closed
Priority: normal      | Milestone:
Component: varnishd   | Version: 4.0.2
Severity: normal      | Resolution: fixed
Keywords:             |
-----------------------+---------------------

Changes (by aondio):

* status: new => closed
* resolution: => fixed

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Wed Feb 4 10:19:05 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 04 Feb 2015 10:19:05 -0000
Subject: [Varnish] #1670: assert: default_oc_getobj(), storage/stevedore.c line 68
Message-ID: <046.effee11bf6ff3d79ffdf705e9fe5a83f@varnish-cache.org>

#1670: assert: default_oc_getobj(), storage/stevedore.c line 68
----------------------+---------------------
Reporter: lkarsten   | Owner:
Type: defect         | Status: new
Priority: normal     | Milestone:
Component: varnishd  | Version: unknown
Severity: normal     | Keywords:
----------------------+---------------------

Posting this on behalf of mattrobenolt. Originally reported on IRC.
{{{
Last panic at: Sun, 01 Feb 2015 03:40:29 GMT
Assert error in default_oc_getobj(), storage/stevedore.c line 68:
  Condition(((o))->magic == (0x32851d42)) not true.
thread = (cache-worker)
version = varnish-4.0.3-rc2 revision 1b96340
ident = Linux,3.13.0-43-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll
Backtrace:
  0x433d8a: /usr/sbin/varnishd() [0x433d8a]
  0x45853f: /usr/sbin/varnishd() [0x45853f]
  0x41ef24: /usr/sbin/varnishd(EXP_NukeOne+0x194) [0x41ef24]
  0x459148: /usr/sbin/varnishd(STV_alloc+0xe8) [0x459148]
  0x422b0e: /usr/sbin/varnishd(VFP_GetStorage+0x7e) [0x422b0e]
  0x420941: /usr/sbin/varnishd() [0x420941]
  0x436ca1: /usr/sbin/varnishd(Pool_Work_Thread+0x381) [0x436ca1]
  0x449c58: /usr/sbin/varnishd() [0x449c58]
  0x7fbb6d59be9a: /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7fbb6d59be9a]
  0x7fbb6d2c92ed: /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fbb6d2c92ed]
busyobj = 0x7fb5403bd020 {
  ws = 0x7fb5403bd0e0 {
    id = "bo",
    {s,f,r,e} = {0x7fb5403bf008,+2184,(nil),+57368},
  },
  [.. cut ..]
}}}

backtrace is redacted by me, I'll add the full panic dump when/if I get permission to make it public.

According to the report, this happened after running for 90 minutes on live traffic with ~10kreq/s.

Martin spent Monday looking at this, but did not find anything conclusive.
-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Wed Feb 4 14:00:03 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 04 Feb 2015 14:00:03 -0000
Subject: [Varnish] #1671: VRT_re_match causes Segmentation fault in libpcre.so.3
Message-ID: <043.82eb1d8cf4c43807e0c90373651b4e8e@varnish-cache.org>

#1671: VRT_re_match causes Segmentation fault in libpcre.so.3
---------------------------+----------------------
Reporter: lygie           | Type: defect
Status: new               | Priority: normal
Milestone:                | Component: varnishd
Version: 4.0.2            | Severity: normal
Keywords: sefgault, vcl   |
---------------------------+----------------------

Hi, we are using VCL code generated from https://github.com/willemk/varnish-mobiletranslate to set a Device-Type header in varnish. The generated VCL code is here: https://github.com/willemk/varnish-mobiletranslate/blob/master/mobile_detect.vcl

The code works perfectly with varnish3. On varnish4, when a user agent

{{{
"Mozilla/5.0 (Linux; Android 5.0.1; Nexus 5 Build/LRX22C) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.89 Mobile Safari/537.36"
}}}

hits the varnish, it crashes with a segfault. Here is the gdb backtrace from the coredump:

{{{
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -T localhost:6082 -f /opt/cd'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007f98a25b354a in ?? () from /lib/x86_64-linux-gnu/libpcre.so.3
(gdb) bt
#0  0x00007f98a25b354a in ?? () from /lib/x86_64-linux-gnu/libpcre.so.3
#1  0x00007f98a25b3ecb in ?? () from /lib/x86_64-linux-gnu/libpcre.so.3
#2  0x00007f98a25c0cfa in ?? () from /lib/x86_64-linux-gnu/libpcre.so.3
...
#73 0x00007f98a25bbb59 in ?? () from /lib/x86_64-linux-gnu/libpcre.so.3
#74 0x00007f98a25c4221 in pcre_exec () from /lib/x86_64-linux-gnu/libpcre.so.3
#75 0x00007f98a2e4f8be in VRE_exec () from /usr/lib/varnish/libvarnish.so
#76 0x00000000004430d2 in VRT_re_match ()
#77 0x00007f9896ca8064 in VGC_function_devicedetect (ctx=ctx at entry=0x7f989288d160) at ./vcl.ZG6nwvTJ.c:1045
#78 0x00007f9896ca9bd5 in VGC_function_vcl_recv (ctx=0x7f989288d160) at ./vcl.ZG6nwvTJ.c:1298
#79 0x000000000043fcf6 in ?? ()
#80 0x00000000004401c5 in VCL_recv_method ()
#81 0x0000000000437a71 in CNT_Request ()
#82 0x000000000042d17b in HTTP1_Session ()
#83 0x000000000043b738 in ?? ()
#84 0x0000000000436033 in Pool_Work_Thread ()
#85 0x00000000004492f8 in ?? ()
#86 0x00007f98a1e80182 in start_thread (arg=0x7f989288e700) at pthread_create.c:312
#87 0x00007f98a1bad00d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
}}}

Line 1045 from vcl.ZG6nwvTJ.c mentioned in the coredump is:

{{{
(VRT_re_match(ctx, VRT_GetHdr(ctx, &VGC_HDR_REQ_User_Agent), VGC_re_53))||
}}}

The operating system is ubuntu 14.04 64 bit, using the varnish apt repository: https://repo.varnish-cache.org/ubuntu/ precise varnish-4.0

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Wed Feb 4 14:37:03 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 04 Feb 2015 14:37:03 -0000
Subject: [Varnish] #1671: VRT_re_match causes Segmentation fault in libpcre.so.3
In-Reply-To: <043.82eb1d8cf4c43807e0c90373651b4e8e@varnish-cache.org>
References: <043.82eb1d8cf4c43807e0c90373651b4e8e@varnish-cache.org>
Message-ID: <058.4ed9603ff78e43a2003b3b3e9fc868cd@varnish-cache.org>

#1671: VRT_re_match causes Segmentation fault in libpcre.so.3
---------------------------+--------------------
Reporter: lygie           | Owner:
Type: defect              | Status: new
Priority: normal          | Milestone:
Component: varnishd       | Version: 4.0.2
Severity: normal          | Resolution:
Keywords: sefgault, vcl   |
---------------------------+--------------------

Comment (by
lygie):

Using just the regex defined in the constant causes the same issue:

{{{
VRT_re_init(&VGC_re_53, "(?i)Android.*Nexus[\\s]+(7|10)|^.*Android.*Nexus(?:(?!Mobile).)*$");
}}}

I tried a clean varnish4 setup, just adding

{{{
sub vcl_recv {
    if (req.http.User-Agent ~ "(?i)Android.*Nexus[\s]+(7|10)|^.*Android.*Nexus(?:(?!Mobile).)*$") {
        set req.http.User-Test = "123";
    }
}
}}}

which causes the same segfault.

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Wed Feb 4 14:55:48 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Wed, 04 Feb 2015 14:55:48 -0000
Subject: [Varnish] #1672: Assert on cached object with non-200 status and bogus 304 backend reply
Message-ID: <044.2580e6305cef4c20762b1d60b28f04d0@varnish-cache.org>

#1672: Assert on cached object with non-200 status and bogus 304 backend reply
--------------------+---------------------
Reporter: martin   | Owner:
Type: defect       | Status: new
Priority: normal   | Milestone:
Component: build   | Version: unknown
Severity: normal   | Keywords:
--------------------+---------------------

If the backend replies 304 Not Modified even though Varnish didn't send any INM/IMF headers and the "ims_oc" is an object with non-200 status, Varnish asserts. See attached test case and patch.
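The scenario can be sketched as a varnishtest case. This is a rough reconstruction from the description, not the attached test case; the beresp.keep setting is an assumption, added so the cached 404 becomes a revalidation candidate:

{{{
varnishtest "non-200 cached object + unsolicited 304 from backend"

server s1 {
        rxreq
        txresp -status 404 -hdr "ETag: \"x\"" -hdr "Cache-Control: max-age=1" -body "x"
        rxreq
        # bogus: reply 304 even though the request carried no INM/IMF
        txresp -status 304
} -start

varnish v1 -vcl+backend {
        sub vcl_backend_response {
                # keep the expired 404 around as a revalidation candidate
                set beresp.keep = 1m;
        }
} -start

client c1 {
        txreq
        rxresp
        expect resp.status == 404
        delay 2
        txreq
        rxresp
} -run
}}}

Without the patch, the second fetch should be the point where the described assert fires.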
Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Feb 4 15:12:46 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 04 Feb 2015 15:12:46 -0000 Subject: [Varnish] #1672: Assert on cached object with non-200 status and bogus 304 backend reply In-Reply-To: <044.2580e6305cef4c20762b1d60b28f04d0@varnish-cache.org> References: <044.2580e6305cef4c20762b1d60b28f04d0@varnish-cache.org> Message-ID: <059.a7b42188e562c51c305e4827bdfd011e@varnish-cache.org> #1672: Assert on cached object with non-200 status and bogus 304 backend reply ----------------------+-------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Changes (by martin): * version: unknown => trunk * component: build => varnishd -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Feb 4 15:29:13 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 04 Feb 2015 15:29:13 -0000 Subject: [Varnish] #1672: Assert on cached object with non-200 status and bogus 304 backend reply In-Reply-To: <044.2580e6305cef4c20762b1d60b28f04d0@varnish-cache.org> References: <044.2580e6305cef4c20762b1d60b28f04d0@varnish-cache.org> Message-ID: <059.9808247adb1217576445549b790e477c@varnish-cache.org> #1672: Assert on cached object with non-200 status and bogus 304 backend reply ----------------------+------------------ Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: Severity: normal | Resolution: Keywords: | ----------------------+------------------ Changes (by martin): * version: trunk => -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Feb 4 20:34:01 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 04 Feb 2015 20:34:01 -0000 
Subject: [Varnish] #1671: VRT_re_match causes Segmentation fault in libpcre.so.3 In-Reply-To: <043.82eb1d8cf4c43807e0c90373651b4e8e@varnish-cache.org> References: <043.82eb1d8cf4c43807e0c90373651b4e8e@varnish-cache.org> Message-ID: <058.f0db4d0b11f763e5f5c3f2912d95b4c9@varnish-cache.org> #1671: VRT_re_match causes Segmentation fault in libpcre.so.3 ---------------------------+------------------------ Reporter: lygie | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.2 Severity: normal | Resolution: duplicate Keywords: sefgault, vcl | ---------------------------+------------------------ Changes (by fgsch): * status: new => closed * resolution: => duplicate Comment: Dup of #1576. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Feb 5 14:45:38 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 05 Feb 2015 14:45:38 -0000 Subject: [Varnish] #1617: Varnish 4 weird memory consumption / calculation In-Reply-To: <046.253c419c4f87de3d220969908bfcf866@varnish-cache.org> References: <046.253c419c4f87de3d220969908bfcf866@varnish-cache.org> Message-ID: <061.3ba275f315d895a3e210028f7ab06dfd@varnish-cache.org> #1617: Varnish 4 weird memory consumption / calculation ----------------------+-------------------- Reporter: whocares | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 4.0.2 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by ioppermann): Good to hear. Thanks! 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Feb 5 15:18:41 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 05 Feb 2015 15:18:41 -0000 Subject: [Varnish] #1668: Assertion error in vmod_ip() In-Reply-To: <061.1776aa0b6c594b5ac118847b77fc3d1e@varnish-cache.org> References: <061.1776aa0b6c594b5ac118847b77fc3d1e@varnish-cache.org> Message-ID: <076.0121a9e138ada16fdebbeb2556560076@varnish-cache.org> #1668: Assertion error in vmod_ip() -----------------------------+-------------------- Reporter: lisachenko.it@? | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 4.0.2 Severity: normal | Resolution: Keywords: | -----------------------------+-------------------- Comment (by lisachenko.it@?): Two days without errors. Looks very nice! Thank you, feel free to close this issue. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Feb 5 16:04:41 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 05 Feb 2015 16:04:41 -0000 Subject: [Varnish] #1673: Date header not updated in cached response Message-ID: <048.6771cc9292c849b70fd02ef19c153e98@varnish-cache.org> #1673: Date header not updated in cached response ------------------------+-------------------- Reporter: ioppermann | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 4.0.2 | Severity: normal Keywords: | ------------------------+-------------------- varnish4 does not update the Date: header on a cache hit. 
Example response:

{{{
HTTP/1.1 200 OK
Server: nginx
Date: Thu, 05 Feb 2015 15:09:01 GMT
Content-Type: image/jpeg
Last-Modified: Thu, 05 Feb 2015 11:12:21 GMT
ETag: "54d35015-48bc"
Cache-Control: max-age=72200
X-Varnish: 781994914 780477333
Age: 2974
Via: 1.1 varnish-v4
X-Cache: HIT 2974 69226.381 imagestore2
Content-Length: 18620
Connection: close
}}}

Time of request: Thu Feb 5 16:58:35 CET 2015

This was already fixed back in varnish2: https://www.varnish-cache.org/trac/ticket/157

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Fri Feb 6 07:29:21 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 06 Feb 2015 07:29:21 -0000
Subject: [Varnish] #1673: Date header not updated in cached response
In-Reply-To: <048.6771cc9292c849b70fd02ef19c153e98@varnish-cache.org>
References: <048.6771cc9292c849b70fd02ef19c153e98@varnish-cache.org>
Message-ID: <063.b48822cd59898b8d7fcca05a892db3c7@varnish-cache.org>

#1673: Date header not updated in cached response
------------------------+--------------------
Reporter: ioppermann   | Owner:
Type: defect           | Status: new
Priority: normal       | Milestone:
Component: build       | Version: 4.0.2
Severity: normal       | Resolution:
Keywords:              |
------------------------+--------------------

Comment (by fgsch):

Changed in commit 89870e0bbd785964c322e1e453f492d747731c88.

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Fri Feb 6 07:32:16 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 06 Feb 2015 07:32:16 -0000
Subject: [Varnish] #1668: Assertion error in vmod_ip()
In-Reply-To: <061.1776aa0b6c594b5ac118847b77fc3d1e@varnish-cache.org>
References: <061.1776aa0b6c594b5ac118847b77fc3d1e@varnish-cache.org>
Message-ID: <076.b7b46fecae354afa1f35ade7c3d161fc@varnish-cache.org>

#1668: Assertion error in vmod_ip()
-----------------------------+----------------------
Reporter: lisachenko.it@?
| Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.2 Severity: normal | Resolution: invalid Keywords: | -----------------------------+---------------------- Changes (by fgsch): * status: new => closed * resolution: => invalid Comment: Problem due to workspace exhaustion confirmed by the reporter. Fixed by increasing workspace_client. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Feb 6 12:21:07 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 06 Feb 2015 12:21:07 -0000 Subject: [Varnish] #1673: Date header not updated in cached response In-Reply-To: <048.6771cc9292c849b70fd02ef19c153e98@varnish-cache.org> References: <048.6771cc9292c849b70fd02ef19c153e98@varnish-cache.org> Message-ID: <063.73ff576230b49f372b8482105077be10@varnish-cache.org> #1673: Date header not updated in cached response ------------------------+-------------------- Reporter: ioppermann | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 4.0.2 Severity: normal | Resolution: Keywords: | ------------------------+-------------------- Comment (by ioppermann): OK. Thanks for pointing this out. 
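For installations still running a release without that change, a VCL-level workaround sketch — this assumes the usual VCL TIME-to-string conversion, which renders now as an RFC 1123 date when assigned to a header:

{{{
sub vcl_deliver {
    # Stamp the delivery time into Date; Age continues to carry the
    # time the object has spent in cache.
    set resp.http.Date = now;
}
}}}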
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Feb 6 15:03:25 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 06 Feb 2015 15:03:25 -0000 Subject: [Varnish] #1637: Assert error in VFP_Fetch_Body() In-Reply-To: <045.21ddee12af57707010d10fb844941331@varnish-cache.org> References: <045.21ddee12af57707010d10fb844941331@varnish-cache.org> Message-ID: <060.67140f2c363264093a315da309557494@varnish-cache.org> #1637: Assert error in VFP_Fetch_Body() --------------------------+--------------------- Reporter: llavaud | Owner: daghf Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.2 Severity: major | Resolution: fixed Keywords: Assert error | --------------------------+--------------------- Changes (by daghf): * status: new => closed * resolution: => fixed Comment: Fixed in master, since 780e52f312f8a2c5759ab545c8f6adcd934e5b6e. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Feb 6 18:38:05 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 06 Feb 2015 18:38:05 -0000 Subject: [Varnish] #1645: apt-get source does not work for varnish on precise and trusty In-Reply-To: <049.39e406a6b89a08b089f4e61c37871c86@varnish-cache.org> References: <049.39e406a6b89a08b089f4e61c37871c86@varnish-cache.org> Message-ID: <064.e2bab237295d24f134a086f133240e15@varnish-cache.org> #1645: apt-get source does not work for varnish on precise and trusty -------------------------+---------------------- Reporter: timhilliard | Owner: fgsch Type: defect | Status: new Priority: normal | Milestone: Component: packaging | Version: unknown Severity: normal | Resolution: Keywords: | -------------------------+---------------------- Comment (by fgsch): Replying to [comment:2 dbu]: > I think this is a problem for all debian varnish 4 versions too. 
>
> https://repo.varnish-cache.org/debian/dists/wheezy/varnish-4.0/binary-all/Packages.bz2 only lists varnish-doc but not the repository for varnish itself.

There is nothing wrong with only having the doc packages in binary-all.

-- Ticket URL: Varnish The Varnish HTTP Accelerator

From varnish-bugs at varnish-cache.org Fri Feb 6 18:44:51 2015
From: varnish-bugs at varnish-cache.org (Varnish)
Date: Fri, 06 Feb 2015 18:44:51 -0000
Subject: [Varnish] #1645: apt-get source does not work for varnish on precise and trusty
In-Reply-To: <049.39e406a6b89a08b089f4e61c37871c86@varnish-cache.org>
References: <049.39e406a6b89a08b089f4e61c37871c86@varnish-cache.org>
Message-ID: <064.d8616715f208b9d55d21548fd27106c7@varnish-cache.org>

#1645: apt-get source does not work for varnish on precise and trusty
-------------------------+----------------------
Reporter: timhilliard   | Owner: fgsch
Type: defect            | Status: closed
Priority: normal        | Milestone:
Component: packaging    | Version: unknown
Severity: normal        | Resolution: fixed
Keywords:               |
-------------------------+----------------------

Changes (by fgsch):

* status: new => closed
* resolution: => fixed

Comment:

Fixed. Tested with precise and trusty. We dropped lucid support in 4.0.
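For anyone hitting this before the repository fix, apt-get source needs a matching deb-src entry next to the binary one; a sketch for trusty (the file name is hypothetical; adjust the distribution name as needed):

{{{
# /etc/apt/sources.list.d/varnish-cache.list
deb https://repo.varnish-cache.org/ubuntu/ trusty varnish-4.0
deb-src https://repo.varnish-cache.org/ubuntu/ trusty varnish-4.0
}}}

After apt-get update, apt-get source varnish should then fetch the packaging sources.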
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Feb 7 12:59:55 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 07 Feb 2015 12:59:55 -0000 Subject: [Varnish] #1670: assert: default_oc_getobj(), storage/stevedore.c line 68 In-Reply-To: <046.effee11bf6ff3d79ffdf705e9fe5a83f@varnish-cache.org> References: <046.effee11bf6ff3d79ffdf705e9fe5a83f@varnish-cache.org> Message-ID: <061.2f2104a6d58dbcb25c12b5c7f946554a@varnish-cache.org> #1670: assert: default_oc_getobj(), storage/stevedore.c line 68 ----------------------+---------------------- Reporter: lkarsten | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: unknown Severity: normal | Resolution: Keywords: | ----------------------+---------------------- Description changed by lkarsten: Old description: > Posting this on behalf of mattrobenolt. Originally reported on IRC. > > {{{ > Last panic at: Sun, 01 Feb 2015 03:40:29 GMT > Assert error in default_oc_getobj(), storage/stevedore.c line 68: > Condition(((o))->magic == (0x32851d42)) not true. > thread = (cache-worker) > version = varnish-4.0.3-rc2 revision 1b96340 > ident = Linux,3.13.0-43-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll > Backtrace: > 0x433d8a: /usr/sbin/varnishd() [0x433d8a] > 0x45853f: /usr/sbin/varnishd() [0x45853f] > 0x41ef24: /usr/sbin/varnishd(EXP_NukeOne+0x194) [0x41ef24] > 0x459148: /usr/sbin/varnishd(STV_alloc+0xe8) [0x459148] > 0x422b0e: /usr/sbin/varnishd(VFP_GetStorage+0x7e) [0x422b0e] > 0x420941: /usr/sbin/varnishd() [0x420941] > 0x436ca1: /usr/sbin/varnishd(Pool_Work_Thread+0x381) [0x436ca1] > 0x449c58: /usr/sbin/varnishd() [0x449c58] > 0x7fbb6d59be9a: /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) > [0x7fbb6d59be9a] > 0x7fbb6d2c92ed: /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) > [0x7fbb6d2c92ed] > busyobj = 0x7fb5403bd020 { > ws = 0x7fb5403bd0e0 { > id = "bo", > {s,f,r,e} = {0x7fb5403bf008,+2184,(nil),+57368}, > }, > [.. cut ..] 
> }}} > > backtrace is redacted by me, I'll add the full panic dump when/if I get > permission to make it public. > > According to the report, this happened after running for 90 minutes on > live traffic with ~10kreq/s. > > Martin spent Monday looking at this, but did not find anything > conclusive. New description: Posting this on behalf of mattrobenolt. Originally reported on IRC. {{{ Last panic at: Sun, 01 Feb 2015 03:40:29 GMT Assert error in default_oc_getobj(), storage/stevedore.c line 68: Condition(((o))->magic == (0x32851d42)) not true. thread = (cache-worker) version = varnish-4.0.3-rc2 revision 1b96340 ident = Linux,3.13.0-43-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x433d8a: /usr/sbin/varnishd() [0x433d8a] 0x45853f: /usr/sbin/varnishd() [0x45853f] 0x41ef24: /usr/sbin/varnishd(EXP_NukeOne+0x194) [0x41ef24] 0x459148: /usr/sbin/varnishd(STV_alloc+0xe8) [0x459148] 0x422b0e: /usr/sbin/varnishd(VFP_GetStorage+0x7e) [0x422b0e] 0x420941: /usr/sbin/varnishd() [0x420941] 0x436ca1: /usr/sbin/varnishd(Pool_Work_Thread+0x381) [0x436ca1] 0x449c58: /usr/sbin/varnishd() [0x449c58] 0x7fbb6d59be9a: /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7fbb6d59be9a] 0x7fbb6d2c92ed: /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fbb6d2c92ed] busyobj = 0x7fb5403bd020 { ws = 0x7fb5403bd0e0 { id = "bo", {s,f,r,e} = {0x7fb5403bf008,+2184,(nil),+57368}, }, refcnt = 1 retries = 0 failed = 0 state = 2 is_do_stream bodystatus = 0 (none), }, http[bereq] = { ws = 0x7fb5403bd0e0[bo] "GET", "/embed/comments/?base=default&f=worldstar&s_o=default&t_d=&t_e=Whitney%20Houston%E2%80%99s%20Daughter%20Found%20Unresponsive%20In%20Bathtub!&t_i=77244&t_t=Whitney%20Houston%E2%80%99s%20Daughter%20Found%20Unresponsive%20In%20Bathtub!&t_u=http%3A%2F%2Fwww.worldstarhiphop.com%2Fvideos%2Fvideo.php%3Fv%3DwshhgQrkMl5yaZeeeHN9&version=ff15479433461993d0738de53d5f22cf", "HTTP/1.1", "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", "User-Agent: Mozilla/5.0 
(Linux; Android 4.4.2; SAMSUNG-SM-G900A Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/30.0.0.0 Mobile Safari/537.36", "Referer: http://worldstar.disqus.com/", "Accept-Language: en-US", "X-Requested-With: com.pt.wshhp", "X-Forwarded-For: 66.117.245.64", "X-Forwarded-Proto: http", "Host: disqus.com", "Disqus-Root: 1", "Accept-Encoding: gzip", "If-Modified-Since: Sun, 01 Feb 2015 03:40:11 GMT", "X-Varnish: 161797784", }, http[beresp] = { ws = 0x7fb5403bd0e0[bo] "HTTP/1.1", "200", "OK", "Server: nginx", "Date: Sun, 01 Feb 2015 03:40:16 GMT", "Content-Type: text/html; charset=utf-8", "Vary: Accept-Encoding", "Content-Security-Policy: script-src https://*.twitter.com:* https://api.adsnative.com/v1/ad.json *.adsafeprotected.com *.google- analytics.com https://glitter-services.disqus.com https://*.services.disqus.com:* disqus.com http://*.twitter.com:* a.disquscdn.com api.taboola.com referrer.disqus.com *.scorecardresearch.com *.moatads.com https://admin.appnext.com/offerWallApi.aspx 'unsafe-eval' https://mobile.adnxs.com/mob *.services.disqus.com:*", "Surrogate-Control: max-age=5", "Last-Modified: Sun, 01 Feb 2015 03:40:11 GMT", "Cache-Control: s-stalewhilerevalidate=3600, stale-while- revalidate=30, no-cache, must-revalidate, public, s-maxage=5", "p3p: CP="DSP IDC CUR ADM DELi STP NAV COM UNI INT PHY DEM"", "Timing-Allow-Origin: *", "X-Content-Type-Options: nosniff", "X-XSS-Protection: 1; mode=block", "Content-Encoding: gzip", "Surrogate-Grace: 3600s", "Grace: 30s", "Disqus-Cachetype: CACHE", "X-Served-By: app-155.dal01, shield-1.dal01", "X-Cache: MISS, HIT", "X-Cache-Hits: 0, 3", "Connection: keep-alive", "Content-Length: 11241", "X-Backend: embed, shield1", }, ws = 0x7fb5403bd270 { id = "obj", {s,f,r,e} = {0x7fba550912b0,+1224,(nil),+1224}, }, objcore (FETCH) = 0x7fb98a715c00 { refcnt = 2 flags = 0x0 objhead = 0x7fbaed8352e0 } obj (FETCH) = 0x7fba55091000 { vxid = 2309281432, http[obj] = { ws = (nil)[] "HTTP/1.1", "200", "OK", "Server: nginx", 
"Date: Sun, 01 Feb 2015 03:40:16 GMT", "Content-Type: text/html; charset=utf-8", "Vary: Accept-Encoding", "Content-Security-Policy: script-src https://*.twitter.com:* https://api.adsnative.com/v1/ad.json *.adsafeprotected.com *.google- analytics.com https://glitter-services.disqus.com https://*.services.disqus.com:* disqus.com http://*.twitter.com:* a.disquscdn.com api.taboola.com referrer.disqus.com *.scorecardresearch.com *.moatads.com https://admin.appnext.com/offerWallApi.aspx 'unsafe-eval' https://mobile.adnxs.com/mob *.services.disqus.com:*", "Surrogate-Control: max-age=5", "Last-Modified: Sun, 01 Feb 2015 03:40:11 GMT", "Cache-Control: s-stalewhilerevalidate=3600, stale-while- revalidate=30, no-cache, must-revalidate, public, s-maxage=5", "p3p: CP="DSP IDC CUR ADM DELi STP NAV COM UNI INT PHY DEM"", "Timing-Allow-Origin: *", "X-Content-Type-Options: nosniff", "X-XSS-Protection: 1; mode=block", "Content-Encoding: gzip", "Surrogate-Grace: 3600s", "Grace: 30s", "Disqus-Cachetype: CACHE", "X-Served-By: app-155.dal01, shield-1.dal01", "X-Cache: MISS, HIT", "X-Cache-Hits: 0, 3", "Content-Length: 11241", "X-Backend: embed, shield1", }, len = 4917, store = { 2810 { 1f 8b 08 00 00 00 00 00 00 03 ed 5d 6b 77 db c6 |...........]kw..| b5 fd ee 5f 31 61 6e 9b b5 ee 15 49 bc f8 00 2b |..._1an....I...+| 39 95 6d d9 52 62 cb a9 45 c7 4d e3 2c 2d 90 18 |9.m.Rb..E.M.,-..| 92 b0 40 80 c5 43 0c 93 95 ff 7e f7 19 3c 48 81 |.. 
at ..C....~..Xn........| 97 31 6d 7e 70 47 dc 0b d8 1b 38 d8 27 38 c3 2e |.1m~pG....8.'8..| [2043 more] }, }, }, obj (IMS) = 0x7fb620d27800 { vxid = 2331935377, http[obj] = { ws = (nil)[] "HTTP/1.1", "200", "OK", "Server: nginx", "Date: Sun, 01 Feb 2015 03:40:16 GMT", "Content-Type: text/html; charset=utf-8", "Vary: Accept-Encoding", "Content-Security-Policy: script-src https://*.twitter.com:* https://api.adsnative.com/v1/ad.json *.adsafeprotected.com *.google- analytics.com https://glitter-services.disqus.com https://*.services.disqus.com:* disqus.com http://*.twitter.com:* a.disquscdn.com api.taboola.com referrer.disqus.com *.scorecardresearch.com *.moatads.com https://admin.appnext.com/offerWallApi.aspx 'unsafe-eval' https://mobile.adnxs.com/mob *.services.disqus.com:*", "Surrogate-Control: max-age=5", "Last-Modified: Sun, 01 Feb 2015 03:40:11 GMT", "Cache-Control: s-stalewhilerevalidate=3600, stale-while- revalidate=30, no-cache, must-revalidate, public, s-maxage=5", "p3p: CP="DSP IDC CUR ADM DELi STP NAV COM UNI INT PHY DEM"", "Timing-Allow-Origin: *", "X-Content-Type-Options: nosniff", "X-XSS-Protection: 1; mode=block", "Content-Encoding: gzip", "Surrogate-Grace: 3600s", "Grace: 30s", "Disqus-Cachetype: CACHE", "X-Served-By: app-155.dal01, shield-1.dal01", "X-Cache: MISS, HIT", "X-Cache-Hits: 0, 1", "Content-Length: 11241", "X-Backend: embed, shield1", }, len = 11241, store = { 11241 { 1f 8b 08 00 00 00 00 00 00 03 ed 5d 6b 77 db c6 |...........]kw..| b5 fd ee 5f 31 61 6e 9b b5 ee 15 49 bc f8 00 2b |..._1an....I...+| 39 95 6d d9 52 62 cb a9 45 c7 4d e3 2c 2d 90 18 |9.m.Rb..E.M.,-..| 92 b0 40 80 c5 43 0c 93 95 ff 7e f7 19 3c 48 81 |.. at ..C....~.. 
Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 07:50:20 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 07:50:20 -0000 Subject: [Varnish] #1673: Date header not updated in cached response In-Reply-To: <048.6771cc9292c849b70fd02ef19c153e98@varnish-cache.org> References: <048.6771cc9292c849b70fd02ef19c153e98@varnish-cache.org> Message-ID: <063.a908318bebbc3bb0ed5f27c735b6a86f@varnish-cache.org> #1673: Date header not updated in cached response ------------------------+-------------------- Reporter: ioppermann | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 4.0.2 Severity: normal | Resolution: Keywords: | ------------------------+-------------------- Description changed by phk: Old description: > varnish4 does not update the Date: header on a cache hit. > > Example response: > > HTTP/1.1 200 OK > Server: nginx > Date: Thu, 05 Feb 2015 15:09:01 GMT > Content-Type: image/jpeg > Last-Modified: Thu, 05 Feb 2015 11:12:21 GMT > ETag: "54d35015-48bc" > Cache-Control: max-age=72200 > X-Varnish: 781994914 780477333 > Age: 2974 > Via: 1.1 varnish-v4 > X-Cache: HIT 2974 69226.381 imagestore2 > Content-Length: 18620 > Connection: close > > Time of request: > Thu Feb 5 16:58:35 CET 2015 > > This was already fixed back in varnish2: https://www.varnish- > cache.org/trac/ticket/157 New description: varnish4 does not update the Date: header on a cache hit. 
Example response: {{{ HTTP/1.1 200 OK Server: nginx Date: Thu, 05 Feb 2015 15:09:01 GMT Content-Type: image/jpeg Last-Modified: Thu, 05 Feb 2015 11:12:21 GMT ETag: "54d35015-48bc" Cache-Control: max-age=72200 X-Varnish: 781994914 780477333 Age: 2974 Via: 1.1 varnish-v4 X-Cache: HIT 2974 69226.381 imagestore2 Content-Length: 18620 Connection: close Time of request: Thu Feb 5 16:58:35 CET 2015 }}} This was already fixed back in varnish2: https://www.varnish- cache.org/trac/ticket/157 -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 07:54:13 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 07:54:13 -0000 Subject: [Varnish] #1669: Assert error in http_Write() In-Reply-To: <044.389fdcfeb2bc441950de35f87e6b49da@varnish-cache.org> References: <044.389fdcfeb2bc441950de35f87e6b49da@varnish-cache.org> Message-ID: <059.1cadf0e2ce74de04655dbd4d3c4cf8a6@varnish-cache.org> #1669: Assert error in http_Write() ----------------------+-------------------- Reporter: yarivh | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 3.0.6 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Description changed by phk: Old description: > A few times a day varnish panics with the follwing : > > Feb 3 14:12:15 XXXXXXX varnishd[15432]: Child (19229) Panic message: > Assert error in http_Write(), cache_http.c line 1110:#012 > Condition((hp->hd[HTTP_HDR_STATUS].b > ) != 0) not true.#012thread = (cache-worker)#012ident = > Linux,2.6.32-431.29.2.el6.x86_64,x86_64,-smalloc,-smalloc,-hcritbit,epoll#012Backtrace:#012 > 0x42e1c6: /usr/sbin/va > rnishd() [0x42e1c6]#012 0x428f53: /usr/sbin/varnishd(http_Write+0x293) > [0x428f53]#012 0x43136d: /usr/sbin/varnishd(RES_WriteObj+0x33d) > [0x43136d]#012 0x418cdc: /usr/sbi > n/varnishd(CNT_Session+0x7ac) [0x418cdc]#012 0x42fc56: > /usr/sbin/varnishd() [0x42fc56]#012 0x7f750e94a9d1: > 
/lib64/libpthread.so.0(+0x79d1) [0x7f750e94a9d1]#012 0x7f750e > 69786d: /lib64/libc.so.6(clone+0x6d) [0x7f750e69786d]#012sp = > 0x7f74c3f25008 {#012 fd = 195, id = 195, xid = 1321300019,#012 client = > 2.22.50.84 45769,#012 step = STP_D > ELIVER,#012 handling = deliver,#012 err_code = 200, err_reason = > (null),#012 restarts = 0, esi_level = 0#012 flags = is_gunzip#012 > bodystatus = 4#012 ws = 0x7f74c3f > 25080 { #012 id = "sess",#012 {s,f,r,e} = > {0x7f74c3f275f8,+1008,(nil),+131072},#012 },#012 http[req] = {#012 > ws = 0x7f74c3f25080[sess]#012 "GET",#012 > "/home/1,7340,L-1335-17979-31594109,00.html",#012 "HTTP/1.1",#012 > "Accept: > text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",#012 > "From: goog > lebot(at)googlebot.com",#012 "User-Agent: Mozilla/5.0 (compatible; > Googlebot/2.1; +http://www.google.com/bot.html)",#012 "X-Akamai- > Edgescape: georegion=288,count > > OS and varnish config and version : > > #### varnish version varnish-3.0.6-1.el5.centos.x86_64 ### > ### OS centos 6.5 #### > ### Kernel 2.6.32-431.29.2.el6.x86_64 #1 SMP Tue Sep 9 21:36:05 UTC 2014 > x86_64 x86_64 x86_64 GNU/Linux ### > NFILES=131072 > MEMLOCK=82000 > RELOAD_VCL=1 > VARNISH_VCL_CONF=/etc/varnish/default.vcl > VARNISH_LISTEN_PORT=80 > VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1 > VARNISH_ADMIN_LISTEN_PORT=6082 > VARNISH_SECRET_FILE=/etc/varnish/secret > VARNISH_MIN_THREADS=50 > VARNISH_MAX_THREADS=1000 > VARNISH_THREAD_TIMEOUT=120 > VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin > VARNISH_STORAGE_SIZE=7G > VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}" > VARNISH_TTL=86400 > DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ > -f ${VARNISH_VCL_CONF} \ > -T > ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \ > -t ${VARNISH_TTL} \ > -w > ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \ > -u varnish -g varnish \ > -S ${VARNISH_SECRET_FILE} \ > -s ${VARNISH_STORAGE} \ > -p sess_timeout=305\ > -p 
http_req_hdr_len=16384 \ > -p http_req_size=65536 \ > -p http_resp_hdr_len=16384 \ > -p http_resp_size=65536 \ > -p http_gzip_support=off \ > -p pipe_timeout=120 \ > -p sess_workspace=131072 \ > -p http_max_hdr=256" New description: A few times a day varnish panics with the follwing : {{{ Feb 3 14:12:15 XXXXXXX varnishd[15432]: Child (19229) Panic message: Assert error in http_Write(), cache_http.c line 1110: Condition((hp->hd[HTTP_HDR_STATUS].b) != 0) not true. thread = (cache-worker) ident = Linux,2.6.32-431.29.2.el6.x86_64,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42e1c6: /usr/sbin/varnishd() [0x42e1c6] 0x428f53: /usr/sbin/varnishd(http_Write+0x293) [0x428f53] 0x43136d: /usr/sbin/varnishd(RES_WriteObj+0x33d) [0x43136d] 0x418cdc: /usr/sbin/varnishd(CNT_Session+0x7ac) [0x418cdc] 0x42fc56: /usr/sbin/varnishd() [0x42fc56] 0x7f750e94a9d1: /lib64/libpthread.so.0(+0x79d1) [0x7f750e94a9d1] 0x7f750e69786d: /lib64/libc.so.6(clone+0x6d) [0x7f750e69786d] sp = 0x7f74c3f25008 { fd = 195, id = 195, xid = 1321300019, client = 2.22.50.84 45769, step = STP_DELIVER, handling = deliver, err_code = 200, err_reason = (null), restarts = 0, esi_level = 0 flags = is_gunzip bodystatus = 4 ws = 0x7f74c3f25080 { id = "sess", {s,f,r,e} = {0x7f74c3f275f8,+1008,(nil),+131072}, }, http[req] = { ws = 0x7f74c3f25080[sess] "GET", "/home/1,7340,L-1335-17979-31594109,00.html", "HTTP/1.1", "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "From: googlebot(at)googlebot.com", "User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)", "X-Akamai-Edgescape: georegion=288,count }}} OS and varnish config and version : {{{ #### varnish version varnish-3.0.6-1.el5.centos.x86_64 ### ### OS centos 6.5 #### ### Kernel 2.6.32-431.29.2.el6.x86_64 #1 SMP Tue Sep 9 21:36:05 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux ### NFILES=131072 MEMLOCK=82000 RELOAD_VCL=1 VARNISH_VCL_CONF=/etc/varnish/default.vcl VARNISH_LISTEN_PORT=80 
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1 VARNISH_ADMIN_LISTEN_PORT=6082 VARNISH_SECRET_FILE=/etc/varnish/secret VARNISH_MIN_THREADS=50 VARNISH_MAX_THREADS=1000 VARNISH_THREAD_TIMEOUT=120 VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin VARNISH_STORAGE_SIZE=7G VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}" VARNISH_TTL=86400 DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ -f ${VARNISH_VCL_CONF} \ -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \ -t ${VARNISH_TTL} \ -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \ -u varnish -g varnish \ -S ${VARNISH_SECRET_FILE} \ -s ${VARNISH_STORAGE} \ -p sess_timeout=305\ -p http_req_hdr_len=16384 \ -p http_req_size=65536 \ -p http_resp_hdr_len=16384 \ -p http_resp_size=65536 \ -p http_gzip_support=off \ -p pipe_timeout=120 \ -p sess_workspace=131072 \ -p http_max_hdr=256" }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 10:27:58 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 10:27:58 -0000 Subject: [Varnish] #1672: Assert on cached object with non-200 status and bogus 304 backend reply In-Reply-To: <044.2580e6305cef4c20762b1d60b28f04d0@varnish-cache.org> References: <044.2580e6305cef4c20762b1d60b28f04d0@varnish-cache.org> Message-ID: <059.04e77ca93ca9a09cefa5fadfb8ed444e@varnish-cache.org> #1672: Assert on cached object with non-200 status and bogus 304 backend reply ----------------------+---------------------------------------- Reporter: martin | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: Severity: normal | Resolution: fixed Keywords: | ----------------------+---------------------------------------- Changes (by Poul-Henning Kamp ): * owner: => Poul-Henning Kamp * status: new => closed * resolution: => fixed Comment: In [021ffeef1dbb9510d147f080795a568253a1a13a]: {{{ #!CommitTicketReference 
repository="" revision="021ffeef1dbb9510d147f080795a568253a1a13a" If the backend sends 304 to a non-conditional fetch, we should not assert but fail the fetch. Fixes #1672 Based mostly on patch from martin }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 12:46:23 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 12:46:23 -0000 Subject: [Varnish] #1666: make init script check config before restarting In-Reply-To: <050.d39f2f7511b733263cce920f8a427047@varnish-cache.org> References: <050.d39f2f7511b733263cce920f8a427047@varnish-cache.org> Message-ID: <065.8550351a53345848559bf7df6b3dbf9d@varnish-cache.org> #1666: make init script check config before restarting --------------------------+----------------------- Reporter: KlavsKlavsen | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: Keywords: | --------------------------+----------------------- Changes (by lkarsten): * owner: => lkarsten -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 12:46:41 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 12:46:41 -0000 Subject: [Varnish] #1666: make init script check config before restarting In-Reply-To: <050.d39f2f7511b733263cce920f8a427047@varnish-cache.org> References: <050.d39f2f7511b733263cce920f8a427047@varnish-cache.org> Message-ID: <065.35c572eb4ed7fbcbdada7325c945774b@varnish-cache.org> #1666: make init script check config before restarting --------------------------+----------------------- Reporter: KlavsKlavsen | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: Keywords: | --------------------------+----------------------- Comment (by lkarsten): Discussed during bugwash today. It makes sense, and we should do this. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 13:04:11 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 13:04:11 -0000 Subject: [Varnish] #1663: Both chmod 0755 and chown mgmt.uid used In-Reply-To: <048.6c04e642e3b0b687751b8da68f22ad7a@varnish-cache.org> References: <048.6c04e642e3b0b687751b8da68f22ad7a@varnish-cache.org> Message-ID: <063.bac24385272acf28a354e4f2a60d6329@varnish-cache.org> #1663: Both chmod 0755 and chown mgmt.uid used ------------------------+-------------------- Reporter: puiterwijk | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ------------------------+-------------------- Changes (by phk): * owner: => phk -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 13:21:30 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 13:21:30 -0000 Subject: [Varnish] #1667: std.cache_req_body() does not respect size limit for chunked transfer encoding request bodies. In-Reply-To: <043.04919218c8eeaea07623549965d6da5a@varnish-cache.org> References: <043.04919218c8eeaea07623549965d6da5a@varnish-cache.org> Message-ID: <058.cc17c9bfb082a1d802e04fac7f8fd19e@varnish-cache.org> #1667: std.cache_req_body() does not respect size limit for chunked transfer encoding request bodies. 
----------------------+------------------------ Reporter: daghf | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: duplicate Keywords: | ----------------------+------------------------ Changes (by aondio): * status: new => closed * resolution: => duplicate Comment: That's related to #1664 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 13:21:49 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 13:21:49 -0000 Subject: [Varnish] #1664: Assert error in VFP_Error(), cache/cache_fetch_proc.c line 61 In-Reply-To: <043.782dcf41379c991d410f1ae63ea4edff@varnish-cache.org> References: <043.782dcf41379c991d410f1ae63ea4edff@varnish-cache.org> Message-ID: <058.b67b40e42db0c1f3a4aec6f134b164f7@varnish-cache.org> #1664: Assert error in VFP_Error(), cache/cache_fetch_proc.c line 61 ----------------------+---------------------- Reporter: daghf | Owner: aondio Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: unknown Severity: normal | Resolution: Keywords: | ----------------------+---------------------- Changes (by aondio): * owner: => aondio Old description: > If std.cache_req_body() is called with a POST body larger than the > configured max size, we crash. New description: If std.cache_req_body() is called with a POST body larger than the configured max size, we crash. For chunked transfer encoding, std.cache_req_body() will happily cache request bodies larger than the provided size limitation. 
-- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 13:25:48 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 13:25:48 -0000 Subject: [Varnish] #1665: Wrong behavior of timeout_req In-Reply-To: <047.e6f799ed399a9059522e2ae83022e4c0@varnish-cache.org> References: <047.e6f799ed399a9059522e2ae83022e4c0@varnish-cache.org> Message-ID: <062.b4191e7942e342d3af403f5851dbf5d9@varnish-cache.org> #1665: Wrong behavior of timeout_req -----------------------+--------------------- Reporter: sorinescu | Owner: aondio Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.2 Severity: normal | Resolution: fixed Keywords: | -----------------------+--------------------- Comment (by aondio): The suggested patch can't be applied to the varnish source, but we have pushed a commit that should fix the problem. commit f9aa6281f5194ed27cfa4c7ad7ce50cdb8f9bf1c -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 14:19:03 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 14:19:03 -0000 Subject: [Varnish] #1669: Assert error in http_Write() In-Reply-To: <044.389fdcfeb2bc441950de35f87e6b49da@varnish-cache.org> References: <044.389fdcfeb2bc441950de35f87e6b49da@varnish-cache.org> Message-ID: <059.917daf488089b7a26eb3dc0f95fa2f0e@varnish-cache.org> #1669: Assert error in http_Write() ----------------------+------------------------- Reporter: yarivh | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.6 Severity: normal | Resolution: worksforme Keywords: | ----------------------+------------------------- Changes (by martin): * status: new => closed * resolution: => worksforme Comment: Without a full panic log (varnishadm panic.show) it's hard to pinpoint exactly what happened in this case. 
As Varnish 3.0.6 is an old release, we suggest updating to Varnish 4, where this issue should be resolved. Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 14:22:30 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 14:22:30 -0000 Subject: [Varnish] #1673: Date header not updated in cached response In-Reply-To: <048.6771cc9292c849b70fd02ef19c153e98@varnish-cache.org> References: <048.6771cc9292c849b70fd02ef19c153e98@varnish-cache.org> Message-ID: <063.34ed4178e384e3cb12dd87528d67d095@varnish-cache.org> #1673: Date header not updated in cached response ------------------------+---------------------- Reporter: ioppermann | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 4.0.2 Severity: normal | Resolution: wontfix Keywords: | ------------------------+---------------------- Changes (by martin): * status: new => closed * resolution: => wontfix Comment: The behavior of the Date header was changed with the release of Varnish 4.0, in commit 89870e0bbd785964c322e1e453f492d747731c88, in order to be more RFC compliant. This is the new behavior.
Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 15:10:44 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 15:10:44 -0000 Subject: [Varnish] #1669: Assert error in http_Write() In-Reply-To: <044.389fdcfeb2bc441950de35f87e6b49da@varnish-cache.org> References: <044.389fdcfeb2bc441950de35f87e6b49da@varnish-cache.org> Message-ID: <059.059ea8ca071b5a39b27e5b8618e96930@varnish-cache.org> #1669: Assert error in http_Write() ----------------------+------------------------- Reporter: yarivh | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.6 Severity: normal | Resolution: worksforme Keywords: | ----------------------+------------------------- Comment (by yarivh): Hi, sorry for the inconvenience, but an upgrade to v4 is a pretty big project that will take me a few months. Here is the panic.show output - maybe it can point to a misconfig or something else on my part (thanks in advance): varnish> panic.show 200 Last panic at: Mon, 09 Feb 2015 08:30:49 GMT Assert error in http_Write(), cache_http.c line 1110: Condition((hp->hd[HTTP_HDR_STATUS].b) != 0) not true.
thread = (cache-worker) ident = Linux,2.6.32-431.29.2.el6.x86_64,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x42e1c6: /usr/sbin/varnishd() [0x42e1c6] 0x428f53: /usr/sbin/varnishd(http_Write+0x293) [0x428f53] 0x43136d: /usr/sbin/varnishd(RES_WriteObj+0x33d) [0x43136d] 0x418cdc: /usr/sbin/varnishd(CNT_Session+0x7ac) [0x418cdc] 0x42fc56: /usr/sbin/varnishd() [0x42fc56] 0x7f03f3d939d1: /lib64/libpthread.so.0(+0x79d1) [0x7f03f3d939d1] 0x7f03f3ae086d: /lib64/libc.so.6(clone+0x6d) [0x7f03f3ae086d] sp = 0x7f039bb45008 { fd = 103, id = 103, xid = 901173933, client = 2.22.50.93 53732, step = STP_DELIVER, handling = deliver, err_code = 200, err_reason = (null), restarts = 0, esi_level = 0 flags = is_gunzip bodystatus = 4 ws = 0x7f039bb45080 { id = "sess", {s,f,r,e} = {0x7f039bb475f8,+744,(nil),+131072}, }, http[req] = { ws = 0x7f039bb45080[sess] "GET", "/articles/0%2C7340%2CL-3641851%2C00.html", "HTTP/1.1", "Pragma: no-cache", "Accept: */*", "From: bingbot(at)microsoft.com", "User-Agent: Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)", "True-Client-IP: 23.101.123.168", "X-Akamai-CONFIG-LOG-DETAIL: true", "TE: chunked;q=1.0", "Connection: TE", "Akamai-Origin-Hop: 2", "Via: 1.1 v1-akamaitech.net(ghost) (AkamaiGHost), 1.1 akamai.net(ghost) (AkamaiGHost)", "Host: www.xxx.co.il", "Cache-Control: no-cache, max-age=106", "Connection: keep-alive", "Cookie: akamai=true;akamai=true", "X-Forwarded-For: 23.101.123.168, 23.3.96.92, 2.22.50.93", }, worker = 0x7f03b22a8a90 { ws = 0x7f03b22a8cc8 { overflow id = "wrk", {s,f,r,e} = {0x7f03b2296a40,+65536,(nil),+65536}, }, http[resp] = { ws = 0x7f03b22a8cc8[wrk] "HTTP/1.1", "OK", "Pragma: no-cache", "vg_id: 1", "X-me: 09", "Content-Type: text/html; charset=UTF-8", "Content-Length: 154573", "Accept-Ranges: bytes", }, }, vcl = { srcname = { "input", "Default", }, }, obj = 0x7f032e4f4800 { xid = 901173933, ws = 0x7f032e4f4818 { id = "obj", {s,f,r,e} = {0x7f032e4f49f0,+200,(nil),+232}, }, http[obj] = { ws = 
0x7f032e4f4818[obj] "HTTP/1.1", "OK", "Date: Mon, 09 Feb 2015 08:30:47 GMT", "Server: Apache", "Pragma: no-cache", "vg_id: 1", "X-me: 09", "Content-Type: text/html; charset=UTF-8", "Content-Length: 154573", }, len = 154573, store = { 77286 { 3c 21 44 4f 43 54 59 50 45 20 68 74 6d 6c 3e 0a |.| 3c 21 2d 2d 20 56 69 67 6e 65 74 74 65 20 56 36 |. Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 15:11:50 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 15:11:50 -0000 Subject: [Varnish] #1669: Assert error in http_Write() In-Reply-To: <044.389fdcfeb2bc441950de35f87e6b49da@varnish-cache.org> References: <044.389fdcfeb2bc441950de35f87e6b49da@varnish-cache.org> Message-ID: <059.add2bb331dce39ea2a1fb5aa14e6efb4@varnish-cache.org> #1669: Assert error in http_Write() ----------------------+----------------------- Reporter: yarivh | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 3.0.6 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Changes (by yarivh): * status: closed => reopened * resolution: worksforme => -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 15:19:55 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 15:19:55 -0000 Subject: [Varnish] #1669: Assert error in http_Write() In-Reply-To: <044.389fdcfeb2bc441950de35f87e6b49da@varnish-cache.org> References: <044.389fdcfeb2bc441950de35f87e6b49da@varnish-cache.org> Message-ID: <059.5158e60873a72700418c5f164da9fcd8@varnish-cache.org> #1669: Assert error in http_Write() ----------------------+---------------------- Reporter: yarivh | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 3.0.6 Severity: normal | Resolution: invalid Keywords: | ----------------------+---------------------- Changes (by martin): * status: reopened => closed * 
resolution: => invalid Comment: That helped. You are running out of worker workspace, which causes an empty header in the reply. Try increasing the thread_pool_workspace runtime parameter (ref https://www.varnish-cache.org/docs/3.0/reference/varnishd.html#run-time-parameters). Changes to this parameter take time to propagate, so a restart to apply it immediately might be advisable. Closing as 'invalid' as this is a configuration error and not a bug. Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 16:00:03 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 16:00:03 -0000 Subject: [Varnish] #1674: Debian: varnishncsa fails to start during boot Message-ID: <043.c33d34c6dbef7fb5306b98989678d5f6@varnish-cache.org> #1674: Debian: varnishncsa fails to start during boot -------------------+----------------------- Reporter: idl0r | Type: defect Status: new | Priority: normal Milestone: | Component: packaging Version: 4.0.2 | Severity: normal Keywords: | -------------------+----------------------- Hi, this is related to the repo.varnish-cache.org Debian packages. varnishncsa fails to start during boot because it is started too soon (e.g. right after varnish itself), while varnish has not yet created the _.vsm file, so varnishncsa can't find it and doesn't start. A check, and perhaps a very short delay, might be good in this case. Example:
{{{
_count=0
while [ ! -f "$SOCKET" ] && [ "$_count" -lt 5 ]; do
    sleep 1
    _count=$((_count + 1))
done
if [ ! -f "$SOCKET" ]; then
    echo "still not available ..."
else
    echo "Done!"
fi
}}}
5 seconds should be fine. Even 10 might work, but nothing higher should be used.
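A sketch of how the init script could integrate such a check (this is not the actual Debian script; the VSM path default and the commented-out varnishncsa invocation are illustrative placeholders, and for this self-contained demo VSM_FILE falls back to a temporary file that already exists):

```shell
#!/bin/sh
# Wait for varnishd's shared-memory file before starting varnishncsa.
# Real path would be something like /var/lib/varnish/`hostname`/_.vsm;
# here we default to a mktemp file so the sketch runs standalone.
VSM_FILE="${VSM_FILE:-$(mktemp)}"

wait_for_vsm() {
    _count=0
    # Poll once per second, giving up after 5 attempts as suggested above.
    while [ ! -f "$VSM_FILE" ] && [ "$_count" -lt 5 ]; do
        sleep 1
        _count=$((_count + 1))
    done
    # Exit status 0 only if the file eventually appeared.
    [ -f "$VSM_FILE" ]
}

if wait_for_vsm; then
    echo "VSM file present, starting varnishncsa"
    # start-stop-daemon --start --exec /usr/bin/varnishncsa ... (omitted)
else
    echo "VSM file still not available, not starting varnishncsa" >&2
fi
```

With the fallback in place the check succeeds immediately; on a real boot the loop gives varnishd up to five seconds to create the file.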
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 9 16:39:53 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 09 Feb 2015 16:39:53 -0000 Subject: [Varnish] #1617: Varnish 4 weird memory consumption / calculation In-Reply-To: <046.253c419c4f87de3d220969908bfcf866@varnish-cache.org> References: <046.253c419c4f87de3d220969908bfcf866@varnish-cache.org> Message-ID: <061.e547e2167ebb393ac022d03c8388d9dd@varnish-cache.org> #1617: Varnish 4 weird memory consumption / calculation ----------------------+------------------------- Reporter: whocares | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.2 Severity: normal | Resolution: worksforme Keywords: | ----------------------+------------------------- Changes (by daghf): * status: new => closed * resolution: => worksforme Comment: Hi, When a backend transmits a chunked transfer-encoded response, Varnish can't know up front how much storage to allocate for that object. It will allocate 'fetch_chunksize'-sized chunks, one at a time as the object is being fetched. When streaming is disabled, a trim operation is issued on the last storage chunk, which resizes it to the correct size, avoiding the overhead of always consuming a full-sized chunk. However, with streaming enabled (the default in Varnish 4.0), we currently can't reliably trim a storage chunk. There are plans and some work in current master to redesign the storage/stevedore API, which will allow us to handle this differently, but currently it's mostly on the drawing board. For now the best bet is to tune fetch_chunksize. The default has since been decreased from 128k to 16k (c84d1f886671fd98317890b5a41bd60f0837206f), which should considerably lessen the overhead some users are seeing. Alternatively, you can set beresp.do_stream = false in vcl_backend_fetch to disable streaming.
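The two remedies above can be sketched as follows. This fragment is illustrative only (the backend address is made up), and note that in the Varnish 4.0 built-in subroutines `beresp` is normally written in `vcl_backend_response` rather than `vcl_backend_fetch`:

```vcl
vcl 4.0;

# Dummy backend so the VCL compiles; not from the ticket.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_backend_response {
    # Disable streaming so the final storage chunk can be trimmed
    # to its actual size, as in Varnish 3.0.
    set beresp.do_stream = false;
}

# fetch_chunksize itself is a runtime parameter, set at startup, e.g.:
#   varnishd ... -p fetch_chunksize=16k
```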
This will enable 3.0-style trimming of storage chunks. Closing this as a configuration issue. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Feb 10 12:00:22 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 10 Feb 2015 12:00:22 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. Message-ID: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------- Reporter: zaterio@? | Type: defect Status: new | Priority: normal Milestone: Varnish 4.0 release | Component: varnishd Version: trunk | Severity: normal Keywords: in_waiter tcp_handle | ----------------------------------+---------------------- Previously we only saw the panic from ticket #1628. We then updated to trunk revision a281a10 and added a 90 GB file backend for some domains. With the new version, #1628 has not been seen again, but this new condition appeared: varnish> panic.show 200 Last panic at: Sat, 07 Feb 2015 06:56:13 GMT Assert error in tcp_handle(), cache/cache_backend_tcp.c line 96: Condition((vbc->in_waiter) != 0) not true.
thread = (cache-epoll)
version = varnish-trunk revision a281a10
ident = Linux,3.2.0-4-amd64,x86_64,-smalloc,-sfile,-smalloc,-hclassic,epoll
Backtrace:
  0x4356e4: pan_ic+0x134
  0x415b54: tcp_handle+0x254
  0x466fe9: Wait_Handle+0x89
  0x467881: vwe_thread+0xf1
  0x7fd62d5b8b50: libpthread.so.0(+0x6b50) [0x7fd62d5b8b50]
  0x7fd62d30270d: libc.so.6(clone+0x6d) [0x7fd62d30270d]

varnishd -V
varnishd (varnish-trunk revision a281a10)
Copyright (c) 2006 Verdens Gang AS
Copyright (c) 2006-2015 Varnish Software AS

DAEMON_OPTS="-a XXX.XXX.XXX.XXX:80, \
  -T XXX.XXX.XXX.XXX:6082 \
  -f /etc/varnish/default.vcl \
  -h classic,16383 \
  -s ram1=malloc,10G \
  -s disk1=file,/varnishcache/varnish.bin,90G \
  -p thread_pools=2 \
  -p thread_pool_min=100 \
  -p thread_pool_max=3000 \
  -p thread_pool_add_delay=2 \
  -p auto_restart=on \
  -p listen_depth=2048 \
  -p ping_interval=3 \
  -p cli_timeout=25 \
  -p ban_dups=on"

-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Feb 10 15:55:45 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 10 Feb 2015 15:55:45 -0000 Subject: [Varnish] #1676: improve solaris least privileges & workdir permissions Message-ID: <043.17b8f4df3156ae13bd2879c377c27b6f@varnish-cache.org> #1676: improve solaris least privileges & workdir permissions ----------------------+--------------------------------- Reporter: slink | Owner: slink Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+--------------------------------- {{{
[uplex at varnishdev-il ~/src/varnish-cache]$ pfexec /tmp/sbin/varnishd -a 127.0.0.1:8080 -b 127.0.0.1:80 &
[1] 90955
[uplex at varnishdev-il ~/src/varnish-cache]$ Message from C-compiler:
ld: fatal: file ./vcl_boot.so: unlink failed: Permission denied
collect2: error: ld returned 1 exit status
Running C-compiler failed, exited with 1
}}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs
at varnish-cache.org Tue Feb 10 20:08:55 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 10 Feb 2015 20:08:55 -0000 Subject: [Varnish] #1539: Assert error in cnt_lookup(), cache/cache_req_fsm.c line 411: In-Reply-To: <047.9e25f0913e70f9b448183830fc7e3ec7@varnish-cache.org> References: <047.9e25f0913e70f9b448183830fc7e3ec7@varnish-cache.org> Message-ID: <062.544006e13b34d6721f48eb909c8a7266@varnish-cache.org> #1539: Assert error in cnt_lookup(), cache/cache_req_fsm.c line 411: -----------------------+---------------------------------- Reporter: shing6326 | Owner: Type: defect | Status: reopened Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.1 Severity: normal | Resolution: Keywords: | -----------------------+---------------------------------- Comment (by mattrobenolt): Got a panic as well from this in 4.0.3-rc2 {{{ varnish> panic.show 200 Last panic at: Tue, 10 Feb 2015 20:02:20 GMT Assert error in cnt_lookup(), cache/cache_req_fsm.c line 430: Condition((oc->exp_flags & (1<<7)) == 0) not true. 
thread = (cache-worker) version = varnish-4.0.3-rc2 revision 1b96340 ident = Linux,3.13.0-43-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x433d8a: /usr/sbin/varnishd() [0x433d8a] 0x4379a6: /usr/sbin/varnishd() [0x4379a6] 0x4382d9: /usr/sbin/varnishd(CNT_Request+0x869) [0x4382d9] 0x42dd23: /usr/sbin/varnishd(HTTP1_Session+0x3f3) [0x42dd23] 0x43c268: /usr/sbin/varnishd() [0x43c268] 0x436ca1: /usr/sbin/varnishd(Pool_Work_Thread+0x381) [0x436ca1] 0x449c58: /usr/sbin/varnishd() [0x449c58] 0x7f393684ee9a: /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7f393684ee9a] 0x7f393657c2ed: /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f393657c2ed] req = 0x7f3492b5b020 { sp = 0x7f38ce3411e0, vxid = 1133905496, step = R_STP_LOOKUP, req_body = R_BODY_NONE, restarts = 0, esi_level = 0, sp = 0x7f38ce3411e0 { fd = 11605, vxid = 60163671, client = 108.56.228.37 53694, step = S_STP_WORKING, }, worker = 0x7f3920a2ec20 { ws = 0x7f3920a2ee40 { id = "wrk", {s,f,r,e} = {0x7f3920a2e410,0x7f3920a2e410,(nil),+2048}, }, VCL::method = 0x0, VCL::return = deliver, }, ws = 0x7f3492b5b1b8 { id = "req", {s,f,r,e} = {0x7f3492b5d010,+4640,(nil),+73744}, }, http[req] = { ws = 0x7f3492b5b1b8[req] "GET", "/embed/comments/?base=default&f=ewm&s_o=default&t_d=Beck%20calls%20Kanye%20%27genius%27%20after%20Grammy%20interruption&t_t=Beck%20calls%20Kanye%20%27genius%27%20after%20Grammy%20interruption&t_u=http%3A%2F%2Fwww.ew.com%2Farticle%2F2015%2F02%2F09 %2Fbeck-responds-kanyes-grammy- interruption&version=d1b39333cdab5cb490d069fe62825a04", "HTTP/1.1", "Connection: keep-alive", "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36", "Referer: http://www.ew.com/article/2015/02/09/beck-responds-kanyes- grammy-interruption", "Accept-Language: en-US,en;q=0.8", "X-Forwarded-For: 108.56.228.37", "X-Forwarded-Proto: http", "Host: 
disqus.com", "Disqus-Root: 1", "Accept-Encoding: gzip", }, vcl = { srcname = { "input", "Builtin", "includes/4.0/recv-common.vcl", "includes/downgrade.vcl", "includes/4.0/responses/301-moved.vcl", "includes/4.0/responses/302-found.vcl", "includes/4.0/responses/404-not-found.vcl", "includes/4.0/responses/999-any.vcl", "includes/x-served-by.vcl", }, }, obj (REQ) = 0x7f3421b4e000 { vxid = 2187265575, http[obj] = { ws = (nil)[] "HTTP/1.1", "200", "OK", "Server: nginx", "Date: Tue, 10 Feb 2015 20:02:03 GMT", "Content-Type: text/html; charset=utf-8", "Content-Security-Policy: script-src https://*.twitter.com:* https://api.adsnative.com/v1/ad.json *.adsafeprotected.com *.google- analytics.com https://glitter-services.disqus.com https://*.services.disqus.com:* disqus.com http://*.twitter.com:* a.disquscdn.com api.taboola.com referrer.disqus.com *.scorecardresearch.com *.moatads.com https://admin.appnext.com/offerWallApi.aspx 'unsafe-eval' https://mobile.adnxs.com/mob *.services.disqus.com:*", "Cache-Control: s-stalewhilerevalidate=3600, stale-while- revalidate=30, no-cache, must-revalidate, public, s-maxage=5", "p3p: CP="DSP IDC CUR ADM DELi STP NAV COM UNI INT PHY DEM"", "Timing-Allow-Origin: *", "X-Content-Type-Options: nosniff", "X-XSS-Protection: 1; mode=block", "Vary: Accept-Encoding", "Surrogate-Control: max-age=5", "Last-Modified: Tue, 10 Feb 2015 00:03:45 GMT", "Content-Encoding: gzip", "Surrogate-Grace: 3600s", "Grace: 30s", "Disqus-Cachetype: CACHE", "X-Served-By: app-167.dal01, shield-1.dal01", "X-Cache: MISS, HIT", "X-Cache-Hits: 0, 2", "Content-Length: 2089", "X-Backend: embed, shield1", }, len = 2089, store = { 2089 { 1f 8b 08 00 00 00 00 00 00 03 b5 58 6d 73 db 36 |...........Xms.6| 12 fe ee 5f c1 f0 a6 ed 17 49 7c 7f 53 24 77 7c |..._.....I|.S$w|| b1 db c9 5c 73 4d 63 b7 b9 9b 4c 46 03 93 90 84 |...\sMc...LF....| 98 24 58 00 94 a2 64 fc df fb 00 a0 25 d9 9e b8 |.$X...d.....%...| [2025 more] }, }, }, }, }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator 
From varnish-bugs at varnish-cache.org Tue Feb 10 21:03:56 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 10 Feb 2015 21:03:56 -0000 Subject: [Varnish] #1677: varnish init script returns before varnishlog can access it. Message-ID: <049.dd12648e61a15ef97afa5f73936bd7a7@varnish-cache.org> #1677: varnish init script returns before varnishlog can access it. -------------------------+-------------------- Reporter: mattjbarlow | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: unknown | Severity: normal Keywords: | -------------------------+-------------------- If you restart the varnish service and then restart the varnishlog service directly after, varnishlog fails to start. You must wait a few seconds before restarting varnishlog. I tested this on Ubuntu 14.04 with varnish 4.0.2-1 installed from the varnish-cache repository. You can reproduce this by writing a script that performs a 'service varnish restart' followed by a 'service varnishlog restart' (see gist below). The varnishlog restart will fail, and when you then run it manually, it succeeds the second time.
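A workaround for this class of race is to retry the second restart with a short pause. The sketch below is illustrative and not from the ticket or the gist; the `retry` helper and the `flaky` stand-in command are invented here (in practice the retried command would be `service varnishlog restart`):

```shell
#!/bin/sh
# retry N CMD...: run CMD up to N times, sleeping 1s between attempts.
retry() {
    _tries=$1; shift
    _n=0
    while [ "$_n" -lt "$_tries" ]; do
        "$@" && return 0
        _n=$((_n + 1))
        sleep 1
    done
    return 1
}

# Demo: a command that fails twice and then succeeds, standing in for
# 'service varnishlog restart' racing against varnishd's startup.
attempts=0
flaky() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

retry 5 flaky && result=ok || result=failed
echo "$result after $attempts attempts"
```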
https://gist.github.com/mattjbarlow/bf067ab4d09852b9d03b -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Feb 11 11:00:06 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 11 Feb 2015 11:00:06 -0000 Subject: [Varnish] #1539: Assert error in cnt_lookup(), cache/cache_req_fsm.c line 411: In-Reply-To: <047.9e25f0913e70f9b448183830fc7e3ec7@varnish-cache.org> References: <047.9e25f0913e70f9b448183830fc7e3ec7@varnish-cache.org> Message-ID: <062.bc854158369a2db22569433496fa750b@varnish-cache.org> #1539: Assert error in cnt_lookup(), cache/cache_req_fsm.c line 411: -----------------------+----------------------------------------------- Reporter: shing6326 | Owner: Martin Blix Grydeland Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.1 Severity: normal | Resolution: fixed Keywords: | -----------------------+----------------------------------------------- Changes (by Martin Blix Grydeland ): * owner: => Martin Blix Grydeland * status: reopened => closed * resolution: => fixed Comment: In [e08ed1881b7d49a3f3bbf2f57cd792946976892a]: {{{ #!CommitTicketReference repository="" revision="e08ed1881b7d49a3f3bbf2f57cd792946976892a" Remove a racy assertion on the OC_EF_DYING state This state could be set by the expiry timer even though a reference is grabbed during lookup, causing the assertion to trigger. 
Fixes: #1539 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Feb 11 12:24:25 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 11 Feb 2015 12:24:25 -0000 Subject: [Varnish] #1539: Assert error in cnt_lookup(), cache/cache_req_fsm.c line 411: In-Reply-To: <047.9e25f0913e70f9b448183830fc7e3ec7@varnish-cache.org> References: <047.9e25f0913e70f9b448183830fc7e3ec7@varnish-cache.org> Message-ID: <062.2c4df38527d85a893ea0f51c00e05ddc@varnish-cache.org> #1539: Assert error in cnt_lookup(), cache/cache_req_fsm.c line 411: -----------------------+----------------------------------------------- Reporter: shing6326 | Owner: Martin Blix Grydeland Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.1 Severity: normal | Resolution: fixed Keywords: | -----------------------+----------------------------------------------- Comment (by Lasse Karstensen ): In [4ddf0eef791a030694380fbb6736d87a43813c16]: {{{ #!CommitTicketReference repository="" revision="4ddf0eef791a030694380fbb6736d87a43813c16" Remove a racy assertion on the OC_EF_DYING state This state could be set by the expiry timer even though a reference is grabbed during lookup, causing the assertion to trigger. 
Fixes: #1539 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Feb 11 12:59:15 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 11 Feb 2015 12:59:15 -0000 Subject: [Varnish] #1672: Assert on cached object with non-200 status and bogus 304 backend reply In-Reply-To: <044.2580e6305cef4c20762b1d60b28f04d0@varnish-cache.org> References: <044.2580e6305cef4c20762b1d60b28f04d0@varnish-cache.org> Message-ID: <059.58094d501d8a924e1742f1a059363643@varnish-cache.org> #1672: Assert on cached object with non-200 status and bogus 304 backend reply ----------------------+---------------------------------------- Reporter: martin | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: Severity: normal | Resolution: fixed Keywords: | ----------------------+---------------------------------------- Comment (by Lasse Karstensen ): In [b7b4cf08134ea6cdb06214caac6fb56f94cc5cec]: {{{ #!CommitTicketReference repository="" revision="b7b4cf08134ea6cdb06214caac6fb56f94cc5cec" Do not recognize a 304 as a valid revalidation response for an ims_oc without OF_IMSCAND Fixes: #1672 Conflicts: bin/varnishd/cache/cache_fetch.c }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Feb 11 13:01:30 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 11 Feb 2015 13:01:30 -0000 Subject: [Varnish] #1637: Assert error in VFP_Fetch_Body() In-Reply-To: <045.21ddee12af57707010d10fb844941331@varnish-cache.org> References: <045.21ddee12af57707010d10fb844941331@varnish-cache.org> Message-ID: <060.2bbc8dedf0f741ee67cc1e6efad4aef4@varnish-cache.org> #1637: Assert error in VFP_Fetch_Body() --------------------------+--------------------- Reporter: llavaud | Owner: daghf Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.2 Severity: major | Resolution: fixed Keywords: Assert error | 
--------------------------+--------------------- Comment (by Lasse Karstensen ): In [41ee65f07722e397133bb14944163d0ae2d900c9]: {{{ #!CommitTicketReference repository="" revision="41ee65f07722e397133bb14944163d0ae2d900c9" Fail the fetch processing if the vep callback failed. Fixes: #1637 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Feb 11 14:08:48 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 11 Feb 2015 14:08:48 -0000 Subject: [Varnish] #1462: ReqURL is emitted twice In-Reply-To: <043.4c15f0afe4e6c22dd922a9ce48ae670c@varnish-cache.org> References: <043.4c15f0afe4e6c22dd922a9ce48ae670c@varnish-cache.org> Message-ID: <058.f7a00da55f89988cbcc250e120292fa9@varnish-cache.org> #1462: ReqURL is emitted twice --------------------+------------------------------ Reporter: scoof | Owner: aondio Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0-TP2 Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | --------------------+------------------------------ Comment (by Lasse Karstensen ): In [19f2d802a7175c7d0cc66c08ba368520130c31d2]: {{{ #!CommitTicketReference repository="" revision="19f2d802a7175c7d0cc66c08ba368520130c31d2" Varnishncsa logs the first value if on request side and uses the last entry if on delivery side. 
Fixes #1462 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Feb 11 14:08:48 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 11 Feb 2015 14:08:48 -0000 Subject: [Varnish] #1665: Wrong behavior of timeout_req In-Reply-To: <047.e6f799ed399a9059522e2ae83022e4c0@varnish-cache.org> References: <047.e6f799ed399a9059522e2ae83022e4c0@varnish-cache.org> Message-ID: <062.f97365781ecd762fbdb7287c26d0a7f4@varnish-cache.org> #1665: Wrong behavior of timeout_req -----------------------+--------------------- Reporter: sorinescu | Owner: aondio Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 4.0.2 Severity: normal | Resolution: fixed Keywords: | -----------------------+--------------------- Comment (by Lasse Karstensen ): In [bbfbef651744d9749a3a818c588aa1a1277f5b91]: {{{ #!CommitTicketReference repository="" revision="bbfbef651744d9749a3a818c588aa1a1277f5b91" Be more accurate when computing client RX_TIMEOUT. This a backport of f9aa6281f5194ed27cfa4c7ad7ce50cdb8f9bf1c in master. Fixes #1665. }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Feb 11 16:40:42 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 11 Feb 2015 16:40:42 -0000 Subject: [Varnish] #1678: Varnish coredump (assert error in http_write()) Message-ID: <053.123d5446dd5ba930b6e51332f33da036@varnish-cache.org> #1678: Varnish coredump (assert error in http_write()) ------------------------------------------+-------------------- Reporter: sv@? 
| Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.6 | Severity: normal Keywords: Panic core dump error http.c | ------------------------------------------+-------------------- Hi There, Every now and again (aprox once a week) I have a panic, which interrupts regular operations of a large website - the following is logged to syslog: {{{ Panic message: Assert error in http_Write(), cache_http.c line 1110:#012 Condition((hp->hd[HTTP_HDR_STATUS].b) != 0) not true.#012thread = (cache- worker)#012ident = Linux,3.2.0-75-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll#012Backtrace:#012 0x431625: /usr/sbin/varnishd() [0x431625]#012 0x42f64f: /usr/sbin/varnishd(http_Write+0x29f) [0x42f64f]#012 0x4345f1: /usr/sbin/varnishd(RES_WriteObj+0x1a1) [0x4345f1]#012 0x418fcf: /usr/sbin/varnishd(CNT_Session+0x69f) [0x418fcf]#012 0x433425: /usr/sbin/varnishd() [0x433425]#012 0x7f6ca2e87e9a: /lib/x86_64-linux- gnu/libpthread.so.0(+0x7e9a) [0x7f6ca2e87e9a]#012 0x7f6ca2bb52ed: /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f6ca2bb52ed]#012sp = 0x7f6c96302008 {#012 fd = 179, id = 179, xid = 1881767090,#012 client = 114.251.75.216 55874,#012 step = STP_DELIVER,#012 handling = deliver,#012 err_code = 404, err_reason = (null),#012 restarts = 0, esi_level = 0#012 flags = do_gzip is_gunzip#012 bodystatus = 4#012 ws = 0x7f6c96302080 { #012 id = "sess",#012 {s,f,r,e} = {0x7f6c963045f8,+14848,(nil),+131072},#012 },#012 http[req] = {#012 ws = 0x7f6c96302080[sess]#012 "GET",#012 
"/racing/getXML.aspx?type=jcbwracing_reserve&date=11-02-2015&venue=HV?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type }}} And the following is the output from "panic.show": {{{ varnish> panic.show 200 Last panic at: Wed, 11 Feb 2015 15:33:50 GMT Assert error in http_Write(), cache_http.c line 1110: Condition((hp->hd[HTTP_HDR_STATUS].b) != 0) not true. 
thread = (cache-worker) ident = Linux,3.2.0-75-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x431625: /usr/sbin/varnishd() [0x431625] 0x42f64f: /usr/sbin/varnishd(http_Write+0x29f) [0x42f64f] 0x4345f1: /usr/sbin/varnishd(RES_WriteObj+0x1a1) [0x4345f1] 0x418fcf: /usr/sbin/varnishd(CNT_Session+0x69f) [0x418fcf] 0x433425: /usr/sbin/varnishd() [0x433425] 0x7f6ca2e87e9a: /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7f6ca2e87e9a] 0x7f6ca2bb52ed: /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f6ca2bb52ed] sp = 0x7f6c96302008 { fd = 179, id = 179, xid = 1881767090, client = 114.251.75.216 55874, step = STP_DELIVER, handling = deliver, err_code = 404, err_reason = (null), restarts = 0, esi_level = 0 flags = do_gzip is_gunzip bodystatus = 4 ws = 0x7f6c96302080 { id = "sess", {s,f,r,e} = {0x7f6c963045f8,+14848,(nil),+131072}, }, http[req] = { ws = 0x7f6c96302080[sess] "GET", "/racing/getXML.aspx?type=jcbwracing_reserve&date=11-02-2015&venue=HV?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefine
d [the query-string fragment ?type=jcbwracing_reserve&date=1&venue=undefined repeats many more times for the remainder of the request line in the panic output]
ng_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined", "HTTP/1.1", "Host: bet.hkjc.com", "Accept-Language: en-us", "Cookie: PHPSESSID=mcq54spt0k3ej9k5oh2ns55jk4; RNLBSERVERID=ded555; custProInBet=; HKJCSSOGP=1423665807701; s_pers=%20s_fid %3D34B8DC9194E8ACCE-3BFA429915222B77%7C1486824628162%3B; s_sess=%20s_cc%3Dtrue%3B%20s_sq%3D%3B", "Connection: keep-alive", "Accept: application/xml, text/xml, */*; q=0.01", "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/600.3.18 (KHTML, like Gecko) Version/7.1.3 Safari/537.85.12", "Referer: http://bet.hkjc.com/racing/pages/odds_wp.aspx?date=11-02-2015&venue=HV", "DNT: 1", "X-Requested-With: XMLHttpRequest", "X-Forwarder: 94.143.8.103", "X-Forwarded-For: 114.251.75.216", "Accept-Encoding: gzip", }, worker = 0x7f5e735faac0 { ws = 0x7f5e735facf8 { overflow id = "wrk", {s,f,r,e} = {0x7f5e735e8a50,+65536,(nil),+65536}, }, http[resp] = { ws = 0x7f5e735facf8[wrk] "HTTP/1.1", "Not Found", "Cache-Control: private", "Content-Type: text/html; charset=utf-8", "Server: Microsoft-IIS/8.0", "X-BackendServer: default", "X-Forwarded-For: 114.251.75.216", 
"Content-Encoding: gzip", "Transfer-Encoding: chunked", "Age: 0", "Via: 1.1 varnish", }, }, vcl = { srcname = { "input", "Default", }, }, obj = 0x7f6029979000 { xid = 1881767090, ws = 0x7f6029979018 { id = "obj", {s,f,r,e} = {0x7f6029979248,+14392,(nil),+14424}, }, http[obj] = { ws = 0x7f6029979018[obj] "HTTP/1.1", "Not Found", "Cache-Control: private", "Content-Type: text/html; charset=utf-8", "Server: Microsoft-IIS/8.0", "X-Powered-By: ASP.NET", "Date: Wed, 11 Feb 2015 15:30:12 GMT", "x-host: bet.hkjc.com", "x-url: /racing/getXML.aspx?type=jcbwracing_reserve&date=11-02-2015&venue=HV?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefine
d?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&
date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?typ
e=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=
1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcb
wracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&ven
ue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwraci
ng_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined?type=jcbwracing_reserve&date=1&venue=undefined", "X-BackendServer: default", "X-Forwarded-For: 114.251.75.216", "Content-Encoding: gzip", "Content-Length: 1979", }, len = 1979, store = { 1979 { 1f 8b 08 00 00 00 00 00 02 03 ed 5a 5b 73 e2 38 |...........Z[s.8| 16 7e de fe 15 1a ba ba e7 05 5f 20 e4 66 08 b3 |.~........_ .f..| dd 24 a9 a4 2a 99 cd 76 e8 9e 99 aa ae 4a 09 fb |.$..*..v.....J..| 80 dd 11 96 47 92 63 58 6a fe fb 1e 49 b6 31 84 |....G.cXj...I.1.| [1915 more] }, }, }, }, }}} The error seems to restart the cache process - and as my VCL is huge, at takes some time to compile, this effectively halts operations for a minute. Any hints, anyone? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Feb 13 02:48:56 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 13 Feb 2015 02:48:56 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.1db5c1848833a64ab11770a149e164e0@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? 
| Owner: Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by coredump): I have two servers, and both panicked within 10 minutes of each other. Both are using a 65G malloc storage. Both panicked at around 165k objects, which may be a coincidence but seems worth mentioning. Last panic at: Fri, 13 Feb 2015 02:27:20 GMT Assert error in tcp_handle(), cache/cache_backend_tcp.c line 96: Condition((vbc->in_waiter) != 0) not true. thread = (cache-epoll) version = varnish-trunk revision 17dae8e ident = Linux,3.2.0-61-generic,x86_64,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x4344fa: pan_ic+0x13a 0x4145e4: tcp_handle+0x264 0x466949: Wait_Handle+0x89 0x467202: vwe_thread+0x112 0x7f47107d5e9a: libpthread.so.0(+0x7e9a) [0x7f47107d5e9a] 0x7f47105032ed: libc.so.6(clone+0x6d) [0x7f47105032ed] -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 16 12:07:28 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 16 Feb 2015 12:07:28 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.adcfbd8f7f4330c48a6d14bd520934fc@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@?
| Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Changes (by phk): * owner: => phk -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 16 12:57:00 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 16 Feb 2015 12:57:00 -0000 Subject: [Varnish] #1678: Varnish coredump (assert error in http_write()) In-Reply-To: <053.123d5446dd5ba930b6e51332f33da036@varnish-cache.org> References: <053.123d5446dd5ba930b6e51332f33da036@varnish-cache.org> Message-ID: <068.9da71ec57d93adcd206d0ac64307ff77@varnish-cache.org> #1678: Varnish coredump (assert error in http_write()) ------------------------------------------+---------------------- Reporter: sv@? | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.6 Severity: normal | Resolution: invalid Keywords: Panic core dump error http.c | ------------------------------------------+---------------------- Changes (by daghf): * status: new => closed * resolution: => invalid Comment: This looks like you are running out of workspace, thus ending up with an empty header value in the response. Increasing parameter 'sess_workspace' should fix it. Closing this as it's a configuration issue and not a bug. 
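For reference, on the 3.0 series the workspace in question is sized by the `sess_workspace` parameter, which can be raised at runtime through the management CLI or persistently on the varnishd command line. A sketch; the 256 kB value below is an arbitrary example, not a tuning recommendation:

```
# Runtime, via the management CLI (takes effect for new sessions):
varnishadm param.set sess_workspace 262144

# Persistent, as a varnishd startup argument:
varnishd ... -p sess_workspace=262144 ...
```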
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 16 13:17:53 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 16 Feb 2015 13:17:53 -0000 Subject: [Varnish] #1662: BereqProtocol showing twice for HTTP 1.0 requests In-Reply-To: <043.cd65aaf168b645e5087eb24ca7cc84b1@varnish-cache.org> References: <043.cd65aaf168b645e5087eb24ca7cc84b1@varnish-cache.org> Message-ID: <058.a0d7dc43e539802070424d54419386e9@varnish-cache.org> #1662: BereqProtocol showing twice for HTTP 1.0 requests --------------------+--------------------- Reporter: fgsch | Owner: aondio Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+--------------------- Comment (by lkarsten): We have the same situation for other non-header attributes of a bereq. With the following VCL: {{{ sub vcl_backend_fetch { if (bereq.url ~ "/foo") { set bereq.url = bereq.url + "&more_cake=yes"; set bereq.method = "OPTIONS"; } set bereq.http.fakeheader = "headervalue"; unset bereq.http.fakeheader; } }}} We see that it behaves the same for bereq.method and bereq.url: {{{ * << BeReq >> 459249 - Begin bereq 459248 fetch - Timestamp Start: 1424092291.716213 0.000000 0.000000 - BereqMethod GET - BereqURL /foo/?26609 - BereqProtocol HTTP/1.0 - BereqHeader Host: hyse.org - BereqHeader User-Agent: Python-urllib/2.7 - BereqHeader X-Forwarded-For: 2001:16d8:ee00:1c1::2 - BereqProtocol HTTP/1.1 - BereqHeader Accept-Encoding: gzip - BereqHeader X-Varnish: 459249 - VCL_call BACKEND_FETCH - BereqURL /foo/?26609&more_cake=yes - BereqMethod OPTIONS - BereqHeader fakeheader: headervalue - BereqUnset fakeheader: headervalue - VCL_return fetch }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 16 13:42:02 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 16 Feb 2015 13:42:02 -0000 Subject: [Varnish] #1662: 
BereqProtocol showing twice for HTTP 1.0 requests In-Reply-To: <043.cd65aaf168b645e5087eb24ca7cc84b1@varnish-cache.org> References: <043.cd65aaf168b645e5087eb24ca7cc84b1@varnish-cache.org> Message-ID: <058.c60ea84152165e24d3cea62c98c5caca@varnish-cache.org> #1662: BereqProtocol showing twice for HTTP 1.0 requests --------------------+--------------------- Reporter: fgsch | Owner: aondio Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+--------------------- Comment (by lkarsten): I think we should either leave it as it is, or invent some new BereqProtoUnset/BereqLineProtocol/BereqChange vsl class to log that it changed (or was unset). That way we get the same (or similar) behavior as BereqUnset, but without overloading BereqUnset to also include non-headers. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 16 13:55:42 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 16 Feb 2015 13:55:42 -0000 Subject: [Varnish] #1662: BereqProtocol showing twice for HTTP 1.0 requests In-Reply-To: <043.cd65aaf168b645e5087eb24ca7cc84b1@varnish-cache.org> References: <043.cd65aaf168b645e5087eb24ca7cc84b1@varnish-cache.org> Message-ID: <058.931c3c605d0def560fc5944888d6efb2@varnish-cache.org> #1662: BereqProtocol showing twice for HTTP 1.0 requests --------------------+--------------------- Reporter: fgsch | Owner: aondio Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+--------------------- Comment (by fgsch): Since the Unset will be followed by the right tag, e.g.: {{{ **** v1 0.7 vsl| 1003 RespUnset c HTTP/1.0 **** v1 0.7 vsl| 1003 RespProtocol c HTTP/1.1 }}} Does it really matter whether we are overloading *Unset for these 3 fields?
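The duplication under discussion is easy to spot mechanically from log output. A small sketch (the `dup_tags` helper name is hypothetical, and a simplified `<txid> <tag> <value>` line format is assumed rather than real varnishlog output): it prints each transaction/tag pair where a non-header tag such as BereqProtocol was logged more than once.

```shell
# Flag non-header tags (method, URL, protocol) that appear more than
# once in the same transaction: print each duplicated "<txid> <tag>" pair.
dup_tags() {
    awk '$2 ~ /^(Bereq|Req|Resp)(Method|URL|Protocol)$/ { print $1, $2 }' "$@" |
        sort | uniq -d
}
```

Run against lines like `459249 BereqProtocol HTTP/1.0` followed by `459249 BereqProtocol HTTP/1.1`, it reports `459249 BereqProtocol`.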
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 16 14:28:50 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 16 Feb 2015 14:28:50 -0000 Subject: [Varnish] #1662: BereqProtocol showing twice for HTTP 1.0 requests In-Reply-To: <043.cd65aaf168b645e5087eb24ca7cc84b1@varnish-cache.org> References: <043.cd65aaf168b645e5087eb24ca7cc84b1@varnish-cache.org> Message-ID: <058.bb072728839f0f8f5c0e777b8c203f81@varnish-cache.org> #1662: BereqProtocol showing twice for HTTP 1.0 requests --------------------+--------------------- Reporter: fgsch | Owner: aondio Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+--------------------- Comment (by aondio): Varnish just says when a field is unset. {{{ -BereqHeader fakeheader: headervalue -BereqUnset fakeheader: headervalue -BereqHeader fakeheader: fakeheader }}} I think we can adopt *Unset for these 3 non-headers (method, url, protocol) as well. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Feb 20 12:51:46 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 20 Feb 2015 12:51:46 -0000 Subject: [Varnish] #1674: Debian: varnishncsa fails to start during boot In-Reply-To: <043.c33d34c6dbef7fb5306b98989678d5f6@varnish-cache.org> References: <043.c33d34c6dbef7fb5306b98989678d5f6@varnish-cache.org> Message-ID: <058.e56fd3bcbc8a8562fb293f9980187f33@varnish-cache.org> #1674: Debian: varnishncsa fails to start during boot -----------------------+-------------------- Reporter: idl0r | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: packaging | Version: 4.0.2 Severity: normal | Resolution: Keywords: | -----------------------+-------------------- Comment (by razvanphp): +1 This bug is still present in 4.0.3.
Here is the relevant mailing-list discussion: http://www.gossamer-threads.com/lists/varnish/misc/32573 This also affects upgrades, because aptitude cannot configure the packages and leaves them in a `not fully installed or removed` state. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Feb 20 12:59:03 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 20 Feb 2015 12:59:03 -0000 Subject: [Varnish] #1677: varnish init script returns before varnishlog can access it. In-Reply-To: <049.dd12648e61a15ef97afa5f73936bd7a7@varnish-cache.org> References: <049.dd12648e61a15ef97afa5f73936bd7a7@varnish-cache.org> Message-ID: <064.b09f27ada96f25580e1f6bae2ca982a8@varnish-cache.org> #1677: varnish init script returns before varnishlog can access it. -------------------------+---------------------- Reporter: mattjbarlow | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: Keywords: | -------------------------+---------------------- Comment (by razvanphp): Duplicate of ticket #1674. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Feb 20 13:06:56 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 20 Feb 2015 13:06:56 -0000 Subject: [Varnish] #1674: Debian: varnishncsa fails to start during boot In-Reply-To: <043.c33d34c6dbef7fb5306b98989678d5f6@varnish-cache.org> References: <043.c33d34c6dbef7fb5306b98989678d5f6@varnish-cache.org> Message-ID: <058.efbf2ddc516a9dcc88d61e5bf98c4814@varnish-cache.org> #1674: Debian: varnishncsa fails to start during boot -----------------------+-------------------- Reporter: idl0r | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: packaging | Version: 4.0.2 Severity: normal | Resolution: Keywords: | -----------------------+-------------------- Comment (by fgsch): This affects varnishlog too (from #1677).
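The underlying race is that the init script returns as soon as varnishd forks, before the shared memory log is writable, so varnishncsa and varnishlog fail to attach at boot. One way an init script could close that window is a small retry loop; a sketch where `wait_for` is a hypothetical helper (not part of the Debian packaging), and `varnishstat -1` is assumed only as a cheap probe that the shared memory segment can be read:

```shell
# wait_for CMD...: retry CMD a few times before giving up, so the log
# readers are only started once the shared memory log is available.
wait_for() {
    retries=5
    while [ "$retries" -gt 0 ]; do
        if "$@" >/dev/null 2>&1; then
            return 0
        fi
        retries=$((retries - 1))
        sleep 1
    done
    return 1
}

# Usage in the init script (sketch):
#   wait_for varnishstat -1 && start_varnishncsa
```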
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Feb 20 13:07:27 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 20 Feb 2015 13:07:27 -0000 Subject: [Varnish] #1677: varnish init script returns before varnishlog can access it. In-Reply-To: <049.dd12648e61a15ef97afa5f73936bd7a7@varnish-cache.org> References: <049.dd12648e61a15ef97afa5f73936bd7a7@varnish-cache.org> Message-ID: <064.eac0f40bd1a477e9ccfe08f66b93f52a@varnish-cache.org> #1677: varnish init script returns before varnishlog can access it. -------------------------+------------------------ Reporter: mattjbarlow | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Resolution: duplicate Keywords: | -------------------------+------------------------ Changes (by fgsch): * status: new => closed * resolution: => duplicate Comment: Moved to #1674 as they overlap somewhat. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Feb 20 13:46:21 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 20 Feb 2015 13:46:21 -0000 Subject: [Varnish] #1679: After upgrade, custom vmods are not moved so vcl does not compile Message-ID: <047.cc71b4a105b78ca45075c21afdb459ae@varnish-cache.org> #1679: After upgrade, custom vmods are not moved so vcl does not compile -----------------------+-------------------- Reporter: razvanphp | Type: defect Status: new | Priority: normal Milestone: | Component: vmod Version: 4.0.3 | Severity: normal Keywords: | -----------------------+-------------------- Today we upgraded to varnish 4.0.3 from the official debian wheezy repo. For some reason (I can't find the changelog or motivation for this) the vmod directory changed from `/usr/lib/varnish/vmods/` to `/usr/lib/x86_64-linux-gnu/varnish/vmods/`.
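When rebuilding custom vmods against a new layout like this, the target directory need not be hard-coded: varnishapi's pkg-config file exports it. A sketch, assuming the varnish development package and pkg-config are installed:

```
# Ask varnishapi where vmods are expected to be installed:
pkg-config --variable=vmoddir varnishapi
# e.g. /usr/lib/x86_64-linux-gnu/varnish/vmods
```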
The std and directors vmods were moved to the new folder, but the custom ones were not, resulting in errors and a failed startup: {{{ Unpacking replacement varnish ... dpkg: warning: unable to delete old directory '/usr/lib/varnish/vmods': Directory not empty dpkg: warning: unable to delete old directory '/usr/lib/varnish': Directory not empty ... [FAIL] Starting HTTP accelerator: varnishd failed! Message from VCC-compiler: Could not load VMOD geoip File name: /usr/lib/x86_64-linux-gnu/varnish/vmods/libvmod_geoip.so dlerror: /usr/lib/x86_64-linux-gnu/varnish/vmods/libvmod_geoip.so: cannot open shared object file: No such file or directory ('input' Line 21 Pos 8) import geoip; -------#####- }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Feb 20 16:40:54 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 20 Feb 2015 16:40:54 -0000 Subject: [Varnish] #1680: varnish init.d script status has no output Message-ID: <047.7962db934c3493b694497a279c8d882b@varnish-cache.org> #1680: varnish init.d script status has no output -----------------------+----------------------- Reporter: razvanphp | Type: defect Status: new | Priority: normal Milestone: | Component: packaging Version: 4.0.3 | Severity: normal Keywords: | -----------------------+----------------------- In 4.0.2, I could monitor the varnish process by running: {{{ root at server:~# service varnish status [ ok ] varnishd is running. }}} After updating to 4.0.3, it seems that `status_of_proc -p ...` was changed to `start-stop-daemon --status --quiet ...`, so the command does not print anything to stdout anymore. OS: Debian Wheezy Is this intended? 
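One way to keep the old human-readable output on top of the new silent check is to wrap the status command and translate its exit code back into a message. A sketch, not the packaged script: the `status_msg` helper is invented here, and the pidfile/daemon paths in the comment are assumptions about a typical Debian layout:

```shell
# status_msg CMD...: run a silent status check and print the message the
# pre-4.0.3 script used to emit, preserving the check's exit code.
status_msg() {
    if "$@"; then
        echo "varnishd is running."
        return 0
    else
        rc=$?
        echo "varnishd is not running."
        return "$rc"
    fi
}

# intended use in the init script (paths are assumptions):
# status_msg start-stop-daemon --status --quiet \
#     --pidfile /var/run/varnishd.pid --exec /usr/sbin/varnishd
```

Because the wrapper passes the exit status through, monitoring tools that check `$?` keep working unchanged.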
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Feb 21 06:34:17 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 21 Feb 2015 06:34:17 -0000 Subject: [Varnish] #1681: Use of "Manual Section" in rst docs to properly generate manpages sections Message-ID: <041.1ecf0990d707f7b7c645d74b581e1064@varnish-cache.org> #1681: Use of "Manual Section" in rst docs to properly generate manpages sections ---------------------------------+--------------------------- Reporter: gui | Type: documentation Status: new | Priority: normal Milestone: Varnish 4.0 release | Component: documentation Version: 4.0.3 | Severity: normal Keywords: | ---------------------------------+--------------------------- Hi, The Debian project parses the manpages of upstream projects to track errors/warnings [1]. rst2man reports undeclared macros because the rst files in doc/sphinx/reference/ do not define the header fields needed to generate the proper section for manpages (also note that you could move the Copyright and Author currently defined at the bottom into the header with ":Copyright:" and ":Author:"). Example for the doc/sphinx/reference/varnishstat.rst file: {{{ [...] --------------------------- Varnish Cache statistics --------------------------- :Manual section: 1 SYNOPSIS ======== [...] }}} For information (for vsl-query and varnish-cli), I've also proposed to the docutils project to always quote the title; when the title contains a space, the section comes out wrong (set to the second word of the title) [2]. If you agree with this, I can provide a patch for all rst files that are generated as manpages. The generated HTML pages also change when these fields are added. 
Sources: * [1] https://lintian.debian.org/maintainer/pkg-varnish- devel at lists.alioth.debian.org.html#varnish * [2] https://sourceforge.net/p/docutils/patches/126/ -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Feb 21 18:39:26 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 21 Feb 2015 18:39:26 -0000 Subject: [Varnish] #1681: Use of "Manual Section" in rst docs to properly generate manpages sections In-Reply-To: <041.1ecf0990d707f7b7c645d74b581e1064@varnish-cache.org> References: <041.1ecf0990d707f7b7c645d74b581e1064@varnish-cache.org> Message-ID: <056.73d4c855a3dd76ae244323b6eb8b5f3c@varnish-cache.org> #1681: Use of "Manual Section" in rst docs to properly generate manpages sections ---------------------------+---------------------------------- Reporter: gui | Owner: fgsch Type: documentation | Status: assigned Priority: normal | Milestone: Varnish 4.0 release Component: documentation | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ---------------------------+---------------------------------- Changes (by fgsch): * status: new => assigned * owner: => fgsch Comment: Thanks for the report. I've committed a fix for the first and last errors from [1] and added the sections. I need to look a bit closer to the "manpage-has-errors-from-man" errors. If you have any input on those I'd appreciate it. 
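Putting the pieces of the proposal together, a varnishstat.rst header carrying the manual section plus the relocated Copyright/Author fields might look like the fragment below. This is a sketch: the Author and Copyright values shown are assumptions for illustration, not the file's actual metadata.

```rst
---------------------------
 Varnish Cache statistics
---------------------------

:Manual section: 1
:Author: Varnish Software AS
:Copyright: Copyright (c) 2006-2014 Varnish Software AS

SYNOPSIS
========
```

With these fields in the header, rst2man can emit the `.TH` line with the correct section instead of guessing it from the title.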
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Feb 21 18:41:27 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 21 Feb 2015 18:41:27 -0000 Subject: [Varnish] #1681: Poor quality of generated manpages (was: Use of "Manual Section" in rst docs to properly generate manpages sections) In-Reply-To: <041.1ecf0990d707f7b7c645d74b581e1064@varnish-cache.org> References: <041.1ecf0990d707f7b7c645d74b581e1064@varnish-cache.org> Message-ID: <056.93007f0abae0a239d8770ab3ddd3906d@varnish-cache.org> #1681: Poor quality of generated manpages ---------------------------+---------------------------------- Reporter: gui | Owner: fgsch Type: documentation | Status: assigned Priority: normal | Milestone: Varnish 4.0 release Component: documentation | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ---------------------------+---------------------------------- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sun Feb 22 15:30:34 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sun, 22 Feb 2015 15:30:34 -0000 Subject: [Varnish] #1682: If file-storage pre-created and more than 50% of the file-system space, Varnish does not start. Message-ID: <042.0ac59dafc60bb4439b0e09c88a89895c@varnish-cache.org> #1682: If file-storage pre-created and more than 50% of the file-system space, Varnish does not start. -------------------+---------------------- Reporter: xcir | Type: defect Status: new | Priority: normal Milestone: | Component: varnishd Version: 4.0.3 | Severity: normal Keywords: | -------------------+---------------------- Hi, I pre-create the storage file using dd(1). Unfortunately, I get the "Error: (-sfile) size "XXG": larger than file system" message when Varnish starts. 
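Pre-creating the storage file as the report describes can be sketched like this (size shrunk from the ticket's 50G to 1G for illustration; the path is an example). With a zero `count`, dd only sets the apparent size, producing a sparse file, which is why the file can legitimately be larger than the free space:

```shell
# create a sparse 1G file: no data blocks are written, only the size is set
dd if=/dev/zero of=/tmp/varnish_storage.bin bs=1 count=0 seek=1G

# compare apparent size (bytes) with blocks actually allocated on disk
ls -ls /tmp/varnish_storage.bin
```

The first column of `ls -ls` shows the allocated blocks, which stay near zero for a sparse file even though the length is 1G.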
Also, if storage is full (specified size > free file-system space), Varnish can't restart. https://github.com/varnish/Varnish-Cache/commit/551e0de9e89a83b6b465fc8800dfc9b47217a9dc This commit does not take the pre-allocated size into account. I think it is necessary to check the real size. '''Storage is pre-created / Unpatched''' {{{ root at varnish4:~# /var/opt/varnish-4.0.3/sbin/varnishd -V varnishd (varnish-4.0.3 revision b8c4a34) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2014 Varnish Software AS root at varnish4:~# df Filesystem 1K-blocks Used Available Use% Mounted on /dev/xvda1 101195160 52082336 43972344 55% / udev 975672 4 975668 1% /dev tmpfs 199408 188 199220 1% /run none 5120 0 5120 0% /run/lock none 997024 0 997024 0% /run/shm root at varnish4:~# /var/opt/varnish-4.0.3/sbin/varnishd -a :6083 -T localhost:6084 -f /etc/varnish/default.vcl -S /etc/varnish/secret -p feature=+esi_disable_xml_check -s file,/var/lib/varnish/varnish_storage.bin,50G Error: (-sfile) size "50G": larger than file system }}} '''Storage is pre-created / patched''' {{{ root at varnish4:~# /var/opt/varnish-4branch/sbin/varnishd -V varnishd (varnish-4.0.3 revision b8c4a34) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2014 Varnish Software AS root at varnish4:~# df Filesystem 1K-blocks Used Available Use% Mounted on /dev/xvda1 101195160 52082336 43972344 55% / udev 975672 4 975668 1% /dev tmpfs 199408 188 199220 1% /run none 5120 0 5120 0% /run/lock none 997024 0 997024 0% /run/shm root at varnish4:~# /var/opt/varnish-4branch/sbin/varnishd -a :6083 -T localhost:6084 -f /etc/varnish/default.vcl -S /etc/varnish/secret -p feature=+esi_disable_xml_check -s file,/var/lib/varnish/varnish_storage.bin,50G root at varnish4:~# ps axut|grep varn root 23799 15.5 4.2 114988 84232 ? 
SLs 23:28 0:00 /var/opt/varnish-4branch/sbin/varnishd -a :6083 -T localhost:6084 -f /etc/varnish/default.vcl -S /etc/varnish/secret -p feature=+esi_disable_xml_check -s file,/var/lib/varnish/varnish_storage.bin,50G nobody 23801 4.0 4.2 53475620 84936 ? Sl 23:28 0:00 /var/opt/varnish-4branch/sbin/varnishd -a :6083 -T localhost:6084 -f /etc/varnish/default.vcl -S /etc/varnish/secret -p feature=+esi_disable_xml_check -s file,/var/lib/varnish/varnish_storage.bin,50G root 24019 0.0 0.0 9388 932 pts/2 R+ 23:28 0:00 grep --color=auto varn }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 23 11:12:45 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Feb 2015 11:12:45 -0000 Subject: [Varnish] #1462: ReqURL is emitted twice In-Reply-To: <043.4c15f0afe4e6c22dd922a9ce48ae670c@varnish-cache.org> References: <043.4c15f0afe4e6c22dd922a9ce48ae670c@varnish-cache.org> Message-ID: <058.8b6c53b7e09f594097ac601f63b1a2d7@varnish-cache.org> #1462: ReqURL is emitted twice --------------------+------------------------------ Reporter: scoof | Owner: aondio Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0-TP2 Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | --------------------+------------------------------ Comment (by razvanphp): There is another case when varnishncsa does not log changed header values, that is, when we set a header twice or overwrite it in vcl_recv. For example: https://github.com/varnish/varnish-devicedetect/blob/master/devicedetect.vcl In this case, we can never log the correct device of the user, because varnishncsa will only pick the first one (generic). I think it makes more sense to use the LAST value. Should I create a new ticket for this? 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 23 11:54:24 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Feb 2015 11:54:24 -0000 Subject: [Varnish] #1462: ReqURL is emitted twice In-Reply-To: <043.4c15f0afe4e6c22dd922a9ce48ae670c@varnish-cache.org> References: <043.4c15f0afe4e6c22dd922a9ce48ae670c@varnish-cache.org> Message-ID: <058.33d0fef86671d1e38b5246b2acd1793b@varnish-cache.org> #1462: ReqURL is emitted twice --------------------+------------------------------ Reporter: scoof | Owner: aondio Type: defect | Status: closed Priority: normal | Milestone: Varnish 4.0-TP2 Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | --------------------+------------------------------ Comment (by aondio): Yes, do it please. Replying to [comment:10 razvanphp]: > There is another case when varnishncsa does not log changed header values, that is when we set a header twice or overwrite it in vcl_recv. > > For example: https://github.com/varnish/varnish- devicedetect/blob/master/devicedetect.vcl > > In this case, we can never log the correct device of the user, because varnishncsa will only pick the first one (pc). I think it makes more sense to use the LAST value. > > Should I create a new ticket for this? 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 23 12:05:46 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Feb 2015 12:05:46 -0000 Subject: [Varnish] #1679: After upgrade, custom vmods are not moved so vcl does not compile In-Reply-To: <047.cc71b4a105b78ca45075c21afdb459ae@varnish-cache.org> References: <047.cc71b4a105b78ca45075c21afdb459ae@varnish-cache.org> Message-ID: <062.65d8dcd23834dfb7591c828a1f9fce3b@varnish-cache.org> #1679: After upgrade, custom vmods are not moved so vcl does not compile -----------------------+----------------------- Reporter: razvanphp | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: vmod | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -----------------------+----------------------- Changes (by lkarsten): * owner: => lkarsten Comment: Reproduced on wheezy: {{{ root at wheezy-amd64:~# dpkg -l varnish Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig- pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name Version Architecture Description +++-========================-=================-=================-===================================================== ii varnish 4.0.3-1~wheezy amd64 state of the art, high-performance web accelerator root at wheezy-amd64:~# dpkg -L varnish [..] 
/usr/lib/x86_64-linux-gnu/varnish/libvgz.so /usr/lib/x86_64-linux-gnu/varnish/vmods /usr/lib/x86_64-linux-gnu/varnish/vmods/libvmod_directors.so /usr/lib/x86_64-linux-gnu/varnish/vmods/libvmod_std.so /usr/lib/x86_64-linux-gnu/varnish/libvcc.so }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 23 12:08:26 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Feb 2015 12:08:26 -0000 Subject: [Varnish] #1683: varnishncsa should log the last value in vcl sub Message-ID: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> #1683: varnishncsa should log the last value in vcl sub -----------------------+-------------------- Reporter: razvanphp | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 4.0.3 | Severity: normal Keywords: | -----------------------+-------------------- Related to ticket #1462, there is another case when varnishncsa does not log changed header values, that is, when we set a header twice or overwrite it in vcl_recv. For example: https://github.com/varnish/varnish-devicedetect/blob/master/devicedetect.vcl In this case, we can never log the correct device of the user, because varnishncsa will only pick the first one (pc). I think it makes more sense to use the LAST value. 
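The first-versus-last behavior described in this ticket can be illustrated without a running Varnish: given a VSL-style transcript in which the same header is recorded twice, a first-match scan returns the initial value while a last-match scan returns the final one. A sketch; the transcript below is fabricated to mimic devicedetect's rewrites:

```shell
# fabricated VSL-style records: the header is set, then overwritten in VCL
log='-   ReqHeader      X-UA-Device: pc
-   ReqHeader      X-UA-Device: mobile-iphone'

# what varnishncsa effectively does today: stop at the first matching record
first=$(printf '%s\n' "$log" | awk '/X-UA-Device:/ { print $4; exit }')

# what the ticket asks for: keep scanning and report the last recorded value
last=$(printf '%s\n' "$log" | awk '/X-UA-Device:/ { v = $4 } END { print v }')

echo "first=$first last=$last"
```

Only the `last` scan reflects the value the request actually carried after vcl_recv finished.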
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 23 12:17:27 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Feb 2015 12:17:27 -0000 Subject: [Varnish] #1683: varnishncsa should log the last value in vcl sub In-Reply-To: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> References: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> Message-ID: <062.edd1756b378ccb98573ea9ec8376aa5e@varnish-cache.org> #1683: varnishncsa should log the last value in vcl sub -----------------------+--------------------- Reporter: razvanphp | Owner: aondio Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -----------------------+--------------------- Changes (by aondio): * owner: => aondio -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 23 12:22:08 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Feb 2015 12:22:08 -0000 Subject: [Varnish] #1680: varnish init.d script status has no output In-Reply-To: <047.7962db934c3493b694497a279c8d882b@varnish-cache.org> References: <047.7962db934c3493b694497a279c8d882b@varnish-cache.org> Message-ID: <062.c6b0275aec8eaeffbbc9b01ba3eea64e@varnish-cache.org> #1680: varnish init.d script status has no output -----------------------+----------------------- Reporter: razvanphp | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: packaging | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -----------------------+----------------------- Changes (by lkarsten): * owner: => lkarsten -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 23 12:39:56 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Feb 2015 12:39:56 -0000 Subject: [Varnish] #1662: BereqProtocol showing twice for HTTP 1.0 requests In-Reply-To: 
<043.cd65aaf168b645e5087eb24ca7cc84b1@varnish-cache.org> References: <043.cd65aaf168b645e5087eb24ca7cc84b1@varnish-cache.org> Message-ID: <058.1c0b13458fa003c70b00a9264f0af4d7@varnish-cache.org> #1662: BereqProtocol showing twice for HTTP 1.0 requests --------------------+--------------------- Reporter: fgsch | Owner: aondio Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+--------------------- Comment (by fgsch): Proposed patch attached. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 23 12:58:19 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Feb 2015 12:58:19 -0000 Subject: [Varnish] #1680: varnish init.d script status has no output In-Reply-To: <047.7962db934c3493b694497a279c8d882b@varnish-cache.org> References: <047.7962db934c3493b694497a279c8d882b@varnish-cache.org> Message-ID: <062.e33f4364cabe4f7a5395ba19d745d440@varnish-cache.org> #1680: varnish init.d script status has no output -----------------------+------------------------- Reporter: razvanphp | Owner: lkarsten Type: defect | Status: closed Priority: normal | Milestone: Component: packaging | Version: 4.0.3 Severity: normal | Resolution: worksforme Keywords: | -----------------------+------------------------- Changes (by lkarsten): * status: new => closed * resolution: => worksforme Comment: Hi. This change was introduced as part of importing downstream packaging changes from Debian. See commit 74c3c3bb037e1c54ed9ebd64da94090587c6f05d for details. I tested the behavior on wheezy, and as far as I can see start-stop-daemon behaves as you would expect it to. {{{ root at wheezy-amd64:~# varnishd -V varnishd (varnish-4.0.3 revision b8c4a34) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2014 Varnish Software AS root at wheezy-amd64:~# service varnish start [ ok ] Starting HTTP accelerator: varnishd. 
root at wheezy-amd64:~# /etc/init.d/varnish status; echo $? 0 root at wheezy-amd64:~# service varnish stop [ ok ] Stopping HTTP accelerator: varnishd. root at wheezy-amd64:~# /etc/init.d/varnish status; echo $? 3 root at wheezy-amd64:~# service varnish start [ ok ] Starting HTTP accelerator: varnishd. root at wheezy-amd64:~# pkill varnishd root at wheezy-amd64:~# /etc/init.d/varnish status; echo $? 1 }}} This is according to exit status documented in start-stop-daemon(8). If you want to output something after checking, you can use the contents of $?. {{{ root at wheezy-amd64:~# /etc/init.d/varnish status && echo running running }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 23 13:16:16 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Feb 2015 13:16:16 -0000 Subject: [Varnish] #1684: Retried bereqs don't log BereqMethod, BereqURL, BereqProtocol or BereqHeader Message-ID: <043.d90b1b8d19f2203e577f7e0ebefd25bb@varnish-cache.org> #1684: Retried bereqs don't log BereqMethod, BereqURL, BereqProtocol or BereqHeader --------------------+--------------------- Reporter: scoof | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: unknown Severity: normal | Keywords: --------------------+--------------------- When retrying a backend request, it does not log the full request: {{{ * << BeReq >> 3 - Begin bereq 2 fetch - Timestamp Start: 1424697235.589611 0.000000 0.000000 - BereqMethod GET - BereqURL / - BereqProtocol HTTP/1.1 - BereqHeader User-Agent: curl/7.38.0 - BereqHeader Host: localhost:6081 - BereqHeader Accept: */* - BereqHeader X-Forwarded-For: ::1 - BereqHeader Accept-Encoding: gzip - BereqHeader X-Varnish: 3 - VCL_call BACKEND_FETCH - VCL_return fetch - Timestamp Bereq: 1424697235.589768 0.000157 0.000157 - Timestamp Beresp: 1424697235.590826 0.001215 0.001058 - BerespProtocol HTTP/1.1 - BerespStatus 200 - BerespReason OK - BerespHeader Date: Mon, 23 
Feb 2015 13:13:55 GMT - BerespHeader Server: Apache/2.4.10 (Debian) - BerespHeader Last-Modified: Wed, 05 Feb 2014 16:32:52 GMT - BerespHeader ETag: "2c1d-4f1ab4ef042ea-gzip" - BerespHeader Accept-Ranges: bytes - BerespHeader Vary: Accept-Encoding - BerespHeader Content-Encoding: gzip - BerespHeader Content-Length: 3149 - BerespHeader Content-Type: text/html - TTL RFC 120 10 -1 1424697236 1424697236 1424697235 0 0 - VCL_call BACKEND_RESPONSE - VCL_return retry - BackendClose 21 default(127.0.0.1,::1,8080) - Timestamp Retry: 1424697235.590879 0.001268 0.000053 - Link bereq 32769 retry - End * << BeReq >> 32769 - Begin bereq 3 retry - Timestamp Start: 1424697235.590879 0.001268 0.000000 - BereqHeader X-Varnish: 32769 - VCL_call BACKEND_FETCH - VCL_return fetch - Timestamp Bereq: 1424697235.590951 0.001340 0.000072 - Timestamp Beresp: 1424697235.591609 0.001998 0.000658 - BerespProtocol HTTP/1.1 - BerespStatus 200 - BerespReason OK - BerespHeader Date: Mon, 23 Feb 2015 13:13:55 GMT - BerespHeader Server: Apache/2.4.10 (Debian) - BerespHeader Last-Modified: Wed, 05 Feb 2014 16:32:52 GMT - BerespHeader ETag: "2c1d-4f1ab4ef042ea-gzip" - BerespHeader Accept-Ranges: bytes - BerespHeader Vary: Accept-Encoding - BerespHeader Content-Encoding: gzip - BerespHeader Content-Length: 3149 - BerespHeader Content-Type: text/html - TTL RFC 120 10 -1 1424697236 1424697236 1424697235 0 0 - VCL_call BACKEND_RESPONSE - VCL_return deliver - Storage malloc s0 - ObjProtocol HTTP/1.1 - ObjStatus 200 - ObjReason OK - ObjHeader Date: Mon, 23 Feb 2015 13:13:55 GMT - ObjHeader Server: Apache/2.4.10 (Debian) - ObjHeader Last-Modified: Wed, 05 Feb 2014 16:32:52 GMT - ObjHeader ETag: "2c1d-4f1ab4ef042ea-gzip" - ObjHeader Vary: Accept-Encoding - ObjHeader Content-Encoding: gzip - ObjHeader Content-Length: 3149 - ObjHeader Content-Type: text/html - Fetch_Body 3 length stream - Gzip u F - 3149 11293 80 80 25128 - BackendReuse 21 default(127.0.0.1,::1,8080) - Timestamp BerespBody: 1424697235.591868 
0.002257 0.000259 - Length 3149 - BereqAcct 155 0 155 566 3149 3715 - End }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 23 16:18:12 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Feb 2015 16:18:12 -0000 Subject: [Varnish] #1681: Poor quality of generated manpages In-Reply-To: <041.1ecf0990d707f7b7c645d74b581e1064@varnish-cache.org> References: <041.1ecf0990d707f7b7c645d74b581e1064@varnish-cache.org> Message-ID: <056.51d21e0f8b0865adc53d5ca06c48c6dd@varnish-cache.org> #1681: Poor quality of generated manpages ---------------------------+---------------------------------- Reporter: gui | Owner: fgsch Type: documentation | Status: closed Priority: normal | Milestone: Varnish 4.0 release Component: documentation | Version: 4.0.3 Severity: normal | Resolution: fixed Keywords: | ---------------------------+---------------------------------- Changes (by fgsch): * status: assigned => closed * resolution: => fixed Comment: I've gone through the second errors and fixed the only one I could reproduce. These errors are for 4.0.2 so the rest might have been fixed in between. I'll close this now. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 23 19:52:44 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Feb 2015 19:52:44 -0000 Subject: [Varnish] #1679: After upgrade, custom vmods are not moved so vcl does not compile In-Reply-To: <047.cc71b4a105b78ca45075c21afdb459ae@varnish-cache.org> References: <047.cc71b4a105b78ca45075c21afdb459ae@varnish-cache.org> Message-ID: <062.0ed671754fb772539b6cd814dc59df5b@varnish-cache.org> #1679: After upgrade, custom vmods are not moved so vcl does not compile -----------------------+----------------------- Reporter: razvanphp | Owner: lkarsten Type: defect | Status: new Priority: normal | Milestone: Component: vmod | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -----------------------+----------------------- Comment (by lkarsten): Hi. Updated Debian packages (4.0.3-2) are available in Jenkins now, and will be put onto repo.varnish-cache.org shortly. Keeping the bug report open until we've confirmed it is in place. Thanks for reporting this. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 23 21:25:21 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Feb 2015 21:25:21 -0000 Subject: [Varnish] #1685: ProGain and Testinate-250 Message-ID: <049.ae5ec5810203404594d0a02224442767@varnish-cache.org> #1685: ProGain and Testinate-250 -------------------------+-------------------- Reporter: IlsaGutmann | Type: defect Status: new | Priority: lowest Milestone: | Component: build Version: unknown | Severity: normal Keywords: | -------------------------+-------------------- [Http://www.progain350-test.com ProGain-350 and Testinate-250] are 2 best- selling supplements within the body building scene. ProGain-350 is a pre- workout formula to be taken before you go to the gym while Testinate 250 is a post work-out supplement to be taken when you finish your gym session. 
Both formulas work as testosterone boosters, which means they naturally drive up testosterone levels in the body, which is very important for men out there that want to improve muscle strength, improve strength and drop body fat. Studies show that healthy testosterone levels in guys improves their performance both at the gym and in the sack. The issue at hand here is, as our bodies begin to age, testosterone levels in the body can create is also hindered. By consuming Pro Gain-350 in combination with Testinate 250 you can revive your level of testosterone production back to those of an early 20 year old when hormone production is at its peak [http://www.progain350.fitness Pro Gain-350 and Testinate-250 effects] Both Pro Gain-350 and Testinate-250 are made up of 5 scientifically tested and blended ingredients which work with one another to signal to your body to create more testosterone. The following section talks about the reported [http://www.progain350.discount Pro-Gain 350 and Testinate 250 effects]. This is where Pro Gain-350 and Testinate-250 are different from other testosterone productsout there. Considering other products attempt to inject the body directly with testosterone, ProGain-350 and Testinate 250 work by influencing the body's hormone cycle encouraging it to create more testosterone itself. To get the most out of it these supplements ought to be consumed together ? ProGain-350 prior to a workout and Testinate-250 after a workout. In this way the products can work to produce testosterone simultaneously in the brain and the testicles. The ingredients in both products make sure the effective production of the luteinizing hormone (LH) from the pituitary gland almost instantaneously after the product is taken. LH moves at pace to the testicles directly through the blood stream and promotes testosterone production. 
The effects of ProGain-350 and Testinate 250 are that they work directly on the testicles to aid the processes of key enzymes that convert cholesterol into testosterone. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Feb 23 21:26:02 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 23 Feb 2015 21:26:02 -0000 Subject: [Varnish] #1686: ProGain and Testinate-250 Message-ID: <049.dc66dfd6ab00a25c0191d1b4d14322cd@varnish-cache.org> #1686: ProGain and Testinate-250 -------------------------+-------------------- Reporter: IlsaGutmann | Type: defect Status: new | Priority: lowest Milestone: | Component: build Version: unknown | Severity: normal Keywords: | -------------------------+-------------------- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Feb 24 09:00:13 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 24 Feb 2015 09:00:13 -0000 Subject: [Varnish] #1682: If file-storage pre-created and more than 50% of the file-system space, Varnish does not start. In-Reply-To: <042.0ac59dafc60bb4439b0e09c88a89895c@varnish-cache.org> References: <042.0ac59dafc60bb4439b0e09c88a89895c@varnish-cache.org> Message-ID: <057.6e49579ed90ad50444884a0ec0da9986@varnish-cache.org> #1682: If file-storage pre-created and more than 50% of the file-system space, Varnish does not start. 
----------------------+-------------------- Reporter: xcir | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 4.0.3 Severity: normal | Resolution: Keywords: | ----------------------+-------------------- Comment (by martin): This comes down to lack of backporting commits 22ad1c95f7a0a6588c1dc4d682183616587595ef and 1f4067c95f144d80cc619cca629d339b72635d92 to 4.0 Martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Feb 24 10:43:47 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 24 Feb 2015 10:43:47 -0000 Subject: [Varnish] #1679: After upgrade, custom vmods are not moved so vcl does not compile In-Reply-To: <047.cc71b4a105b78ca45075c21afdb459ae@varnish-cache.org> References: <047.cc71b4a105b78ca45075c21afdb459ae@varnish-cache.org> Message-ID: <062.7d29d0149ccc86183e010a45e62f910f@varnish-cache.org> #1679: After upgrade, custom vmods are not moved so vcl does not compile -----------------------+----------------------- Reporter: razvanphp | Owner: lkarsten Type: defect | Status: closed Priority: normal | Milestone: Component: vmod | Version: 4.0.3 Severity: normal | Resolution: fixed Keywords: | -----------------------+----------------------- Changes (by lkarsten): * status: new => closed * resolution: => fixed Comment: Confirmed now (on wheezy) that apt-get upgrades to 4.0.3-2 where this issue is fixed. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Feb 24 14:20:38 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 24 Feb 2015 14:20:38 -0000 Subject: [Varnish] #1679: After upgrade, custom vmods are not moved so vcl does not compile In-Reply-To: <047.cc71b4a105b78ca45075c21afdb459ae@varnish-cache.org> References: <047.cc71b4a105b78ca45075c21afdb459ae@varnish-cache.org> Message-ID: <062.def9d2cd2ebb146b899f2bd37a822198@varnish-cache.org> #1679: After upgrade, custom vmods are not moved so vcl does not compile -----------------------+----------------------- Reporter: razvanphp | Owner: lkarsten Type: defect | Status: closed Priority: normal | Milestone: Component: vmod | Version: 4.0.3 Severity: normal | Resolution: fixed Keywords: | -----------------------+----------------------- Comment (by razvanphp): Ok, so I just updated an older 4.0.2 to 4.0.3-2~wheezy. Everything is ok, but the vmods dir is back to `/usr/lib/varnish/vmods`. Is this how it should be? I'm asking so I know how to write my Ansible scripts. Or is there a better way to programmatically get the vmods dir? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Feb 25 10:11:26 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 25 Feb 2015 10:11:26 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.a7c305993c5663526b2ecaa3ca0c3a7e@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@?
| Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Description changed by phk: Old description: > previously we only had panic according to ticket # 1628. > then update to trunk version a281a10 and added a 90 GB file backend for > some domains. > with new version #1628 has not been detected, but this new condition > appeared: > > varnish> panic.show > 200 > Last panic at: Sat, 07 Feb 2015 06:56:13 GMT > Assert error in tcp_handle(), cache/cache_backend_tcp.c line 96: > Condition((vbc->in_waiter) != 0) not true. > thread = (cache-epoll) > version = varnish-trunk revision a281a10 > ident = > Linux,3.2.0-4-amd64,x86_64,-smalloc,-sfile,-smalloc,-hclassic,epoll > Backtrace: > 0x4356e4: pan_ic+0x134 > 0x415b54: tcp_handle+0x254 > 0x466fe9: Wait_Handle+0x89 > 0x467881: vwe_thread+0xf1 > 0x7fd62d5b8b50: libpthread.so.0(+0x6b50) [0x7fd62d5b8b50] > 0x7fd62d30270d: libc.so.6(clone+0x6d) [0x7fd62d30270d] > > > varnishd -V > varnishd (varnish-trunk revision a281a10) > Copyright (c) 2006 Verdens Gang AS > Copyright (c) 2006-2015 Varnish Software AS > > DAEMON_OPTS="-a XXX.XXX.XXX.XXX:80, \ > -T XXX.XXX.XXX.XXX:6082 \ > -f /etc/varnish/default.vcl \ > -h classic,16383 \ > -s ram1=malloc,10G \ > -s disk1=file,/varnishcache/varnish.bin,90G \ > -p thread_pools=2 \ > -p thread_pool_min=100 \ > -p thread_pool_max=3000 \ > -p thread_pool_add_delay=2 \ > -p auto_restart=on \ > -p listen_depth=2048 \ > -p ping_interval=3 \ > -p cli_timeout=25 \ > -p ban_dups=on" New description: previously we only had panic according to ticket # 1628. then update to trunk version a281a10 and added a 90 GB file backend for some domains. 
with new version #1628 has not been detected, but this new condition appeared: {{{ varnish> panic.show 200 Last panic at: Sat, 07 Feb 2015 06:56:13 GMT Assert error in tcp_handle(), cache/cache_backend_tcp.c line 96: Condition((vbc->in_waiter) != 0) not true. thread = (cache-epoll) version = varnish-trunk revision a281a10 ident = Linux,3.2.0-4-amd64,x86_64,-smalloc,-sfile,-smalloc,-hclassic,epoll Backtrace: 0x4356e4: pan_ic+0x134 0x415b54: tcp_handle+0x254 0x466fe9: Wait_Handle+0x89 0x467881: vwe_thread+0xf1 0x7fd62d5b8b50: libpthread.so.0(+0x6b50) [0x7fd62d5b8b50] 0x7fd62d30270d: libc.so.6(clone+0x6d) [0x7fd62d30270d] varnishd -V varnishd (varnish-trunk revision a281a10) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2015 Varnish Software AS DAEMON_OPTS="-a XXX.XXX.XXX.XXX:80, \ -T XXX.XXX.XXX.XXX:6082 \ -f /etc/varnish/default.vcl \ -h classic,16383 \ -s ram1=malloc,10G \ -s disk1=file,/varnishcache/varnish.bin,90G \ -p thread_pools=2 \ -p thread_pool_min=100 \ -p thread_pool_max=3000 \ -p thread_pool_add_delay=2 \ -p auto_restart=on \ -p listen_depth=2048 \ -p ping_interval=3 \ -p cli_timeout=25 \ -p ban_dups=on" }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Feb 25 10:13:57 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 25 Feb 2015 10:13:57 -0000 Subject: [Varnish] #1675: Condition((vbc->in_waiter) != 0) not true. In-Reply-To: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> References: <055.b51225add9ab2ac387171d437c0aea93@varnish-cache.org> Message-ID: <070.b69c4ec3525045c4d5a7195ce1db579c@varnish-cache.org> #1675: Condition((vbc->in_waiter) != 0) not true. ----------------------------------+---------------------------------- Reporter: zaterio@? 
| Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: in_waiter tcp_handle | ----------------------------------+---------------------------------- Comment (by phk): Sorry about the delay in replying... This is related to some new work I've been doing, to put a "waiter" on the unused backend connections. It is not related to the storage changes. I will try to track this down. Any chance you can capture a varnishlog output for me? (Preferably "varnishlog -g raw") -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Feb 25 11:34:46 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 25 Feb 2015 11:34:46 -0000 Subject: [Varnish] #1506: Make better use of Content-Length information: Avoid chunked responses, more control over Range handling In-Reply-To: <050.7446d258f6b1af112a619a4b721885a7@varnish-cache.org> References: <050.7446d258f6b1af112a619a4b721885a7@varnish-cache.org> Message-ID: <065.d39bed675529feac2076dbb55383f1e4@varnish-cache.org> #1506: Make better use of Content-Length information: Avoid chunked responses, more control over Range handling --------------------------+---------------------------------- Reporter: DonMacAskill | Owner: phk Type: defect | Status: reopened Priority: normal | Milestone: Varnish 4.0 release Component: varnishd | Version: 4.0.0 Severity: critical | Resolution: Keywords: | --------------------------+---------------------------------- Comment (by Moshe L): Hi, I have been following this bug and waiting for a fix. After upgrading to the latest version (4.0.3) I see strange problems when streaming video/audio through Varnish. I am using an HTML5 audio/video player. IE/WMP first sends a normal HTTP GET, which returns the correct body with the correct C-L. When I skip ahead in the video, IE sends a Range request, to which Varnish responds correctly with a 206 and the correct C-L and range.
Chrome, however, works differently: {{{ GET filename.mp4 HTTP/1.1 Host: host.host.host Connection: keep-alive Pragma: no-cache Cache-Control: no-cache Accept-Encoding: identity;q=1, *;q=0 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36 Accept: */* Referer: Accept-Language: he-IL,he;q=0.8,en-US;q=0.6,en;q=0.4 Range: bytes=0- }}} On the first request (while the file is not yet cached) I see this response: {{{ HTTP/1.1 416 Requested Range Not Satisfiable Date: Wed, 25 Feb 2015 11:20:26 GMT Content-Type: audio/mpeg3 Last-Modified: Fri, 11 Oct 2013 04:02:42 GMT X-Varnish: 140096817 Age: 0 Via: 1.1 varnish-v4 }}} Chrome then sends the same request again (identical byte for byte) 0.5 s later; by then the file is in the cache and everything works. Sometimes the problem is different: the first request (after 5-13 seconds) returns a wrong C-L/range, so Chrome thinks the file is only 3-4 seconds long and stores this wrong information in its cache, where it stays "forever". Log attached.
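For reference, under RFC 7233 a request of `Range: bytes=0-` is satisfiable for any non-empty resource, so the 416 above should never occur once the object's length is known. The following Python sketch is illustrative only (it is not Varnish's Range code and handles just a single range spec) and shows the status and Content-Range mapping a client expects:

```python
def range_response(range_value: str, total: int):
    """Map a single byte-range spec to (status, Content-Range) per RFC 7233."""
    spec = range_value.split("=", 1)[1].strip()      # e.g. "0-", "500-999", "-200"
    first, _, last = spec.partition("-")
    if first == "":                                  # suffix form "-N": last N bytes
        n = int(last)
        if n == 0:
            return 416, "bytes */%d" % total
        start, end = max(total - n, 0), total - 1
    else:
        start = int(first)
        end = int(last) if last else total - 1       # open-ended "M-" runs to EOF
        if start >= total:                           # first byte past end: unsatisfiable
            return 416, "bytes */%d" % total
        end = min(end, total - 1)
    return 206, "bytes %d-%d/%d" % (start, end, total)

# Chrome's "bytes=0-" on a 1000-byte object is always satisfiable:
status, content_range = range_response("bytes=0-", 1000)
```

Under these semantics the request above should yield a 206 with the full range; getting a 416 instead suggests the Range header was evaluated before the object's length was available, consistent with the C-L discussion earlier in this ticket.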
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Feb 26 11:43:17 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 26 Feb 2015 11:43:17 -0000 Subject: [Varnish] #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 In-Reply-To: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> References: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> Message-ID: <064.6d9497c1172e79b571ad216a038467fe@varnish-cache.org> #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 -------------------------+---------------------------------------- Reporter: cache_layer | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | -------------------------+---------------------------------------- Comment (by geoff): This problem (crash with the assertion failure shown above) is still occurring in release 4.0.3. The patch/commit in the previous comment is not included in 4.0.3, evidently because it requires changes (particularly in struct http_conn) that go far beyond the scope of the bugfix, and were not included in the release. The problem apparently does *not* occur for responses with status 200 and content-length == 0. But in my tests, it does occur in responses with status 204 (whose bodies are implicitly empty), provided the other conditions hold: * The response is ESI-included. * The response has the header Content-Encoding:gzip. I'll be attaching a modified version of r01602.vtc from above, which also has the 204 response. The requests/responses from the original VTC work without error, but the crash happens after the ESI-included 204 beresp is received. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Feb 26 12:27:23 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 26 Feb 2015 12:27:23 -0000 Subject: [Varnish] #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 In-Reply-To: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> References: <049.f27733d43af26eeaf7d4a9265aac0f8d@varnish-cache.org> Message-ID: <064.934a9111b88af31dba05a8a2adbc6ec8@varnish-cache.org> #1602: Assert error in ESI_DeliverChild(), cache/cache_esi_deliver.c line 530 -------------------------+---------------------------------------- Reporter: cache_layer | Owner: Poul-Henning Kamp Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | -------------------------+---------------------------------------- Comment (by fgsch): The attached patch against 4.0 fixes both test cases. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Feb 26 15:54:38 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 26 Feb 2015 15:54:38 -0000 Subject: [Varnish] #1687: Panic error Message-ID: <042.5c3533f4c22264c4ebdcfb67a6dabaf2@varnish-cache.org> #1687: Panic error ---------------------+-------------------- Reporter: cepi | Type: defect Status: new | Priority: high Milestone: | Component: build Version: unknown | Severity: normal Keywords: | ---------------------+-------------------- I'm having this error on a varnish 4.0.2 on debian 7 Attaching default.vcl and /etc/default/varnish Feb 26 12:10:31 IPLCEPI-003 varnishd[4340]: Child (4359) not responding to CLI, killing it. 
Feb 26 12:10:31 IPLCEPI-003 varnishd[4340]: Child (4359) died signal=6 Feb 26 12:10:31 IPLCEPI-003 varnishd[4340]: Child (4359) Panic message:#012Assert error in SES_ScheduleReq(), cache/cache_session.c line 229:#012 Condition((sp)->magic == 0x2c2f9c5a) not true.#012thread = (cache-worker)#012ident = Linux,3.2.0-4-amd64,x86_64,-smalloc,-smalloc,-hcritbit,epoll#012Backtrace:#012 0x434225: /usr/sbin/varnishd() [0x434225]#012 0x43cf3d: /usr/sbin/varnishd(SES_ScheduleReq+0x16d) [0x43cf3d]#012 0x4263f8: /usr/sbin/varnishd() [0x4263f8]#012 0x427a3a: /usr/sbin/varnishd(HSH_DerefObjCore+0xba) [0x427a3a]#012 0x427d74: /usr/sbin/varnishd(HSH_DerefObj+0x34) [0x427d74]#012 0x438261: /usr/sbin/varnishd() [0x438261]#012 0x4387d1: /usr/sbin/varnishd(CNT_Request+0x111) [0x4387d1]#012 0x42e60b: /usr/sbin/varnishd(HTTP1_Session+0x77b) [0x42e60b]#012 0x43c848: /usr/sbin/varnishd() [0x43c848]#012 0x43d80d: /usr/sbin/varnishd(SES_pool_accept_task+0x29d) [0x43d80d]#012req = 0x7f233a5e6020 {#012 sp = 0x7f224d68e0e0, vxid = 1184662246, step = R_STP_DELIVER,#012 req_body = R_BODY_NONE,#012 err_code = 302, err_reason = (null),#012 restarts = 0, esi_level = 0#012 sp = 0x7f224d68e0e0 {#012 fd = 1914, vxid = 110920421,#012 client = 181.165.17.133 59938,#012 step = S_STP_WORKING,#012 },#012 worker = 0x7f22523f2c50 {#012 ws = 0x7f22523f2e68 {#012 id = "wrk",#012 {s,f,r,e} = {0x7f22523f2450,0x7f22523f2450,(nil),+2048},#012 },#012 VCL::method = 0x0,#012 VCL::return = deliver,#012 },#012 ws = 0x7f233a5e61b8 {#012 id = "req",#012 {s,f,r,e} = {0x7f233a5e8010,+984,(nil),+57360},#012 },#012 http[req] = {#012 ws = 0x7f233a5e61b8[req]#012 "GET",#012 "/escenario/Se-filtro-un- video-porno-de-una-ex-integrante-del-Bailando-20150226-0055.html",#012 "HTTP/1.1",#012 "Host: m.lacapital.com.ar",#012 "Connection: keep-alive",#012 "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",#012 "User-Agent: Mozilla/5.0 (Linux; Android 4.4.4; XT1032 Build/KXB21.14-L1.40) AppleWebKit/537.36 
(KHTML, like Gecko) Version/4.0 Feb 26 12:10:31 IPLCEPI-003 varnishd[4340]: Child cleanup complete Feb 26 12:10:31 IPLCEPI-003 varnishd[4340]: child (11678) Started Feb 26 12:10:31 IPLCEPI-003 varnishd[4340]: Child (11678) said Child starts -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Feb 26 16:01:03 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 26 Feb 2015 16:01:03 -0000 Subject: [Varnish] #1687: Panic error In-Reply-To: <042.5c3533f4c22264c4ebdcfb67a6dabaf2@varnish-cache.org> References: <042.5c3533f4c22264c4ebdcfb67a6dabaf2@varnish-cache.org> Message-ID: <057.2c958931a45535aa0e363665a6474d96@varnish-cache.org> #1687: Panic error --------------------+------------------------ Reporter: cepi | Owner: Type: defect | Status: closed Priority: high | Milestone: Component: build | Version: unknown Severity: normal | Resolution: duplicate Keywords: | --------------------+------------------------ Changes (by daghf): * status: new => closed * resolution: => duplicate Comment: Hi This looks like a duplicate of #1607. Please upgrade to 4.0.3. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Feb 26 16:42:59 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 26 Feb 2015 16:42:59 -0000 Subject: [Varnish] #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response Message-ID: <043.c5450351f8b7a88acbc0573c0435d519@varnish-cache.org> #1688: ESI-included synthetic response is delivered uncompressed within a gzipped response ----------------------------------------+---------------------- Reporter: geoff | Type: defect Status: new | Priority: normal Milestone: Varnish 4.0 release | Component: varnishd Version: 4.0.3 | Severity: major Keywords: esi include synthetic gzip | ----------------------------------------+---------------------- In Varnish 4 (apparently in every release version <= 4.0.3), a synthetic response that is ESI-included within a gzipped response appears uncompressed in the midst of the gzip stream. gunzip fails, so the client response is unreadable. I'll be attaching a VTC test that demonstrates the problem, and the log of a varnishtest run. The conditions are that the response is synthetic and ESI-included, and Accept-Encoding:gzip appears in the request for the including response. The test log shows that there is no error when gzip-Encoding is not requested. When the response is gzipped, you can see that the uncompressed ESI response appears as a chunk in between gzipped chunks. I've tried the test with each of the 4.0.0 through 4.0.3 releases (by checking out the tags and building from the source repo), and it failed in all of them. 
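The failure mode described here can be reproduced outside Varnish. Per RFC 1952, concatenated gzip members form a valid stream, which is what allows gzipped ESI fragments to be stitched together; splicing raw, uncompressed bytes between members corrupts the stream. A hypothetical Python illustration (the fragment contents are made up):

```python
import gzip

# Two properly gzipped chunks, standing in for the gzipped parts of an
# ESI parent response (contents invented for illustration):
part1 = gzip.compress(b"<html>before include ")
part2 = gzip.compress(b" after include</html>")
# The synthetic ESI include, mistakenly delivered uncompressed:
synthetic = b"<p>synthetic fragment</p>"

# Concatenated gzip members are valid and decode cleanly (RFC 1952):
ok = gzip.decompress(part1 + part2)

# Raw bytes spliced between members break decompression; gunzip fails,
# which matches what the client sees per this report:
failed = False
try:
    gzip.decompress(part1 + synthetic + part2)
except OSError:
    failed = True
```

This is why the fix has to gzip the synthetic fragment (or decompress the parent) rather than emitting it as-is between gzipped chunks.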
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Feb 27 02:46:54 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 27 Feb 2015 02:46:54 -0000 Subject: [Varnish] #1689: Child not responding to CLI Message-ID: <042.96a3d3fcbeb10850167e110902a2e21a@varnish-cache.org> #1689: Child not responding to CLI ---------------------+-------------------- Reporter: cepi | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: unknown | Severity: normal Keywords: | ---------------------+-------------------- Hi, a few times a day I see the following lines in my syslog, and varnishd restarts: Feb 26 21:05:56 FIB4000-cepi2-fib001 varnishd[5788]: Child (5830) not responding to CLI, killing it. Feb 26 21:05:58 FIB4000-cepi2-fib001 varnishd[5788]: Child (5830) not responding to CLI, killing it. Feb 26 21:05:59 FIB4000-cepi2-fib001 varnishd[5788]: Child (5830) died signal=3 I have varnish 4.0.3 on debian 7. Attaching default.vcl and /etc/default/varnish. Regards, Nicolas. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Feb 27 15:10:30 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 27 Feb 2015 15:10:30 -0000 Subject: [Varnish] #1683: varnishncsa should log the last value in vcl sub In-Reply-To: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> References: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> Message-ID: <062.b5a2bf4b1fe8898115598fb87a49780e@varnish-cache.org> #1683: varnishncsa should log the last value in vcl sub -----------------------+--------------------- Reporter: razvanphp | Owner: aondio Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -----------------------+--------------------- Comment (by aondio): Hi razvanphp, could you please provide a chunk of varnishlog and varnishncsa outputs where this happens?
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Feb 27 15:42:58 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 27 Feb 2015 15:42:58 -0000 Subject: [Varnish] #1683: varnishncsa should log the last value in vcl sub In-Reply-To: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> References: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> Message-ID: <062.40cd4e82752831e8faf5798e1e6d9e84@varnish-cache.org> #1683: varnishncsa should log the last value in vcl sub -----------------------+--------------------- Reporter: razvanphp | Owner: aondio Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -----------------------+--------------------- Comment (by razvanphp): Sure, here it is: {{{ $ varnishlog * << Request >> 492374 - Begin req 492373 rxreq - Timestamp Start: 1425051112.717371 0.000000 0.000000 - Timestamp Req: 1425051112.717371 0.000000 0.000000 - ReqStart 82.x.x.x 40912 - ReqMethod GET - ReqURL / - ReqProtocol HTTP/1.1 - ReqHeader Host: www.site.com - ReqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 - ReqHeader Cookie: sid=76l5ntmd311s12kqbg3jk3j - ReqHeader User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 8_1_3 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12B466 Safari/600.1.4 - ReqHeader Accept-Language: en-us - ReqHeader Accept-Encoding: gzip, deflate - ReqHeader Connection: keep-alive - ReqHeader X-Forwarded-For: 82.x.x.x - VCL_call RECV - ReqHeader X-Pulse: c11b9f80-be95-11e4-abbd-0fe446ec98be - VCL_acl NO_MATCH upstream_proxy - ReqUnset X-Forwarded-For: 82.x.x.x - ReqHeader X-Forwarded-For: 82.x.x.x - ReqHeader X-Real-IP: 82.x.x.x - ReqURL / - ReqHeader X-UA-Device: desktop - ReqHeader X-UA-Vendor: generic - ReqUnset X-UA-Device: desktop - ReqHeader X-UA-Device: smartphone - ReqUnset X-UA-Vendor: generic - ReqHeader X-UA-Vendor: apple - 
Debug "VCL_error(702, http://m.site.com/)" - VCL_return synth - ReqUnset Accept-Encoding: gzip, deflate - ReqHeader Accept-Encoding: gzip - VCL_call HASH - VCL_return lookup - Timestamp Process: 1425051112.717628 0.000257 0.000257 - RespHeader Date: Fri, 27 Feb 2015 15:31:52 GMT - RespHeader Server: Varnish - RespHeader X-Varnish: 492374 - RespProtocol HTTP/1.1 - RespStatus 702 - RespReason Unknown HTTP Status - RespReason http://m.site.com/ - VCL_call SYNTH - RespHeader Location: http://m.site.com/ - RespStatus 302 - RespReason Found - VCL_return deliver - RespHeader Content-Length: 0 - Debug "RES_MODE 2" - RespHeader Connection: keep-alive - Timestamp Resp: 1425051112.717689 0.000319 0.000061 - ReqAcct 402 0 402 180 0 180 - End * << Session >> 492373 - Begin sess 0 HTTP/1 - SessOpen 82.x.x.x 40912 :80 10.107.104.87 80 1425051112.717016 16 - Link req 492374 rxreq - SessClose RX_TIMEOUT 5.073 - End }}} In varnishncsa log format I have: {{{ "_remote_device":"%{X-UA-Device}i","_remote_vendor":"%{X-UA-Vendor}i" }}} and in the log I get {{{ "_remote_device":"desktop","_remote_vendor":"generic" }}} .. instead of the expected smartphone, apple. Before #1462 was solved, I would also get response_status 702, but that is fixed now. 
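To make the expected behaviour concrete: replaying the ReqHeader/ReqUnset records from the varnishlog output above, with the last write winning, yields the values the reporter expects varnishncsa's `%{...}i` to print. A small Python sketch (illustrative only; this is not varnishncsa's implementation):

```python
# ReqHeader/ReqUnset records copied from the varnishlog output above:
records = [
    ("ReqHeader", "X-UA-Device: desktop"),
    ("ReqHeader", "X-UA-Vendor: generic"),
    ("ReqUnset",  "X-UA-Device: desktop"),
    ("ReqHeader", "X-UA-Device: smartphone"),
    ("ReqUnset",  "X-UA-Vendor: generic"),
    ("ReqHeader", "X-UA-Vendor: apple"),
]

headers = {}
for tag, payload in records:
    name, _, value = payload.partition(": ")
    if tag == "ReqHeader":
        headers[name] = value        # later sets override earlier ones
    elif tag == "ReqUnset":
        headers.pop(name, None)      # unset removes the current value

# headers now holds the final per-transaction values: the reporter's
# expectation is "smartphone" / "apple", not the first-seen values.
```

varnishncsa instead appears to report the first value seen ("desktop" / "generic"), which is the behaviour this ticket disputes.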
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Feb 27 16:09:41 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 27 Feb 2015 16:09:41 -0000 Subject: [Varnish] #1683: varnishncsa should log the last value in vcl sub In-Reply-To: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> References: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> Message-ID: <062.32164012793b32d5ec32ea9a04f3ab21@varnish-cache.org> #1683: varnishncsa should log the last value in vcl sub -----------------------+--------------------- Reporter: razvanphp | Owner: aondio Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -----------------------+--------------------- Comment (by aondio): Hi again, can you also share the VCL? I'm reproducing the error. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Feb 27 16:19:49 2015 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 27 Feb 2015 16:19:49 -0000 Subject: [Varnish] #1683: varnishncsa should log the last value in vcl sub In-Reply-To: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> References: <047.456b1c9f8b7b96eea716d228c7441a8b@varnish-cache.org> Message-ID: <062.b6e02befac7cb83298e13b6c58701978@varnish-cache.org> #1683: varnishncsa should log the last value in vcl sub -----------------------+--------------------- Reporter: razvanphp | Owner: aondio Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 4.0.3 Severity: normal | Resolution: Keywords: | -----------------------+--------------------- Comment (by razvanphp): I'm using a slightly modified version of devicedetect.vcl from https://docs.fastly.com/guides/caching/how-do-i-deliver-different-content- to-different-devices and then in my vcl: {{{ ... include "inc_devicedetect.vcl"; ... sub vcl_recv { ... 
# Mobile and locale redirects call detect_device; if (req.http.X-UA-Device ~ "(smartphone|mobile)") { return(synth(702, "http://m.site.com" + req.url)); } ... } sub vcl_synth { if (resp.status == 702) { # temporary redirect set resp.http.Location = resp.reason; set resp.status = 302; return(deliver); } } }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator