From varnish-bugs at projects.linpro.no Thu Jun 4 17:31:43 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 04 Jun 2009 17:31:43 -0000 Subject: [Varnish] #515: Crash with persistent after seconds - no assert error Message-ID: <054.9e550955107166b204a6a2c093641b7f@projects.linpro.no> #515: Crash with persistent after seconds - no assert error ----------------------+----------------------------------------------------- Reporter: kristian | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- This is both with a fresh and "old" file, typically 20G. No assert error, but a core dump reveals: {{{ #0 0x00000033dfc30215 in raise () from /lib64/libc.so.6 No symbol table info available. #1 0x00000033dfc31cc0 in abort () from /lib64/libc.so.6 No symbol table info available. #2 0x000000000041fc07 in pan_ic (func=, file=, line=, cond=, err=, xxx=) at cache_panic.c:356 l = 65536 p = 0x2ba963e0b2c8
q = sp = (const struct sess *) 0x0 #3 0x0000000000436735 in smp_open (st=0x2ba463b1e040) at storage_persistent.c:790 sc = (struct smp_sc *) 0x2ba463b4c300 __func__ = "smp_open" #4 0x0000000000432a0e in STV_open () at stevedore.c:145 stv = (struct stevedore *) 0x2ba463b1e040 #5 0x000000000041e53a in child_main () at cache_main.c:130 __func__ = "child_main" #6 0x000000000042c011 in start_child (cli=0x0) at mgt_child.c:317 pid = u = 51 p = e = i = 1024 cp = {11, 12} __func__ = "start_child" #7 0x000000000042c809 in mgt_sigchld (e=, what=) at mgt_child.c:476 status = 139 vsb = (struct vsb *) 0x2ba463b17970 r = 10678 __func__ = "mgt_sigchld" #8 0x00002ba4634a0377 in vev_sched_signal (evb=0x2ba463b1d1c0) at vev.c:437 i = 0 j = 17 es = (struct vevsig *) 0x2ba463be25b0 e = (struct vev *) 0x2ba463b1f160 __func__ = "vev_sched_signal" #9 0x00002ba4634a0a08 in vev_schedule (evb=0x2ba463b1d1c0) at vev.c:365 i = ---Type to continue, or q to quit--- __func__ = "vev_schedule" #10 0x000000000042c273 in MGT_Run () at mgt_child.c:551 sac = {__sigaction_handler = {sa_handler = 0x1, sa_sigaction = 0x1}, sa_mask = {__val = {0 }}, sa_flags = 268435456, sa_restorer = 0} e = (struct vev *) 0x2ba463b1f160 i = __func__ = "MGT_Run" #11 0x0000000000438e89 in main (argc=7, argv=) at varnishd.c:738 o = C_flag = 0 F_flag = 0 b_arg = 0x0 f_arg = 0x7fff4761bdec "/root/autovarnish.vcl" i_arg = 0x0 l_arg = 0x44f171 "80m" l_size = 83886080 q = h_arg = 0x4440ac "classic" n_arg = 0x0 P_arg = 0x0 S_arg = 0x0 s_arg_given = 1 T_arg = 0x0 p = vcl = cli = {{sb = 0x2ba463b17850, result = CLIS_OK, priv = 0x0}} pfh = (struct pidfh *) 0x0 dirname = "/usr/local/autovarnish/var/varnish/varnish2\0003\000\000\000 p@\000\000\000\000\000?\005\200?3\000\000\000`?aG?\177\000\000??\200?\000\000\000\000H\b\200?3\000\000\000?\001\000\000\000\000\000\000\000\000\000\0003\000\000\000 p@", '\0' , "\237?\200?3\000\000\000\000\024?c?+\000\000\0300@\000\000\000\000\000\001\000\000\000\000\000\000\000?\"??\000\000\000\000\223a", 
'\0' , "?\205\200?3\000\000\000??aG?\177\000\000G\214\200?3", '\0' ... __func__ = "main" (gdb) }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri Jun 5 01:58:33 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 05 Jun 2009 01:58:33 -0000 Subject: [Varnish] #516: vsl_mtx "deadlock"; child stops responding Message-ID: <048.692a2d331dfd2cef931d98d5b3e067c5@projects.linpro.no> #516: vsl_mtx "deadlock"; child stops responding ----------------------+----------------------------------------------------- Reporter: kb | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Keywords: ----------------------+----------------------------------------------------- I'm seeing children that stop responding reliably every day at roughly the same time: {{{ Jun 4 07:08:32 statcache0 varnishd[5958]: Child (26929) not responding to ping, killing it. Jun 4 07:08:36 statcache0 last message repeated 2 times Jun 4 07:08:36 statcache0 varnishd[5958]: Child (26929) died signal=3 (core dumped) Jun 4 07:08:36 statcache0 varnishd[5958]: Child cleanup complete }}} GDB: {{{ (gdb) where #0 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 #1 0x00007f054c4acb08 in _L_lock_104 () from /lib/libpthread.so.0 #2 0x00007f054c4ac470 in pthread_mutex_lock () from /lib/libpthread.so.0 #3 0x000000000042de2b in VSL (tag=SLT_CLI, id=0, fmt=0x43b502 "Rd %s") at shmlog.c:154 #4 0x0000000000411e95 in cli_vlu (priv=0x7fff553115e0, p=0xffffffffffffffff
) at cache_cli.c:105 #5 0x00007f054ceec472 in LineUpProcess (l=0x7f054bb08370) at vlu.c:156 #6 0x0000000000411d9c in CLI_Run () at cache_cli.c:165 #7 0x000000000041a243 in child_main () at cache_main.c:134 #8 0x0000000000428a0a in start_child (cli=0x0) at mgt_child.c:319 #9 0x0000000000429212 in mgt_sigchld (e=, what=) at mgt_child.c:472 #10 0x00007f054ceeb4ea in vev_sched_signal (evb=0x7f054bb0d040) at vev.c:437 #11 0x00007f054ceebb3d in vev_schedule (evb=0x7f054bb0d040) at vev.c:365 #12 0x0000000000428cca in mgt_run (dflag=0, T_arg=) at mgt_child.c:560 #13 0x000000000043228a in main (argc=, argv=0x7fff55311d48) at varnishd.c:655 }}} It's a block on vsl_mtx, and most other threads are blocked too: {{{ (gdb) info thread 16 process 26930 0x00007f054c4b1e81 in nanosleep () from /lib/libpthread.so.0 15 process 26931 0x00007f054c4aeb99 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/libpthread.so.0 14 process 26933 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 13 process 26942 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 12 process 26943 0x00007f054bd76c86 in poll () from /lib/libc.so.6 11 process 8811 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 10 process 15459 0x00007f054bd788e3 in writev () from /lib/libc.so.6 9 process 15676 0x000000000042dadd in WSL_Flush (w=0x44e39be0, overflow=) at shmlog.c:194 8 process 17612 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 7 process 18638 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 6 process 19059 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 5 process 20041 0x00007f054bd788e3 in writev () from /lib/libc.so.6 4 process 20226 0x00007f054c4b1174 in __lll_lock_wait () from /lib/libpthread.so.0 3 process 22666 HSH_Lookup (sp=0x7f04ff330008) at cache_hash.c:297 2 process 1427 0x00007f054bd788e3 in writev () from /lib/libc.so.6 * 1 process 26929 0x00007f054c4b1174 in __lll_lock_wait () from 
/lib/libpthread.so.0 }}} The blocks are either the LOCKSHM(&vsl_mtx) in VSL() (line 154 in 2.0.4) or LOCKSHM(&vsl_mtx) in WSL_Flush() (line 187). One thread is always at line 194 of WSL_Flush(): p[l] = SLT_ENDMARKER; p is pretty weird; it is 2^64^-1 in the backtrace above, and in the thread that is at line 194: {{{ (gdb) where full #0 0x000000000042dadd in WSL_Flush (w=0x44e39be0, overflow=) at shmlog.c:194 p = (unsigned char *) 0x7f0549fc688c
l = 1958 __func__ = "WSL_Flush" #1 0x0000000000410b17 in cnt_done (sp=0x7f04ff32b008) at cache_center.c:235 dh = dp = da = pfd = {{fd = -13455352, events = 32516, revents = 0}} i = __func__ = "cnt_done" #2 0x0000000000411019 in CNT_Session (sp=0x7f04ff32b008) at steps.h:44 done = 0 w = (struct worker *) 0x44e39be0 __func__ = "CNT_Session" #3 0x000000000041cb6d in wrk_do_cnt_sess (w=0x44e39be0, priv=) at cache_pool.c:398 sess = (struct sess *) 0x7f04ff32b008 __func__ = "wrk_do_cnt_sess" #4 0x000000000041c21b in wrk_thread (priv=0x7f054bb0c0b0) at cache_pool.c:310 ww = {magic = 1670491599, nobjhead = 0x0, nobj = 0x0, lastused = 1244124456.0244427, cond = {__data = {__lock = 0, __futex = 700852, __total_seq = 350426, __wakeup_seq = 350426, __woken_seq = 350426, __mutex = 0x7f0546904228, __nwaiters = 0, __broadcast_seq = 0}, __size = "\000\000\000\000??\n\000?X\005\000\000\000\000\000?X\005\000\000\000\000\000?X\005\000\000\000\000\000(B\220F\005\177\000\000\000\000\000\000\000\000\000", __align = 3010136419336192}, list = { vtqe_next = 0x45e3bbe0, vtqe_prev = 0x7f054bb0c0c0}, wrq = 0x7f04ff32b198, wfd = 0x0, werr = 0, iov = {{iov_base = 0x7f052d0a6358, iov_len = 8}, {iov_base = 0x43cfa8, iov_len = 1}, { iov_base = 0x7f052d0a6361, iov_len = 3}, {iov_base = 0x43cfa8, iov_len = 1}, {iov_base = 0x7f052d0a6365, iov_len = 2}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f052d0a6368, iov_len = 15}, { iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f052d0a6461, iov_len = 38}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f052d0a6488, iov_len = 34}, {iov_base = 0x43ea4b, iov_len = 2}, { iov_base = 0x7f052d0a64ab, iov_len = 44}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f052d0a64d8, iov_len = 17}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f052d0a650e, iov_len = 59}, { iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f052d0a654a, iov_len = 24}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f052d0a6563, iov_len = 17}, {iov_base = 0x43ea4b, 
iov_len = 2}, { iov_base = 0x7f04ff32bf5e, iov_len = 35}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f04ff32bf82, iov_len = 21}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f04ff32bf98, iov_len = 6}, { iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x43d49b, iov_len = 16}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f04ff32bf9f, iov_len = 22}, {iov_base = 0x43ea4b, iov_len = 2}, { iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f051fd74000, iov_len = 185528}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f052c21b000, iov_len = 80}, {iov_base = 0x43ea4b, iov_len = 2}, { iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f04ff12e31d, iov_len = 22}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x44e35a00, iov_len = 145}, { iov_base = 0x7f04ff376dd8, iov_len = 33}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f04fed092a0, iov_len = 21}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f04fed092b6, iov_len = 28}, { iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x7f04fed0931c, iov_len = 22}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x43ea4b, iov_len = 2}, {iov_base = 0x0, iov_len = 0} }, niov = 0, liov = 0, vcl = 0x7f053f765328, srcaddr = 0x7f04fe53d2c0, wlb = 0x44e37bb0 "\026", wlp = 0x44e38356 "", wle = 0x44e39bb0 "", wlr = 52, sha256ctx = 0x44e3a0a0} sha256 = {state = {0, 0, 0, 0, 0, 0, 0, 0}, count = 0, buf = '\0' } __func__ = "wrk_thread" #5 0x00007f054c4aa3f7 in start_thread () from /lib/libpthread.so.0 No symbol table info available. #6 0x00007f054bd7fb3d in clone () from /lib/libc.so.6 No symbol table info available. #7 0x0000000000000000 in ?? () No symbol table info available. }}} Again p is out of bounds. Fascinating, note what ''always'' happens right before the FAIL: {{{ Jun 4 07:08:30 statcache0 syslogd 1.5.0#1ubuntu1: restart. Jun 4 07:08:32 statcache0 varnishd[5958]: Child (26929) not responding to ping, killing it. 
Jun 4 07:08:36 statcache0 last message repeated 2 times Jun 4 07:08:36 statcache0 varnishd[5958]: Child (26929) died signal=3 (core dumped) Jun 4 07:08:36 statcache0 varnishd[5958]: Child cleanup complete Jun 4 07:08:36 statcache0 varnishd[5958]: child (2535) Started Jun 4 07:08:36 statcache0 varnishd[5958]: Child (2535) said Closed fds: 4 5 6 9 10 12 13 Jun 4 07:08:36 statcache0 varnishd[5958]: Child (2535) said Child starts Jun 4 07:08:36 statcache0 varnishd[5958]: Child (2535) said managed to mmap 1073741824 bytes of 1073741824 Jun 4 07:08:36 statcache0 varnishd[5958]: Child (2535) said Ready }}} Nothing else from varnishd shows up in the log except the above, so there's no spurious log flood. I'm also not doing any syslog() C tricks (yet) so what syslog() or log rotation dependency is there within varnishd that could cause this? Manually running the daily scripts doesn't cause this, but I'm trying to find a reproduction case. Though clearly something varnishy is awry. Thx,[[BR]] -- [[BR]] Ken.[[BR]] -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Jun 8 09:49:07 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 08 Jun 2009 09:49:07 -0000 Subject: [Varnish] #517: Syntax failure in VCLExampleLongerCaching Message-ID: <057.8e19e80de3196cd7fb5df0aa38d40020@projects.linpro.no> #517: Syntax failure in VCLExampleLongerCaching -------------------------+-------------------------------------------------- Reporter: mark.breyer | Type: defect Status: new | Priority: normal Milestone: | Component: website Version: trunk | Severity: normal Keywords: | -------------------------+-------------------------------------------------- Website: http://varnish.projects.linpro.no/wiki/VCLExampleLongerCaching false: /* marker for vcl_deliver to reset Age: */ set obj.http.magicmarker = 1; right: /* marker for vcl_deliver to reset Age: */ set obj.http.magicmarker = "1"; -- Ticket URL: Varnish The Varnish HTTP Accelerator From 
varnish-bugs at projects.linpro.no Wed Jun 10 16:29:02 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 10 Jun 2009 16:29:02 -0000 Subject: [Varnish] #518: Default backend health right after launch Message-ID: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> #518: Default backend health right after launch -------------------+-------------------------------------------------------- Reporter: rts | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | -------------------+-------------------------------------------------------- When starting varnish, visitors experience 503 errors until the backends are deemed healthy. Depending on the choice of config, this might take between a few seconds and a few minutes. This was raised by another user at http://projects.linpro.no/pipermail /varnish-dist/2009-May/000105.html, but no response was received. Is there an existing configuration flag for ".assumehealthyonboot = true" or similar, and if not, can one be added? I'm happy to offer a cash bounty to get this feature added. 
rts -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Jun 10 16:37:18 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 10 Jun 2009 16:37:18 -0000 Subject: [Varnish] #518: Default backend health right after launch In-Reply-To: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> References: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> Message-ID: <058.5d1bebfb420d70e80caba26792c801dd@projects.linpro.no> #518: Default backend health right after launch --------------------+------------------------------------------------------- Reporter: rts | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+------------------------------------------------------- Comment (by rts): This issue appears to have been raised in a slightly different form at http://varnish.projects.linpro.no/ticket/512 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Jun 11 11:16:52 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 11 Jun 2009 11:16:52 -0000 Subject: [Varnish] #519: 503 error problem Message-ID: <052.b54f7a7932a5fbc3946b8e6cd550fd74@projects.linpro.no> #519: 503 error problem -----------------------+---------------------------------------------------- Reporter: silver | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 2.0 | Severity: normal Keywords: 503 error | -----------------------+---------------------------------------------------- ENV: centos 5.2 64bit, varnish 2.0.4 The problem has 2 different appearances: First, one of my dynamic links always returns a 503 error.
And varnishlog result is as follows: {{{ 10 SessionOpen c 125.230.149.219 1819 :80 12 SessionOpen c 124.207.129.40 1601 :80 12 ReqStart c 124.207.129.40 1601 1822984100 12 RxRequest c GET 12 RxURL c /pause/index.php?c=dshj,guzhuang,ndjch 12 RxProtocol c HTTP/1.1 12 RxHeader c Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/x-shockwave-flash, application/vnd.ms-excel, application/vnd.ms -powerpoint, application/msword, application/x-silverlight, */* 12 RxHeader c Accept-Language: zh-cn 12 RxHeader c Accept-Encoding: gzip, deflate 12 RxHeader c User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727) 12 RxHeader c Host: fs.funshion.com 12 RxHeader c Connection: Keep-Alive 12 RxHeader c Cookie: tacarea=1; taczone=z1; __utma=227745162.516174632591317950.1244613332.1244615894.1244615928.4; __utmz=227745162.1244613332.1.1.u tmcsr=(direct)|utmccn=(direct)|utmcmd=(none) 12 VCL_call c recv 12 VCL_return c lookup 12 VCL_call c hash 12 VCL_return c hash 12 VCL_call c miss 12 VCL_return c fetch 13 BackendOpen b funshionfs 222.35.250.5 25595 222.35.250.4 80 12 Backend c 13 funshionfs funshionfs 13 TxRequest b GET 13 TxURL b /pause/index.php?c=dshj,guzhuang,ndjch 13 TxProtocol b HTTP/1.1 13 TxHeader b Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/x-shockwave-flash, application/vnd.ms-excel, application/vnd.ms -powerpoint, application/msword, application/x-silverlight, */* 13 TxHeader b Accept-Language: zh-cn 13 TxHeader b Accept-Encoding: gzip, deflate 13 TxHeader b User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727) 13 TxHeader b Host: fs.funshion.com 13 TxHeader b Cookie: tacarea=1; taczone=z1; __utma=227745162.516174632591317950.1244613332.1244615894.1244615928.4; __utmz=227745162.1244613332.1.1.u tmcsr=(direct)|utmccn=(direct)|utmcmd=(none) 13 TxHeader b X-Forwarded-For: 124.207.129.40 13 TxHeader b X-Varnish: 1822984100 13 TxHeader b X-Forwarded-For: 
124.207.129.40 13 RxProtocol b HTTP/1.1 13 RxStatus b 200 13 RxResponse b OK 13 RxHeader b X-Powered-By: PHP/5.2.1 13 RxHeader b Set-Cookie: PHPSESSID=m3suseosc4a74095r5r7ip9kp6; path=/; domain=.funshion.com 13 RxHeader b Pragma: no-cache 13 RxHeader b Last-Modified: Thu, 11 Jun 2009 02:04:36 GMT 13 RxHeader b Expires: Fri, 12 Jun 2009 02:04:36 GMT 13 RxHeader b Cache-Control: max-age=86400 13 RxHeader b Content-Length: 1506 13 RxHeader b Content-Encoding: gzip 13 RxHeader b Vary: Accept-Encoding 13 RxHeader b Content-type: text/html 13 RxHeader b Date: Thu, 11 Jun 2009 02:04:36 GMT 13 RxHeader b Server: lighttpd/1.4.22 12 ObjProtocol c HTTP/1.1 12 ObjStatus c 200 12 ObjResponse c OK 12 ObjHeader c X-Powered-By: PHP/5.2.1 12 ObjHeader c Set-Cookie: PHPSESSID=m3suseosc4a74095r5r7ip9kp6; path=/; domain=.funshion.com 12 ObjHeader c Pragma: no-cache 12 ObjHeader c Last-Modified: Thu, 11 Jun 2009 02:04:36 GMT 12 ObjHeader c Expires: Fri, 12 Jun 2009 02:04:36 GMT 12 ObjHeader c Cache-Control: max-age=86400 12 ObjHeader c Content-Encoding: gzip 12 ObjHeader c Vary: Accept-Encoding 12 ObjHeader c Content-type: text/html 12 ObjHeader c Date: Thu, 11 Jun 2009 02:04:36 GMT 12 ObjHeader c Server: lighttpd/1.4.22 13 BackendClose b funshionfs }}} ''' 12 VCL_call c error''' {{{ 12 VCL_return c deliver 12 Length c 466 12 VCL_call c deliver 12 VCL_return c deliver 12 TxProtocol c HTTP/1.1 12 TxStatus c 503 12 TxResponse c Service Unavailable 12 TxHeader c Server: Varnish 12 TxHeader c Retry-After: 0 12 TxHeader c Content-Type: text/html; charset=utf-8 12 TxHeader c Content-Length: 466 12 TxHeader c Date: Thu, 11 Jun 2009 02:04:42 GMT 12 TxHeader c X-Varnish: 1822984100 12 TxHeader c Age: 1 12 TxHeader c Via: 1.1 varnish 12 TxHeader c Connection: close 12 TxHeader c X-Cache: MISS 12 ReqEnd c 1822984100 1244685882.232357979 1244685882.997204065 0.001631975 0.764765978 0.000080109 12 SessionClose c error }}} And i can get the right content via varnish by curl, but 503 by IE. 
The bold line: varnish should call fetch instead of error. I found some similar reports here, but no solution. The second one: {{{ 952 ReqStart c 118.100.158.74 64065 350606500 952 RxRequest c GET 952 RxURL c /attachment/fs/521/26/52126.jpg?1241778919 952 RxProtocol c HTTP/1.0 952 RxHeader c Accept: */* 952 RxHeader c Referer: http://fs.funshion.com/embed_list/region?r=e6aca7e6b4b2&o=z1&pt=vp&pg=13 952 RxHeader c Accept-Language: en-us 952 RxHeader c UA-CPU: x86 952 RxHeader c Accept-Encoding: gzip, deflate 952 RxHeader c User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; GTB6; .NET CLR 2.0.50727; InfoPath.1; OfficeLiveConnector.1.3; OfficeLive Patch.0.0) 952 RxHeader c Host: img.funshion.com 952 RxHeader c Cookie: __utma=110910354.772200403240562500.1237512648.1237512648.1237512648.1; __utmz=110910354.1237512648.1.1.utmcsr=funshion-movie-on -demand.software.informer.com|utmccn=(referral)|utmcmd=referral|utmcct=/; funshion_setup=1; thide=0; userplay=%u5904%u5 952 RxHeader c Via: 1.1 SnapGear:3128 (squid/2.5.STABLE10) 952 RxHeader c X-Forwarded-For: 192.168.0.2 952 RxHeader c Cache-Control: max-age=259200 952 RxHeader c Connection: keep-alive 952 VCL_call c recv 952 VCL_return c lookup 952 VCL_call c hash 952 VCL_return c hash 952 VCL_call c miss 952 VCL_return c fetch }}} ''' 952 VCL_call c error''' {{{ 952 VCL_return c deliver 952 Length c 465 952 VCL_call c deliver 952 VCL_return c deliver 952 TxProtocol c HTTP/1.1 952 TxStatus c 503 952 TxResponse c Service Unavailable 952 TxHeader c Server: Varnish 952 TxHeader c Retry-After: 0 952 TxHeader c Content-Type: text/html; charset=utf-8 952 TxHeader c Content-Length: 465 952 TxHeader c Date: Wed, 10 Jun 2009 04:06:14 GMT 952 TxHeader c X-Varnish: 350606500 952 TxHeader c Age: 5 952 TxHeader c Via: 1.1 varnish 952 TxHeader c Connection: close 952 TxHeader c X-Cache: MISS 952 ReqEnd c 350606500 1244606769.210163116 1244606774.209917068 0.784834146 4.999708891 0.000045061 952 SessionClose c error
952 StatSess c 118.100.158.74 64065 148 1 49 0 0 7 21706 177002 }}} Sometimes non-dynamic links also return 503, but in this situation a refresh makes it right. BTW, when I install varnish, make check reports 2 failures, but the check output is too long and I can't find the failing part. So if there is any need, just let me know. Thanks in advance. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri Jun 12 11:38:20 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 12 Jun 2009 11:38:20 -0000 Subject: [Varnish] #519: 503 error problem In-Reply-To: <052.b54f7a7932a5fbc3946b8e6cd550fd74@projects.linpro.no> References: <052.b54f7a7932a5fbc3946b8e6cd550fd74@projects.linpro.no> Message-ID: <061.3200d996830cf16e08f53b31c9a9c705@projects.linpro.no> #519: 503 error problem -----------------------+---------------------------------------------------- Reporter: silver | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 2.0 Severity: normal | Resolution: Keywords: 503 error | -----------------------+---------------------------------------------------- Comment (by silver): Hi, there. The first problem has been resolved. It was all because of Content-Length: the PHP script computed the length of the HTML text incorrectly, so the header value was larger than the body really is. In this situation, varnish gives a 503. '''So it's definitely a bug.''' Therefore I commented out the "header("Content-Length:......" line, and the world turns wonderful again. Just for your information. For the second one, I added a RESTART in FETCH and ERROR when 503 or 504 happened. So far it works well. If you've got a better idea, I'd like to know.
cheers -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Jun 16 00:37:30 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 16 Jun 2009 00:37:30 -0000 Subject: [Varnish] #513: Implement a VCL "constructor" In-Reply-To: <048.0ca14503bff93f739ffb0346b288b20f@projects.linpro.no> References: <048.0ca14503bff93f739ffb0346b288b20f@projects.linpro.no> Message-ID: <057.1b25295c3b8fe8f2430589886cad0e72@projects.linpro.no> #513: Implement a VCL "constructor" -------------------------+-------------------------------------------------- Reporter: kb | Owner: phk Type: enhancement | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: | -------------------------+-------------------------------------------------- Comment (by kb): After a lot of iteration, I think using pthread_once as necessary in custom C within VCL is a clean-enough implementation. Adding semantics for this to VCL seems overkill. Withdrawn. :) -- kb -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Jun 16 00:43:43 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 16 Jun 2009 00:43:43 -0000 Subject: [Varnish] #520: check_varnish parameters truncated to signed int Message-ID: <048.662d5ec30a8056ba3f757638023873bd@projects.linpro.no> #520: check_varnish parameters truncated to signed int -----------------+---------------------------------------------------------- Reporter: kb | Type: defect Status: new | Priority: normal Milestone: | Component: nagios Version: 2.0 | Severity: major Keywords: | -----------------+---------------------------------------------------------- Monitoring sma_balloc size (for example) is impossible because the warn/crit variables are naively declared as ints in check_varnish. I've attached a patch so that this is usable on machines with >31bit addressing. And fixed a typo. 
Thanks,[[BR]] --[[BR]] kb[[BR]] -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Jun 16 01:04:38 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 16 Jun 2009 01:04:38 -0000 Subject: [Varnish] #495: HTTP/1.0 or 'Connection: closed' backend race condition In-Reply-To: <049.ef605108ffb6a27a8a69ad285c7e044f@projects.linpro.no> References: <049.ef605108ffb6a27a8a69ad285c7e044f@projects.linpro.no> Message-ID: <058.d64a88774d1c8a451bdf2eaf80d99d7f@projects.linpro.no> #495: HTTP/1.0 or 'Connection: closed' backend race condition ----------------------+----------------------------------------------------- Reporter: cra | Owner: phk Type: defect | Status: new Priority: low | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by kb): I notice that TCP_connect() isn't thread-safe. I wonder if compiling varnish with: -D_REENTRANT -D_LIBC_REENTRANT helps with this problem.[[BR]] -- [[BR]] kb[[BR]] -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Jun 16 01:27:32 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 16 Jun 2009 01:27:32 -0000 Subject: [Varnish] #518: Default backend health right after launch In-Reply-To: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> References: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> Message-ID: <058.8abb1505bfc48a4f2603c2d077cbe338@projects.linpro.no> #518: Default backend health right after launch --------------------+------------------------------------------------------- Reporter: rts | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+------------------------------------------------------- Comment (by kb): Changing the default to healthy is trivial. 
But I've been going back and forth on this. Currently, the user experience is 503 errors instead of an abrupt connection refused. I'm not fully convinced that's necessarily bad, or at least any worse than the alternatives. If the default state for BEs was healthy, traffic would immediately go to the BEs at startup. But if the URL used to monitor the BEs is intended to verify the sanity of a BE source, then sending hits, even though they might "work", could be the wrong decision. And possibly bad data could be cached, which is much worse than bad data until the health checks complete. Other options might be: * Varnish doesn't listen() until at least one BE is healthy * Not sure this is a fabulous idea; process active but not listening * Not as informative as 503s * Varnish blocks in the case of no healthy BEs * This is what most load-balancers do by default * Not sure yet what the consequences would be to the varnish internals to block in the *_getfd() calls. Any thoughts? Ken. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Jun 18 11:40:03 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Jun 2009 11:40:03 -0000 Subject: [Varnish] #518: Default backend health right after launch In-Reply-To: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> References: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> Message-ID: <058.8207363aba28e86c08a3fe954266d4c4@projects.linpro.no> #518: Default backend health right after launch --------------------+------------------------------------------------------- Reporter: rts | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+------------------------------------------------------- Comment (by whocares): > Currently, the user experience is 503 errors instead of an abrupt connection refused. 
> I'm not fully convinced that's necessarily bad, or at least any worse than the alternatives. Actually in my case sending out 503s *is* worse than sending "Connection refused". That's simply because most of our users accept "Connection Refused" as the server being down for some reason or other, whereas 503s are (for reasons unknown to me) associated with a faulty application. For the latter, in turn, our internal development department gets blamed by management, and that's the *real* problem. So basically I'm in favour of the "default healthy" state, but as I already wrote in the other thread: for me it'd be enough to add a command line switch to varnish that activates the "default healthy" behaviour. Stefan -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Jun 18 12:04:49 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Jun 2009 12:04:49 -0000 Subject: [Varnish] #492: cache_waiter_epoll.c / cache_acceptor_epoll.c entire rewritten for Linux boxes better performance In-Reply-To: <053.23d7ccb9ad554347cf82b8ea488d042f@projects.linpro.no> References: <053.23d7ccb9ad554347cf82b8ea488d042f@projects.linpro.no> Message-ID: <062.df1828021440bac6517eb58581748511@projects.linpro.no> #492: cache_waiter_epoll.c / cache_acceptor_epoll.c entire rewritten for Linux boxes better performance ---------------------------------------------------------------------------------+ Reporter: stockrt | Owner: phk Type: enhancement | Status: closed Priority: high | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: fixed Keywords: cache_acceptor_epoll.c cache_waiter_epoll.c performance linux epoll | ---------------------------------------------------------------------------------+ Comment (by stockrt): I would like to leave here the changeset regarding this commit, so others can better track which changes took place: http://varnish.projects.linpro.no/changeset/4085/trunk/varnish-
cache/bin/varnishd/cache_waiter_epoll.c Regards, Rogério Schneider -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Jun 18 12:13:26 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 18 Jun 2009 12:13:26 -0000 Subject: [Varnish] #235: Varnish Linux performance In-Reply-To: <057.939e5a37a22cf5a6a639947d809cd9b1@projects.linpro.no> References: <057.939e5a37a22cf5a6a639947d809cd9b1@projects.linpro.no> Message-ID: <066.bf0151f937745bf444289eed7e4948e5@projects.linpro.no> #235: Varnish Linux performance -------------------------+-------------------------------------------------- Reporter: rafaelumann | Owner: phk Type: defect | Status: closed Priority: high | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: performance | -------------------------+-------------------------------------------------- Comment (by stockrt): Regarding this concern of a better implementation on how to clean the timeout(ed) sockets, this ticket treats and implements the definitive (submitted and accepted) version of acceptor/waiter for Linux boxes using epoll: http://varnish.projects.linpro.no/ticket/492 I have tried some other approaches, such as running a separate thread for cleaning timeout(ed) sockets, but it showed little to no improvement over #492 and ran into race conditions.
Best Regards, Rogério Schneider -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri Jun 19 06:16:58 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 19 Jun 2009 06:16:58 -0000 Subject: [Varnish] #518: Default backend health right after launch In-Reply-To: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> References: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> Message-ID: <058.7a6095349a3d564b0cc934ec9e880875@projects.linpro.no> #518: Default backend health right after launch --------------------+------------------------------------------------------- Reporter: rts | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+------------------------------------------------------- Comment (by kb): I'll take single-malt scotch instead of cash? :-) The attached patch creates a backend flag to change the initial health of backends upon varnishd startup: {{{ backend foo { .initial_health = 1; } }}} The backend healthy flag internally is an unsigned int, so I kept the same type. A value > 0 here will cause this backend to default to healthy. Also, the initial "Probe" output at startup will note "initially healthy" if this flag is set. The backend will immediately go "sick" if a single health check fails*, until the window is flushed. This seems like the safest way to implement this option, since a bad host will only receive hits until the first probe can execute against it, which in my testing was nearly immediate. Healthy hosts will continue to pass and stay healthy. I'm not using probed backends in production right now, but I'm running one instance of the patched version at ~1,000 requests per second (at 8% of a single 2.5G Xeon) and it's stable. YMMV, protect yourself, etc. I thought about having this flag apply to the director, and backends would inherit this central flag...
but while setting this for every backend in the config is a little verbose, it just seems cleaner, conceptually. And a per-backend setting seems cleaner and more flexible than a command-line flag. TODO: I should probably add "-p initial_health=1", as it fits the defaults like between_bytes_timeout. Comments? Thoughts? -- Ken. *(I'm marking the "oldest" .threshold probes in the .window as "pass" and the others "fail", so a single probe fail will cause sickness until the window has flushed. The BITMAP()/vt->happy stuff in bin/varnishd/cache_backend_poll.[ch] made my face bleed, but look there for specifics of the implementation.) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri Jun 19 06:21:38 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 19 Jun 2009 06:21:38 -0000 Subject: [Varnish] #512: 503 error with load-balancer setup In-Reply-To: <051.93b0fdd93e18ab1938af816defd3ca3b@projects.linpro.no> References: <051.93b0fdd93e18ab1938af816defd3ca3b@projects.linpro.no> Message-ID: <060.26303c591141db960743c861d8357c4b@projects.linpro.no> #512: 503 error with load-balancer setup --------------------------+------------------------------------------------- Reporter: ajung | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: Keywords: Loadbalancer | --------------------------+------------------------------------------------- Comment (by kb): See #518 for a first take at this. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri Jun 19 11:30:28 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 19 Jun 2009 11:30:28 -0000 Subject: [Varnish] #518: Default backend health right after launch In-Reply-To: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> References: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> Message-ID: <058.ef6dcbc0754d58b1e4654940b598364f@projects.linpro.no> #518: Default backend health right after launch --------------------+------------------------------------------------------- Reporter: rts | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+------------------------------------------------------- Comment (by whocares): > I'll take single-malt scotch instead of cash? :-) Glenfiddich, Glenlivet or perhaps Lagavulin? Send delivery address and preferred brand of poison and I'll see what I can do ;) Thanks for the patch.
Stefan -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Sun Jun 21 09:25:04 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Sun, 21 Jun 2009 09:25:04 -0000 Subject: [Varnish] #224: Improved logging In-Reply-To: <049.b840548eba4d9d14102064bf6f789678@projects.linpro.no> References: <049.b840548eba4d9d14102064bf6f789678@projects.linpro.no> Message-ID: <058.0482cde0147b30c367505c2b8514e3cb@projects.linpro.no> #224: Improved logging -------------------------+-------------------------------------------------- Reporter: des | Owner: des Type: enhancement | Status: reopened Priority: normal | Milestone: Varnish 2.0 code complete Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | -------------------------+-------------------------------------------------- Changes (by rts): * status: closed => reopened * resolution: invalid => Comment: How can you search the varnishlog to find a particular XID? For example, I know that 1654974565 resulted in a 503, but I don't seem to be able to find all the communication and processing associated with this request using: varnishlog -r /var/log/varnish/varnish.log -c -b -o TxHeader 1654974565 I'm not sure if I'm doing something wrong, or if this feature doesn't exist.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Jun 22 11:29:25 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 22 Jun 2009 11:29:25 -0000 Subject: [Varnish] #521: Sig 11 crash in trunk 4102 Message-ID: <052.e505c5dbe831127d825da5676e317109@projects.linpro.no> #521: Sig 11 crash in trunk 4102 ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: low | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- I'm running Varnish trunk 4102 in FreeBSD/amd64 7.2-RELEASE, and got a sig11 crash:pid 97116 (varnishd), uid 0: exited on signal 11 (core dumped) Backtrace: {{{ (gdb) bt #0 0x0000000800896dff in getframeaddr (level=Variable "level" is not available. ) at execinfo.c:323 #1 0x000000080089d29f in backtrace (buffer=Variable "buffer" is not available. ) at execinfo.c:64 #2 0x000000000041ff45 in pan_backtrace () at cache_panic.c:272 #3 0x00000000004202f7 in pan_ic (func=Variable "func" is not available. ) at cache_panic.c:326 #4 0x000000000042ab08 in WS_Release (ws=0x7fff964b4788, bytes=13) at cache_ws.c:174 #5 0x000000000042576c in vrt_assemble_string (hp=0x7fff964b4c60, h=0x1adfe036e2 "X-Cache:", p=Variable "p" is not available. ) at cache_vrt.c:178 #6 0x0000000000429ddb in VRT_SetHdr (sp=0x1ad0061008, where=Variable "where" is not available. ) at cache_vrt.c:199 #7 0x0000001adfe02e31 in ?? () from ./vcl.FANefPfn.so #8 0x0000000000424cbb in VCL_deliver_method (sp=0x1ad0061008) at vcl_returns.h:59 #9 0x0000000000411cf4 in cnt_deliver (sp=0x1ad0061008) at cache_center.c:186 #10 0x00000000004129b3 in CNT_Session (sp=0x1ad0061008) at steps.h:42 #11 0x0000000000422171 in wrk_do_cnt_sess (w=0x7fff964b42c0, priv=Variable "priv" is not available. 
) at cache_pool.c:456 #12 0x0000000000421467 in wrk_thread_real (qp=0x801010880, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:351 #13 0x0000000800abe4d1 in pthread_getprio () from /lib/libthr.so.3 #14 0x00007fff962b5000 in ?? () Cannot access memory at address 0x7fff964b5000 }}} The problem here could be that my obj_workspace is a little low, as I am adding headers like X-Cache in vcl_deliver and a couple of others in vcl_fetch. I'll remove one and see if it helps. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Jun 22 11:58:12 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 22 Jun 2009 11:58:12 -0000 Subject: [Varnish] #521: Sig 11 crash in trunk 4102 In-Reply-To: <052.e505c5dbe831127d825da5676e317109@projects.linpro.no> References: <052.e505c5dbe831127d825da5676e317109@projects.linpro.no> Message-ID: <061.52e78ee052802ac562ef0fa79d526d66@projects.linpro.no> #521: Sig 11 crash in trunk 4102 ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: closed Priority: low | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: duplicate Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => duplicate Comment: (From IRC discussion) it looks like there was not enough room in the workspace for the X-Cache header. The error handling is far from optimal, but that is a known defect.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Jun 22 18:08:19 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 22 Jun 2009 18:08:19 -0000 Subject: [Varnish] #522: Odd TCP reset problems with trunk 4080 Message-ID: <052.9e4f22a7767a39fc57cd165effa2904d@projects.linpro.no> #522: Odd TCP reset problems with trunk 4080 ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- I am running Varnish trunk 4080 on FreeBSD/amd64 7.2-RELEASE. After upgrading to 4080, we have had some strange issues where users on 3G (mobile) connections, or users surfing from abroad (like Thailand), cannot access www.aftenposten.no, where all Varnish servers run this version of Varnish. Tracking this problem, I have found that reverting the change in changeset 4046 (which changes the way client connections are closed) seems to fix it. From the client: the user tries to load http://www.aftenposten.no/ and gets an empty page or a page about the connection being closed. Tcpdump shows connections are closed by the cache server.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Jun 22 22:11:43 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 22 Jun 2009 22:11:43 -0000 Subject: [Varnish] #522: Odd TCP reset problems with trunk 4080 In-Reply-To: <052.9e4f22a7767a39fc57cd165effa2904d@projects.linpro.no> References: <052.9e4f22a7767a39fc57cd165effa2904d@projects.linpro.no> Message-ID: <061.e4cf7ac4bc5a4526634e8d998bb29b5c@projects.linpro.no> #522: Odd TCP reset problems with trunk 4080 ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by anders): For the record: I do use IP Filter 4.1.28, and the "bad cksum" in the tcpdump from the cache server seems odd. When testing the above, I used a MacBook with Mac OS and a Huawei 3G USB adapter to connect to the Internet. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Jun 23 07:54:35 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 23 Jun 2009 07:54:35 -0000 Subject: [Varnish] #523: How to stop varnish ? Message-ID: <050.9b142a981cab998af7ce19c0361c8e4f@projects.linpro.no> #523: How to stop varnish ? -------------------+-------------------------------------------------------- Reporter: ovi1 | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | -------------------+-------------------------------------------------------- Is there a command to stop varnish? Currently I'm using kill, but I would like to make a script for stopping varnish without using the kill command.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Jun 23 11:37:30 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 23 Jun 2009 11:37:30 -0000 Subject: [Varnish] #524: esi + keepalive + HTTP/1.0 hangs untill sess_timeout Message-ID: <054.3f587d287afea2cc76ead53a946033e5@projects.linpro.no> #524: esi + keepalive + HTTP/1.0 hangs untill sess_timeout ----------------------+----------------------------------------------------- Reporter: nicholas | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | ----------------------+----------------------------------------------------- When using HTTP/1.0 clients like wget or ab to test an ESI-parsed page, the client hangs until keepalive times out (sess_timeout, default 5s). No closing TCP flags after the content is sent, and no TCP traffic in the interval. Varnish closes the connection after 5s with FIN, ACK. With ESI parsing on we see no Content-Length header, which we guess is by design. Both ab and wget use HTTP/1.0. wget --no-http-keep-alive works as expected. wget --ignore-length works as expected. HTTP/1.1 clients work as expected. skarven:~# varnishd -V varnishd (varnish-2.0.4) Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS skarven:~# rpm -qa | grep varnish varnish-2.0.4-1.el5 varnish-libs-2.0.4-1.el5 Shout for more details or testing. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Jun 23 20:10:11 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 23 Jun 2009 20:10:11 -0000 Subject: [Varnish] #523: How to stop varnish ? In-Reply-To: <050.9b142a981cab998af7ce19c0361c8e4f@projects.linpro.no> References: <050.9b142a981cab998af7ce19c0361c8e4f@projects.linpro.no> Message-ID: <059.ca6b1e7cdc6d559de665c6f62c5b89b1@projects.linpro.no> #523: How to stop varnish ?
--------------------+------------------------------------------------------- Reporter: ovi1 | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+------------------------------------------------------- Comment (by kb): "varnishadm stop" will stop the child, but not the parent. I'm not aware of an alternative for killing the parent other than 'kill'. This is what I use right now, until "varnishadm die" or somesuch is implemented (requires "varnishd -P /foobar/varnishd.pid"). Unless we're both missing something? {{{
#!/bin/bash
export PATH=/usr/sbin:/usr/bin:/sbin:/bin
LISTEN_PORT=3333
MGMT_PORT=2222
PID_FILE=/foobar/varnishd.pid
echo -n "Gracefully stopping varnishd..."
/usr/local/bin/varnishadm -T :${MGMT_PORT} stop > /dev/null
if [ -f $PID_FILE ]; then
    kill `cat ${PID_FILE}`
    rm -f $PID_FILE
fi
c=30
stat=0
while [ $stat -eq 0 -a $c -gt 0 ]; do
    PIDS=`ps -ef | egrep "varnishd.*:${LISTEN_PORT}" | grep -v grep | awk '{print $2}'`
    if [ "${PIDS}" == '' ]; then
        stat=1
    else
        echo -n '.'
        if [ $c -gt 2 ]; then
            kill $PIDS
        else
            kill -9 $PIDS
        fi
        sleep 1
        let c--
    fi
done
if [ $stat -eq 0 ]; then
    echo 'FAILED!'
    exit -1
fi
echo 'Done.'
exit 0 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Jun 24 01:23:25 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 24 Jun 2009 01:23:25 -0000 Subject: [Varnish] #356: v00017.vtc fails on x86_64 In-Reply-To: <051.f99bddfcf0e83b27a73b71e0bb8abdaf@projects.linpro.no> References: <051.f99bddfcf0e83b27a73b71e0bb8abdaf@projects.linpro.no> Message-ID: <060.1dd245d8fc403ecc81cc3ff6deb19add@projects.linpro.no> #356: v00017.vtc fails on x86_64 -------------------------------+-------------------------------------------- Reporter: wiebe | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 2.0 Severity: normal | Resolution: Keywords: v00017.vtc x86_64 | -------------------------------+-------------------------------------------- Comment (by booi): Confirmed on CentOS 5.3 (RHEL 5.3) building Varnish 2.0.4 {{{ ---- v1 VCL compilation got 200 expected 106 ---- TEST FILE: ././tests/v00017.vtc ---- TEST DESCRIPTION: VCL compiler coverage test: vcc_acl.c FAIL: ./tests/v00017.vtc =============================================== 1 of 130 tests failed Please report to varnish-dev at projects.linpro.no =============================================== }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Jun 24 01:30:04 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 24 Jun 2009 01:30:04 -0000 Subject: [Varnish] #356: v00017.vtc fails on x86_64 In-Reply-To: <051.f99bddfcf0e83b27a73b71e0bb8abdaf@projects.linpro.no> References: <051.f99bddfcf0e83b27a73b71e0bb8abdaf@projects.linpro.no> Message-ID: <060.82dac8676f7955f4333b16b2722942e1@projects.linpro.no> #356: v00017.vtc fails on x86_64 -------------------------------+-------------------------------------------- Reporter: wiebe | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 2.0 Severity: normal | Resolution: Keywords: 
v00017.vtc x86_64 | -------------------------------+-------------------------------------------- Comment (by booi): Seems to build fine in Fedora 6 (Zod) on x86_64. Linux nexus3 2.6.18-1.2798.fc6 #1 SMP Mon Oct 16 14:39:22 EDT 2006 x86_64 x86_64 x86_64 GNU/Linux {{{ # top TEST ././tests/v00017.vtc starting # TEST VCL compiler coverage test: vcc_acl.c ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9091' -T 127.0.0.1:9011 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 3 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 # top RESETTING after ././tests/v00017.vtc ## v1 Stop ### v1 CLI STATUS 300 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 22911 Status: 0200 # top TEST ././tests/v00017.vtc completed PASS: ./tests/v00017.vtc ... 
==================== All 130 tests passed ==================== }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri Jun 26 18:17:22 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 26 Jun 2009 18:17:22 -0000 Subject: [Varnish] #518: Default backend health right after launch In-Reply-To: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> References: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> Message-ID: <058.b3c72d551fbc077d0eab27f4dd495d7c@projects.linpro.no> #518: Default backend health right after launch --------------------+------------------------------------------------------- Reporter: rts | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | --------------------+------------------------------------------------------- Comment (by kb): Replying to [comment:5 whocares]: Email me at kb + varnish at slide com :) Ken. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri Jun 26 19:19:17 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 26 Jun 2009 19:19:17 -0000 Subject: [Varnish] #310: WS_Reserve panic + error In-Reply-To: <049.b8602d04f753afe2eb5dcabb5e8fbfca@projects.linpro.no> References: <049.b8602d04f753afe2eb5dcabb5e8fbfca@projects.linpro.no> Message-ID: <058.50a67e26802f40107cfcc9e643a4247e@projects.linpro.no> #310: WS_Reserve panic + error ----------------------+----------------------------------------------------- Reporter: sky | Owner: phk Type: defect | Status: new Priority: lowest | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by kb): Just as a note to others, this issue also occurs if you set obj headers in vcl_fetch(). 
That's a good place to mark the cached object with new headers carrying useful information (say, the original URL if you're rewriting), so IMHO it would be nice to have this functionality at some point. -- Ticket URL: Varnish The Varnish HTTP Accelerator
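The pattern kb describes can be sketched in Varnish 2.0-era VCL, where vcl_fetch manipulates `obj` directly. This is a hypothetical config fragment; the header name and the rewrite scenario are illustrative, not from the ticket:

```vcl
sub vcl_fetch {
    # stash the original client URL on the cached object, before any
    # URL rewriting elsewhere in the config (header name is illustrative)
    set obj.http.X-Original-URL = req.url;
}
```

Note that each such header consumes object workspace, which is the limitation this ticket is about.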