From varnish-bugs at projects.linpro.no Sun Nov 1 14:26:36 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Sun, 01 Nov 2009 14:26:36 -0000
Subject: [Varnish] #572: Fix for: Create worker thread failed 11 Resource temporarily unavailable (Thread Problem on Linux)
Message-ID: <054.6c0a8b1caae77bb470ba56faa0e87948@projects.linpro.no>

#572: Fix for: Create worker thread failed 11 Resource temporarily unavailable (Thread Problem on Linux)
-------------------------+--------------------------------------------------
 Reporter:  whocares     |  Owner:  phk
     Type:  enhancement  |  Status:  new
 Priority:  normal       |  Milestone:
Component:  varnishd     |  Version:  trunk
 Severity:  normal       |  Keywords:
-------------------------+--------------------------------------------------
Today I tried kristian's [wiki:Performance] tips on a Linux box and ran into the same problem as was already mentioned in #85: depending on which box I tried, I never managed to get more than 238 to 302 threads.

The reason for that behaviour is that on Linux the number of threads that can be started is essentially limited by the stack size attached to every thread. In a standard configuration every thread gets 8 MByte of stack. Thus, in conjunction with other memory-related limits in a standard Linux environment, you'll never get more than around 240 to 300 threads. You can, however, modify a thread's stack size before creating it by doing something like this:

{{{
#include <limits.h>    /* defines PTHREAD_STACK_MIN (via bits/local_lim.h) */
#include <pthread.h>

pthread_attr_t attr;
pthread_t thread[1024*1024];
size_t size;

/* do some calculation for optimal stack size, or ... */
size = PTHREAD_STACK_MIN;
pthread_attr_init(&attr);
pthread_attr_setstacksize(&attr, size);
pthread_create(&thread[somecounter], &attr, function_to_run, NULL);
...
}}}

(function_to_run must have the pthread start-routine signature, void *function_to_run(void *); casting some other function type with (void *) only hides a type error.)

On Linux this will set the stack size to 16384 bytes, which of course is way too small, but it allows around 32,000 threads to be created on my test box.
If you'd like to run some tests yourself, I attach a sample C program to test different thread stack sizes on a Linux box. The code may well run on other platforms too, but I didn't test that. Anyway, it would be nice if varnish would set its threads' stack size to a reasonable value so that poor Linux users can enjoy at least some of the performance that is possible on FreeBSD and Solaris ;)

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Mon Nov 2 05:00:40 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Mon, 02 Nov 2009 05:00:40 -0000
Subject: [Varnish] #571: Assert error while setting header using inline C-code
In-Reply-To: <060.fb60b1b5d57304659e42557e698a3ac1@projects.linpro.no>
References: <060.fb60b1b5d57304659e42557e698a3ac1@projects.linpro.no>
Message-ID: <069.4dd1869af5236656668d8d261ecf7c1b@projects.linpro.no>

#571: Assert error while setting header using inline C-code
----------------------------+-----------------------------------------------
 Reporter:  maheshollalwar  |  Owner:  phk
     Type:  defect          |  Status:  new
 Priority:  normal          |  Milestone:
Component:  varnishd        |  Version:  2.0
 Severity:  critical        |  Resolution:
 Keywords:                  |
----------------------------+-----------------------------------------------
Comment (by maheshollalwar):

Thanks, it works... ~Mahesh.

Replying to [comment:1 kb]:
> VRT_SetHdr(sp, HDR_REQ, "\007R-CN:", country, vrt_magic_string_end);
>
> I think you want this:
>
> VRT_SetHdr(sp, HDR_REQ, "\005R-CN:", country, vrt_magic_string_end);
>
> Not a bug -- the initial number needs to accurately reflect the length of the header name, including the colon.
>
> Ken

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Mon Nov 2 20:38:10 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Mon, 02 Nov 2009 20:38:10 -0000
Subject: [Varnish] #572: Fix for: Create worker thread failed 11 Resource temporarily unavailable (Thread Problem on Linux)
In-Reply-To: <054.6c0a8b1caae77bb470ba56faa0e87948@projects.linpro.no>
References: <054.6c0a8b1caae77bb470ba56faa0e87948@projects.linpro.no>
Message-ID: <063.5b331c5ab4dbc8a958480f9cf3b2f298@projects.linpro.no>

#572: Fix for: Create worker thread failed 11 Resource temporarily unavailable (Thread Problem on Linux)
-------------------------+--------------------------------------------------
 Reporter:  whocares     |  Owner:  phk
     Type:  enhancement  |  Status:  new
 Priority:  normal       |  Milestone:
Component:  varnishd     |  Version:  trunk
 Severity:  normal       |  Resolution:
 Keywords:               |
-------------------------+--------------------------------------------------
Comment (by kb):

The "easy" way to avoid the 8 MB or 10 MB default stack-size penalty is to run "ulimit -s 1024" in the same process before starting varnishd, reducing the default stack to 1 MB. Putting the ulimit in the start script is the easiest way.

But some threads have a higher stack requirement than others. The threads that matter most for scale in Varnish (AFAICT) are the worker and backend threads, so it made sense to me that reducing their usage is most important, and that it was safest to reduce the stack size of only these threads.

A few months ago I wrote the attached patch, which adds a varnishd parameter to set the stack size for worker and backend threads. I've been using it in production for months as "-p thread_pool_stacksize=256". 256 KB is the smallest safe stack size on x86_64 in my experience, and it allows for plenty of threads.

This patch is against 2.0.4, but it applies without fuzz to trunk at 4339.
Ken

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Mon Nov 2 21:38:42 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Mon, 02 Nov 2009 21:38:42 -0000
Subject: [Varnish] #572: Fix for: Create worker thread failed 11 Resource temporarily unavailable (Thread Problem on Linux)
In-Reply-To: <054.6c0a8b1caae77bb470ba56faa0e87948@projects.linpro.no>
References: <054.6c0a8b1caae77bb470ba56faa0e87948@projects.linpro.no>
Message-ID: <063.c80bd2c14d5f3b470529828ac2e3a638@projects.linpro.no>

#572: Fix for: Create worker thread failed 11 Resource temporarily unavailable (Thread Problem on Linux)
-------------------------+--------------------------------------------------
 Reporter:  whocares     |  Owner:  phk
     Type:  enhancement  |  Status:  new
 Priority:  normal       |  Milestone:
Component:  varnishd     |  Version:  trunk
 Severity:  normal       |  Resolution:
 Keywords:               |
-------------------------+--------------------------------------------------
Comment (by whocares):

You again ;)

Actually, I meant to mention that I know about the 'ulimit -s xxx' trick, but I just can't use it in a very special environment where I'm allowed to install software but not to "change any environment setting", as those reviewing the installation worded it.

So, thanks again, and if you're still looking for some single malt, just drop me a note at stefan at whocares.de.
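For readers who can change the environment, the ulimit trick discussed above comes down to a start-script wrapper along these lines (the varnishd invocation is illustrative only and left commented out):

```shell
#!/bin/sh
# Lower the default stack size for the current process before starting
# varnishd; every thread it creates then reserves 1 MB of address space
# instead of the usual 8-10 MB distribution default.
ulimit -s 1024

echo "default stack size is now $(ulimit -s) kB"

# exec /usr/sbin/varnishd -f /etc/varnish/default.vcl "$@"
```

The limit is inherited by the child process, so nothing system-wide changes; only processes started through this script are affected.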
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Tue Nov 3 08:10:34 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Tue, 03 Nov 2009 08:10:34 -0000
Subject: [Varnish] #572: Fix for: Create worker thread failed 11 Resource temporarily unavailable (Thread Problem on Linux)
In-Reply-To: <054.6c0a8b1caae77bb470ba56faa0e87948@projects.linpro.no>
References: <054.6c0a8b1caae77bb470ba56faa0e87948@projects.linpro.no>
Message-ID: <063.ef865850f1da18e597f900d43ddcbf6d@projects.linpro.no>

#572: Fix for: Create worker thread failed 11 Resource temporarily unavailable (Thread Problem on Linux)
-------------------------+--------------------------------------------------
 Reporter:  whocares     |  Owner:  phk
     Type:  enhancement  |  Status:  new
 Priority:  normal       |  Milestone:
Component:  varnishd     |  Version:  trunk
 Severity:  normal       |  Resolution:
 Keywords:               |
-------------------------+--------------------------------------------------
Comment (by tfheen):

Replying to [ticket:572 whocares]:
> Today I tried kristian's [wiki:Performance] tips on a Linux box and
> ran into the same problem as was already mentioned in #85: Depending
> on what box I tried, I never managed to get more than 238 to 302
> threads.

This is because you're on a 32-bit platform, not intrinsically because of the stack size. The easy fix is just to upgrade to a 64-bit host.

> On Linux this will set the stack size to 16384 bytes which of course
> is way too small, but allows for around 32,000 threads to be created
> on my test box.

The reason you can't have more is that you run out of pids; there are only 32k of those. (I can create 32k-ish threads on my laptop with an 8 MB stack size just fine.) As kb noted in another report, ulimit -s is a workaround for this (even if I understand that isn't an option for you in this case).

I'll leave this bug open so we can take a look at it and decide whether or not we want to add such an option.
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Tue Nov 3 08:29:10 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Tue, 03 Nov 2009 08:29:10 -0000
Subject: [Varnish] #571: Assert error while setting header using inline C-code
In-Reply-To: <060.fb60b1b5d57304659e42557e698a3ac1@projects.linpro.no>
References: <060.fb60b1b5d57304659e42557e698a3ac1@projects.linpro.no>
Message-ID: <069.4a79fee036e4217f99088a2a9c461062@projects.linpro.no>

#571: Assert error while setting header using inline C-code
----------------------------+-----------------------------------------------
 Reporter:  maheshollalwar  |  Owner:  phk
     Type:  defect          |  Status:  closed
 Priority:  normal          |  Milestone:
Component:  varnishd        |  Version:  2.0
 Severity:  critical        |  Resolution:  fixed
 Keywords:                  |
----------------------------+-----------------------------------------------
Changes (by tfheen):

 * status: new => closed
 * resolution: => fixed

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Tue Nov 3 13:15:36 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Tue, 03 Nov 2009 13:15:36 -0000
Subject: [Varnish] #573: varnish 2.0.4 crash very
Message-ID: <054.92e032679685de4f04b606a82dee1454@projects.linpro.no>

#573: varnish 2.0.4 crash very
-------------------------+--------------------------------------------------
 Reporter:  adungaos     |      Type:  defect
   Status:  new          |  Priority:  highest
Milestone:               | Component:  build
  Version:  trunk        |  Severity:  normal
 Keywords:  crash panic  |
-------------------------+--------------------------------------------------
My varnishd crashed about 4-5 times today; please help me.
my os: {{{ $ cat /etc/redhat-release Red Hat Enterprise Linux Server release 5.2 (Tikanga) $ uname -a Linux s14.cache 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux $ varnishd -V varnishd (varnish-2.0.4) Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS }}} varnishd: {{{ /usr/sbin/varnishd -P /var/run/varnish.pid -u varnish -g varnish -a 0.0.0.0:80 -T 127.0.0.1:8888 -w 16,65535,120 -l 240m -p thread_pools 8 -p listen_depth 4096 -p lru_interval 3600 -h classic,50000 -s malloc,80G -f /etc/varnish/default.vcl }}} crash message: {{{ Nov 3 12:34:11 s14 varnishd[7894]: Child (7895) died signal=6 (core dumped) Nov 3 12:34:11 s14 varnishd[7894]: Child (7895) Panic message: Assert error in vsl_hdr(), shmlog.c line 85: Condition(id < 0x10000) not true. errno = 11 (Resource temporarily unavailable) thread = (cache-worker)sp = 0x2aad96f7f008 { fd = 65537, id = 65537, xid = 0, client = 122.4.141.17:21793, step = STP_FIRST, handling = error, ws = 0x2aad96f7f078 { id = "sess", {s,f,r,e} = {0x2aad96f7f808,,+19,(nil),+16384}, }, worker = 0x73838bd0 { }, }, Nov 3 12:34:11 s14 varnishd[7894]: child (12679) Started Nov 3 12:34:11 s14 varnishd[7894]: Child (12679) said Closed fds: 3 4 5 8 9 11 12 Nov 3 12:34:11 s14 varnishd[7894]: Child (12679) said Child starts Nov 3 12:34:11 s14 varnishd[7894]: Child (12679) said Ready Nov 2 12:51:28 s14 varnishd[7817]: Child (5197) not responding to ping, killing it. Nov 2 12:51:29 s14 varnishd[7817]: Child (5197) died signal=6 Nov 2 12:51:29 s14 varnishd[7817]: Child (5197) Panic message: Assert error in vsl_hdr(), shmlog.c line 85: Condition(id < 0x10000) not true. 
errno = 110 (Connection timed out) thread = (cache-worker)sp = 0x2aace8693008 { fd = 65536, id = 65536, xid = 0, client = 222.38.164.93:4339, step = STP_FIRST, handling = error, ws = 0x2aace8693078 { id = "sess", {s,f,r,e} = {0x2aace8693808,,+19,(nil),+16384}, }, worker = 0x2aab0b800bd0 { }, }, Nov 2 12:51:29 s14 varnishd[7817]: child (7409) Started Nov 2 12:51:29 s14 varnishd[7817]: Child (7409) said Closed fds: 3 4 5 8 9 11 12 Nov 2 12:51:29 s14 varnishd[7817]: Child (7409) said Child starts Nov 2 12:51:29 s14 varnishd[7817]: Child (7409) said Ready Nov 2 20:52:25 s14 kernel: possible SYN flooding on port 80. Sending cookies. Nov 2 12:53:23 s14 varnishd[7817]: Child (7409) died signal=6 Nov 2 12:53:23 s14 varnishd[7817]: Child (7409) Panic message: Assert error in vsl_hdr(), shmlog.c line 85: Condition(id < 0x10000) not true. errno = 104 (Connection reset by peer) thread = (cache-worker)sp = 0x2ab25869d008 { fd = 65542, id = 65542, xid = 0, client = 121.70.179.89:2513, step = STP_FIRST, handling = error, ws = 0x2ab25869d078 { id = "sess", {s,f,r,e} = {0x2ab25869d808,,+19,(nil),+16384}, }, worker = 0x2ab21bf00bd0 { }, }, Nov 2 12:53:23 s14 varnishd[7817]: child (10480) Started Nov 2 12:53:24 s14 varnishd[7817]: Child (10480) said Closed fds: 3 4 5 8 9 11 12 Nov 2 12:53:24 s14 varnishd[7817]: Child (10480) said Child starts Nov 2 12:53:24 s14 varnishd[7817]: Child (10480) said Ready }}} gdb backtrace: {{{ # gdb /usr/sbin/varnishd core.7895 GNU gdb Red Hat Linux (6.5-37.el5rh) Copyright (C) 2006 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu"...Using host libthread_db library "/lib64/libthread_db.so.1". 
Error while mapping shared library sections: ./vcl.1P9zoqAU.so: No such file or directory. Error while mapping shared library sections: ./vcl.FANefPfn.so: No such file or directory. Error while mapping shared library sections: ./vcl.OaGbHZXp.so: No such file or directory. Reading symbols from /usr/lib64/libvarnish.so.1...done. Loaded symbols for /usr/lib64/libvarnish.so.1 Reading symbols from /lib64/librt.so.1...done. Loaded symbols for /lib64/librt.so.1 Reading symbols from /usr/lib64/libvarnishcompat.so.1...done. Loaded symbols for /usr/lib64/libvarnishcompat.so.1 Reading symbols from /usr/lib64/libvcl.so.1...done. Loaded symbols for /usr/lib64/libvcl.so.1 Reading symbols from /lib64/libdl.so.2...done. Loaded symbols for /lib64/libdl.so.2 Reading symbols from /lib64/libpthread.so.0...done. Loaded symbols for /lib64/libpthread.so.0 Reading symbols from /lib64/libnsl.so.1...done. Loaded symbols for /lib64/libnsl.so.1 Reading symbols from /lib64/libm.so.6...done. Loaded symbols for /lib64/libm.so.6 Reading symbols from /lib64/libc.so.6...done. Loaded symbols for /lib64/libc.so.6 Reading symbols from /lib64/ld-linux-x86-64.so.2...done. Loaded symbols for /lib64/ld-linux-x86-64.so.2 Reading symbols from /lib64/libnss_files.so.2...done. Loaded symbols for /lib64/libnss_files.so.2 Error while reading shared library symbols: ./vcl.1P9zoqAU.so: No such file or directory. Error while reading shared library symbols: ./vcl.FANefPfn.so: No such file or directory. Error while reading shared library symbols: ./vcl.OaGbHZXp.so: No such file or directory. Error while reading shared library symbols: ./vcl.1P9zoqAU.so: No such file or directory. Error while reading shared library symbols: ./vcl.FANefPfn.so: No such file or directory. Error while reading shared library symbols: ./vcl.OaGbHZXp.so: No such file or directory. Core was generated by `/usr/sbin/varnishd -P /var/run/varnish.pid -u varnish -g varnish -a 0.0.0.0:80'. Program terminated with signal 6, Aborted. 
#0 0x0000003b6fe30155 in raise () from /lib64/libc.so.6 (gdb) bt #0 0x0000003b6fe30155 in raise () from /lib64/libc.so.6 #1 0x0000003b6fe31bf0 in abort () from /lib64/libc.so.6 #2 0x000000000041af57 in pan_ic (func=, file=, line=, cond=, err=0, xxx=) at cache_panic.c:317 #3 0x000000000042df39 in vsl_hdr (tag=SLT_SessionOpen, p=0x2b1f4b21b945
, len=29, id=65537) at shmlog.c:85 #4 0x000000000042eca3 in VSL (tag=SLT_SessionOpen, id=65537, fmt=0x43a7c5 "%s %s %s") at shmlog.c:168 #5 0x0000000000409503 in VCA_Prep (sp=0x2aad96f7f008) at cache_acceptor.c:127 #6 0x00000000004106f0 in CNT_Session (sp=0x2aad96f7f008) at cache_center.c:466 #7 0x000000000041c4f2 in wrk_do_cnt_sess (w=0x73838bd0, priv=) at cache_pool.c:398 #8 0x000000000041bbbf in wrk_thread (priv=0x2b1f4703c1f0) at cache_pool.c:310 #9 0x0000003b706062f7 in start_thread () from /lib64/libpthread.so.0 #10 0x0000003b6fed1b6d in clone () from /lib64/libc.so.6 (gdb) }}} thanks. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Nov 3 14:02:43 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 03 Nov 2009 14:02:43 -0000 Subject: [Varnish] #573: varnish 2.0.4 crash very In-Reply-To: <054.92e032679685de4f04b606a82dee1454@projects.linpro.no> References: <054.92e032679685de4f04b606a82dee1454@projects.linpro.no> Message-ID: <063.be0d0f8ea5984eca6813527f8796f621@projects.linpro.no> #573: varnish 2.0.4 crash very -------------------------+-------------------------------------------------- Reporter: adungaos | Owner: Type: defect | Status: new Priority: highest | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: crash panic | -------------------------+-------------------------------------------------- Comment (by stockrt): adungaos, Perhaps you can take a look at: http://varnish.projects.linpro.no/ticket/492 This seems to be the same problem I was facing when using the release 2.0.4. It seems to not close the file descriptors fast enough when you have a scenario with many clients. 
Another trick is to test this patch, which changes fewer things and enhances shmlog's ability to handle more clients at once: http://varnish.projects.linpro.no/changeset/4264

Best regards,
Rogério Schneider

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Tue Nov 3 15:18:20 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Tue, 03 Nov 2009 15:18:20 -0000
Subject: [Varnish] #572: Fix for: Create worker thread failed 11 Resource temporarily unavailable (Thread Problem on Linux)
In-Reply-To: <054.6c0a8b1caae77bb470ba56faa0e87948@projects.linpro.no>
References: <054.6c0a8b1caae77bb470ba56faa0e87948@projects.linpro.no>
Message-ID: <063.0d7775ca46764f60ca96ba93956dae5f@projects.linpro.no>

#572: Fix for: Create worker thread failed 11 Resource temporarily unavailable (Thread Problem on Linux)
-------------------------+--------------------------------------------------
 Reporter:  whocares     |  Owner:  phk
     Type:  enhancement  |  Status:  new
 Priority:  normal       |  Milestone:
Component:  varnishd     |  Version:  trunk
 Severity:  normal       |  Resolution:
 Keywords:               |
-------------------------+--------------------------------------------------
Comment (by whocares):

| The reason you can't have more is that you run out of pids. There are only 32k of those.
| (I can create 32k-ish threads on my laptop with 8MB stack size just fine.)

I'd say the number of available PIDs isn't the limit there. Even if I do a 'sysctl -w kernel.pid_max=4194303' to increase the number of available PIDs, I still end up with around 32k threads at most. OK, it allows roughly 600 more threads than the standard settings, but that is such a marginal improvement that it suggests I'm still running into some kind of memory limit there.

And yes, I'd love to use a 64-bit system. Unfortunately, this won't happen - at least not with this specific installation. Not in the foreseeable future anyway.
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Wed Nov 4 03:38:57 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Wed, 04 Nov 2009 03:38:57 -0000
Subject: [Varnish] #573: varnish 2.0.4 crash very
In-Reply-To: <054.92e032679685de4f04b606a82dee1454@projects.linpro.no>
References: <054.92e032679685de4f04b606a82dee1454@projects.linpro.no>
Message-ID: <063.a49140f01d5b97ede613d79ce1250741@projects.linpro.no>

#573: varnish 2.0.4 crash very
-------------------------+--------------------------------------------------
 Reporter:  adungaos     |    Owner:
     Type:  defect       |   Status:  new
 Priority:  highest      |  Milestone:
Component:  build        |  Version:  trunk
 Severity:  normal       |  Resolution:
 Keywords:  crash panic  |
-------------------------+--------------------------------------------------
Comment (by adungaos):

Thanks, I will try these patches and give more information.

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Wed Nov 4 08:12:03 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Wed, 04 Nov 2009 08:12:03 -0000
Subject: [Varnish] #572: Fix for: Create worker thread failed 11 Resource temporarily unavailable (Thread Problem on Linux)
In-Reply-To: <054.6c0a8b1caae77bb470ba56faa0e87948@projects.linpro.no>
References: <054.6c0a8b1caae77bb470ba56faa0e87948@projects.linpro.no>
Message-ID: <063.4dbfdda6fd06e39b4e0bf34212846d25@projects.linpro.no>

#572: Fix for: Create worker thread failed 11 Resource temporarily unavailable (Thread Problem on Linux)
-------------------------+--------------------------------------------------
 Reporter:  whocares     |  Owner:  phk
     Type:  enhancement  |  Status:  new
 Priority:  normal       |  Milestone:
Component:  varnishd     |  Version:  trunk
 Severity:  normal       |  Resolution:
 Keywords:               |
-------------------------+--------------------------------------------------
Comment (by whocares):

Just for completeness' sake: on my Ubuntu x86_64 workstation I can only start 511 threads with standard settings (tried as a normal user and as root). So the available number of PIDs isn't the limit on 64-bit (Ubuntu) Linux.

Regards,
Stefan

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Thu Nov 5 12:21:45 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Thu, 05 Nov 2009 12:21:45 -0000
Subject: [Varnish] #574: Comparing two headers
Message-ID: <051.d599e745e9d07ae7dacdc9310dbe5e5b@projects.linpro.no>

#574: Comparing two headers
-------------------------+--------------------------------------------------
 Reporter:  mikko        |  Owner:  phk
     Type:  enhancement  |  Status:  new
 Priority:  normal       |  Milestone:
Component:  varnishd     |  Version:  trunk
 Severity:  normal       |  Keywords:
-------------------------+--------------------------------------------------
Hello,

I am trying to achieve the following with VCL:

{{{
if (req.http.cookie ~ "country=[^;]+") {
    set req.http.X-Cookie-Country = regsub(req.http.Cookie, "country=([^;]+)", "\1");
    set req.http.X-Url-Country = regsub(req.url, "^/([a-zA-Z]{2})/", "\1");
    if (req.http.X-Cookie-Country != req.http.X-Url-Country) {
        error 750 req.http.X-Cookie-Country
    }
}
}}}

I am using Varnish 2.0.4, and the VCL compiler answers with the following message:

Message from VCC-compiler: Expected CSTR got 'req.http.X-Url-Country'

Error 750 is from VCLExamples, used to do a redirect.
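For what it's worth, the 2.0-era compiler only accepted a constant string (CSTR) on the right-hand side of a string comparison, which is what the error message is complaining about. One workaround used at the time was inline C. The sketch below is an assumption-laden illustration, not tested against 2.0.4: the octal prefixes are character counts of the header names ("\021" is 17 for "X-Cookie-Country:", "\016" is 14 for "X-Url-Country:", "\013" is 11 for "X-Mismatch:"), and the VRT_GetHdr()/VRT_SetHdr() usage is modelled on the inline-C discussion in ticket #571 above.

```vcl
sub vcl_recv {
  if (req.http.cookie ~ "country=[^;]+") {
    set req.http.X-Cookie-Country = regsub(req.http.Cookie, "country=([^;]+)", "\1");
    set req.http.X-Url-Country = regsub(req.url, "^/([a-zA-Z]{2})/", "\1");
    C{
      /* compare the two headers in C, since this VCL version cannot */
      const char *cc = VRT_GetHdr(sp, HDR_REQ, "\021X-Cookie-Country:");
      const char *uc = VRT_GetHdr(sp, HDR_REQ, "\016X-Url-Country:");
      if (cc != NULL && uc != NULL && strcmp(cc, uc) != 0)
        VRT_SetHdr(sp, HDR_REQ, "\013X-Mismatch:", "1", vrt_magic_string_end);
    }C
    if (req.http.X-Mismatch) {
      error 750 "country mismatch";
    }
  }
}
```

The final error statement then compiles, because its argument is a constant string.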
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Thu Nov 5 15:46:40 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Thu, 05 Nov 2009 15:46:40 -0000
Subject: [Varnish] #538: [varnish-2.0.4] Potential Memory Leak
In-Reply-To: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no>
References: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no>
Message-ID: <064.79b83a854cc7153cfb94902f56c74a3e@projects.linpro.no>

#538: [varnish-2.0.4] Potential Memory Leak
-------------------------------+--------------------------------------------
 Reporter:  pprocacci          |  Owner:  phk
     Type:  defect             |  Status:  new
 Priority:  high               |  Milestone:
Component:  varnishd           |  Version:  trunk
 Severity:  major              |  Resolution:
 Keywords:  Memory Leak 2.0.4  |
-------------------------------+--------------------------------------------
Comment (by olau):

I think I have the same leak. Varnish has been dying lately, apparently without reason; I had to set up a monitor process to restart it, in spite of the manager architecture. I'm using really simple VCL. I'm on a 32-bit 4-core Linux server (2.6.9). Here's top:

{{{
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
24092 nobody    16   0  586m 273m 269m S    0  3.4  1:01.59 varnishd
}}}

Note 273 MB resident.
And virtual memory is more than the 300 MB file allocated: {{{ /usr/sbin/varnishd -d -d -a :80 -T 127.0.0.1:6082 -t 15 -f /etc/varnish/iola.vcl -s file,/tmp/varnish_storage.bin,300M }}} Here's my .vcl, simple, two backends and a bit of regexp: {{{ backend default { .host = "xxx.xxx.xxx.xxx"; .port = "xxxx"; } backend lighttpd { .host = "xxx.xxx.xxx.xxx"; .port = "xxxx"; } sub vcl_recv { if (req.http.host ~ "media.*" || req.url ~ "^/media/") { set req.backend = lighttpd; pass; } if (req.http.host ~ "people.iola.dk") { set req.backend = lighttpd; pass; } if (req.request == "GET" && req.http.Cookie) { lookup; } } }}} Output from varnishstat -1, the cache looks like it's almost empty, only 266 kb allocated (low timeout, not much traffic), strangely there's actually a purge request in there (?), I'm sure I haven't added that: {{{ uptime 167638 . Child uptime client_conn 34227 0.20 Client connections accepted client_req 67452 0.40 Client requests received cache_hit 789 0.00 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 12605 0.08 Cache misses backend_conn 66680 0.40 Backend connections success backend_unhealthy 0 0.00 Backend connections not attempted backend_busy 0 0.00 Backend connections too many backend_fail 0 0.00 Backend connections failures backend_reuse 45604 0.27 Backend connections reuses backend_recycle 60551 0.36 Backend connections recycles backend_unused 0 0.00 Backend connections unused n_srcaddr 1022 . N struct srcaddr n_srcaddr_act 4 . N active struct srcaddr n_sess_mem 82 . N struct sess_mem n_sess 11 . N struct sess n_object 4 . N struct object n_objecthead 43 . N struct objecthead n_smf 12 . N struct smf n_smf_frag 3 . N small free smf n_smf_large 1 . N large free smf n_vbe_conn 3 . N struct vbe_conn n_bereq 22 . N struct bereq n_wrk 10 . 
N worker threads n_wrk_create 46 0.00 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 308 0.00 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 2 . N backends n_expired 12603 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 634 . N LRU moved objects n_deathrow 0 . N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 59979 0.36 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 34225 0.20 Total Sessions s_req 67468 0.40 Total Requests s_pipe 0 0.00 Total pipe s_pass 54074 0.32 Total pass s_fetch 66619 0.40 Total fetch s_hdrbytes 19646304 117.19 Total header bytes s_bodybytes 2546177291 15188.54 Total body bytes sess_closed 5558 0.03 Session Closed sess_pipeline 197 0.00 Session Pipeline sess_readahead 37 0.00 Session Read Ahead sess_linger 0 0.00 Session Linger sess_herd 61934 0.37 Session herd shm_records 5002142 29.84 SHM records shm_writes 423864 2.53 SHM writes shm_flushes 98 0.00 SHM flushes due to overflow shm_cont 130 0.00 SHM MTX contention shm_cycles 2 0.00 SHM cycles through buffer sm_nreq 127688 0.76 allocator requests sm_nobj 8 . outstanding allocations sm_balloc 266240 . bytes allocated sm_bfree 314306560 . bytes free sma_nreq 0 0.00 SMA allocator requests sma_nobj 0 . SMA outstanding allocations sma_nbytes 0 . SMA outstanding bytes sma_balloc 0 . SMA bytes allocated sma_bfree 0 . SMA bytes free sms_nreq 60 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 27900 . SMS bytes allocated sms_bfree 27900 . SMS bytes freed backend_req 66681 0.40 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 1 . 
N total active purges n_purge_add 1 0.00 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 0 0.00 N objects tested n_purge_re_test 0 0.00 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 0 0.00 HCB Lookups without lock hcb_lock 0 0.00 HCB Lookups with lock hcb_insert 0 0.00 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) }}} The log has some dead children: {{{ ... socket(): Address family not supported by protocol child (20094) Started Child (20094) said Closed fds: 4 6 9 10 12 13 Child (20094) said Child starts Child (20094) said managed to mmap 314572800 bytes of 314572800 Child (20094) said Ready Child (20094) not responding to ping, killing it. Child (20094) died signal=3 Child cleanup complete socket(): Address family not supported by protocol child (27662) Started Child (27662) said Closed fds: 4 6 9 10 12 13 Child (27662) said Child starts Child (27662) said managed to mmap 314572800 bytes of 314572800 Child (27662) said Ready Child (27662) not responding to ping, killing it. Child (27662) died signal=3 Child cleanup complete socket(): Address family not supported by protocol child (11809) Started Child (11809) said Closed fds: 4 6 9 10 12 13 Child (11809) said Child starts Child (11809) said managed to mmap 314572800 bytes of 314572800 Child (11809) said Ready Child (11809) not responding to ping, killing it. 
Child (11809) died signal=3
Child cleanup complete
socket(): Address family not supported by protocol
child (24092) Started
Child (24092) said Closed fds: 4 6 9 10 12 13
Child (24092) said Child starts
Child (24092) said managed to mmap 314572800 bytes of 314572800
Child (24092) said Ready
}}}

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Fri Nov 6 13:18:12 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Fri, 06 Nov 2009 13:18:12 -0000
Subject: [Varnish] #538: [varnish-2.0.4] Potential Memory Leak
In-Reply-To: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no>
References: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no>
Message-ID: <064.3e8ef63ef14f44bdcaf81650d002d0e1@projects.linpro.no>

#538: [varnish-2.0.4] Potential Memory Leak
-------------------------------+--------------------------------------------
 Reporter:  pprocacci          |  Owner:  phk
     Type:  defect             |  Status:  new
 Priority:  high               |  Milestone:
Component:  varnishd           |  Version:  trunk
 Severity:  major              |  Resolution:
 Keywords:  Memory Leak 2.0.4  |
-------------------------------+--------------------------------------------
Comment (by olau):

By the way, I'm using Varnish from a Debian package (a backport to lenny), and this is on a virtual server, but apparently I have the same problem on another, non-virtual server running Debian testing. Interestingly, it's got a purge too, and I don't know where it comes from. The pattern is the same: more virtual memory than allowed by the memory-mapped file, and much more resident memory than can be explained by the "bytes allocated" statistic.
--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Fri Nov 6 18:41:58 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Fri, 06 Nov 2009 18:41:58 -0000
Subject: [Varnish] #538: [varnish-2.0.4] Potential Memory Leak
In-Reply-To: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no>
References: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no>
Message-ID: <064.8a16f43cc6238aabf18c55c742f4e4bf@projects.linpro.no>

#538: [varnish-2.0.4] Potential Memory Leak
-------------------------------+--------------------------------------------
 Reporter:  pprocacci          |  Owner:  phk
     Type:  defect             |  Status:  new
 Priority:  high               |  Milestone:
Component:  varnishd           |  Version:  trunk
 Severity:  major              |  Resolution:
 Keywords:  Memory Leak 2.0.4  |
-------------------------------+--------------------------------------------
Comment (by kb):

A) purge will cause unbounded memory usage[[BR]]
B) pmap will show you where the memory is going. Look for thread stack allocations.[[BR]]
C) Have you tried compiling mainline 2.0.4 instead of the old Debian package?
D) I sense RL needs a "My Varnish is leaking" FAQ...[[BR]]
[[BR]]
Ken[[BR]]

--
Ticket URL:
Varnish
The Varnish HTTP Accelerator

From varnish-bugs at projects.linpro.no Sat Nov 7 13:13:19 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Sat, 07 Nov 2009 13:13:19 -0000
Subject: [Varnish] #575: Python Tools Enhancement
Message-ID: <055.4d7ec80ef230d3e2d5b296a8d0c0c2ad@projects.linpro.no>

#575: Python Tools Enhancement
--------------------------------+-------------------------------------------
 Reporter:  justquick           |      Type:  enhancement
   Status:  new                 |  Priority:  low
Milestone:                      | Component:  varnishadm
  Version:  trunk               |  Severity:  minor
 Keywords:  python enhancement  |
--------------------------------+-------------------------------------------
I have revamped the Python interface to the Varnish management port into a more robust application.
I have written a new module (python-varnish) and have posted it on github. Here are some of the features: * Uses `telnetlib` instead of raw sockets * Implements `threading` module * Can run commands across multiple Varnish instances * More comprehensive methods, closely matching the management API (purge_*, vcl_*, etc.) * Unittests The code is available at [http://github.com/justquick/python-varnish http://github.com/justquick/python-varnish] This implementation is currently in use in production for [http://www.washingtontimes.com The Washington Times] It is part of an upcoming release of a project of mine using this module in conjunction with [http://www.djangoproject.com the django web framework]. Python-Varnish is under the New BSD license and I would be willing to commit this module into your trunk. Contact me: justquick [@] gmail.com -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 9 13:23:25 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 09 Nov 2009 13:23:25 -0000 Subject: [Varnish] #538: [varnish-2.0.4] Potential Memory Leak In-Reply-To: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> References: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> Message-ID: <064.c315901f21a60d0595d2d7c043698b01@projects.linpro.no> #538: [varnish-2.0.4] Potential Memory Leak -------------------------------+-------------------------------------------- Reporter: pprocacci | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: Keywords: Memory Leak 2.0.4 | -------------------------------+-------------------------------------------- Comment (by olau): Ken: are you replying to me or the OP? Anyway, I'll answer as if it were for me. :) A) I don't have a purge in my VCL which as you can see is really simple. 
The stats seem to say that I have one active purge, but I don't know where it comes from.[[BR]] B) See pmap output below. I don't know what you make of it. :)[[BR]] C) The Debian package I'm using is 2.0.4. I compiled it myself to get debugging symbols. And I checked the Debian patches, they don't touch the source. pmap output: {{{ 08048000 324K r-x-- /usr/sbin/varnishd 08099000 8K rw--- /usr/sbin/varnishd 0809b000 64K rw--- [ anon ] 86a00000 4K ----- [ anon ] 86a01000 10236K rw--- [ anon ] 89c00000 4K ----- [ anon ] 89c01000 10236K rw--- [ anon ] 8b000000 4K ----- [ anon ] 8b001000 10236K rw--- [ anon ] 8e200000 4K ----- [ anon ] 8e201000 11260K rw--- [ anon ] 90b00000 1024K rw--- [ anon ] 90cff000 4K rw--- [ anon ] 90d00000 4K ----- [ anon ] 90d01000 10236K rw--- [ anon ] 92100000 4K ----- [ anon ] 92101000 10236K rw--- [ anon ] 93f00000 4K ----- [ anon ] 93f01000 11260K rw--- [ anon ] 95400000 4K ----- [ anon ] 95401000 14332K rw--- [ anon ] 962ff000 1028K rw--- [ anon ] 96400000 4K ----- [ anon ] 96401000 10236K rw--- [ anon ] 96e00000 4K ----- [ anon ] 96e01000 10236K rw--- [ anon ] 97800000 4K ----- [ anon ] 97801000 10236K rw--- [ anon ] 98c00000 4K ----- [ anon ] 98c01000 10236K rw--- [ anon ] 9a000000 4K ----- [ anon ] 9a001000 10236K rw--- [ anon ] 9aa00000 4K ----- [ anon ] 9aa01000 11260K rw--- [ anon ] 9b5fb000 4K ----- [ anon ] 9b5fc000 10236K rw--- [ anon ] 9bffb000 4K ----- [ anon ] 9bffc000 10236K rw--- [ anon ] 9c9fb000 16K r-x-- /var/lib/varnish/iola.dk/vcl.1P9zoqAU.so 9c9ff000 4K rw--- /var/lib/varnish/iola.dk/vcl.1P9zoqAU.so 9d400000 307200K rw-s- /tmp/varnish_storage.bin b0000000 1024K rw--- [ anon ] b01ff000 4K rw--- [ anon ] b0b00000 3072K rw--- [ anon ] b0eda000 4K ----- [ anon ] b0edb000 10236K rw--- [ anon ] b18da000 4K ----- [ anon ] b18db000 10236K rw--- [ anon ] b22da000 4K ----- [ anon ] b22db000 10236K rw--- [ anon ] b2cda000 81988K rw-s- /var/lib/varnish/iola.dk/_.vsl b7ceb000 36K r-x-- /lib/libnss_files-2.7.so b7cf4000 8K rw--- 
/lib/libnss_files-2.7.so b7cf6000 32K r-x-- /lib/libnss_nis-2.7.so b7cfe000 8K rw--- /lib/libnss_nis-2.7.so b7d00000 1024K rw--- [ anon ] b7e03000 28K r-x-- /lib/libnss_compat-2.7.so b7e0a000 8K rw--- /lib/libnss_compat-2.7.so b7e12000 4K rw--- [ anon ] b7e13000 1248K r-x-- /lib/libc-2.7.so b7f4b000 4K r---- /lib/libc-2.7.so b7f4c000 8K rw--- /lib/libc-2.7.so b7f4e000 16K rw--- [ anon ] b7f52000 144K r-x-- /lib/libm-2.7.so b7f76000 8K rw--- /lib/libm-2.7.so b7f78000 76K r-x-- /lib/libnsl-2.7.so b7f8b000 8K rw--- /lib/libnsl-2.7.so b7f8d000 8K rw--- [ anon ] b7f8f000 80K r-x-- /lib/libpthread-2.7.so b7fa3000 8K rw--- /lib/libpthread-2.7.so b7fa5000 8K rw--- [ anon ] b7fa7000 8K r-x-- /lib/libdl-2.7.so b7fa9000 8K rw--- /lib/libdl-2.7.so b7fab000 84K r-x-- /usr/lib/libvcl.so.1.0.0 b7fc0000 4K rw--- /usr/lib/libvcl.so.1.0.0 b7fc1000 4K r-x-- /usr/lib/libvarnishcompat.so.1.0.0 b7fc2000 4K rw--- /usr/lib/libvarnishcompat.so.1.0.0 b7fc3000 4K rw--- [ anon ] b7fc4000 28K r-x-- /lib/librt-2.7.so b7fcb000 8K rw--- /lib/librt-2.7.so b7fcd000 56K r-x-- /usr/lib/libvarnish.so.1.0.0 b7fdb000 4K rw--- /usr/lib/libvarnish.so.1.0.0 b7fe1000 12K rw--- [ anon ] b7fe4000 104K r-x-- /lib/ld-2.7.so b7ffe000 8K rw--- /lib/ld-2.7.so bffec000 72K rw--- [ stack ] ffffe000 4K r-x-- [ anon ] total 600664K }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 9 18:35:29 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 09 Nov 2009 18:35:29 -0000 Subject: [Varnish] #538: [varnish-2.0.4] Potential Memory Leak In-Reply-To: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> References: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> Message-ID: <064.476bd7f500fe8a6fac555755ff415c38@projects.linpro.no> #538: [varnish-2.0.4] Potential Memory Leak -------------------------------+-------------------------------------------- Reporter: pprocacci | Owner: phk Type: defect | Status: new Priority: high | Milestone: 
Component: varnishd | Version: trunk Severity: major | Resolution: Keywords: Memory Leak 2.0.4 | -------------------------------+-------------------------------------------- Comment (by kb): Your pmap shows that varnish is correctly allocating the 300MB you specified. The rest of the RAM is spent on the compiled config (always about 80MB in my experience) and 10MB of preallocated stack per worker thread. 10MB is apparently your distribution's default stacksize (see 'ulimit -a' or 'limit'). There's no leak here, just a fact of life with pthreads. Because you have a very small cache size, the overhead ratio is exaggerated. You can reduce the memory usage of each pthread by reducing your default stacksize. Run 'ulimit -s 1024' or 'limit stacksize 1024' before starting varnishd and you should see the memory use drop from 600MB to something like 450MB, and scaling out worker threads will become an order of magnitude less RAM-intensive. Like every other server out there that lets you specify RAM usage for a cache (e.g., MySQL, squid) there are many other factors that determine memory usage -- some within the app's control, some not. Managing MySQL memory usage is extremely similar to varnishd -- a base cache, thread- specific buffers, and the default stacksize that comes preallocated with every pthread. MySQL explicitly sets the stacksize for pthreads it creates to 128KB because of the high memory usage that on the surface looks like a leak. FWIW, I have a patch to varnishd that allows control of the worker threads' stacksize in much the same way MySQL controls it. See #572 if you're interested. 
Ken -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Nov 10 11:31:19 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 10 Nov 2009 11:31:19 -0000 Subject: [Varnish] #573: varnish 2.0.4 crash very In-Reply-To: <054.92e032679685de4f04b606a82dee1454@projects.linpro.no> References: <054.92e032679685de4f04b606a82dee1454@projects.linpro.no> Message-ID: <063.97d7baa10cf729c52dbe3cad9827dda7@projects.linpro.no> #573: varnish 2.0.4 crash very -------------------------+-------------------------------------------------- Reporter: adungaos | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: invalid Keywords: crash panic | -------------------------+-------------------------------------------------- Changes (by kristian): * priority: highest => normal * resolution: => invalid * status: new => closed * component: build => varnishd Comment: You are running out of file descriptors. Make sure your sess_timeout isn't too high and set up session_linger. This is default in 2.0.5. If you need further tuning advice, please consult the mailing list, as this is not a bug. (Or varnish at redpill-linpro.com for commercial support). 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Nov 10 13:11:40 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 10 Nov 2009 13:11:40 -0000 Subject: [Varnish] #576: Feature request: make check; better timing or more stable tests Message-ID: <052.ea532920bf3dd49821b6ebf7a71a7573@projects.linpro.no> #576: Feature request: make check; better timing or more stable tests -------------------------+-------------------------------------------------- Reporter: ingvar | Owner: Type: enhancement | Status: new Priority: low | Milestone: Component: build | Version: trunk Severity: normal | Keywords: -------------------------+-------------------------------------------------- When building rpm packages for epel, I have to run the package through Red Hat's build farm. They are running in chroots on virtualized environments (Xen instances for i386 and x86_64). When these builders have a bit of load, the "make check" stage fails randomly, probably because of timing issues. I had to rerun the epel4 build five times before it suddenly went through without errors. Here are a couple of examples: http://koji.fedoraproject.org/koji/getfile?taskID=1797309&name=build.log http://koji.fedoraproject.org/koji/getfile?taskID=1797266&name=build.log Yes, I *know* that make check is still not counted as "production ready" by phk, and may be omitted. But on the other hand, running a set of tests for known regressions and other bugs on several operating system versions and platforms (including ppc, sparc and s390x) can't be a bad thing. If you want to test this, give me a note. I can request scratch builds at any time. 
Ingvar -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Nov 10 13:13:49 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 10 Nov 2009 13:13:49 -0000 Subject: [Varnish] #538: [varnish-2.0.4] Potential Memory Leak In-Reply-To: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> References: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> Message-ID: <064.f35e960e41807746ce5836ef4307addb@projects.linpro.no> #538: [varnish-2.0.4] Potential Memory Leak -------------------------------+-------------------------------------------- Reporter: pprocacci | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: Keywords: Memory Leak 2.0.4 | -------------------------------+-------------------------------------------- Comment (by olau): Ken: Thanks for the explanation. The crashes might be something else then. I'm still waiting for it to die again and give me a back trace. It's weird the resident size is so large, though, but maybe the kernel just doesn't want to let go of the mmap'ed file. I've tried using a bucket load of memory in another process, but it doesn't seem to help. Oh well. I'll open another bug if I get a back trace. It bugs me why Varnish is being scaled with threads but that's another story. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 11 09:35:21 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 11 Nov 2009 09:35:21 -0000 Subject: [Varnish] #576: Feature request: make check; better timing or more stable tests In-Reply-To: <052.ea532920bf3dd49821b6ebf7a71a7573@projects.linpro.no> References: <052.ea532920bf3dd49821b6ebf7a71a7573@projects.linpro.no> Message-ID: <061.6a977b3530b55b4edea5a283df9d8389@projects.linpro.no> #576: Feature request: make check; better timing or more stable tests -------------------------+-------------------------------------------------- Reporter: ingvar | Owner: kristian Type: enhancement | Status: assigned Priority: low | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | -------------------------+-------------------------------------------------- Changes (by kristian): * owner: => kristian * status: new => assigned Comment: I've been meaning to take a look at this. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 11 11:43:11 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 11 Nov 2009 11:43:11 -0000 Subject: [Varnish] #402: send_timeout cause connections to be prematurely closed In-Reply-To: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> References: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> Message-ID: <062.021ec27bdaa708edba168cb0e17015f5@projects.linpro.no> #402: send_timeout cause connections to be prematurely closed --------------------------------------+------------------------------------- Reporter: havardf | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: send_timeout connections | --------------------------------------+------------------------------------- Comment (by kolo): Reporting the same issue on 2.0.5, Linux 2.6.28, Debian lenny. Varnish kills the connection after send_timeout even though the transfer did not hang for that long. We set send_timeout=20 and simulated a slow download with wget; the attached screen shows the result. It is reproducible with any value of send_timeout. The man page should be changed, or better, the behaviour. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Nov 12 01:22:09 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 12 Nov 2009 01:22:09 -0000 Subject: [Varnish] #577: 64bit Catch 22 on Solaris (gcc _builtin_xxx functions) Message-ID: <054.ec05b545ae9e3cb7a5d7d7f3ddfd3c8d@projects.linpro.no> #577: 64bit Catch 22 on Solaris (gcc _builtin_xxx functions) ----------------------+----------------------------------------------------- Reporter: whocares | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 2.0 | Severity: normal Keywords: | ----------------------+----------------------------------------------------- Until recently, I was using Sun Studio 12.1 to compile Varnish on Solaris 10/x86_64. This is no longer possible, because compilation will fail here: {{{ /opt/SUNWspro/bin/cc -xc99=all -fast -xtarget=opteron -xarch=sse3 -m64 -mt -Kpic -o .libs/varnishadm varnishadm.o ../../lib/libvarnish/.libs/libvarnish.so -lrt -lm ../../lib/libvarnishcompat/.libs/libvarnishcompat.so -lumem -lnsl -lsocket -R/opt/soft/varnish/lib Undefined first referenced symbol in file __builtin_frame_address ../../lib/libvarnishcompat/.libs/libvarnishcompat.so __builtin_return_address ../../lib/libvarnishcompat/.libs/libvarnishcompat.so ld: fatal: Symbol referencing errors. No output written to .libs/varnishadm }}} This is quite understandable since `__builtin_frame_address` and `__builtin_return_address` are functions specific to `gcc`. I'm not really happy with these close ties to `gcc` but, well, that's not for me to decide anyway. Ok, so I tried to compile using `gcc 3.4.6` from !SunFreeware.com. This '''will''' work but only for 32-bit binaries. The `gcc` version supplied by !SunFreeware.com just isn't capable of building 64-bit binaries. So I did what is "general practice" to build 64-bit binaries on Solaris with `gcc`: I used the `gcc 3.4.3` supplied by Sun with Solaris 10. 
The problem there: {{{ libtool: link: gcc -std=gnu99 -g -O2 -m64 -o .libs/varnishadm varnishadm.o -L/usr/sfw/lib/64 ../../lib/libvarnish/.libs/libvarnish.so -lrt -lm ../../lib/libvarnishcompat/.libs/libvarnishcompat.so -lnsl -lsocket -R/opt/soft/varnish/lib -R/usr/sfw/lib/64 Undefined first referenced symbol in file __builtin_isfinite ../../lib/libvarnish/.libs/libvarnish.so }}} At this point I decided to build `gcc 4.4.2` from scratch. That seemed to work, however now I'm stuck with Varnish not compiling the initial VCL and not starting its child processes. I already tried setting VCC_CC to different values but couldn't make it work. Any ideas? I could quite easily provide access to my dev environment if that would help / be of interest. Ah, lest I forget: The reason I need to use 64bit binaries is that Varnish shall run on a system where it is to use 6 GB of memory based cache. And that'll only work with a 64bit binary. Regards, Stefan -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Nov 12 09:04:32 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 12 Nov 2009 09:04:32 -0000 Subject: [Varnish] #538: [varnish-2.0.4] Potential Memory Leak In-Reply-To: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> References: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> Message-ID: <064.6fcbef254734b5ed2416945a176555f5@projects.linpro.no> #538: [varnish-2.0.4] Potential Memory Leak -------------------------------+-------------------------------------------- Reporter: pprocacci | Owner: phk Type: defect | Status: closed Priority: high | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: invalid Keywords: Memory Leak 2.0.4 | -------------------------------+-------------------------------------------- Changes (by tfheen): * status: new => closed * resolution: => invalid Comment: Replying to [comment:10 kb]: > Your pmap shows that varnish is correctly allocating the 300MB 
you specified. The rest of the RAM is spent on the compiled config (always about 80MB in my experience) and 10MB of preallocated stack per worker thread. 10MB is apparently your distribution's default stacksize (see 'ulimit -a' or 'limit'). The 80MB is the log file that is mmap-ed into the memory space, not the configuration. The configuration looks like: 00007ff4cda31000 16K r-x-- /tmp/_v1/vcl.1P9zoqAU.so 00007ff4cda35000 2044K ----- /tmp/_v1/vcl.1P9zoqAU.so 00007ff4cdc34000 4K rw--- /tmp/_v1/vcl.1P9zoqAU.so I wouldn't worry about the RSS, it's irrelevant and is just a small overhead for the kernel to keep track of some extra pages. The only exception is if you're on a 32 bit platform. I'll close this bug as I believe kb has shown quite well where the memory is going and we don't have any unaccounted for. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Nov 12 20:11:14 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 12 Nov 2009 20:11:14 -0000 Subject: [Varnish] #577: 64bit Catch 22 on Solaris (gcc _builtin_xxx functions) In-Reply-To: <054.ec05b545ae9e3cb7a5d7d7f3ddfd3c8d@projects.linpro.no> References: <054.ec05b545ae9e3cb7a5d7d7f3ddfd3c8d@projects.linpro.no> Message-ID: <063.4f7053314d085ff0f6ccafb7ba34e256@projects.linpro.no> #577: 64bit Catch 22 on Solaris (gcc _builtin_xxx functions) ----------------------+----------------------------------------------------- Reporter: whocares | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 2.0 Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by IgorMinar): I face the same issue. So far I was not able to work around it. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri Nov 13 15:00:03 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 13 Nov 2009 15:00:03 -0000 Subject: [Varnish] #578: Regression in 2.0.5: Segfault while processing a page with dozens of ESI fragments Message-ID: <050.70f564bb62c021fc9b16f766071bc640@projects.linpro.no> #578: Regression in 2.0.5: Segfault while processing a page with dozens of ESI fragments --------------------------+------------------------------------------------- Reporter: kali | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: major Keywords: esi segfault | --------------------------+------------------------------------------------- While processing pages with numerous {{{ <esi:include> }}} tags, varnish crashes with a segmentation fault. We use literally hundreds of ESI fragments to compose some of our pages. I have tracked the error to cache_vrt_esi.c:384. This code duplicates the included fragment URI before "fixing" its URL. When there are too many fragments in the same object, the object workspace, which is used here, overflows and WS_Alloc returns NULL. This returned value is not checked, so the memcpy segfaults with very little useful diagnostic information. I'm not sure about a possible configuration workaround by increasing obj_workspace by several orders of magnitude, but this does not sound right to me. I thought it would be better to use the session_workspace to store these URLs, as space there is less expensive. I have set up a vtc test that works with 2.0.4 but crashes 2.0.5. It artificially reduces obj_workspace to 2048, to crash with a few dozen ESI includes, but it shows the difference in behaviour between 2.0.4 and 2.0.5. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 16 09:35:11 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 16 Nov 2009 09:35:11 -0000 Subject: [Varnish] #577: 64bit Catch 22 on Solaris (gcc _builtin_xxx functions) In-Reply-To: <054.ec05b545ae9e3cb7a5d7d7f3ddfd3c8d@projects.linpro.no> References: <054.ec05b545ae9e3cb7a5d7d7f3ddfd3c8d@projects.linpro.no> Message-ID: <063.519b13bf30c192126e2f6f870eeeb64a@projects.linpro.no> #577: 64bit Catch 22 on Solaris (gcc _builtin_xxx functions) ----------------------+----------------------------------------------------- Reporter: whocares | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 2.0 Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: (In [4349]) Hide GCC specific backtrace() compat function under a #ifdef. We do not want to be dependent on GCC. 
Fixes #577 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 16 09:54:26 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 16 Nov 2009 09:54:26 -0000 Subject: [Varnish] #402: send_timeout cause connections to be prematurely closed In-Reply-To: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> References: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> Message-ID: <062.d1c396301c3c2de2085d39db26f9bf9b@projects.linpro.no> #402: send_timeout cause connections to be prematurely closed --------------------------------------+------------------------------------- Reporter: havardf | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: worksforme Keywords: send_timeout connections | --------------------------------------+------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: Your test is not valid, because SO_SNDTIMEO works on a per-packet basis. 20 bytes per second, will only amount to 400 bytes in 20 seconds, and 400 bytes is below the payload in the 576 byte minimum MTU, so the timeout should fire. If I run wget with --limit-rate=20 (on FreeBSD), the initial Tcp packet has only 512 bytes (slow-start ?) and the test fails because wget does not reopen the TCP window within 20 seconds. If I increase --limit-rate to 40, the connection does not get cut off, because a packet of data is transferred more often than 20 seconds. The intent behind send_timeout, is to prevent a client from holding a worker thread hostage, due to bugs, malicious intent or network trouble, and for all I can see, it works as it should. Try for instance to fetch a multi-MB file with wget and --limit-rate=1000 and then CTRL-Z the wget. After send_timeout, the connection is broken, as it should be. 
It can be argued that different behaviours should be implemented, depending on whether the client sends TCP ACKs with a shut window or does not respond at all, but this is not possible within the POSIX definition of the socket API, and would likely just change the semantics of DoS attacks accordingly. Poul-Henning -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 16 10:07:33 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 16 Nov 2009 10:07:33 -0000 Subject: [Varnish] #578: Regression in 2.0.5: Segfault while processing a page with dozens of ESI fragments In-Reply-To: <050.70f564bb62c021fc9b16f766071bc640@projects.linpro.no> References: <050.70f564bb62c021fc9b16f766071bc640@projects.linpro.no> Message-ID: <059.76f15ebbbd525db7817990215c51e6dd@projects.linpro.no> #578: Regression in 2.0.5: Segfault while processing a page with dozens of ESI fragments --------------------------+------------------------------------------------- Reporter: kali | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: major | Resolution: Keywords: esi segfault | --------------------------+------------------------------------------------- Comment (by phk): We cannot use the session workspace; the rewritten strings need to be stored with the object. The correct solution is to increase the object workspace, but as you say, this is not optimal. In -trunk we have obj_workspace=0, causing exact allocation, but your test-case revealed that this does not actually take ESI processing into account, so I will leave this ticket open to track that issue. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 16 10:10:44 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 16 Nov 2009 10:10:44 -0000 Subject: [Varnish] #578: Lots of ESI:include breaks obj_workspace=0 In-Reply-To: <050.70f564bb62c021fc9b16f766071bc640@projects.linpro.no> References: <050.70f564bb62c021fc9b16f766071bc640@projects.linpro.no> Message-ID: <059.bc3bf61dc285215bc3ac3d00ecfa0e06@projects.linpro.no> #578: Lots of ESI:include breaks obj_workspace=0 --------------------------+------------------------------------------------- Reporter: kali | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: major | Resolution: Keywords: esi segfault | --------------------------+------------------------------------------------- Changes (by phk): * summary: Regression in 2.0.5: Segfault while processing a page with dozens of ESI fragments => Lots of ESI:include breaks obj_workspace=0 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 16 12:09:46 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 16 Nov 2009 12:09:46 -0000 Subject: [Varnish] #402: send_timeout cause connections to be prematurely closed In-Reply-To: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> References: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> Message-ID: <062.a20e0b4fbf464f3dc6dafa52c2e0359a@projects.linpro.no> #402: send_timeout cause connections to be prematurely closed --------------------------------------+------------------------------------- Reporter: havardf | Owner: phk Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: send_timeout connections | --------------------------------------+------------------------------------- Changes (by kolo): * status: closed => reopened * resolution: worksforme 
=> Comment: I have retested as recommended, and the result is that on Linux 2.6.28 (Debian lenny) it does not work as supposed: the connection is always cut off after send_timeout. The strange thing is that when I run the wget test straight from the proxy itself, the connection survives past send_timeout, but when I run that test from any other machine (local net or internet), the connection is always cut off after send_timeout. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 16 12:23:38 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 16 Nov 2009 12:23:38 -0000 Subject: [Varnish] #402: send_timeout cause connections to be prematurely closed In-Reply-To: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> References: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> Message-ID: <062.008c3775deec8f4659449a70a55e9502@projects.linpro.no> #402: send_timeout cause connections to be prematurely closed --------------------------------------+------------------------------------- Reporter: havardf | Owner: phk Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: send_timeout connections | --------------------------------------+------------------------------------- Comment (by phk): Please capture a tcpdump of the failing connection. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 16 12:43:16 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 16 Nov 2009 12:43:16 -0000 Subject: [Varnish] #402: send_timeout cause connections to be prematurely closed In-Reply-To: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> References: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> Message-ID: <062.7aebd94f5f6e4af0687544403c487a70@projects.linpro.no> #402: send_timeout cause connections to be prematurely closed --------------------------------------+------------------------------------- Reporter: havardf | Owner: phk Type: defect | Status: reopened Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: send_timeout connections | --------------------------------------+------------------------------------- Comment (by kolo): wget -t1 --limit-rate=1000 http://www.soccer1.com//res/image/bookmaker- list.png wget error message: "Connection reset by peer" send_timeout=120 tcpdump attached -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 16 12:44:04 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 16 Nov 2009 12:44:04 -0000 Subject: [Varnish] #578: Lots of ESI:include breaks obj_workspace=0 In-Reply-To: <050.70f564bb62c021fc9b16f766071bc640@projects.linpro.no> References: <050.70f564bb62c021fc9b16f766071bc640@projects.linpro.no> Message-ID: <059.eb752295f5808190e5d76179c2062f30@projects.linpro.no> #578: Lots of ESI:include breaks obj_workspace=0 --------------------------+------------------------------------------------- Reporter: kali | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: major | Resolution: fixed Keywords: esi segfault | --------------------------+------------------------------------------------- Changes (by phk): * status: new => closed 
* resolution: => fixed Comment: (In [4351]) Rework ESI storage allocation. Previously we stored the ESI metadata in the object workspace, but since we are trying to make that a snug fit and we cannot pre-estimate how much space ESI parsing will need, this no longer works. Instead, parse the ESI metadata into the worker's workspace and, when done, allocate a storage object and move it all into that. Beware that this may increase the memory cost for ESI objects by the stevedore's granularity. Fixes #578 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 16 12:53:57 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 16 Nov 2009 12:53:57 -0000 Subject: [Varnish] #402: send_timeout cause connections to be prematurely closed In-Reply-To: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> References: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> Message-ID: <062.96628df90b5f168235d6cfa00ce03e32@projects.linpro.no> #402: send_timeout cause connections to be prematurely closed --------------------------------------+------------------------------------- Reporter: havardf | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: worksforme Keywords: send_timeout connections | --------------------------------------+------------------------------------- Changes (by phk): * status: reopened => closed * resolution: => worksforme Comment: Sorry, but it *does* work the way it should. SO_SNDTIMEO is an option to set a timeout value for output operations. It accepts a struct timeval argument with the number of seconds and microseconds used to limit waits for output operations to complete. If a send operation has blocked for this much time, it returns with a partial count or with the error EWOULDBLOCK if no data were sent. If delivering the result takes longer than send_timeout, we give up. 
That is why the default is 10 minutes. We can readily agree that it is not the behaviour we really want or need, but it is the only behaviour POSIX and the kernels offer us. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 16 13:10:55 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 16 Nov 2009 13:10:55 -0000 Subject: [Varnish] #402: send_timeout cause connections to be prematurely closed In-Reply-To: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> References: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> Message-ID: <062.f235c6bdca9a0b9b0d26c3ff4190ae7a@projects.linpro.no> #402: send_timeout cause connections to be prematurely closed --------------------------------------+------------------------------------- Reporter: havardf | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: worksforme Keywords: send_timeout connections | --------------------------------------+------------------------------------- Comment (by kolo): no offense ... it just doesn't correspond with the description of this param in varnishadm param.show: Send timeout for client connections. If no data has been sent to the client in this many seconds, the session is closed.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 16 20:42:56 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 16 Nov 2009 20:42:56 -0000 Subject: [Varnish] #402: send_timeout cause connections to be prematurely closed In-Reply-To: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> References: <053.2ab0215f9043e6511535c570cbb343ac@projects.linpro.no> Message-ID: <062.3d70f089b60cf219a897ada3b983c3ea@projects.linpro.no> #402: send_timeout cause connections to be prematurely closed --------------------------------------+------------------------------------- Reporter: havardf | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: worksforme Keywords: send_timeout connections | --------------------------------------+------------------------------------- Comment (by stockrt): The only problem is with the description when it says "If no data has been sent to the client". 
This has already been discussed in http://www.mail-archive.com/varnish-misc at projects.linpro.no/msg02975.html Best regards, Rogério Schneider -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 16 23:56:13 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 16 Nov 2009 23:56:13 -0000 Subject: [Varnish] #520: check_varnish parameters truncated to signed int In-Reply-To: <048.662d5ec30a8056ba3f757638023873bd@projects.linpro.no> References: <048.662d5ec30a8056ba3f757638023873bd@projects.linpro.no> Message-ID: <057.58cf8845a5e4b625439d7b100263f4df@projects.linpro.no> #520: check_varnish parameters truncated to signed int --------------------+------------------------------------------------------- Reporter: kb | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: nagios | Version: 2.0 Severity: major | Resolution: Keywords: | --------------------+------------------------------------------------------- Comment (by kb): check_varnish seems to have been revamped in 2.0.5 trunk, and this patch seems no longer necessary. Ken -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 16 23:59:37 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 16 Nov 2009 23:59:37 -0000 Subject: [Varnish] #579: check_varnish fails to compile in 2.0.5 Message-ID: <048.fc77c855f804212f8e17a8ff29ea4015@projects.linpro.no> #579: check_varnish fails to compile in 2.0.5 -------------------+-------------------------------------------------------- Reporter: kb | Type: defect Status: new | Priority: normal Milestone: | Component: nagios Version: trunk | Severity: normal Keywords: | -------------------+-------------------------------------------------------- In file included from check_varnish.c:197: /usr/local/encap/varnish-2.0.5-kb4/include/varnish/stat_field.h:32:68: error: macro "MAC_STAT" requires 5 arguments, but only 4 given Simple patch attached.
Also worth noting that the 2.0.4 check_varnish returns results for incorrect parameters when run against 2.0.5, so existing nagios monitors silently start breaking with the 2.0.5 upgrade. Ken -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Nov 17 17:36:09 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 17 Nov 2009 17:36:09 -0000 Subject: [Varnish] #580: Esoteric path problem on Solaris 10 using SMF (unable to load compiled VCL file) Message-ID: <054.91ebe0b9d9311fee756ec8e43964bc02@projects.linpro.no> #580: Esoteric path problem on Solaris 10 using SMF (unable to load compiled VCL file) ----------------------+----------------------------------------------------- Reporter: whocares | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Keywords: Solaris SMF ----------------------+----------------------------------------------------- First of all: Sorry, this is going to be looooong. '''The short question:''' Are there any differences between running Varnish from the command line compared to running Varnish from Solaris' Service Management Facility that immediately come to mind? Because I'm currently tearing my hair out because it won't work using SMF. '''The long explanation:''' After applying the patch from r4349 I was able to compile Varnish using !SunStudio, but now I'm running into another problem that maybe also affected my unsuccessful tries when using GCC 4.4.2. In short, `varnishd` seems to be unable to find its compiled VCL on startup. This only happens when I try to use Solaris' Service Management Facility (SMF) to run Varnish. Starting from the command line works fine.
Ok, here's what I tried until now: Compile settings: {{{ cd varnish-2.0.5 XTARGET="opteron -xarch=sse3" export CFLAGS="-fast -xtarget=${XTARGET} -m64 -mt -Kpic" export CPPFLAGS="-fast -xtarget=${XTARGET} -m64 -mt -Kpic" export LDFLAGS="-fast -xtarget=${XTARGET} -m64 -mt -Kpic -lumem" export VCC_CC="vcc %o %s" if [ -f config.log ]; then gmake distclean 2>&1 >> /dev/null fi ./configure --prefix=/opt/soft \ --sysconfdir=/opt/conf \ }}} As you can see, this sets VCC_CC to an external script `vcc`, the contents of which is: {{{ # env > /tmp/vcc.env pwd >> /tmp/vcc.env cc -fast -xtarget=opteron -xarch=sse3 -m64 -mt -Kpic -c -o $1 $2 cp $1 /tmp/$1.sru cp $2 /tmp/$2.sru }}} This is just to capture some of the environment variables and to copy the files used to a safe location so that I can see that `varnishd` actually compiled something. Now, when I start varnish using this commandline: {{{ /opt/soft/varnish/sbin/varnishd -F -a 192.168.27.33:80 -T 192.168.27.33:6082 -t 1800 -w 50,1000,120 -s file,/data/varnish/varnish_store.bin,4G -p obj_workspace=16384 -f /opt/conf/varnish/www.vcl }}} I'll get this output from `varnishlog`: {{{ #root at soldevamd:~# /opt/soft/varnish/bin/varnishlog 0 WorkThread - fffffd7ff97e0d80 start 0 CLI - Rd vcl.load boot ./vcl.ORk8t3RP.so 0 CLI - Wr 0 200 Loaded "./vcl.ORk8t3RP.so" as "boot" 0 CLI - Rd vcl.use boot 0 Backend_health - lb01 Back healthy --------H 1 1 2 0.000000 0.000000 0 CLI - Wr 0 200 0 CLI - Rd start 0 Debug - "Acceptor is ports" 0 CLI - Wr 0 200 0 Backend_health - lb01 Still healthy 4--X-S-RH 2 1 2 0.002183 0.001091 HTTP/1.1 200 OK 0 WorkThread - fffffd7f78bfed80 start 0 WorkThread - fffffd7f589fed80 start 0 WorkThread - fffffd7f387fed80 start (... many more WorkThreads ...) 
}}} Also, I get some files in `/tmp` created by the `vcc` script: {{{ #root at soldevamd:~# ls -al /tmp/vc* -rw-r--r-- 1 root root 1022 Nov 17 18:02 /tmp/vcc.env -rw------- 1 root root 45322 Nov 17 18:02 /tmp/vcl.ORk8t3RP.c.sru -rw-r--r-- 1 root root 37224 Nov 17 18:02 /tmp/vcl.ORk8t3RP.so.sru #root at soldevamd:~# }}} And the `.so` will also be in the expected location: {{{ #root at soldevamd:~# ls -al /opt/soft/varnish/var/varnish/soldevamd/ Gesamt 318 -rw-r--r-- 1 root root 83952688 Nov 17 18:03 _.vsl drwxr-xr-x 2 root root 512 Nov 17 18:02 . drwxr-xr-x 4 root root 512 Nov 17 17:33 .. -rw-r--r-- 1 root root 37224 Nov 17 18:02 vcl.ORk8t3RP.so #root at soldevamd:~# }}} So far, so good. However, when using SMF, the following happens: Once I start Varnish using `svcadm enable varnish` the following will show up in the output from `varnishlog`: {{{ #root at soldevamd:~# /opt/soft/varnish/bin/varnishlog 0 WorkThread - fffffd7ff97e0d80 start 0 CLI - Rd vcl.load boot ./vcl.ORk8t3RP.so 0 CLI - Wr 0 106 dlopen(./vcl.ORk8t3RP.so): ld.so.1: varnishd: Schwerer Fehler: ./vcl.ORk8t3RP.so: Öffnen fehlgeschlagen: Datei oder Verzeichnis nicht gefunden 0 CLI - EOF on CLI connection, worker stops }}} At first I thought this was due to an error while compiling the VCL. But as it turns out, according to the `vcc` script an `.so` file was actually generated and even the correct one: {{{ #root at soldevamd:~# ls -al /tmp/vc* -rw-r--r-- 1 root root 679 Nov 17 18:06 /tmp/vcc.env -rw------- 1 root root 45322 Nov 17 18:06 /tmp/vcl.ORk8t3RP.c.sru -rw-r--r-- 1 root root 37224 Nov 17 18:06 /tmp/vcl.ORk8t3RP.so.sru #root at soldevamd:~# }}} It just doesn't happen to make it to where `varnishd` tries looking for it: {{{ #root at soldevamd:~# ls -al /opt/soft/varnish/var/varnish/soldevamd/ Gesamt 244 -rw-r--r-- 1 root root 83952688 Nov 17 18:03 _.vsl drwxr-xr-x 2 root root 512 Nov 17 18:06 . drwxr-xr-x 4 root root 512 Nov 17 17:33 ..
#root at soldevamd:~# }}} The strangest thing is that when I telnet to `varnishd` and make it load the VCL manually, it will at least compile it: {{{ #root at soldevamd:~# telnet 192.168.27.33 6082 Trying 192.168.27.33... Connected to 192.168.27.33. Escape character is '^]'. vcl.list 200 23 active N/A boot vcl.show boot 300 138 failed to load boot: ld.so.1: varnishd: Schwerer Fehler: ./vcl.ORk8t3RP.so: Öffnen fehlgeschlagen: Datei oder Verzeichnis nicht gefunden vcl.load test /opt/conf/varnish/www.vcl 200 13 VCL compiled. vcl.list 200 46 active N/A boot available N/A test vcl.use test 200 0 vcl.list 200 46 available N/A boot active N/A test vcl.show test 200 2861 backend lb01 { .host = "127.0.0.1"; .port = "80"; .probe = { .url = "/balance.html"; .timeout = 100 ms; .interval = 5s; .window = 2; .threshold = 1; } } (... rest of VCL deleted ...) }}} The most notable thing about this is that although it will compile, it won't be usable, as is already denoted by the `N/A` flag in the output of `vcl.list` - even though this time the `.so` _will_ get created: {{{ #root at soldevamd:~# ls -al /opt/soft/varnish/var/varnish/soldevamd/ Gesamt 350 -rw-r--r-- 1 root root 83952688 Nov 17 18:23 _.vsl drwxr-xr-x 2 root root 512 Nov 17 18:24 . drwxr-xr-x 4 root root 512 Nov 17 17:33 .. -rw-r--r-- 1 root root 37224 Nov 17 18:24 vcl.hndxcCNb.so #root at soldevamd:~# }}} I'm at a loss as to what to try next to make it work using SMF. Well, thanks for reading this far. If you've got any idea or if there's some additional info you need, just let me know.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Nov 17 18:54:34 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 17 Nov 2009 18:54:34 -0000 Subject: [Varnish] #580: Esoteric path problem on Solaris 10 using SMF (unable to load compiled VCL file) In-Reply-To: <054.91ebe0b9d9311fee756ec8e43964bc02@projects.linpro.no> References: <054.91ebe0b9d9311fee756ec8e43964bc02@projects.linpro.no> Message-ID: <063.c111bc16cfd2bdd8edc8f3a3a7d66aac@projects.linpro.no> #580: Esoteric path problem on Solaris 10 using SMF (unable to load compiled VCL file) -------------------------+-------------------------------------------------- Reporter: whocares | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: Solaris SMF | -------------------------+-------------------------------------------------- Comment (by whocares): Ok, turns out the difference is whether I'm running Varnish in the foreground or as a daemon. Still, 2.0.4 works flawlessly with exactly the same settings. Will investigate further. Since even changing the compiler to GCC doesn't help I suspect it's a more general problem with Solaris that's at work here. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Nov 17 20:45:15 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 17 Nov 2009 20:45:15 -0000 Subject: [Varnish] #581: struct acct stats counted improperly in 2.0.5 Message-ID: <049.1d1d34d794e869189da7c82cf71e6068@projects.linpro.no> #581: struct acct stats counted improperly in 2.0.5 ----------------------+----------------------------------------------------- Reporter: tgr | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Keywords: acct stats ----------------------+----------------------------------------------------- struct acct ("Total ...") requests are counted improperly in varnishd (visible in the CLI and in varnishstat), for example: - 700 client_req / s - 8 s_req / s - 30 backend_req / s - 1 s_fetch / s etc., in varnishd 2.0.5 on Linux (amd64). It did not occur with 2.0.4. The libraries match varnishd, there are no leftovers from a previous installation. It happens with a vanilla 2.0.5 compiled from the tarball and with ssm's official Debian binary package. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Nov 17 20:57:31 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 17 Nov 2009 20:57:31 -0000 Subject: [Varnish] #581: struct acct stats counted improperly in 2.0.5 In-Reply-To: <049.1d1d34d794e869189da7c82cf71e6068@projects.linpro.no> References: <049.1d1d34d794e869189da7c82cf71e6068@projects.linpro.no> Message-ID: <058.0774d43c64a2db0da4b8ec854f7bd941@projects.linpro.no> #581: struct acct stats counted improperly in 2.0.5 ------------------------+--------------------------------------------------- Reporter: tgr | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: acct stats | ------------------------+--------------------------------------------------- Comment (by kb): Is there something in error about those numbers? The stats I'm seeing in 2.0.5 are correct (on x86_64, BTW). Ken -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 09:51:32 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 09:51:32 -0000 Subject: [Varnish] #582: Varhish crash with no error code Message-ID: <054.6288692149fe22877a0c7692d2babcec@projects.linpro.no> #582: Varhish crash with no error code ----------------------+----------------------------------------------------- Reporter: doserror | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: varnishd (varnish-2.0.4) Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS ----------------------+----------------------------------------------------- So here is the log from /var/log/messages {{{ Nov 18 09:41:40 tv2varnish varnishd[14004]: Child (14507) said Ready Nov 18 09:43:23 tv2varnish varnishd[14004]: Child (14507) died signal=6 Nov 18 09:43:23 tv2varnish varnishd[14004]: child 
(14655) Started Nov 18 09:43:23 tv2varnish varnishd[14004]: Child (14655) said Closed fds: 4 5 6 7 11 12 14 15 Nov 18 09:43:23 tv2varnish varnishd[14004]: Child (14655) said Child starts Nov 18 09:43:23 tv2varnish varnishd[14004]: Child (14655) said managed to mmap 2147479552 bytes of 2147479552 Nov 18 09:43:23 tv2varnish varnishd[14004]: Child (14655) said Ready Nov 18 10:09:11 tv2varnish varnishd[14004]: Child (14655) died signal=6 Nov 18 10:09:11 tv2varnish varnishd[14004]: child (15202) Started Nov 18 10:09:11 tv2varnish varnishd[14004]: Child (15202) said Closed fds: 4 5 6 7 11 12 14 15 Nov 18 10:09:11 tv2varnish varnishd[14004]: Child (15202) said Child starts Nov 18 10:09:11 tv2varnish varnishd[14004]: Child (15202) said managed to mmap 2147479552 bytes of 2147479552 Nov 18 10:09:11 tv2varnish varnishd[14004]: Child (15202) said Ready Nov 18 10:14:34 tv2varnish varnishd[14004]: Child (15202) died signal=6 Nov 18 10:14:34 tv2varnish varnishd[14004]: child (15406) Started Nov 18 10:14:34 tv2varnish varnishd[14004]: Child (15406) said Closed fds: 4 5 6 7 11 12 14 15 Nov 18 10:14:34 tv2varnish varnishd[14004]: Child (15406) said Child starts Nov 18 10:14:34 tv2varnish varnishd[14004]: Child (15406) said managed to mmap 2147479552 bytes of 2147479552 Nov 18 10:14:34 tv2varnish varnishd[14004]: Child (15406) said Ready Nov 18 10:20:46 tv2varnish varnishd[14004]: Child (15406) died signal=6 Nov 18 10:20:46 tv2varnish varnishd[14004]: child (15621) Started Nov 18 10:20:47 tv2varnish varnishd[14004]: Child (15621) said Closed fds: 4 5 6 7 11 12 14 15 Nov 18 10:20:47 tv2varnish varnishd[14004]: Child (15621) said Child starts Nov 18 10:20:47 tv2varnish varnishd[14004]: Child (15621) said managed to mmap 2147479552 bytes of 2147479552 Nov 18 10:20:47 tv2varnish varnishd[14004]: Child (15621) said Ready }}} , varnishd {{{ /usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -s 
file,/var/lib/varnish/tv2varnish/varnish_storage.bin,4G }}} , the OS {{{ alex at tv2varnish:/$ cat /etc/debian_version 5.0.3 alex at tv2varnish:/$ uname -a Linux tv2varnish 2.6.26-1-686 #1 SMP Fri Mar 13 18:08:45 UTC 2009 i686 GNU/Linux }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 09:55:31 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 09:55:31 -0000 Subject: [Varnish] #582: Varhish crash with no error code In-Reply-To: <054.6288692149fe22877a0c7692d2babcec@projects.linpro.no> References: <054.6288692149fe22877a0c7692d2babcec@projects.linpro.no> Message-ID: <063.aad6f40946c84a379b69c17e1f141ec7@projects.linpro.no> #582: Varhish crash with no error code ------------------------------------------------------------------------------------------+ Reporter: doserror | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: varnishd (varnish-2.0.4) Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS | ------------------------------------------------------------------------------------------+ Comment (by doserror): Please help, Thanks in advance -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 11:35:39 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 11:35:39 -0000 Subject: [Varnish] #582: Varhish crash with no error code In-Reply-To: <054.6288692149fe22877a0c7692d2babcec@projects.linpro.no> References: <054.6288692149fe22877a0c7692d2babcec@projects.linpro.no> Message-ID: <063.5430cdcfd0c0ee240b7aeb3cadde1461@projects.linpro.no> #582: Varhish crash with no error code ------------------------------------------------------------------------------------------+ Reporter: doserror | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | 
Resolution: Keywords: varnishd (varnish-2.0.4) Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS | ------------------------------------------------------------------------------------------+ Comment (by phk): Please try to run it in the foreground with "-d -d" argument, to see if that gives further details. Does it serve any traffic before it dies ? Please look for a core dump and try to get us a backtrace. (see http://varnish.projects.linpro.no/wiki/DebuggingVarnish) Poul-Henning -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 11:40:45 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 11:40:45 -0000 Subject: [Varnish] #576: Feature request: make check; better timing or more stable tests In-Reply-To: <052.ea532920bf3dd49821b6ebf7a71a7573@projects.linpro.no> References: <052.ea532920bf3dd49821b6ebf7a71a7573@projects.linpro.no> Message-ID: <061.156aa0d02a6cb0aa9d8e4a23a0f9d473@projects.linpro.no> #576: Feature request: make check; better timing or more stable tests -------------------------+-------------------------------------------------- Reporter: ingvar | Owner: kristian Type: enhancement | Status: assigned Priority: low | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | -------------------------+-------------------------------------------------- Comment (by phk): I can't see the two logs you linked to above. We need to know at least which tests are failing. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 11:47:50 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 11:47:50 -0000 Subject: [Varnish] #574: Comparing two headers In-Reply-To: <051.d599e745e9d07ae7dacdc9310dbe5e5b@projects.linpro.no> References: <051.d599e745e9d07ae7dacdc9310dbe5e5b@projects.linpro.no> Message-ID: <060.af350bddcb73bfb1752a0adb521adecd@projects.linpro.no> #574: Comparing two headers -------------------------+-------------------------------------------------- Reporter: mikko | Owner: phk Type: enhancement | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: | -------------------------+-------------------------------------------------- Changes (by phk): * version: trunk => 2.0 Comment: This is fixed in -trunk, not sure if was merged to 2.0.5 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 12:34:40 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 12:34:40 -0000 Subject: [Varnish] #572: Fix for: Create worker thread failed 11 Resource temporarily unavailable (Thread Problem on Linux) In-Reply-To: <054.6c0a8b1caae77bb470ba56faa0e87948@projects.linpro.no> References: <054.6c0a8b1caae77bb470ba56faa0e87948@projects.linpro.no> Message-ID: <063.64631f771e3c083e3eae1e06370bc056@projects.linpro.no> #572: Fix for: Create worker thread failed 11 Resource temporarily unavailable (Thread Problem on Linux) -------------------------+-------------------------------------------------- Reporter: whocares | Owner: phk Type: enhancement | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | -------------------------+-------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: (In [4352]) Add a 
parameter to set the workerthread stacksize. On 32 bit systems, it may be necessary to tweak this down to get high numbers of worker threads squeezed into the address-space. I have no idea how much stack-space a worker thread normally uses, so no guidance is given, and we default to the system default. Fixes #572 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 13:37:22 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 13:37:22 -0000 Subject: [Varnish] #583: child process crash and restart Message-ID: <054.6ad29a913ee64de1d2e55153f8ccf5c9@projects.linpro.no> #583: child process crash and restart ----------------------+----------------------------------------------------- Reporter: nidosaur | Owner: phk Type: defect | Status: new Priority: normal | Milestone: After Varnish 2.1 Component: varnishd | Version: 2.0 Severity: normal | Keywords: 2.0.4 2.0.5 ----------------------+----------------------------------------------------- Debian Lenny kernel 2.6.26-2-686-bigmem #1 SMP on DELL R710 8Go RAM Varnish was installed in 2.0.5 in production. 
Memory Consumption was 1.2Go / 8Go and regulary we got message sayinhg child died /var/log/messages {{{ Nov 18 11:01:38 front1 varnishd[2533]: child (2540) Started Nov 18 11:01:38 front1 varnishd[2533]: Child (2540) said Closed fds: 3 5 6 7 10 11 13 14 Nov 18 11:01:38 front1 varnishd[2533]: Child (2540) said Child starts Nov 18 11:01:38 front1 varnishd[2533]: Child (2540) said managed to mmap 2147479552 bytes of 2147479552 Nov 18 11:01:38 front1 varnishd[2533]: Child (2540) said Ready Nov 18 11:11:57 front1 varnishd[2533]: Child (2540) died signal=3 Nov 18 11:11:57 front1 varnishd[2533]: child (4201) Started Nov 18 11:11:57 front1 varnishd[2533]: Child (4201) said Closed fds: 3 5 6 7 10 11 13 14 Nov 18 11:11:57 front1 varnishd[2533]: Child (4201) said Child starts Nov 18 11:11:57 front1 varnishd[2533]: Child (4201) said managed to mmap 2147479552 bytes of 2147479552 Nov 18 11:11:57 front1 varnishd[2533]: Child (4201) said Ready Nov 18 11:18:23 front1 varnishd[2533]: CLI 8 open from telnet 127.0.0.1:44949 127.0.0.1:6082 Nov 18 11:18:29 front1 varnishd[2533]: CLI 8 result 200 "help" Nov 18 11:18:41 front1 varnishd[2533]: CLI 8 result 200 "param.show" Nov 18 11:19:24 front1 varnishd[2533]: Child (4201) died signal=6 Nov 18 11:19:24 front1 varnishd[2533]: child (5212) Started Nov 18 11:19:25 front1 varnishd[2533]: Child (5212) said Closed fds: 3 5 6 7 8 11 12 14 16 Nov 18 11:19:25 front1 varnishd[2533]: Child (5212) said Child starts Nov 18 11:19:25 front1 varnishd[2533]: Child (5212) said managed to mmap 2147479552 bytes of 2147479552 Nov 18 11:19:25 front1 varnishd[2533]: Child (5212) said Ready Nov 18 11:23:13 front1 varnishd[2533]: CLI 8 result 500 "quit" Nov 18 11:23:13 front1 varnishd[2533]: CLI 8 closed Nov 18 11:26:48 front1 varnishd[2533]: Child (5212) said Memory exhaustedCLI result = 106 Nov 18 11:26:51 front1 varnishd[2533]: Child (5212) said Memory exhaustedCLI result = 106 Nov 18 11:26:53 front1 varnishd[2533]: Child (5212) said Memory exhaustedCLI result 
= 106 Nov 18 11:27:08 front1 varnishd[2533]: Child (5212) said Memory exhaustedCLI result = 106 Nov 18 11:27:13 front1 varnishd[2533]: Child (5212) said Memory exhaustedCLI result = 106 Nov 18 11:27:24 front1 varnishd[2533]: Child (5212) said Memory exhaustedCLI result = 106 Nov 18 11:27:36 front1 varnishd[2533]: Child (5212) died signal=6 Nov 18 11:27:36 front1 varnishd[2533]: child (6344) Started Nov 18 11:27:36 front1 varnishd[2533]: Child (6344) said Closed fds: 3 5 6 7 10 11 13 14 Nov 18 11:27:36 front1 varnishd[2533]: Child (6344) said Child starts Nov 18 11:27:36 front1 varnishd[2533]: Child (6344) said managed to mmap 2147479552 bytes of 2147479552 Nov 18 11:27:36 front1 varnishd[2533]: Child (6344) said Ready Nov 18 11:40:31 front1 varnishd[2533]: Child (6344) died signal=6 Nov 18 11:40:31 front1 varnishd[2533]: child (8013) Started Nov 18 11:40:31 front1 varnishd[2533]: Child (8013) said Closed fds: 3 5 6 7 10 11 13 14 Nov 18 11:40:31 front1 varnishd[2533]: Child (8013) said Child starts Nov 18 11:40:31 front1 varnishd[2533]: Child (8013) said managed to mmap 2147479552 bytes of 2147479552 Nov 18 11:40:31 front1 varnishd[2533]: Child (8013) said Ready }}} /var/log/syslog with varnish 2.0.5 {{{ Nov 18 08:45:59 front1 varnishd[8364]: Child (8365) Panic message: Assert error in add_objexp(), cache_expire.c line 114:#012 Condition((o->objexp) != 0) not true.#012errno = 12 (Cannot allocate memory)#012thread = (cache- worker)#012Backtrace:#012 0x80693b1: pan_ic+13a#012 0x805e63d: add_objexp +1b9#012 0x805eb86: EXP_Insert+12b#012 0x805a17e: cnt_fetch+550#012 0x805c121: CNT_Session+665#012 0x806ae57: wrk_do_cnt_sess+158#012 0x806a89a: wrk_thread+51c#012 0xb7ec24c0: _end+afe0c4f8#012sp = 0x22d15004 {#012 fd = 321, id = 321, xid = 1406669065,#012 client = 195.93.102.42:44774,#012 step = STP_FETCH,#012 handling = pass,#012 err_code = 200, err_reason = (null),#012 restarts = 0, esis = 0#012 ws = 0x22d15050 { #012 id = " sess",#012 {s,f,r,e} = 
{0x22d15544,+419,(nil),+16384},#012 },#012 http[req] = {#012 ws = 0x22d15050[sess]#012 "HEAD",#012 "/img/blo gs/ico_cine.jpg",#012 "HTTP/1.1",#012 "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.43 22; MSN Optimized;FR; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)",#012 "Accept-Encoding: gzip, deflate",#012 "Via: HTT P/1.1 cache-prs-ab10.proxy.aol.com[C35D662A] (Traffic-Server/6.1.5 [uScM])",#012 "Host: ANONYMOUS"",#012 },#012 worker = 0x403f138#01 2 vcl = {#012 srcname = {#012 "input",#012 "Default",#012 },#012 },#012 obj = 0x8f7d8000 {#012 refcnt = 2, xid = 14 06669065,#012 ws = 0x8f7d8018 { #012 id = "obj",#012 {s,f,r,e} = {0x8f7d81f4,+192,(nil),+7692},#012 },#012 http[obj] = {#012 ws = 0x8f7d8018[obj]#012 "HTTP/1.1",#012 "200",#012 "OK",#012 "Date: Wed, 18 Nov 2009 07:45:59 GMT",#012 "Server: Apache",#012 "Last-Modified: Wed, 09 Jul 2008 10:05:20 GMT",#012 "ETag: "24a9a0-1972-4519472361800"",#012 "Content-Type: image/j peg",#012 "Content-Length: 6514",#012 },#012 len = 6514,#012 store = {#012 6514 {#012 Nov 18 08:45:59 front1 varnishd[8364]: Child cleanup complete Panic message: Missing errorhandling code in alloc_smf(), storage_file.c line 427:#012 Condition((sp2) != 0) not true.errno = 12 (Cannot allocate memory)#012thread = (cache-worker)#012Backtrace:#012 0x80693b1: pan_ic+13a#012 0x80846c9: alloc_ smf+255#012 0x80857a2: smf_alloc+c9#012 0x8082cd1: STV_alloc+7d#012 0x8061a46: HSH_Prealloc+38c#012 0x8062599: HSH_Lookup+299#012 0x805a7f8: cnt _lookup+1a6#012 0x805c079: CNT_Session+5bd#012 0x806ae57: wrk_do_cnt_sess+158#012 0x806a89a: wrk_thread+51c#012sp = 0x1b810004 {#012 fd = 104, id = 104, xid = 899199840,#012 client = 83.193.205.241:53623,#012 step = STP_LOOKUP,#012 handling = hash,#012 err_code = 200, err_reason = (null),# 012 restarts = 0, esis = 0#012 ws = 0x1b810050 { #012 id = "sess",#012 {s,f,r,e} = {0x1b810544,+935,(nil),+16384},#012 },#012 http[req] = { #012 ws = 
0x1b810050[sess]#012 "GET",#012 "bottom_bloc_bleu_long.gif",#012 "HTTP/1.1",#012 "Accept: */*",#012 "Referer: http://ANONYMOUS",#012 "Accept-Language: fr",#012 "User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Wi ndows NT 6.0; Trident/4.0; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30618)",#012 "Accept-Encoding: gzip, deflate",#012 "Host: www.ANONYMOUS",#012 "Connection: Keep- Alive",#012 },#012 worker = 0x230fb138#012 vcl = {#012 srcname = {#012 "input",#012 "Default",#012 },#012 },#012},#012 Nov 18 08:58:32 front1 varnishd[8364]: Child cleanup complete Nov 18 08:58:32 front1 varnishd[8364]: child (16931) Started Nov 18 08:58:32 front1 varnishd[8364]: Child (16931) said Closed fds: 3 5 6 7 10 11 13 14 Nov 18 08:58:32 front1 varnishd[8364]: Child (16931) said Child starts Nov 18 08:58:32 front1 varnishd[8364]: Child (16931) said managed to mmap 2147479552 bytes of 2147479552 Nov 18 08:58:32 front1 varnishd[8364]: Child (16931) said Ready Nov 18 09:07:09 front1 varnishd[8364]: Child (16931) died signal=6 Nov 18 09:07:09 front1 varnishd[8364]: Child (16931) Panic message: Missing errorhandling code in VBE_NewConn(), cache_backend.c line 214:#012 Condi tion((vc) != 0) not true.errno = 12 (Cannot allocate memory)#012thread = (cache-worker)#012Backtrace:#012 0x80693b1: pan_ic+13a#012 0x8052f70: VBE_ NewConn+123#012 0x80538d4: VBE_GetVbe+373#012 0x805dffa: vdi_simple_getfd+1ca#012 0x8053550: VBE_GetFd+17e#012 0x8060e90: Fetch+38d#012 0x8059e2 e: cnt_fetch+200#012 0x805c121: CNT_Session+665#012 0x806ae57: wrk_do_cnt_sess+158#012 0x806a89a: wrk_thread+51c#012sp = 0x20b6f004 {#012 fd = 32 9, id = 329, xid = 1613131390,#012 client = 90.1.41.205:2341,#012 step = STP_FETCH,#012 handling = fetch,#012 err_code = 200, err_reason = (null) ,#012 restarts = 0, esis = 0#012 ws = 0x20b6f050 { #012 id = "sess",#012 {s,f,r,e} = {0x20b6f544,+1162,(nil),+16384},#012 },#012 http[req] = {#012 ws = 0x20b6f050[sess]#012 "GET",#012 "86.jpg.jpg",#012 
"HTTP/1.1",#012 "Host: www.ANONYMOUS",#012 "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; fr; rv:1.9.0.1 5) Gecko/2009101601 Firefox/3.0.15",#012 "Accept: image/png,image/*;q=0.8,*/*;q=0.5",#012 "Accept-Language: fr,fr- fr;q=0.8,en-us;q=0.5,en;q =0.3",#012 "Accept-Encoding: gzip,deflate",#012 "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7",#012 "Keep-Alive: 300",#012 "Con nection: keep-alive",#012 "Referer: http://www.ANONYMOUS",#012 },#012 worker = 0x1a4fc138#012 v cl = {#012 srcname = {#012 "input",#012 "Default",#012 },#012 },#012 obj = 0x82ff8000 {#012 refcnt = 1, xid = 16131313 90,#012 ws = 0x82ff8018 { #012 id = "obj",#012 {s,f,r,e} = {0x82ff81f4,0x82ff81f4,(nil),+7692},#012 },#012 http[obj] = {#012 ws = 0x82ff8018[obj]#012 },#012 len = 0,#012 store = {#012 },#012 },#012},#012 Nov 18 09:07:09 front1 varnishd[8364]: Child cleanup complete Nov 18 09:07:09 front1 varnishd[8364]: child (18071) Started }}} /var/log/syslog with varnish 2.0.4 {{{ Nov 18 13:36:41 front1 varnishd[2809]: Child (2810) Panic message: Assert error in add_objexp(), cache_expire.c line 114:#012 C ondition((o->objexp) != 0) not true. 
errno = 12 (Cannot allocate memory) thread = (cache-worker)sp = 0x16756004 {#012 fd = 11 7, id = 117, xid = 1158858687,#012 client = 90.34.68.204:49789,#012 step = STP_FETCH,#012 handling = deliver,#012 err_code = 200, err_reason = (null),#012 ws = 0x1675604c { #012 id = "sess",#012 {s,f,r,e} = {0x16756534,,+1348,(nil),+16384},#012 },#012 worker = 0x8047138 {#012 },#012 vcl = {#012 srcname = {#012 "input",#012 "Default",#012 },#012 },#012 obj = 0x71db0000 {#012 refcnt = 2, xid = 1158858687,#012 ws = 0x71db0018 { #012 id = "obj",#012 {s,f,r,e} = {0x71db01ec,,+191,(nil),+7700},#012 },#012 http = {#012 ws = 0x71db0018 { #012 id = "obj",#012 {s,f,r,e} = {0x71db01ec,,+191,(nil),+7700},#012 },#012 hd = {#012 "Date: Wed, 18 Nov 2009 12:36:41 GMT", #012 "Server: Apache",#012 "Last-Modified: Wed, 18 Nov 2009 05:42:34 GMT",#012 "ETag: "55f751-db8-4789eb59d 6680"",#012 "Content-Type: image/jpeg",#012 "Content-Length: 3512",#012 },#012 },#012 len = 3512,#012 store = {#012 3512 {#012 ff d8 ff e0 00 10 4a 46 49 46 00 01 01 00 00 01 |......JFIF......|#012 00 01 00 00 ff fe 00 3b 43 52 45 41 54 4f 52 3a |.......;CREATOR:|#012 20 67 64 2d 6a 70 65 67 20 76 31 2e 30 20 28 75 | gd-jpeg v1. 0 (u|#012 73 69 6e 67 20 49 4a 47 20 4a 50 45 47 20 76 36 |sing IJG JPEG v6|#012 [3448 more]#012 },#012 }, #012 },#012},#012 Nov 18 13:40:27 front1 varnishd[2809]: Child (6271) died signal=6 Nov 18 13:40:27 front1 varnishd[2809]: Child (6271) Panic message: Missing errorhandling code in HSH_Copy(), cache_hash.c line 1 60:#012 Condition((oh->hash) != 0) not true. 
errno = 12 (Cannot allocate memory) thread = (cache-worker)sp = 0x1b74c004 {#012 fd = 418, id = 418, xid = 373195725,#012 client = 195.5.200.5:3803,#012 step = STP_LOOKUP,#012 handling = hash,#012 ws = 0x1b74c04c { #012 id = "sess",#012 {s,f,r,e} = {0x1b74c534,,+1286,(nil),+16384},#012 },#012 worker = 0xbe7ff138 {#012 },#012 vcl = {#012 srcname = {#012 "input",#012 "Default",#012 },#012 },#012},#012 Nov 18 13:40:27 front1 varnishd[2809]: Child cleanup complete Nov 18 13:40:27 front1 varnishd[2809]: child (6754) Started }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 18:55:54 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 18:55:54 -0000 Subject: [Varnish] #583: child process crash and restart In-Reply-To: <054.6ad29a913ee64de1d2e55153f8ccf5c9@projects.linpro.no> References: <054.6ad29a913ee64de1d2e55153f8ccf5c9@projects.linpro.no> Message-ID: <063.ec77d5b2525df95468648fbb850bba80@projects.linpro.no> #583: child process crash and restart -------------------------+-------------------------------------------------- Reporter: nidosaur | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: After Varnish 2.1 Component: varnishd | Version: 2.0 Severity: normal | Resolution: worksforme Keywords: 2.0.4 2.0.5 | -------------------------+-------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: As far as I can see, you simply run out of malloc(3) memory. This can be many different things, but it is almost guaranteed to be something you specified as a parameter, very likely too many threads. Also check your ulimit(1) output, you may need to increase some of the resource limits there.
Poul-Henning -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 18:56:36 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 18:56:36 -0000 Subject: [Varnish] #574: [2.0.5] Comparing two headers In-Reply-To: <051.d599e745e9d07ae7dacdc9310dbe5e5b@projects.linpro.no> References: <051.d599e745e9d07ae7dacdc9310dbe5e5b@projects.linpro.no> Message-ID: <060.12f20bba231e9ffd02c1b86f84a8c515@projects.linpro.no> #574: [2.0.5] Comparing two headers -------------------------+-------------------------------------------------- Reporter: mikko | Owner: tfheen Type: enhancement | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: | -------------------------+-------------------------------------------------- Changes (by phk): * owner: phk => tfheen * summary: Comparing two headers => [2.0.5] Comparing two headers -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 18:59:30 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 18:59:30 -0000 Subject: [Varnish] #581: struct acct stats counted improperly in 2.0.5 In-Reply-To: <049.1d1d34d794e869189da7c82cf71e6068@projects.linpro.no> References: <049.1d1d34d794e869189da7c82cf71e6068@projects.linpro.no> Message-ID: <058.1f7e06e815a7f19a05d4afd5e327f2f8@projects.linpro.no> #581: struct acct stats counted improperly in 2.0.5 ------------------------+--------------------------------------------------- Reporter: tgr | Owner: tfheen Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: acct stats | ------------------------+--------------------------------------------------- Changes (by phk): * owner: phk => tfheen -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 19:05:08 2009 From: 
varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 19:05:08 -0000 Subject: [Varnish] #559: Missing varnishd options in documentation In-Reply-To: <054.d95de5af4b2fe8fb6aa84ac6634632f1@projects.linpro.no> References: <054.d95de5af4b2fe8fb6aa84ac6634632f1@projects.linpro.no> Message-ID: <063.0d8b8b46e4269d12f3c5d50344ce4091@projects.linpro.no> #559: Missing varnishd options in documentation ---------------------------+------------------------------------------------ Reporter: walraven | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: documentation | Version: trunk Severity: normal | Resolution: fixed Keywords: | ---------------------------+------------------------------------------------ Changes (by phk): * status: new => closed * resolution: => fixed Comment: (In [4354]) Document -C option in usage. Fixes #559 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 19:07:14 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 19:07:14 -0000 Subject: [Varnish] #563: Unable to mangle one of multiple Set-Cookie headers. In-Reply-To: <049.c51db836066750a18661c2034ddcef9f@projects.linpro.no> References: <049.c51db836066750a18661c2034ddcef9f@projects.linpro.no> Message-ID: <058.e58a0d16d4565809e21df8d904394293@projects.linpro.no> #563: Unable to mangle one of multiple Set-Cookie headers. ----------------------+----------------------------------------------------- Reporter: are | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: invalid Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => invalid Comment: We already have this on our "shopping list" (http://varnish.projects.linpro.no/wiki/PostTwoShoppingList#a21.VCLcookiehanding) Closing ticket, as we only use tickets for bugs. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 19:10:40 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 19:10:40 -0000 Subject: [Varnish] #436: 2.0.2 crashes on os X with default configuration In-Reply-To: <053.de24599fd2dfd02431a96c17d161db8a@projects.linpro.no> References: <053.de24599fd2dfd02431a96c17d161db8a@projects.linpro.no> Message-ID: <062.b72da5894c3441105e28a274b04f6677@projects.linpro.no> #436: 2.0.2 crashes on os X with default configuration ---------------------+------------------------------------------------------ Reporter: josh3io | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: worksforme Keywords: | ---------------------+------------------------------------------------------ Changes (by phk): * status: new => closed * resolution: => worksforme Comment: Time this ticket out for lack of response. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 19:13:12 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 19:13:12 -0000 Subject: [Varnish] #369: Enable serving of graced objects if backend is down In-Reply-To: <051.733d2435622e06abe187e3ae2e40534c@projects.linpro.no> References: <051.733d2435622e06abe187e3ae2e40534c@projects.linpro.no> Message-ID: <060.1d4fdc0f565ed4d5d87934cfedaf2064@projects.linpro.no> #369: Enable serving of graced objects if backend is down -------------------------+-------------------------------------------------- Reporter: perbu | Owner: kristian Type: enhancement | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | -------------------------+-------------------------------------------------- Changes (by perbu): * status: assigned => closed * resolution: => fixed Comment: 20:08 perbu, what is the 
difference between ticket #369 and saint mode ? 20:11 nothing, AFAIK. 20:11 we didn't have the fancy name back then. :-) 20:12 ok, want to close it yourself then ? 20:12 oh, may I? 20:12 you know how it is, when that time comes, real men shoot their own dogs. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 19:34:58 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 19:34:58 -0000 Subject: [Varnish] #581: struct acct stats counted improperly in 2.0.5 In-Reply-To: <049.1d1d34d794e869189da7c82cf71e6068@projects.linpro.no> References: <049.1d1d34d794e869189da7c82cf71e6068@projects.linpro.no> Message-ID: <058.019fe446ccf6165f6bfaa90925eec4ec@projects.linpro.no> #581: struct acct stats counted improperly in 2.0.5 ------------------------+--------------------------------------------------- Reporter: tgr | Owner: tfheen Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: acct stats | ------------------------+--------------------------------------------------- Comment (by tgr): Ken: the numbers are definitely wrong, s_req should be more or less the same as client_req. In fact, this is what we see on our 2.0.4 instances. All our instances of 2.0.5 varnishd count s_foo stats improperly. However, I've been unable to reproduce it just by running a separate varnishd instance (with its own varnishlog and varnishncsa reading from the SHM log, just to get the same environment as for the production instance) on the same set of servers and running some synthetic benchmarks to generate traffic. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 18 19:54:33 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 18 Nov 2009 19:54:33 -0000 Subject: [Varnish] #564: Varnish crash with -s persistent In-Reply-To: <052.331a803e2e21bbe48b75b59e9ef0db48@projects.linpro.no> References: <052.331a803e2e21bbe48b75b59e9ef0db48@projects.linpro.no> Message-ID: <061.0c21e380dc6b939458362183c14defc0@projects.linpro.no> #564: Varnish crash with -s persistent ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: (In [4355]) Close a race that demonstrates that I have no idea what kind of load my users have: Do not load new segments opened after we started, even if multiple such have been created, before we finish loading the old segments from the silo. 
Fixes #564 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Nov 19 11:34:48 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 19 Nov 2009 11:34:48 -0000 Subject: [Varnish] #584: Backend probing half-close issue Message-ID: <049.6294e4c2987b9bdaaa94b2de81cedc73@projects.linpro.no> #584: Backend probing half-close issue ----------------------+----------------------------------------------------- Reporter: phk | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- There seems to be no clear answer to when/if it is allowed to half-close a TCP connection when doing a request: After the request is sent or after the response has been received. Good cases can be made for both scenarios, but the first fails on some backends, whereas the second works on all. The varnish backend probe uses the first method, in an attempt to figure out if the backend has ACK'ed the request or not. Given that the request is almost guaranteed to fit inside the first window, only very deficient TCP stacks will not ACK the request unconditionally, so this test is worthless. We should switch to the second method, and do a full close, after the poll response has been received. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Nov 19 11:40:24 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 19 Nov 2009 11:40:24 -0000 Subject: [Varnish] #584: Backend probing half-close issue In-Reply-To: <049.6294e4c2987b9bdaaa94b2de81cedc73@projects.linpro.no> References: <049.6294e4c2987b9bdaaa94b2de81cedc73@projects.linpro.no> Message-ID: <058.e8a24a18620aed0b8c1a4aa06bd2618a@projects.linpro.no> #584: Backend probing half-close issue ----------------------+----------------------------------------------------- Reporter: phk | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: (In [4356]) Don't halfclose the backend polling TCP connection after sending the request, some backends get confused by this. Add a ".status" to backend polling, to configure the expected HTTP status code for a good poll. 
Fixes #584 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Nov 19 14:53:56 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 19 Nov 2009 14:53:56 -0000 Subject: [Varnish] #550: varnish fails to start In-Reply-To: <054.ae38316f6d56cef376ccd851e72e5aa3@projects.linpro.no> References: <054.ae38316f6d56cef376ccd851e72e5aa3@projects.linpro.no> Message-ID: <063.eb4f992914d148f1d0e6fba9da9edead@projects.linpro.no> #550: varnish fails to start ----------------------+----------------------------------------------------- Reporter: rainer_d | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: worksforme Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: Time this ticket out, due to lack of response. A 32bit exhaustion issue is likely. Traceback from a core-dump can confirm this if supplied. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Nov 19 14:56:19 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 19 Nov 2009 14:56:19 -0000 Subject: [Varnish] #546: Varnish eating up my memory In-Reply-To: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> References: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> Message-ID: <060.7a2f6a3dc5b7617219c863b0b2203ac4@projects.linpro.no> #546: Varnish eating up my memory ----------------------+----------------------------------------------------- Reporter: hp197 | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Resolution: worksforme Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: Time this ticket out. I don't think I can imagine a way grace would impact the memory footprint, but I will keep the observation in mind. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Nov 19 14:57:28 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 19 Nov 2009 14:57:28 -0000 Subject: [Varnish] #534: Threads stuck in trunk (critbit) In-Reply-To: <052.0e28aca60f88a160e1d7f78d585e88f5@projects.linpro.no> References: <052.0e28aca60f88a160e1d7f78d585e88f5@projects.linpro.no> Message-ID: <061.b2c2ff3be443cd68eaf01f590009bc53@projects.linpro.no> #534: Threads stuck in trunk (critbit) ---------------------------+------------------------------------------------ Reporter: anders | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: critical | Resolution: Keywords: threads stuck | ---------------------------+------------------------------------------------ Changes (by phk): * summary: Threads stuck in trunk => Threads stuck in trunk (critbit) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Nov 19 15:01:05 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 19 Nov 2009 15:01:05 -0000 Subject: [Varnish] #537: Sticky director In-Reply-To: <049.5de7bea5521baadc35f7ad691493e76e@projects.linpro.no> References: <049.5de7bea5521baadc35f7ad691493e76e@projects.linpro.no> Message-ID: <058.f449baa0cbde72b8cc428cdd68263b98@projects.linpro.no> #537: Sticky director ----------------------+----------------------------------------------------- Reporter: rts | Owner: phk Type: defect | Status: closed Priority: low | Milestone: After Varnish 2.1 Component: varnishd | Version: trunk Severity: normal | Resolution: invalid Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => invalid Comment: This is a feature request, I have moved it to our "shopping list", we try to use tickets only for bugs. 
(See: http://varnish.projects.linpro.no/wiki/PostTwoShoppingList) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri Nov 20 13:23:27 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 20 Nov 2009 13:23:27 -0000 Subject: [Varnish] #369: Enable serving of graced objects if backend is down In-Reply-To: <051.733d2435622e06abe187e3ae2e40534c@projects.linpro.no> References: <051.733d2435622e06abe187e3ae2e40534c@projects.linpro.no> Message-ID: <060.6ecffdd8030d431febdff5d549f24052@projects.linpro.no> #369: Enable serving of graced objects if backend is down -------------------------+-------------------------------------------------- Reporter: perbu | Owner: kristian Type: enhancement | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | -------------------------+-------------------------------------------------- Comment (by nadin.uvre): Bravissimo, guys, such a nice [http://www.superiorpapers.com research paper]!! 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri Nov 20 14:17:18 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 20 Nov 2009 14:17:18 -0000 Subject: [Varnish] #369: Enable serving of graced objects if backend is down In-Reply-To: <051.733d2435622e06abe187e3ae2e40534c@projects.linpro.no> References: <051.733d2435622e06abe187e3ae2e40534c@projects.linpro.no> Message-ID: <060.ad3b0570bb5a03058c949cbe28028583@projects.linpro.no> #369: Enable serving of graced objects if backend is down -------------------------+-------------------------------------------------- Reporter: perbu | Owner: kristian Type: enhancement | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: fixed Keywords: | -------------------------+-------------------------------------------------- Comment (by nadin.uvre): [http://www.superiorpapers.com research paper] -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Sat Nov 21 00:38:07 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Sat, 21 Nov 2009 00:38:07 -0000 Subject: [Varnish] #495: HTTP/1.0 or 'Connection: closed' backend race condition In-Reply-To: <049.ef605108ffb6a27a8a69ad285c7e044f@projects.linpro.no> References: <049.ef605108ffb6a27a8a69ad285c7e044f@projects.linpro.no> Message-ID: <058.4b73687b083807cf211a236a8b96021e@projects.linpro.no> #495: HTTP/1.0 or 'Connection: closed' backend race condition ----------------------+----------------------------------------------------- Reporter: cra | Owner: phk Type: defect | Status: new Priority: low | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by niallohiggins): The HTTP/1.1 spec (RFC2068 Section 8.1.2.1 `Negotiation' http://www.apps.ietf.org/rfc/rfc2068.html#sec-8.1) seems quite explicit 
about behaviour here: Clients and servers SHOULD NOT assume that a persistent connection is maintained for HTTP versions less than 1.1 unless it is explicitly signaled. See section 19.7.1 for more information on backwards compatibility with HTTP/1.0 clients. Therefore, I believe the correct thing for Varnish to do if it detects a HTTP/1.0 response from an origin server is to disable re-use for that connection. Seeing as Varnish has an existing check for HTTP/1.1 responses with Connection: Close headers, the patch (against 2.0.5) seems to be relatively straightforward: {{{ --- cache_fetch.c.orig 2009-11-21 00:32:08.302103546 +0000 +++ cache_fetch.c 2009-11-21 00:32:32.590103329 +0000 @@ -497,7 +497,8 @@ Fetch(struct sess *sp) http_PrintfHeader(sp->wrk, sp->fd, hp2, "Content-Length: %u", sp->obj->len); - if (http_HdrIs(hp, H_Connection, "close")) + if (http_HdrIs(hp, H_Connection, "close") + || hp->protover < 1.1) cls = 1; if (cls) }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Sat Nov 21 00:47:25 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Sat, 21 Nov 2009 00:47:25 -0000 Subject: [Varnish] #495: HTTP/1.0 or 'Connection: closed' backend race condition In-Reply-To: <049.ef605108ffb6a27a8a69ad285c7e044f@projects.linpro.no> References: <049.ef605108ffb6a27a8a69ad285c7e044f@projects.linpro.no> Message-ID: <058.d656c01ad67df21a591dd269c2a43825@projects.linpro.no> #495: HTTP/1.0 or 'Connection: closed' backend race condition ----------------------+----------------------------------------------------- Reporter: cra | Owner: phk Type: defect | Status: new Priority: low | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by niallohiggins): Replying to [comment:14 niallohiggins]: Correction to my above comment: Of course, it should only happen if Connection: is not keep-alive AND 
protocol is < 1.1, so above diff needs an extra condition, something like: (hp->protover < 1.1 && !http_HdrIs(hp, H_Connection, "keep-alive")) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Sat Nov 21 12:52:37 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Sat, 21 Nov 2009 12:52:37 -0000 Subject: [Varnish] #585: ESI produces wrong backend requests Message-ID: <053.0e4d27e966162965c92c334184ab9afc@projects.linpro.no> #585: ESI produces wrong backend requests ----------------------+----------------------------------------------------- Reporter: ttmails | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Keywords: ----------------------+----------------------------------------------------- After installing the new 2.0.5 version ESI processing breaks in many cases. It does not always fail but when failing strange requests like these arrive at the backend lighttpd server: "GET /esi/extra/eventlocation/telefunken- hochhaus'''\t'''/esi/teaser/toplocations" HTTP/1.1" 400 "GET /esi/extra/eventlocation/telefunken- hochhaus'''n'''/esi/teaser/toplocations HTTP/1.1" 404 These are 2 esi urls combined into one, separated with a tab or "n"!? These 2 urls are two esi includes in one html page. Please help :) and thanks in advance! 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Sun Nov 22 13:52:37 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Sun, 22 Nov 2009 13:52:37 -0000 Subject: [Varnish] #586: Debian source files installation Message-ID: <052.f49e8f689d60029221cb2cbaa3d68f87@projects.linpro.no> #586: Debian source files installation --------------------+------------------------------------------------------- Reporter: werdan | Type: documentation Status: new | Priority: low Milestone: | Component: documentation Version: trunk | Severity: normal Keywords: | --------------------+------------------------------------------------------- On Debian 5.0 (32bit) after: configure make make install It is necessary to run: ldconfig As without that varnishd says, for example: "can not find shared library libvarnish.so.1" -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Sun Nov 22 14:54:26 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Sun, 22 Nov 2009 14:54:26 -0000 Subject: [Varnish] #587: Ability to load files from hard disk as error pages Message-ID: <054.e6b90f9c239eb6235595e817792efdf3@projects.linpro.no> #587: Ability to load files from hard disk as error pages -------------------------+-------------------------------------------------- Reporter: chris_se | Owner: phk Type: enhancement | Status: new Priority: lowest | Milestone: Component: varnishd | Version: trunk Severity: minor | Keywords: -------------------------+-------------------------------------------------- It would be nice to be able to load files from the local hard disk as error pages. 
An example of how it could look: {{{ sub vcl_error { if (obj.status == 503) { set obj.http.Content-Type = "text/html; charset=utf-8"; sendfile "/path/to/error503.html"; return (deliver); } } }}} Currently, this can be emulated with Inline C and fopen/fread/VRT_synth_page but an implementation in varnish itself would probably be much more efficient. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Sun Nov 22 23:08:57 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Sun, 22 Nov 2009 23:08:57 -0000 Subject: [Varnish] #588: Varnish crashes every other second with Assert error in VCA_Prep(), cache_acceptor.c line 148 Message-ID: <059.86cbd22e616d124a82dcd8aabaf578aa@projects.linpro.no> #588: Varnish crashes every other second with Assert error in VCA_Prep(), cache_acceptor.c line 148 ---------------------------+------------------------------------------------ Reporter: erik.berglund | Type: defect Status: new | Priority: high Milestone: | Component: build Version: trunk | Severity: normal Keywords: | ---------------------------+------------------------------------------------ Hello. We are running Varnish 2.0.5 on OS X 10.5 Server. A few days ago both our Varnish servers started to crash and respawn constantly, never staying up longer than 1-2 seconds. And finally after about 3-5 hours CrashReporter can't keep up or something else happens so a complete reboot of the server is needed. 
And in the computer crashlog we keep getting these messages: {{{ 2009-11-22 23.58.03 no.linpro.varnish[77] child (56494) Started 2009-11-22 23.58.04 no.linpro.varnish[77] Child (56494) said Closed fds: 4 5 7 10 11 13 14 2009-11-22 23.58.04 no.linpro.varnish[77] Child (56494) said Child starts 2009-11-22 23.58.04 no.linpro.varnish[77] Child (56494) said managed to mmap 1925185536 bytes of 1925185536 2009-11-22 23.58.04 no.linpro.varnish[77] Child (56494) said Ready 2009-11-22 23.58.39 no.linpro.varnish[77] Child (56494) said getnameinfo = 5 ai_family not supported 2009-11-22 23.58.39 no.linpro.varnish[77] Child (56494) said getnameinfo = 5 ai_family not supported 2009-11-22 23.58.39 no.linpro.varnish[77] Child (56494) said getnameinfo = 5 ai_family not supported 2009-11-22 23.58.39 no.linpro.varnish[77] Child (56494) said getnameinfo = 5 ai_family not supported 2009-11-22 23.58.39 no.linpro.varnish[77] Child (56494) not responding to ping, killing it. 2009-11-22 23.58.39 no.linpro.varnish[77] Child (56494) died signal=6 2009-11-22 23.58.39 no.linpro.varnish[77] Child (56494) Panic message: Assert error in VCA_Prep(), cache_acceptor.c line 148: 2009-11-22 23.58.39 no.linpro.varnish[77] Condition((setsockopt(sp->fd, 0xffff, 0x0080, &linger, sizeof linger)) == 0) not true. 2009-11-22 23.58.39 no.linpro.varnish[77] errno = 22 (Invalid argument) }}} Any help would be appreciated at pointing out what might be causing this and helping us get on the right track with this issue. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 23 12:57:32 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 23 Nov 2009 12:57:32 -0000 Subject: [Varnish] #588: Varnish crashes every other second with Assert error in VCA_Prep(), cache_acceptor.c line 148 In-Reply-To: <059.86cbd22e616d124a82dcd8aabaf578aa@projects.linpro.no> References: <059.86cbd22e616d124a82dcd8aabaf578aa@projects.linpro.no> Message-ID: <068.9d277585acf3948ce951f5191740414b@projects.linpro.no> #588: Varnish crashes every other second with Assert error in VCA_Prep(), cache_acceptor.c line 148 ---------------------------+------------------------------------------------ Reporter: erik.berglund | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ---------------------------+------------------------------------------------ Changes (by phk): * owner: => phk * component: build => varnishd Comment: This looks like a protocol issue, possibly some kind of IPv4/IPv6 confusion. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 23 13:00:19 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 23 Nov 2009 13:00:19 -0000 Subject: [Varnish] #587: Ability to load files from hard disk as error pages In-Reply-To: <054.e6b90f9c239eb6235595e817792efdf3@projects.linpro.no> References: <054.e6b90f9c239eb6235595e817792efdf3@projects.linpro.no> Message-ID: <063.61047e2cd947485f26cf5075411da697@projects.linpro.no> #587: Ability to load files from hard disk as error pages -------------------------+-------------------------------------------------- Reporter: chris_se | Owner: phk Type: enhancement | Status: closed Priority: lowest | Milestone: Component: varnishd | Version: trunk Severity: minor | Resolution: invalid Keywords: | -------------------------+-------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => invalid Comment: Interesting idea. I have added it to our shopping-list: http://varnish.projects.linpro.no/wiki/PostTwoShoppingList and will close this ticket (we only track bugs in tickets, to avoid clutter.) 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 23 13:02:01 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 23 Nov 2009 13:02:01 -0000 Subject: [Varnish] #585: ESI produces wrong backend requests (2.0.5) In-Reply-To: <053.0e4d27e966162965c92c334184ab9afc@projects.linpro.no> References: <053.0e4d27e966162965c92c334184ab9afc@projects.linpro.no> Message-ID: <062.6422b6e00b7663f55452614c0d40ad69@projects.linpro.no> #585: ESI produces wrong backend requests (2.0.5) ----------------------+----------------------------------------------------- Reporter: ttmails | Owner: tfheen Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * owner: phk => tfheen * summary: ESI produces wrong backend requests => ESI produces wrong backend requests (2.0.5) Comment: I think this was fixed as part of r4351. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 23 14:19:59 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 23 Nov 2009 14:19:59 -0000 Subject: [Varnish] #589: Round robin director not using healthy backend Message-ID: <053.8216b435e8ebc0ea5199a50c66271486@projects.linpro.no> #589: Round robin director not using healthy backend ----------------------+----------------------------------------------------- Reporter: jrieger | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- I have configured varnish to use two backends. If I turn off the first one (idefix_gallia), I get HTTP status 503. varnishlog correctly reports that the first backend is sick and the second one is healthy, but varnish doesn't use that backend.
If I turn off the second one (miraculix_gallia), the first one is used and I get HTTP status 200 as expected. This is my VCL:

{{{
backend idefix_gallia {
    .host = "192.168.0.199";
    .port = "http";
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

backend miraculix_gallia {
    .host = "192.168.0.200";
    .port = "http";
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

director gallia round-robin {
    { .backend = idefix_gallia; }
    { .backend = miraculix_gallia; }
}

sub vcl_recv {
    if (req.http.host ~ "^(www.)\.example\.com$") {
        set req.backend = gallia;
    }
}
}}}

This is what varnishlog reports:

{{{
0 Backend_health - idefix_gallia Still sick ------- 0 3 5 0.000000 0.000319
0 Backend_health - miraculix_gallia Still healthy 4--X-RH 5 3 5 0.000436 0.000473 HTTP/1.1 200 OK
}}}

This problem is reproducible in Varnish 2.0.4, 2.0.5 and trunk on Gentoo Linux. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 23 15:05:18 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 23 Nov 2009 15:05:18 -0000 Subject: [Varnish] #585: ESI produces wrong backend requests (2.0.5) In-Reply-To: <053.0e4d27e966162965c92c334184ab9afc@projects.linpro.no> References: <053.0e4d27e966162965c92c334184ab9afc@projects.linpro.no> Message-ID: <062.e9f53399fe7d8f0d24ab65d0b385e34d@projects.linpro.no> #585: ESI produces wrong backend requests (2.0.5) ----------------------+----------------------------------------------------- Reporter: ttmails | Owner: tfheen Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by ttmails): Bug #578 (fixed by r4351) sounds very similar to my symptoms, I'll check out varnish trunk if I can spare some time.
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Nov 23 19:45:55 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 23 Nov 2009 19:45:55 -0000 Subject: [Varnish] #585: ESI produces wrong backend requests (2.0.5) In-Reply-To: <053.0e4d27e966162965c92c334184ab9afc@projects.linpro.no> References: <053.0e4d27e966162965c92c334184ab9afc@projects.linpro.no> Message-ID: <062.ba7c43177229b461e832a9bbe789147a@projects.linpro.no> #585: ESI produces wrong backend requests (2.0.5) ----------------------+----------------------------------------------------- Reporter: ttmails | Owner: tfheen Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by ttmails): Works flawlessly with trunk. Think this ticket can be closed. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Nov 24 10:09:41 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 24 Nov 2009 10:09:41 -0000 Subject: [Varnish] #540: X-Forwarded-For created and not appended. In-Reply-To: <055.5449af7e5b7e33441d8b0f9e65fb0fac@projects.linpro.no> References: <055.5449af7e5b7e33441d8b0f9e65fb0fac@projects.linpro.no> Message-ID: <064.1d8470abffd4a1d3f3eef5a34cbca265@projects.linpro.no> #540: X-Forwarded-For created and not appended. -----------------------+---------------------------------------------------- Reporter: bmfurtado | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | -----------------------+---------------------------------------------------- Comment (by stewsnooze): We have this also on Economist.com. We get two HTTP X-Forwarded-For headers instead of an edited original. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Nov 24 13:04:57 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 24 Nov 2009 13:04:57 -0000 Subject: [Varnish] #581: struct acct stats counted improperly in 2.0.5 In-Reply-To: <049.1d1d34d794e869189da7c82cf71e6068@projects.linpro.no> References: <049.1d1d34d794e869189da7c82cf71e6068@projects.linpro.no> Message-ID: <058.bd1e3781fcfdd395a9cb5d5b4fa603c8@projects.linpro.no> #581: struct acct stats counted improperly in 2.0.5 ------------------------+--------------------------------------------------- Reporter: tgr | Owner: tfheen Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Resolution: Keywords: acct stats | ------------------------+--------------------------------------------------- Comment (by stockrt): Replying to [comment:3 tgr]: > Ken: the numbers are definitely wrong, s_req should be more or less the same as client_req. In fact, this is what we see on our 2.0.4 instances. All our instances of 2.0.5 varnishd count s_foo stats improperly. tgr: isn't this related to the session_linger now being enabled by default? What about you testing 2.0.4 with the session_linger default value of the 2.0.5 release? Regards, Rogério Schneider -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Nov 25 16:41:47 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 25 Nov 2009 16:41:47 -0000 Subject: [Varnish] #590: Assert error in ESI_Parse(), cache_esi.c line 822: Condition(st->len < st->space) not true. Message-ID: <050.18b8c0fcb73b96bb78d3b132f35e6485@projects.linpro.no> #590: Assert error in ESI_Parse(), cache_esi.c line 822: Condition(st->len < st->space) not true.
----------------------+----------------------------------------------------- Reporter: kali | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | Keywords: esi parser crash ----------------------+----------------------------------------------------- If ew->space is 4096, the assertion will fail. I think we need to allocate ew->space + 1 at line 773. -- Ticket URL: Varnish The Varnish HTTP Accelerator From ibeginhere at gmail.com Fri Nov 27 01:33:14 2009 From: ibeginhere at gmail.com (ll) Date: Fri, 27 Nov 2009 09:33:14 +0800 Subject: varnish bottleneck? Message-ID: <4B0F2C5A.5020900@gmail.com>

I think there may be a problem with Varnish. My Varnish version is 2.0.4. I want to cache everything on a website, so I set a rule like this:

    if (req.http.host ~ "www.abc.cn") {
        lookup;
    }

That works well: Varnish can cache everything. But some functions of the website stop working, e.g. POST. So I put the POST check before the host check, like this:

    if (req.request == "POST") {
        pipe;
    }
    if (req.http.host ~ "www.abc.cn") {
        lookup;
    }

Now there are problems: many URLs never show up in varnishlog, and those responses carry no Varnish marker (e.g. no "X-Cache: MISS" in the response headers). I already posted about this on the mailing list; the missing marker is not a pipe or pass issue. Those URLs also go to the backend server every time. Is this a Varnish bottleneck? When Varnish checks for POST first, does it fail to handle everything, so that some requests are mishandled and go through to the backend? Or is there some option I have not configured correctly in the VCL?
From varnish-bugs at projects.linpro.no Sun Nov 29 21:26:51 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Sun, 29 Nov 2009 21:26:51 -0000 Subject: [Varnish] #591: Extra .Ed causes vcl(7) to be rendered incorrectly Message-ID: <051.39f66d59f7216559410a389214027784@projects.linpro.no> #591: Extra .Ed causes vcl(7) to be rendered incorrectly -------------------+-------------------------------------------------------- Reporter: fgsch | Type: defect Status: new | Priority: normal Milestone: | Component: documentation Version: trunk | Severity: minor Keywords: | -------------------+-------------------------------------------------------- Extra .Ed causes vcl(7) to be rendered incorrectly. Patch attached. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Sun Nov 29 21:37:22 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Sun, 29 Nov 2009 21:37:22 -0000 Subject: [Varnish] #592: Too many arguments for some macros causes varnishd.1 to be rendered incorrectly Message-ID: <051.7c8467608f0db910836a706d7f5aa972@projects.linpro.no> #592: Too many arguments for some macros causes varnishd.1 to be rendered incorrectly -------------------+-------------------------------------------------------- Reporter: fgsch | Type: defect Status: new | Priority: normal Milestone: | Component: documentation Version: trunk | Severity: minor Keywords: | -------------------+-------------------------------------------------------- This is at least true in OpenBSD, but I'd expect others to behave in similar ways. Attached patch makes it render correctly and should not hurt others. 
Without patch man shows:

{{{
Usage: Too many arguments (maximum of 8 accepted) (#66) a Ar address Ns Op : Ns Ar port
Usage: Too many arguments (maximum of 8 accepted) (#94) Fl b Ar host Ns Op : Ns Ar
Usage: Too many arguments (maximum of 8 accepted) (#136) Fl h Ar type Ns Op , Ns Ar
Usage: Too many arguments (maximum of 8 accepted) (#166) Fl s Ar type Ns Op , Ns Ar
Usage: Too many arguments (maximum of 8 accepted) (#173) Fl T Ar address Ns Op : Ns Ar
Usage: Too many arguments (maximum of 8 accepted) (#197) Fl w Ar min Ns Op , Ns Ar
Usage: Too many arguments (maximum of 8 accepted) (#224) Cm classic Ns Op Ns , Ns Ar buckets
Usage: Too many arguments (maximum of 8 accepted) (#241) Cm malloc Ns Op Ns , Ns Ar size
Usage: Too many arguments (maximum of 8 accepted) (#262) Cm file Ns Op Ns , Ns Ar path
Usage: Too many arguments (maximum of 8 accepted) (#361) Cm purge Ar field Ar operator Ar argument Op
}}}

-- Ticket URL: Varnish The Varnish HTTP Accelerator