From varnish-bugs at projects.linpro.no Sat Aug 1 23:07:42 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Sat, 01 Aug 2009 23:07:42 -0000 Subject: [Varnish] #536: Suggested regsub example for Plone VirtualHostBase replacement Message-ID: <051.0e9f3385eac11f507cdecf99eec014de@projects.linpro.no> #536: Suggested regsub example for Plone VirtualHostBase replacement ---------------------------------+------------------------------------------ Reporter: ned14 | Type: documentation Status: new | Priority: lowest Milestone: Varnish 2.1 release | Component: documentation Version: trunk | Severity: trivial Keywords: | ---------------------------------+------------------------------------------ Most of the varnish documentation on the web (including this wiki) suggests something like this for mangling up Zope VirtualHostBase URLs: {{{ if (req.http.host ~ "^(www.)?example.com") { set req.http.host = "example.com"; set req.url = regsub(req.url, "^", "/VirtualHostBase/http/example.com:80/Sites/example.com/VirtualHostRoot"); } elsif (req.http.host ~ "^(www.)?example.org") { set req.http.host = "example.org"; set req.url = regsub(req.url, "^", "/VirtualHostBase/http/example.org:80/Sites/example.org/VirtualHostRoot"); } else { error 404 "Unknown virtual host"; } }}} The big problem with this is that every single virtual host must be laboriously specified, and then doubly specified if you support HTTPS. This rapidly becomes a royal PITA. 
A far better solution is to use varnish's regsub support and generate it entirely dynamically: {{{ if (req.http.X-Forwarded-Proto == "https" ) { set req.http.X-Forwarded-Port = "443"; } else { set req.http.X-Forwarded-Port = "80"; } if (req.http.host ~ "^(www\.|ipv6\.)?(.+)\.(.+)?$") { set req.http.host = regsub(req.http.host, "^(www\.|ipv6\.)?(.+)\.(.+)?$", "www.\2.\3"); set req.url = "/VirtualHostBase/" req.http.X-Forwarded- Proto regsub(req.http.host, "^(www\.|ipv6\.)?(.+)\.(.+)$", "/www.\2.\3:") req.http.X-Forwarded-Port regsub(req.http.host, "^(www\.|ipv6\.)?(.+)\.(.+)$", "/\2.\3/\2.\3/VirtualHostR$ req.url; } }}} It also does a great job of demonstrating string concatenation in varnish as well as a few other useful tricks. I have an optional ipv6. subdomain in there, but note that it doesn't include the ipv6. redirect into Zope so all the links in the Plone site will still point to www. instead. This doesn't bother me as I only use the ipv6. subdomain to test my site's IPv6 connectivity. I hope that you find this useful. I'd suggest sticking the above anywhere a search for VirtualHostBase on this site returns results, including the example Plone .vcl file. 
Cheers,[[BR]] Niall Douglas -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 3 23:58:13 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 03 Aug 2009 23:58:13 -0000 Subject: [Varnish] #533: Varnishncsa stucks after error vcl message In-Reply-To: <052.9a7cc08a6df41c87cf3a9e3a3743eb44@projects.linpro.no> References: <052.9a7cc08a6df41c87cf3a9e3a3743eb44@projects.linpro.no> Message-ID: <061.914d7eb2d31cfd86bdde71b1d2afbd52@projects.linpro.no> #533: Varnishncsa stucks after error vcl message -------------------------+-------------------------------------------------- Reporter: Tarick | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishncsa | Version: 2.0 Severity: major | Resolution: Keywords: | -------------------------+-------------------------------------------------- Comment (by twenty-three): Works for me! In my case I had problems with the home page ("/") that was ESI processed. It just didn't get logged by varnishncsa. Turning ESI off solved my problem. Applying the patch with ESI turned on solves it, too :). -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 4 12:12:58 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 04 Aug 2009 12:12:58 -0000 Subject: [Varnish] #537: Sticky director Message-ID: <049.5de7bea5521baadc35f7ad691493e76e@projects.linpro.no> #537: Sticky director ----------------------+----------------------------------------------------- Reporter: rts | Owner: phk Type: defect | Status: new Priority: low | Milestone: After Varnish 2.1 Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- We'd like to be able to have a sticky director. When varnish first starts, it'd look for a healthy backend, but would then remain with that backend until it became unhealthy. 
It'd then look for another healthy backend. We're happy either to sponsor this work or to do it ourselves. However, before starting, I'd like to know if anyone else has any feature requests / requirements for this sort of director. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 4 18:26:02 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 04 Aug 2009 18:26:02 -0000 Subject: [Varnish] #537: Sticky director In-Reply-To: <049.5de7bea5521baadc35f7ad691493e76e@projects.linpro.no> References: <049.5de7bea5521baadc35f7ad691493e76e@projects.linpro.no> Message-ID: <058.db29107ac2bd21c25c4a2493c199d9da@projects.linpro.no> #537: Sticky director ----------------------+----------------------------------------------------- Reporter: rts | Owner: phk Type: defect | Status: new Priority: low | Milestone: After Varnish 2.1 Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by kb): Is your goal a distinct "primary/failover" setup (use server B only if A failed, and revert back to A when it becomes healthy again) or generalized persistence? I think you could implement both in VCL, so the question that comes to my mind is whether its use would be widespread enough to justify added director complexity. I think primary/failover might be worthy, but adding features to the "director" could be a slippery slope -- there are literally hundreds of major features that could be added that exist in, say, LVS or Netscalers or BigIPs. The other feature I'd like to see is a least-connections director. Past that, I'm not sure I wouldn't implement more complex balancing/persistence/etc on dedicated upstream systems. 
Just my US$0.02, Ken -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 4 22:13:14 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 04 Aug 2009 22:13:14 -0000 Subject: [Varnish] #536: Suggested regsub example for Plone VirtualHostBase replacement In-Reply-To: <051.0e9f3385eac11f507cdecf99eec014de@projects.linpro.no> References: <051.0e9f3385eac11f507cdecf99eec014de@projects.linpro.no> Message-ID: <060.0c28bb95e2d284dcfdab84eb7d61edec@projects.linpro.no> #536: Suggested regsub example for Plone VirtualHostBase replacement ---------------------------+------------------------------------------------ Reporter: ned14 | Owner: Type: documentation | Status: new Priority: lowest | Milestone: Varnish 2.1 release Component: documentation | Version: trunk Severity: trivial | Resolution: Keywords: | ---------------------------+------------------------------------------------ Comment (by ned14): This regsub is better (less likely to err): {{{ if (req.http.host ~ "^(www\.|ipv6\.)?([-0-9a- zA-Z]+)\.([a-zA-Z]+)$") { set req.http.host = regsub(req.http.host, "^(www\.|ipv6\.)?([-0-9a-zA-Z]+)\.([a-zA-Z]+)$", "\1\2$ set req.url = "/VirtualHostBase/" req.http.X-Forwarded- Proto regsub(req.http.host, "^(www\.|ipv6\.)?([-0-9a- zA-Z]+)\.([a-zA-Z]+)$", "/\1\2.\3:") req.http.X-Forwarded-Port regsub(req.http.host, "^(www\.|ipv6\.)?([-0-9a- zA-Z]+)\.([a-zA-Z]+)$", "/\2.\3/\2.\3/Vir$ req.url; } }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 4 23:27:04 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 04 Aug 2009 23:27:04 -0000 Subject: [Varnish] #538: [varnish-2.0.4] Potential Memory Leak Message-ID: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> #538: [varnish-2.0.4] Potential Memory Leak -----------------------+---------------------------------------------------- Reporter: pprocacci | Owner: phk Type: defect | Status: new Priority: high | 
Milestone: Component: varnishd | Version: trunk Severity: major | Keywords: Memory Leak 2.0.4 -----------------------+---------------------------------------------------- Just yesterday I upgraded from varnish version 1.x to 2.0.4 running on FBSD 7.1. Ever since the varnish upgrade, resident memory used by varnish has increased dramatically. Allocations that were normally <50 Megs on average are now, as you can see, nearly 3G. This is quite irregular. PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 23037 root 26 44 0 10573M 2943M ucond 2 0:02 0.00% varnishd Worse still, resident memory will continue to climb until the OS starts paging other stuff out to swap, causing huge slowdowns overall. The config that I used was unchanged between the two versions of the software. Here is my rc.conf from the BSD'ish rc script, which I know Mr. Kamp is familiar with: varnishd_enable="YES" varnishd_listen="xxx.xx.xx.xx:80" varnishd_config="/usr/local/etc/varnish/yourtango.com.vcl" varnishd_telnet="127.0.0.1:8080" varnishd_storage="file,/tmp/varnish.bin,10G -p thread_pools=4 -h classic,500009" I don't have a previous `top` output to provide, though I'm sure that wouldn't matter much. There is certainly something up, and I don't have a clue where to begin looking. A couple of questions: 1) Is this at all a known issue? 2) Though doubtful, would (to your knowledge) a more recent version of FBSD yield different results? 3) How would I go about debugging this if it does in fact appear to be a memory leak? 
Much appreciated, Paul -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 4 23:38:16 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 04 Aug 2009 23:38:16 -0000 Subject: [Varnish] #538: [varnish-2.0.4] Potential Memory Leak In-Reply-To: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> References: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> Message-ID: <064.a7baa1540911da2f477efd86d1b740c4@projects.linpro.no> #538: [varnish-2.0.4] Potential Memory Leak -------------------------------+-------------------------------------------- Reporter: pprocacci | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: Keywords: Memory Leak 2.0.4 | -------------------------------+-------------------------------------------- Comment (by pprocacci): I apologize. The copy of my rc.conf, though it appeared correct when pasted, didn't come through properly in the output. 
varnishd_enable="YES" varnishd_listen="xxx.xx.xx.xx:80" varnishd_config="/usr/local/etc/varnish/yourtango.com.vcl" varnishd_telnet="127.0.0.1:8080" varnishd_storage="file,/tmp/varnish.bin,10G -p thread_pools=4 -h classic,500009" -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Aug 5 11:10:32 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 05 Aug 2009 11:10:32 -0000 Subject: [Varnish] #537: Sticky director In-Reply-To: <049.5de7bea5521baadc35f7ad691493e76e@projects.linpro.no> References: <049.5de7bea5521baadc35f7ad691493e76e@projects.linpro.no> Message-ID: <058.11df5eb19b150744620b3ffe9394c4f8@projects.linpro.no> #537: Sticky director ----------------------+----------------------------------------------------- Reporter: rts | Owner: phk Type: defect | Status: new Priority: low | Milestone: After Varnish 2.1 Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by rts): Yes, it's for failover. I could do this in VCL, but for various application-level reasons, we'd prefer to minimise the number of transitions we do. So if we stick with B and only fail over to A when B fails, we'd have half as many transitions as if we went from A to B when A fails and then straight back to A when A becomes available again. However, if there's a way of having persistent variables in VCL, then even this logic could be implemented. 
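(For reference, the simple revert-to-primary half of this discussion — not the sticky behaviour rts is asking for — can be sketched in plain VCL, assuming a Varnish version that exposes the `req.backend.healthy` variable; the backend names, addresses and probe settings below are made up for illustration:)

{{{
backend a {
    .host = "10.0.0.1"; .port = "80";
    .probe = { .url = "/"; .interval = 5s; }
}
backend b {
    .host = "10.0.0.2"; .port = "80";
    .probe = { .url = "/"; .interval = 5s; }
}

sub vcl_recv {
    /* Prefer A; use B only while A is unhealthy. */
    set req.backend = a;
    if (!req.backend.healthy) {
        set req.backend = b;
    }
}
}}}

Note that this snaps back to A as soon as A's probe succeeds again, which is precisely the transition-doubling behaviour rts wants to avoid; true stickiness needs state that stateless VCL does not keep between requests, hence the inline-C suggestion below.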
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 6 00:33:30 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 06 Aug 2009 00:33:30 -0000 Subject: [Varnish] #537: Sticky director In-Reply-To: <049.5de7bea5521baadc35f7ad691493e76e@projects.linpro.no> References: <049.5de7bea5521baadc35f7ad691493e76e@projects.linpro.no> Message-ID: <058.d19352c98e648a4dd9867c73211e7951@projects.linpro.no> #537: Sticky director ----------------------+----------------------------------------------------- Reporter: rts | Owner: phk Type: defect | Status: new Priority: low | Milestone: After Varnish 2.1 Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by kb): You could do that with two pretty trivial C inlines. I'm not saying it's /pretty/. :-) master/slave, the "toggle" mode, and least-connections all seem like good projects, IMHO. Ken -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 11 10:34:06 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 11 Aug 2009 10:34:06 -0000 Subject: [Varnish] #539: unable to compare two headers in vcl Message-ID: <052.e5078e9a5cabfc9ba265d02f8e388a04@projects.linpro.no> #539: unable to compare two headers in vcl ----------------------+----------------------------------------------------- Reporter: hamnis | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: critical | Keywords: ----------------------+----------------------------------------------------- {{{ sub vcl_fetch { if (obj.http.etag ~ req.http.if-none-match) { error 304 "Not Modified"; } } }}} causes compile error. 
Message from VCC-compiler: Expected CSTR got 'req.http.if-none-match' (program line 255), at (input Line 92 Pos 25) if (obj.http.etag ~ req.http.if-none-match) { ------------------------######################--- Running VCC-compiler failed, exit 1 VCL compilation failed -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 11 10:35:12 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 11 Aug 2009 10:35:12 -0000 Subject: [Varnish] #539: unable to compare two headers in vcl In-Reply-To: <052.e5078e9a5cabfc9ba265d02f8e388a04@projects.linpro.no> References: <052.e5078e9a5cabfc9ba265d02f8e388a04@projects.linpro.no> Message-ID: <061.ae61795918d176692f0a94155f6d4850@projects.linpro.no> #539: unable to compare two headers in vcl ----------------------+----------------------------------------------------- Reporter: hamnis | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: critical | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by hamnis): The right hand side of the expression is not evaluated. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Aug 12 12:54:56 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 12 Aug 2009 12:54:56 -0000 Subject: [Varnish] #534: Threads stuck in trunk In-Reply-To: <052.0e28aca60f88a160e1d7f78d585e88f5@projects.linpro.no> References: <052.0e28aca60f88a160e1d7f78d585e88f5@projects.linpro.no> Message-ID: <061.e337779fb7d75400d112e110286ad2f7@projects.linpro.no> #534: Threads stuck in trunk ---------------------------+------------------------------------------------ Reporter: anders | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: critical | Resolution: Keywords: threads stuck | ---------------------------+------------------------------------------------ Comment (by anders): Running varnish trunk/4080 the same issue happened. This time I set diag_bitmap to 0x7f to see if I could get any more info. Varnishlog shows: {{{ 0 Debug - "MTX_LOCK(wrk_decimate_flock,cache_pool.c,529,&wq[u]->mtx)" 0 Debug - "MTX_UNLOCK(wrk_decimate_flock,cache_pool.c,542,&wq[u]->mtx)" 0 Debug - "MTX_LOCK(wrk_decimate_flock,cache_pool.c,529,&wq[u]->mtx)" 0 Debug - "MTX_UNLOCK(wrk_decimate_flock,cache_pool.c,542,&wq[u]->mtx)" 0 Debug - "MTX_LOCK(SES_New,cache_session.c,180,&ses_mem_mtx)" 0 Debug - "MTX_UNLOCK(SES_New,cache_session.c,182,&ses_mem_mtx)" 0 Debug - "WS_Init(0x1ab89a2078, "sess", 0x1ab89a2808, 16384)" 0 Debug - "WS(0x1ab89a2078 = (sess, 0x1ab89a2808 0 0 16384)" 0 Debug - "MTX_LOCK(WRK_Queue,cache_pool.c,417,&wq[u]->mtx)" 0 Debug - "MTX_UNLOCK(WRK_Queue,cache_pool.c,432,&wq[u]->mtx)" 7057 SessionClose - dropped 7057 StatSess - (null) (null) 1250081073 0 0 0 0 0 0 0 0 Debug - "MTX_LOCK(SES_Delete,cache_session.c,230,&ses_mem_mtx)" 0 Debug - "MTX_UNLOCK(SES_Delete,cache_session.c,232,&ses_mem_mtx)" 0 Debug - "MTX_LOCK(wrk_decimate_flock,cache_pool.c,529,&wq[u]->mtx)" 0 Debug - 
"MTX_UNLOCK(wrk_decimate_flock,cache_pool.c,542,&wq[u]->mtx)" 0 Debug - "MTX_LOCK(wrk_decimate_flock,cache_pool.c,529,&wq[u]->mtx)" 0 Debug - "MTX_UNLOCK(wrk_decimate_flock,cache_pool.c,542,&wq[u]->mtx)" }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 13 16:34:39 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 13 Aug 2009 16:34:39 -0000 Subject: [Varnish] #536: Suggested regsub example for Plone VirtualHostBase replacement In-Reply-To: <051.0e9f3385eac11f507cdecf99eec014de@projects.linpro.no> References: <051.0e9f3385eac11f507cdecf99eec014de@projects.linpro.no> Message-ID: <060.bfb21e2300f602011adf7addc41fb5fd@projects.linpro.no> #536: Suggested regsub example for Plone VirtualHostBase replacement ---------------------------+------------------------------------------------ Reporter: ned14 | Owner: Type: documentation | Status: new Priority: lowest | Milestone: Varnish 2.1 release Component: documentation | Version: trunk Severity: trivial | Resolution: Keywords: | ---------------------------+------------------------------------------------ Comment (by heureso): Replying to [comment:1 ned14]: It appears that both the original suggestion and the modification are being mangled by Trac (long lines are getting cut off). Would it be possible to re-post these as attachments ('cuz I could sure use one of the snippets right now). Thanks, Jeremy. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri Aug 14 15:47:26 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 14 Aug 2009 15:47:26 -0000 Subject: [Varnish] #540: X-Forwarded-For created and not appended. Message-ID: <055.5449af7e5b7e33441d8b0f9e65fb0fac@projects.linpro.no> #540: X-Forwarded-For created and not appended. 
-----------------------+---------------------------------------------------- Reporter: bmfurtado | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | -----------------------+---------------------------------------------------- Cheers, On our infrastructure we have a reverse-proxy/connection multiplexer (Juniper DX3600) in front of our varnish cluster. Today I was trying to figure out why the X-Forwarded-For our backend servers were getting was not including the original client's IP... After some debugging I discovered that Varnish was in fact ignoring the previously existing X-Forwarded-For header and adding a new one of its own... I have attached the output of one request taken from "varnishlog -b" and as you can see, on line 13 we have the X-Forwarded-For header coming from the DX and on line 15 the one varnish added. I don't think this is the expected behaviour... Thanks in advance. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri Aug 14 15:49:59 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 14 Aug 2009 15:49:59 -0000 Subject: [Varnish] #540: X-Forwarded-For created and not appended. In-Reply-To: <055.5449af7e5b7e33441d8b0f9e65fb0fac@projects.linpro.no> References: <055.5449af7e5b7e33441d8b0f9e65fb0fac@projects.linpro.no> Message-ID: <064.4d3346bb84e641b658c49e6691b352a6@projects.linpro.no> #540: X-Forwarded-For created and not appended. -----------------------+---------------------------------------------------- Reporter: bmfurtado | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: Keywords: | -----------------------+---------------------------------------------------- Comment (by bmfurtado): Sorry... it seems I added the ticket on the wrong component... please change it to varnishd when possible. 
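(A common workaround for the X-Forwarded-For problem reported above is to append to the incoming header yourself in `vcl_recv` — a sketch only, using the 2.x VCL convention, shown elsewhere in this archive, of concatenating strings by juxtaposition; how `client.ip` coerces to a string may vary between versions:)

{{{
sub vcl_recv {
    if (req.http.X-Forwarded-For) {
        /* Preserve the chain built by upstream proxies. */
        set req.http.X-Forwarded-For = req.http.X-Forwarded-For ", " client.ip;
    } else {
        set req.http.X-Forwarded-For = client.ip;
    }
}
}}}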
Thanks -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Fri Aug 14 21:59:08 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Fri, 14 Aug 2009 21:59:08 -0000 Subject: [Varnish] #536: Suggested regsub example for Plone VirtualHostBase replacement In-Reply-To: <051.0e9f3385eac11f507cdecf99eec014de@projects.linpro.no> References: <051.0e9f3385eac11f507cdecf99eec014de@projects.linpro.no> Message-ID: <060.ebbf6d34785187b526ef9b58f32ade48@projects.linpro.no> #536: Suggested regsub example for Plone VirtualHostBase replacement ---------------------------+------------------------------------------------ Reporter: ned14 | Owner: Type: documentation | Status: new Priority: lowest | Milestone: Varnish 2.1 release Component: documentation | Version: trunk Severity: trivial | Resolution: Keywords: | ---------------------------+------------------------------------------------ Comment (by ned14): My deepest apologies - the clipping was definitely my fault when copying and pasting. Here it is in all its glory: {{{ if (req.http.host ~ "^(www\.|ipv6\.)?([-0-9a-zA-Z]+)\.([a-zA-Z]+)$") { set req.http.host = regsub(req.http.host, "^(www\.|ipv6\.)?([-0-9a-zA-Z]+)\.([a-zA-Z]+)$", "\1\2.\3"); set req.url = "/VirtualHostBase/" req.http.X-Forwarded-Proto regsub(req.http.host, "^(www\.|ipv6\.)?([-0-9a-zA-Z]+)\.([a-zA-Z]+)$", "/\1\2.\3:") req.http.X-Forwarded-Port regsub(req.http.host, "^(www\.|ipv6\.)?([-0-9a-zA-Z]+)\.([a-zA-Z]+)$", "/\2.\3/\2.\3/VirtualHostRoot") req.url; } }}} I haven't tested this for speed impact, but seeing as I have eight servers on the same host any penalty is worth it for me. 
Cheers, Niall -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Sat Aug 15 11:13:01 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Sat, 15 Aug 2009 11:13:01 -0000 Subject: [Varnish] #541: Suggested VCL for cross domain XMLHttpRequest using Varnish Message-ID: <051.4797d7b5614b97217fafae38ed755527@projects.linpro.no> #541: Suggested VCL for cross domain XMLHttpRequest using Varnish ---------------------------------+------------------------------------------ Reporter: ned14 | Type: enhancement Status: new | Priority: low Milestone: Varnish 2.1 release | Component: documentation Version: trunk | Severity: minor Keywords: | ---------------------------------+------------------------------------------ Following on from [http://varnish.projects.linpro.no/ticket/536], here is a suggestion for how to have varnish cache the proxying of another website such that AJAX code can perform cross-domain XMLHttpRequests without running into browser security issues. In other words, this is how to make a third party website appear like it is part of your own website using URL rewriting. Normally speaking one configures Apache or whatever your front-end web server is to do the URL rewriting and proxying. However having varnish do it instead has one massive benefit: you can have varnish cache the results such that load on the third party server is greatly reduced. Firstly, add a backend: {{{ backend repec { .host = "ideas.repec.org"; .port = "80"; } }}} This is ideas.repec.org which is an index of Economics publications, so one can pull the list of all Economics academic publications for a given author by pulling a magic URL like [http://ideas.repec.org/cgi-bin/authorref.cgi?handle=pdo206&output=0]. 
In sub vcl_recv you want something like this at the start: {{{ sub vcl_recv { /*set req.grace = 20s;*/ /* Only enable if you don't mind slightly stale content */ /* Rewrite all requests to /repec/cgi-bin/authorref.cgi to http://ideas.repec.org/cgi-bin/authorref.cgi */ if (req.url ~ "^/repec/cgi-bin/authorref.cgi") { set req.http.host = "ideas.repec.org"; set req.url = regsub(req.url, "^/repec", ""); set req.backend = repec; remove req.http.Cookie; lookup; } else { set req.backend = default; ... do normal processing ... }}} And finally in sub vcl_fetch: {{{ sub vcl_fetch { /*set req.grace = 20s;*/ /* Only enable if you don't mind slightly stale content */ if (req.http.host == "ideas.repec.org") { set obj.http.Content-Type = "text/html; charset=utf-8"; /* Correct the wrong response */ set obj.ttl = 86400s; set obj.http.Cache-Control = "max-age=3600"; deliver; } }}} What this does is first correct the wrong MIME type returned by the RePEc server - it says text/plain and iso-8859-1. It then keeps it in the varnish cache for 1 day such that the RePEc server will only ever be asked once per day per author. It then tells the web browser and any intermediate caches not to bother varnish for one hour after a fetch. Ideally I'd like to have set an Expires: header but I am not entirely sure how to compute one of these in VCL. I suppose one could overwrite the max-age in vcl_hit by subtracting the Age header returned by varnish when it fetches from cache from 86400. Anyway, a one-hour browser cache expiry is good enough for most cases when someone is casually browsing a website. I hope that someone finds this useful - I certainly have. 
Cheers,[[BR]] Niall -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 17 01:35:18 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 17 Aug 2009 01:35:18 -0000 Subject: [Varnish] #542: Solar Energy Technology Message-ID: <056.61e37ed8f0100e8aee8683ca9fd2b5b0@projects.linpro.no> #542: Solar Energy Technology ------------------------+--------------------------------------------------- Reporter: markjamess | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | ------------------------+--------------------------------------------------- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 17 09:22:40 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 17 Aug 2009 09:22:40 -0000 Subject: [Varnish] #532: reference to uninitialized variable In-Reply-To: <048.241b8e67532a5a03a594ed63e911d713@projects.linpro.no> References: <048.241b8e67532a5a03a594ed63e911d713@projects.linpro.no> Message-ID: <057.e3e93c3dbe980b6d0152ba5f63804574@projects.linpro.no> #532: reference to uninitialized variable ----------------------+----------------------------------------------------- Reporter: ow | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: fixed in 4153 by sky -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 17 10:54:37 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 17 Aug 2009 10:54:37 -0000 Subject: [Varnish] #522: Odd TCP reset problems with trunk 4080 In-Reply-To: <052.9e4f22a7767a39fc57cd165effa2904d@projects.linpro.no> References: <052.9e4f22a7767a39fc57cd165effa2904d@projects.linpro.no> Message-ID: <061.9603549706f78ed47d057011493d478e@projects.linpro.no> #522: Odd TCP reset problems with trunk 4080 ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by phk): I believe this is now fixed for good in r4183, please test and report 
when convenient. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 17 11:54:26 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 17 Aug 2009 11:54:26 -0000 Subject: [Varnish] #518: Default backend health right after launch In-Reply-To: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> References: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> Message-ID: <058.121bce97d9783de5b1a54a1648c9dc58@projects.linpro.no> #518: Default backend health right after launch --------------------+------------------------------------------------------- Reporter: rts | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | --------------------+------------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: I have solved this slightly differently in r4185: the ".initial" attribute contains the number of good probes we pretend to have when we load the backend declaration. I set the default to one less than the threshold, so that the backend goes healthy on the first probe, if it manages to reply correctly. Thanks for your patience. 
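(For illustration, a backend declaration using the new ".initial" attribute might look like the following sketch; the host, port and probe values are made up, and per the comment above ".initial" would default to one less than ".threshold" if omitted:)

{{{
backend www {
    .host = "127.0.0.1"; .port = "8080";
    .probe = {
        .url = "/";
        .interval = 5s;
        .window = 8;
        .threshold = 3;
        .initial = 2;  /* pretend 2 good probes at load: healthy after 1 real success */
    }
}
}}}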
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 17 11:55:02 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 17 Aug 2009 11:55:02 -0000 Subject: [Varnish] #512: 503 error with load-balancer setup In-Reply-To: <051.93b0fdd93e18ab1938af816defd3ca3b@projects.linpro.no> References: <051.93b0fdd93e18ab1938af816defd3ca3b@projects.linpro.no> Message-ID: <060.012d187fcd519a3b52a2a386144a9718@projects.linpro.no> #512: 503 error with load-balancer setup --------------------------+------------------------------------------------- Reporter: ajung | Owner: phk Type: defect | Status: closed Priority: high | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: fixed Keywords: Loadbalancer | --------------------------+------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: See #518 for how the new ".initial" attribute can be used to solve this. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 17 12:30:03 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 17 Aug 2009 12:30:03 -0000 Subject: [Varnish] #539: unable to compare two headers in vcl In-Reply-To: <052.e5078e9a5cabfc9ba265d02f8e388a04@projects.linpro.no> References: <052.e5078e9a5cabfc9ba265d02f8e388a04@projects.linpro.no> Message-ID: <061.ae9ffc34c53c8ea69d733c8544f9c048@projects.linpro.no> #539: unable to compare two headers in vcl ----------------------+----------------------------------------------------- Reporter: hamnis | Owner: phk Type: defect | Status: closed Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: critical | Resolution: fixed Keywords: | ----------------------+----------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: The exact scenario you propose is unlikely to be possible, because compiling regular expressions is horribly expensive (malloc overhead amongst other things). The other half of it is that it is so suicidal to allow a client to send you regexps that it defies description. (In VCL we precompile the regular expressions at load-time to minimize this overhead.) In r4186 I have changed the VCL compiler so that the "==" and "!=" operators take general strings as arguments, so if you replace your "~" with "==", the above should now be possible. 
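(Concretely, with r4186 the snippet from the original report should work once "~" is replaced by "==" — an exact string comparison, with no regex compilation:)

{{{
sub vcl_fetch {
    if (obj.http.etag == req.http.if-none-match) {
        error 304 "Not Modified";
    }
}
}}}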
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 17 12:44:47 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 17 Aug 2009 12:44:47 -0000 Subject: [Varnish] #503: Assertion failure when trying to fetch a large object In-Reply-To: <051.365f2683f5c3171e1cc19a330c44dbab@projects.linpro.no> References: <051.365f2683f5c3171e1cc19a330c44dbab@projects.linpro.no> Message-ID: <060.a56853dcf0f1b600cbd7e18413d861ae@projects.linpro.no> #503: Assertion failure when trying to fetch a large object --------------------+------------------------------------------------------- Reporter: Sesse | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: wontfix Keywords: | --------------------+------------------------------------------------------- Changes (by phk): * status: reopened => closed * resolution: => wontfix Comment: Be aware that Varnish cannot deal with objects larger than the VM space it has. For a 32-bit OS/hw combo, that practically limits you to objects well short of 100MB, unless you only have a single object to serve. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 17 12:51:46 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 17 Aug 2009 12:51:46 -0000 Subject: [Varnish] #538: [varnish-2.0.4] Potential Memory Leak In-Reply-To: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> References: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> Message-ID: <064.6caee90213c9839f369a3fffb5a4f017@projects.linpro.no> #538: [varnish-2.0.4] Potential Memory Leak -------------------------------+-------------------------------------------- Reporter: pprocacci | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: Keywords: Memory Leak 2.0.4 | -------------------------------+-------------------------------------------- Comment (by phk): Can you give us your "varnishstat -1" output? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 17 13:37:17 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 17 Aug 2009 13:37:17 -0000 Subject: [Varnish] #356: v00017.vtc fails on x86_64 In-Reply-To: <051.f99bddfcf0e83b27a73b71e0bb8abdaf@projects.linpro.no> References: <051.f99bddfcf0e83b27a73b71e0bb8abdaf@projects.linpro.no> Message-ID: <060.9d4cb9b7ab1d5616365348d5088d7a13@projects.linpro.no> #356: v00017.vtc fails on x86_64 -------------------------------+-------------------------------------------- Reporter: wiebe | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 2.0 Severity: normal | Resolution: fixed Keywords: v00017.vtc x86_64 | -------------------------------+-------------------------------------------- Changes (by phk): * status: new => closed * resolution: => fixed Comment: If I spot this correctly, you managed to look up the hostname "en.lille.nisse.rejste.", which, if true, makes me wonder if my kids should mail their 
xmas-wish-lists to you :-) Either way, I think this issue is DNS related somehow, I do not think it is a varnish issue. If you can reproduce, run varnishtest with the -v option and this specific test, and reopen the ticket with the output. PS: For people without interscandinavian language: "En lille Nisse Rejste" is a children's song about a gnome travelling the world to find "the greatest man of all". -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 17 14:27:00 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 17 Aug 2009 14:27:00 -0000 Subject: [Varnish] #536: Suggested regsub example for Plone VirtualHostBase replacement In-Reply-To: <051.0e9f3385eac11f507cdecf99eec014de@projects.linpro.no> References: <051.0e9f3385eac11f507cdecf99eec014de@projects.linpro.no> Message-ID: <060.519faf64691860acec8a44066955761e@projects.linpro.no> #536: Suggested regsub example for Plone VirtualHostBase replacement ---------------------------+------------------------------------------------ Reporter: ned14 | Owner: Type: documentation | Status: new Priority: lowest | Milestone: Varnish 2.1 release Component: documentation | Version: trunk Severity: trivial | Resolution: Keywords: | ---------------------------+------------------------------------------------ Comment (by heureso): Awesome, thanks! I wrote up a how-to on plone.org using these suggestions as the basis for the Varnish component: http://plone.org/documentation/how-to/plone-behind-varnish-using-pound-for-ssl Best, Jeremy. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 17 14:55:48 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 17 Aug 2009 14:55:48 -0000 Subject: [Varnish] #517: Syntax failure in VCLExampleLongerCaching In-Reply-To: <057.8e19e80de3196cd7fb5df0aa38d40020@projects.linpro.no> References: <057.8e19e80de3196cd7fb5df0aa38d40020@projects.linpro.no> Message-ID: <066.14e29d52c379df54f2252bc3fb142b93@projects.linpro.no> #517: Syntax failure in VCLExampleLongerCaching -------------------------+-------------------------------------------------- Reporter: mark.breyer | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: website | Version: trunk Severity: normal | Resolution: fixed Keywords: | -------------------------+-------------------------------------------------- Changes (by tfheen): * status: new => closed * resolution: => fixed -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 17 17:12:50 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 17 Aug 2009 17:12:50 -0000 Subject: [Varnish] #542: varnishlog dies with segfault Message-ID: <053.b91218c85a617d1daf064681cf895824@projects.linpro.no> #542: varnishlog dies with segfault ------------------------+--------------------------------------------------- Reporter: wfelipe | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishlog | Version: 2.0 Severity: normal | Keywords: ------------------------+--------------------------------------------------- I was doing a stress test on varnish, and when I ran varnishlog it died with a segfault. It does not always generate this error; if I run it again just after the segfault, it produces the output. # gdb /opt/varnish/bin/varnishlog /opt/varnish/bin/core.7935 GNU gdb Red Hat Linux (6.5-25.el5rh) Copyright (C) 2006 Free Software Foundation, Inc. 
GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu"...Using host libthread_db library "/lib64/libthread_db.so.1". Reading symbols from /opt/varnish-2.0.4/lib/libvarnish.so.1...done. Loaded symbols for /opt/varnish-2.0.4/lib/libvarnish.so.1 Reading symbols from /lib64/librt.so.1...done. Loaded symbols for /lib64/librt.so.1 Reading symbols from /lib64/libnsl.so.1...done. Loaded symbols for /lib64/libnsl.so.1 Reading symbols from /lib64/libm.so.6...done. Loaded symbols for /lib64/libm.so.6 Reading symbols from /opt/varnish-2.0.4/lib/libvarnishcompat.so.1...done. Loaded symbols for /opt/varnish-2.0.4/lib/libvarnishcompat.so.1 Reading symbols from /opt/varnish-2.0.4/lib/libvarnishapi.so.1...done. Loaded symbols for /opt/varnish-2.0.4/lib/libvarnishapi.so.1 Reading symbols from /lib64/libpthread.so.0...done. Loaded symbols for /lib64/libpthread.so.0 Reading symbols from /lib64/libc.so.6...done. Loaded symbols for /lib64/libc.so.6 Reading symbols from /lib64/ld-linux-x86-64.so.2...done. Loaded symbols for /lib64/ld-linux-x86-64.so.2 Core was generated by `./varnishlog'. Program terminated with signal 11, Segmentation fault. #0 VSL_OpenLog (vd=0x179eb010, varnish_name=) at shmlog.c:208 208 WSLR(struct worker *w, enum shmlogtag tag, int id, txt t) (gdb) backtrace full #0 VSL_OpenLog (vd=0x179eb010, varnish_name=) at shmlog.c:208 p = (unsigned char *) 0x2aaab00e4117
__PRETTY_FUNCTION__ = "VSL_OpenLog" #1 0x00000000004013e6 in main (argc=1, argv=0x7fffe827bcc8) at varnishlog.c:367 c = -1341243113 a_flag = 0 D_flag = 0 o_flag = u_flag = 0 n_arg = 0x0 P_arg = 0x0 w_arg = 0x0 pfh = vd = (struct VSL_data *) 0x179eb010 I hope it helps -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 17 17:14:39 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 17 Aug 2009 17:14:39 -0000 Subject: [Varnish] #542: varnishlog dies with segfault In-Reply-To: <053.b91218c85a617d1daf064681cf895824@projects.linpro.no> References: <053.b91218c85a617d1daf064681cf895824@projects.linpro.no> Message-ID: <062.0e35c1c4f5bf16181b18e0deb64fa2ee@projects.linpro.no> #542: varnishlog dies with segfault ------------------------+--------------------------------------------------- Reporter: wfelipe | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishlog | Version: 2.0 Severity: normal | Resolution: Keywords: | ------------------------+--------------------------------------------------- Comment (by wfelipe): I'm not used to trac formatting :/ just reposting {{{ # gdb /opt/varnish/bin/varnishlog /opt/varnish/bin/core.7935 GNU gdb Red Hat Linux (6.5-25.el5rh) Copyright (C) 2006 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu"...Using host libthread_db library "/lib64/libthread_db.so.1". Reading symbols from /opt/varnish-2.0.4/lib/libvarnish.so.1...done. Loaded symbols for /opt/varnish-2.0.4/lib/libvarnish.so.1 Reading symbols from /lib64/librt.so.1...done. Loaded symbols for /lib64/librt.so.1 Reading symbols from /lib64/libnsl.so.1...done. 
Loaded symbols for /lib64/libnsl.so.1 Reading symbols from /lib64/libm.so.6...done. Loaded symbols for /lib64/libm.so.6 Reading symbols from /opt/varnish-2.0.4/lib/libvarnishcompat.so.1...done. Loaded symbols for /opt/varnish-2.0.4/lib/libvarnishcompat.so.1 Reading symbols from /opt/varnish-2.0.4/lib/libvarnishapi.so.1...done. Loaded symbols for /opt/varnish-2.0.4/lib/libvarnishapi.so.1 Reading symbols from /lib64/libpthread.so.0...done. Loaded symbols for /lib64/libpthread.so.0 Reading symbols from /lib64/libc.so.6...done. Loaded symbols for /lib64/libc.so.6 Reading symbols from /lib64/ld-linux-x86-64.so.2...done. Loaded symbols for /lib64/ld-linux-x86-64.so.2 Core was generated by `./varnishlog'. Program terminated with signal 11, Segmentation fault. #0 VSL_OpenLog (vd=0x179eb010, varnish_name=) at shmlog.c:208 208 WSLR(struct worker *w, enum shmlogtag tag, int id, txt t) (gdb) backtrace full #0 VSL_OpenLog (vd=0x179eb010, varnish_name=) at shmlog.c:208 p = (unsigned char *) 0x2aaab00e4117
__PRETTY_FUNCTION__ = "VSL_OpenLog" #1 0x00000000004013e6 in main (argc=1, argv=0x7fffe827bcc8) at varnishlog.c:367 c = -1341243113 a_flag = 0 D_flag = 0 o_flag = u_flag = 0 n_arg = 0x0 P_arg = 0x0 w_arg = 0x0 pfh = vd = (struct VSL_data *) 0x179eb010 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 17 19:49:53 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 17 Aug 2009 19:49:53 -0000 Subject: [Varnish] #518: Default backend health right after launch In-Reply-To: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> References: <049.e1162e9b4de6546075bcb636c7657920@projects.linpro.no> Message-ID: <058.95222b6d9557c9f5b158873c9511631f@projects.linpro.no> #518: Default backend health right after launch --------------------+------------------------------------------------------- Reporter: rts | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Resolution: fixed Keywords: | --------------------+------------------------------------------------------- Comment (by kb): Great, that makes a lot of sense. Thanks! 
Ken -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Mon Aug 17 20:44:31 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Mon, 17 Aug 2009 20:44:31 -0000 Subject: [Varnish] #538: [varnish-2.0.4] Potential Memory Leak In-Reply-To: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> References: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> Message-ID: <064.1bf2a3f22e4d98367bca404a0a61caa5@projects.linpro.no> #538: [varnish-2.0.4] Potential Memory Leak -------------------------------+-------------------------------------------- Reporter: pprocacci | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: Keywords: Memory Leak 2.0.4 | -------------------------------+-------------------------------------------- Comment (by barnaclebob): I also have this problem. I did not notice it before, as we were not serving proper cache headers anywhere; now we are. {{{ top - 15:40:13 up 39 days, 14:44, 2 users, load average: 0.39, 0.21, 0.11 Mem: 8181520k total, 8130296k used, 51224k free, 54624k buffers Swap: 1052248k total, 41220k used, 1011028k free, 7485492k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 16602 varnish 15 0 50.3g 3.7g 3.5g S 3.7 47.7 3:05.06 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T :6082 -t 120 -p thread_pools 4 -p lru_interval 120 -h classic,350003 -p obj_workspace 4096 -w 50,2000,120 -u varnish -g varnish -s file,/var/lib/varnish/varnish_storage.bin,30% }}} varnish stat: {{{ karl at fe01:~$ varnishstat -1 uptime 8145 . 
Child uptime client_conn 924273 113.48 Client connections accepted client_req 924260 113.48 Client requests received cache_hit 327260 40.18 Cache hits cache_hitpass 6028 0.74 Cache hits for pass cache_miss 341935 41.98 Cache misses backend_conn 597001 73.30 Backend connections success backend_unhealthy 0 0.00 Backend connections not attempted backend_busy 0 0.00 Backend connections too many backend_fail 0 0.00 Backend connections failures backend_reuse 482556 59.25 Backend connections reuses backend_recycle 576729 70.81 Backend connections recycles backend_unused 0 0.00 Backend connections unused n_srcaddr 3 . N struct srcaddr n_srcaddr_act 1 . N active struct srcaddr n_sess_mem 263 . N struct sess_mem n_sess 35 . N struct sess n_object 102049 . N struct object n_objecthead 102124 . N struct objecthead n_smf 224940 . N struct smf n_smf_frag 20472 . N small free smf n_smf_large 1 . N large free smf n_vbe_conn 55 . N struct vbe_conn n_bereq 238 . N struct bereq n_wrk 200 . N worker threads n_wrk_create 333 0.04 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 528 0.06 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 1 . N backends n_expired 239953 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 58306 . N LRU moved objects n_deathrow 0 . 
N objects on deathrow losthdr 42 0.01 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 880956 108.16 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 924241 113.47 Total Sessions s_req 924232 113.47 Total Requests s_pipe 3 0.00 Total pipe s_pass 255053 31.31 Total pass s_fetch 596914 73.29 Total fetch s_hdrbytes 353891229 43448.89 Total header bytes s_bodybytes 5969771056 732936.90 Total body bytes sess_closed 924241 113.47 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 0 0.00 Session Linger sess_herd 0 0.00 Session herd shm_records 68151008 8367.22 SHM records shm_writes 5265060 646.42 SHM writes shm_flushes 23615 2.90 SHM flushes due to overflow shm_cont 2762 0.34 SHM MTX contention shm_cycles 28 0.00 SHM cycles through buffer sm_nreq 1155811 141.90 allocator requests sm_nobj 204467 . outstanding allocations sm_balloc 3320414208 . bytes allocated sm_bfree 48156446720 . bytes free sma_nreq 0 0.00 SMA allocator requests sma_nobj 0 . SMA outstanding allocations sma_nbytes 0 . SMA outstanding bytes sma_balloc 0 . SMA bytes allocated sma_bfree 0 . SMA bytes free sms_nreq 53 0.01 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 172992 . SMS bytes allocated sms_bfree 172992 . SMS bytes freed backend_req 596988 73.30 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 2612 . 
N total active purges n_purge_add 2631 0.32 N new purges added n_purge_retire 19 0.00 N old purges deleted n_purge_obj_test 108700 13.35 N objects tested n_purge_re_test 22665736 2782.78 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 0 0.00 HCB Lookups without lock hcb_lock 0 0.00 HCB Lookups with lock hcb_insert 0 0.00 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) }}} VCL: {{{ backend default { .host = "localhost"; .port = "8080"; } sub vcl_recv { if(req.http.Accept-Encoding ~ "gzip"){ set req.http.Accept-Encoding="gzip"; }else{ remove req.http.Accept-Encoding; } #unless we have the only 2 cookies we care about just remove the whole string if (!req.http.cookie ~ "_cookie_id\s*=" && ! req.http.cookie ~ "auth_token\s*=" && ! req.http.cookie ~ "flash\s*=") { remove req.http.cookie; } #doing purges this way causes a memory leak in varnish. do not ever use them #if (req.request == "PURGE") { # if (!client.ip ~ private) { # error 405 "Not allowed."; # } # lookup; #} set req.grace = 60s; if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* Non-RFC2616 or CONNECT which is weird. */ pipe; } if (req.request != "GET" && req.request != "HEAD") { /* We only deal with GET and HEAD by default */ pass; } if (req.http.Cookie) { /* Not cacheable by default */ pass; } lookup; } sub vcl_hit { #doing purges this way causes a memory leak in varnish. 
do not ever use them #if (req.request == "PURGE") { # set obj.ttl = 0s; # error 200 "Purged."; #} } sub vcl_miss { if (req.request == "PURGE") { error 404 "Not in cache."; } } sub vcl_hash { if(req.http.Accept-Encoding ~ "gzip"){ set req.hash+="gzip"; }else{ set req.hash+="nogzip"; } } sub vcl_fetch { set obj.grace = 60s; if(obj.http.cache-control && obj.http.x-bench-route){ set obj.http.cache-control = regsub(obj.http.cache-control ,"max-age\s*=\s*[0-9]+","max-age = 0"); } if(obj.http.x-ss-static){ unset obj.http.expires; unset obj.http.set-cookie; set obj.http.cache-control = "max-age = 300"; set obj.ttl = 1w; set obj.prefetch = -30s; deliver; } if (!obj.cacheable) { pass; } if (obj.http.Set-Cookie) { pass; } set obj.prefetch = -30s; deliver; } sub vcl_deliver { if(resp.http.x-ss-static){ unset resp.http.x-ss-static; set resp.http.age = "0"; } } sub vcl_error { set obj.http.Content-Type = "text/html; charset=utf-8"; synthetic {" ...snip... "}; deliver; } }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 18 00:07:12 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 18 Aug 2009 00:07:12 -0000 Subject: [Varnish] #543: vcl.discard causes crash Message-ID: <049.4e608f14d8d0a3da3ed0419af34b039e@projects.linpro.no> #543: vcl.discard causes crash ----------------------+----------------------------------------------------- Reporter: sky | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- {{{ Aug 17 23:55:31 varnish2 varnishd[16391]: Child (308) Panic message: Assert erro r in STV_alloc(), stevedore.c line 71: Condition((st) != NULL) not true. 
errn o = 104 (Connection reset by peer) thread = (cache-worker)sp = 0x7f93ffc02008 { fd = 1654, id = 1654, xid = 741934159, client = 208.68.167.133:56858, ste p = STP_FETCH, handling = discard, err_code = 200, err_reason = (null), ws = 0x7f93ffc02080 { id = "sess", {s,f,r,e} = {0x7f93ffc02820,,+3084,(ni l),+131072}, }, worker = 0x7f98f4b90bd0 { }, vcl = { srcname = { "input", "Default", "/etc/varnish/yellowiki.vcl", "/etc/varnish/SJC_backends.vcl", "/etc/varnish/routing.vcl", }, }, obj = 0x7fb986e90000 { refcnt = 1, xid = 741934159, ws = 0x7fb986e90028 { id = "obj", {s,f,r,e} = {0x7fb986e90358,,+238,(nil),+3240}, }, http = { ws = 0x7fb986e90028 { id = "obj", {s,f,r,e} = {0x7fb986e90358,,+238,(nil),+3240}, }, }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 18 04:31:37 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 18 Aug 2009 04:31:37 -0000 Subject: [Varnish] #296: Tuning keep-alive In-Reply-To: <048.4ee71176e67a3e1c5f8f1eabe50f450c@projects.linpro.no> References: <048.4ee71176e67a3e1c5f8f1eabe50f450c@projects.linpro.no> Message-ID: <057.2fc72dca0881575d41e5e549405eb7a2@projects.linpro.no> #296: Tuning keep-alive -------------------------+-------------------------------------------------- Reporter: ay | Owner: phk Type: enhancement | Status: closed Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Resolution: worksforme Keywords: | -------------------------+-------------------------------------------------- Comment (by docunexter): The parameter name is really sess_timeout, and it can be set on debian in /etc/default/varnish like this: {{{ DAEMON_OPTS="-a :80 \ -T :6082 \ -f /etc/varnish/default.vcl \ -u varnish -g varnish \ -p sess_timeout=60 \ -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,2G" }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 18 04:32:20 2009 From: 
varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 18 Aug 2009 04:32:20 -0000 Subject: [Varnish] #296: Tuning keep-alive In-Reply-To: <048.4ee71176e67a3e1c5f8f1eabe50f450c@projects.linpro.no> References: <048.4ee71176e67a3e1c5f8f1eabe50f450c@projects.linpro.no> Message-ID: <057.594e35751885c7f50834f1c140aebfca@projects.linpro.no> #296: Tuning keep-alive -------------------------+-------------------------------------------------- Reporter: ay | Owner: phk Type: enhancement | Status: closed Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Resolution: worksforme Keywords: | -------------------------+-------------------------------------------------- Comment (by docunexter): I have it set so high because this particular varnish is talking to another varnish that I manage so there will be lots of direct connections between the two. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 18 06:41:01 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 18 Aug 2009 06:41:01 -0000 Subject: [Varnish] #543: vcl.discard causes crash In-Reply-To: <049.4e608f14d8d0a3da3ed0419af34b039e@projects.linpro.no> References: <049.4e608f14d8d0a3da3ed0419af34b039e@projects.linpro.no> Message-ID: <058.a68559bc94f06e8bde01217bac159815@projects.linpro.no> #543: vcl.discard causes crash ----------------------+----------------------------------------------------- Reporter: sky | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Old description: > {{{ > Aug 17 23:55:31 varnish2 varnishd[16391]: Child (308) Panic message: > Assert erro > r in STV_alloc(), stevedore.c line 71: Condition((st) != NULL) not > true. 
errn > o = 104 (Connection reset by peer) thread = (cache-worker)sp = > 0x7f93ffc02008 { > fd = 1654, id = 1654, xid = 741934159, client = 208.68.167.133:56858, > ste > p = STP_FETCH, handling = discard, err_code = 200, err_reason = > (null), ws > = 0x7f93ffc02080 { id = "sess", {s,f,r,e} = > {0x7f93ffc02820,,+3084,(ni > l),+131072}, }, worker = 0x7f98f4b90bd0 { }, vcl = { > srcname > = { "input", "Default", > "/etc/varnish/yellowiki.vcl", > "/etc/varnish/SJC_backends.vcl", > "/etc/varnish/routing.vcl", }, }, obj = 0x7fb986e90000 { > refcnt = 1, xid = 741934159, ws = 0x7fb986e90028 { id = "obj", > {s,f,r,e} = {0x7fb986e90358,,+238,(nil),+3240}, }, http = { > ws = 0x7fb986e90028 { id = "obj", {s,f,r,e} = > {0x7fb986e90358,,+238,(nil),+3240}, }, > > }}} New description: {{{ Aug 17 23:55:31 varnish2 varnishd[16391]: Child (308) Panic message: Assert error in STV_alloc(), stevedore.c line 71: Condition((st) != NULL) not true. errno = 104 (Connection reset by peer) thread = (cache-worker) sp = 0x7f93ffc02008 { fd = 1654, id = 1654, xid = 741934159, client = 208.68.167.133:56858, step = STP_FETCH, handling = discard, err_code = 200, err_reason = (null), ws = 0x7f93ffc02080 { id = "sess", {s,f,r,e} = {0x7f93ffc02820,,+3084,(nil),+131072}, }, worker = 0x7f98f4b90bd0 { }, vcl = { srcname= { "input", "Default", "/etc/varnish/yellowiki.vcl", "/etc/varnish/SJC_backends.vcl", "/etc/varnish/routing.vcl", }, }, obj = 0x7fb986e90000 { refcnt = 1, xid = 741934159, ws = 0x7fb986e90028 { id = "obj", {s,f,r,e} = {0x7fb986e90358,,+238,(nil),+3240}, }, http = { ws = 0x7fb986e90028 { id = "obj", {s,f,r,e} = {0x7fb986e90358,,+238,(nil),+3240}, }, }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 18 06:44:48 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 18 Aug 2009 06:44:48 -0000 Subject: [Varnish] #538: [varnish-2.0.4] Potential Memory Leak In-Reply-To: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> 
References: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> Message-ID: <064.4bd2b44bbe40c11676fc74489153ea4c@projects.linpro.no> #538: [varnish-2.0.4] Potential Memory Leak -------------------------------+-------------------------------------------- Reporter: pprocacci | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: Keywords: Memory Leak 2.0.4 | -------------------------------+-------------------------------------------- Comment (by phk): @barnaclebob: did the mem-leak go away when you commented out the PURGE code in the VCL ? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 18 07:36:40 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 18 Aug 2009 07:36:40 -0000 Subject: [Varnish] #530: Constant Database (CDB) support in VCL In-Reply-To: <052.eeb88450d5fae863baf4ddfeb7686d37@projects.linpro.no> References: <052.eeb88450d5fae863baf4ddfeb7686d37@projects.linpro.no> Message-ID: <061.a43d7340c009d696a9a092a43a426e2a@projects.linpro.no> #530: Constant Database (CDB) support in VCL -------------------------+-------------------------------------------------- Reporter: albert | Owner: phk Type: enhancement | Status: closed Priority: low | Milestone: After Varnish 2.1 Component: varnishd | Version: trunk Severity: minor | Resolution: invalid Keywords: | -------------------------+-------------------------------------------------- Changes (by phk): * status: new => closed * resolution: => invalid Comment: I have moved this to the wiki page where we track feature requests: PostTwoShoppingList (We try to use the ticket system only for bugs) I am not sure I know how CDB should work in this context, but feel free to edit the above page and explain it. 
(If you don't have wiki edit permissions, drop me an email) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 18 13:14:54 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 18 Aug 2009 13:14:54 -0000 Subject: [Varnish] #522: Odd TCP reset problems with trunk 4080 In-Reply-To: <052.9e4f22a7767a39fc57cd165effa2904d@projects.linpro.no> References: <052.9e4f22a7767a39fc57cd165effa2904d@projects.linpro.no> Message-ID: <061.cb33caf902efc884bd41107d753d582b@projects.linpro.no> #522: Odd TCP reset problems with trunk 4080 ----------------------+----------------------------------------------------- Reporter: anders | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by anders): It seems to help a lot, according to one of our journalists who had problems loading www.aftenposten.no. He still had a slight problem after I applied this update, but not much. It could also be the quality of his Internet connection, I suppose. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 18 18:20:51 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 18 Aug 2009 18:20:51 -0000 Subject: [Varnish] #538: [varnish-2.0.4] Potential Memory Leak In-Reply-To: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> References: <055.fcce2615f8b4e303e8120e764697ec47@projects.linpro.no> Message-ID: <064.5c14f3d4d00d7d2a8503dead2a9e5380@projects.linpro.no> #538: [varnish-2.0.4] Potential Memory Leak -------------------------------+-------------------------------------------- Reporter: pprocacci | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: major | Resolution: Keywords: Memory Leak 2.0.4 | -------------------------------+-------------------------------------------- Comment (by barnaclebob): @phk When I pasted the config above, I forgot that it was the one from after I made those changes to the VCL, when the servers ground to a halt. So far, with those commented out, 24 hours later I do not see the problem returning. Sorry for the faulty info. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Tue Aug 18 20:52:57 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Tue, 18 Aug 2009 20:52:57 -0000 Subject: [Varnish] #534: Threads stuck in trunk In-Reply-To: <052.0e28aca60f88a160e1d7f78d585e88f5@projects.linpro.no> References: <052.0e28aca60f88a160e1d7f78d585e88f5@projects.linpro.no> Message-ID: <061.8ace2249a39378849da3537fd31bc20b@projects.linpro.no> #534: Threads stuck in trunk ---------------------------+------------------------------------------------ Reporter: anders | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: critical | Resolution: Keywords: threads stuck | ---------------------------+------------------------------------------------ Comment (by anders): PHK asked for a backtrace from one of the stuck threads. I got one now, from trunk/4144. First of all, info threads shows most threads are in __error: {{{ 3298 Thread 0x806b418f0 (LWP 101198) 0x0000000800ac929c in __error () from /lib/libthr.so.3 3297 Thread 0x80bb703d0 (LWP 101199) 0x0000000800ac929c in __error () from /lib/libthr.so.3 3296 Thread 0x806b3fb40 (LWP 101200) 0x0000000800ac929c in __error () from /lib/libthr.so.3 3295 Thread 0x82cf45500 (LWP 101204) 0x0000000800ac929c in __error () from /lib/libthr.so.3 3294 Thread 0x8085e80b0 (LWP 101206) 0x0000000800ac929c in __error () from /lib/libthr.so.3 3293 Thread 0x806305500 (LWP 100523) 0x0000000800ac929c in __error () from /lib/libthr.so.3 3292 Thread 0xb659d4050 (LWP 100648) 0x0000000800ac929c in __error () from /lib/libthr.so.3 3291 Thread 0x806305050 (LWP 100662) 0x0000000800ac929c in __error () from /lib/libthr.so.3 3290 Thread 0x80cc83d40 (LWP 100731) 0x0000000800ac929c in __error () from /lib/libthr.so.3 }}} Some backtraces for these threads in __error: 1) {{{ (gdb) thread 3291 [Switching to thread 3291 (Thread 0x806305050 (LWP 100662))]#0 0x0000000800ac929c in 
__error () from /lib/libthr.so.3 (gdb) bt #0 0x0000000800ac929c in __error () from /lib/libthr.so.3 #1 0x0000000800ac8f8c in __error () from /lib/libthr.so.3 #2 0x0000000800ac41eb in pthread_mutex_getyieldloops_np () from /lib/libthr.so.3 #3 0x000000000041f7e2 in Lck__Lock (lck=Variable "lck" is not available. ) at cache_lck.c:78 #4 0x000000000042c00b in hcb_lookup (sp=0x809cc0008, noh=0x17e0ebf600) at hash_critbit.c:452 #5 0x000000000041ae3a in HSH_Lookup (sp=0x809cc0008, poh=0x7fffd4aa01e0) at cache_hash.c:443 #6 0x0000000000410a42 in cnt_lookup (sp=0x809cc0008) at cache_center.c:734 #7 0x0000000000412ad3 in CNT_Session (sp=0x809cc0008) at steps.h:38 #8 0x0000000000422121 in wrk_do_cnt_sess (w=0x7fffd4aa62c0, priv=Variable "priv" is not available. ) at cache_pool.c:281 #9 0x0000000000421417 in wrk_thread_real (qp=0x80100f740, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:176 #10 0x0000000800abf4d1 in pthread_getprio () from /lib/libthr.so.3 #11 0x00007fffd48a7000 in ?? () Error accessing memory address 0x7fffd4aa7000: Bad address. }}} 2) {{{ (gdb) thread 3317 [Switching to thread 3317 (Thread 0x806b4bc70 (LWP 101131))]#0 0x0000000800ac929c in __error () from /lib/libthr.so.3 (gdb) bt #0 0x0000000800ac929c in __error () from /lib/libthr.so.3 #1 0x0000000800ac8f8c in __error () from /lib/libthr.so.3 #2 0x0000000800ac41eb in pthread_mutex_getyieldloops_np () from /lib/libthr.so.3 #3 0x000000000041f7e2 in Lck__Lock (lck=Variable "lck" is not available. ) at cache_lck.c:78 #4 0x000000000042c00b in hcb_lookup (sp=0x827034008, noh=0x17e0dfd200) at hash_critbit.c:452 #5 0x000000000041ae3a in HSH_Lookup (sp=0x827034008, poh=0x7fffad9681e0) at cache_hash.c:443 #6 0x0000000000410a42 in cnt_lookup (sp=0x827034008) at cache_center.c:734 #7 0x0000000000412ad3 in CNT_Session (sp=0x827034008) at steps.h:38 #8 0x0000000000422121 in wrk_do_cnt_sess (w=0x7fffad96e2c0, priv=Variable "priv" is not available. 
) at cache_pool.c:281 #9 0x0000000000421417 in wrk_thread_real (qp=0x80100f6a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:176 #10 0x0000000800abf4d1 in pthread_getprio () from /lib/libthr.so.3 #11 0x00007fffad76f000 in ?? () Error accessing memory address 0x7fffad96f000: Bad address. }}} 3) {{{ (gdb) frame 3489 Error accessing memory address 0x7fffad96f000: Bad address. (gdb) bt #0 0x0000000800ac929c in __error () from /lib/libthr.so.3 #1 0x0000000800ac8f8c in __error () from /lib/libthr.so.3 #2 0x0000000800ac41eb in pthread_mutex_getyieldloops_np () from /lib/libthr.so.3 #3 0x000000000041f7e2 in Lck__Lock (lck=Variable "lck" is not available. ) at cache_lck.c:78 #4 0x000000000042c00b in hcb_lookup (sp=0x827034008, noh=0x17e0dfd200) at hash_critbit.c:452 #5 0x000000000041ae3a in HSH_Lookup (sp=0x827034008, poh=0x7fffad9681e0) at cache_hash.c:443 #6 0x0000000000410a42 in cnt_lookup (sp=0x827034008) at cache_center.c:734 #7 0x0000000000412ad3 in CNT_Session (sp=0x827034008) at steps.h:38 #8 0x0000000000422121 in wrk_do_cnt_sess (w=0x7fffad96e2c0, priv=Variable "priv" is not available. ) at cache_pool.c:281 #9 0x0000000000421417 in wrk_thread_real (qp=0x80100f6a0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:176 #10 0x0000000800abf4d1 in pthread_getprio () from /lib/libthr.so.3 #11 0x00007fffad76f000 in ?? () Error accessing memory address 0x7fffad96f000: Bad address. }}} 4) {{{ (gdb) thread 2583 [Switching to thread 2583 (Thread 0x17e3202ba0 (LWP 102654))]#0 0x0000000800ac929c in __error () from /lib/libthr.so.3 (gdb) bt #0 0x0000000800ac929c in __error () from /lib/libthr.so.3 #1 0x0000000800ac8f8c in __error () from /lib/libthr.so.3 #2 0x0000000800ac41eb in pthread_mutex_getyieldloops_np () from /lib/libthr.so.3 #3 0x000000000041f7e2 in Lck__Lock (lck=Variable "lck" is not available. 
) at cache_lck.c:78 #4 0x000000000042c00b in hcb_lookup (sp=0x828d84008, noh=0x17e0f5c380) at hash_critbit.c:452 #5 0x000000000041ae3a in HSH_Lookup (sp=0x828d84008, poh=0x7fff4e6701e0) at cache_hash.c:443 #6 0x0000000000410a42 in cnt_lookup (sp=0x828d84008) at cache_center.c:734 #7 0x0000000000412ad3 in CNT_Session (sp=0x828d84008) at steps.h:38 #8 0x0000000000422121 in wrk_do_cnt_sess (w=0x7fff4e6762c0, priv=Variable "priv" is not available. ) at cache_pool.c:281 #9 0x0000000000421417 in wrk_thread_real (qp=0x80100f420, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:176 #10 0x0000000800abf4d1 in pthread_getprio () from /lib/libthr.so.3 #11 0x0000000000000000 in ?? () Error accessing memory address 0x7fff4e677000: Bad address. }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Wed Aug 19 20:37:20 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Wed, 19 Aug 2009 20:37:20 -0000 Subject: [Varnish] #544: Some VCL variables may be missing in wiki VCL doc Message-ID: <054.9235ccfcdad3a2afeb59b52d57778c38@projects.linpro.no> #544: Some VCL variables may be missing in wiki VCL doc ----------------------+----------------------------------------------------- Reporter: walraven | Type: documentation Status: new | Priority: low Milestone: | Component: documentation Version: trunk | Severity: normal Keywords: | ----------------------+----------------------------------------------------- Seems some variables are not covered in the VCL wiki documentation. obj.hash and obj.prefetch as far as I can see. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 20 08:21:49 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 20 Aug 2009 08:21:49 -0000 Subject: [Varnish] #296: Tuning keep-alive In-Reply-To: <048.4ee71176e67a3e1c5f8f1eabe50f450c@projects.linpro.no> References: <048.4ee71176e67a3e1c5f8f1eabe50f450c@projects.linpro.no> Message-ID: <057.797660edb9140773f5dac0f7bc19b7a7@projects.linpro.no> #296: Tuning keep-alive -------------------------+-------------------------------------------------- Reporter: ay | Owner: phk Type: enhancement | Status: closed Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Resolution: worksforme Keywords: | -------------------------+-------------------------------------------------- Comment (by kristian): While this is a bug tracker not a forum, I really want to warn you: a sess_timeout of 60 is just begging for thread problems. Might work for a low-traffic site but it is VERY easy to run out of file descriptors and the like if you allow sessions to last 60 seconds. Feel free to bring this up on one of the mailing lists for further discussion. 
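kristian's warning can be made concrete with a rough back-of-the-envelope estimate (the request rate below is hypothetical, chosen only for illustration):

```python
# Little's law: concurrent sessions ~= arrival rate * session lifetime.
# With sess_timeout=60, idle keep-alive sessions pile up, each holding
# a file descriptor until the timeout expires.

def idle_sessions(requests_per_sec, sess_timeout_sec):
    """Estimate how many sessions (and fds) are held open at any instant."""
    return requests_per_sec * sess_timeout_sec

# Hypothetical traffic of 200 new sessions per second:
print(idle_sessions(200, 60))  # 12000 fds tied up with a 60 s timeout
print(idle_sessions(200, 5))   # 1000 fds with a more conservative 5 s
```

Even at a modest 200 sessions per second, a 60-second timeout keeps on the order of 12,000 file descriptors open at once, which is exactly the exhaustion scenario described above.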
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 20 11:43:06 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 20 Aug 2009 11:43:06 -0000 Subject: [Varnish] #534: Threads stuck in trunk In-Reply-To: <052.0e28aca60f88a160e1d7f78d585e88f5@projects.linpro.no> References: <052.0e28aca60f88a160e1d7f78d585e88f5@projects.linpro.no> Message-ID: <061.c4ec73d1abbdd70772ad963ad71ad619@projects.linpro.no> #534: Threads stuck in trunk ---------------------------+------------------------------------------------ Reporter: anders | Owner: phk Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: trunk Severity: critical | Resolution: Keywords: threads stuck | ---------------------------+------------------------------------------------ Comment (by anders): PHK wanted to know more about the threads which are not in __error, so here we go. Running trunk/4144, I got the thread problem with all threads used up. Summarizing the threads, I get this list (count in left column): {{{ 1 in kevent () 1 in nanosleep () 1 in writev () 2 in poll () 3995 in __error () }}} The threads of interest: {{{ 4000 Thread 0x8010020b0 (LWP 100434) 0x0000000800d7b07c in poll () 3999 Thread 0x8010023d0 (LWP 100059) 0x0000000800db0e0c in nanosleep () 3994 Thread 0x801002a10 (LWP 100104) 0x0000000800db1f8c in kevent () 3993 Thread 0x801002ba0 (LWP 100110) 0x0000000800d7b07c in poll () 1916 Thread 0x187aa80ba0 (LWP 103321) 0x0000000800db67ec in writev () }}} Backtrace from these: {{{ (gdb) thread 4000 [Switching to thread 4000 (Thread 0x8010020b0 (LWP 100434))]#0 0x0000000800d7b07c in poll () from /lib/libc.so.7 (gdb) bt #0 0x0000000800d7b07c in poll () from /lib/libc.so.7 #1 0x0000000800ac180e in poll () from /lib/libthr.so.3 #2 0x000000000041308f in CLI_Run () at cache_cli.c:157 #3 0x000000000041ed81 in child_main () at cache_main.c:141 #4 0x000000000042d57d in start_child (cli=Variable "cli" is not available. 
) at mgt_child.c:318 #5 0x000000000042d8f4 in MGT_Run () at mgt_child.c:545 #6 0x000000000043b5f6 in main (argc=35, argv=0x7fffffffea20) at varnishd.c:737 (gdb) thread 3999 [Switching to thread 3999 (Thread 0x8010023d0 (LWP 100059))]#0 0x0000000800db0e0c in nanosleep () from /lib/libc.so.7 (gdb) bt #0 0x0000000800db0e0c in nanosleep () from /lib/libc.so.7 #1 0x0000000800ac1915 in nanosleep () from /lib/libthr.so.3 #2 0x0000000800682a81 in TIM_sleep (t=Variable "t" is not available. ) at time.c:162 #3 0x0000000000421c03 in wrk_herdtimer_thread (priv=Variable "priv" is not available. ) at cache_pool.c:429 #4 0x0000000800abf4d1 in pthread_getprio () from /lib/libthr.so.3 #5 0x0000000000000000 in ?? () Error accessing memory address 0x7fffffbff000: Bad address. (gdb) thread 3994 [Switching to thread 3994 (Thread 0x801002a10 (LWP 100104))]#0 0x0000000800db1f8c in kevent () from /lib/libc.so.7 (gdb) bt #0 0x0000000800db1f8c in kevent () from /lib/libc.so.7 #1 0x000000000040a9d2 in vca_kqueue_main (arg=Variable "arg" is not available. ) at cache_waiter_kqueue.c:172 #2 0x0000000800abf4d1 in pthread_getprio () from /lib/libthr.so.3 #3 0x0000000000000000 in ?? () Error accessing memory address 0x7fffff1fa000: Bad address. (gdb) thread 3993 [Switching to thread 3993 (Thread 0x801002ba0 (LWP 100110))]#0 0x0000000800d7b07c in poll () from /lib/libc.so.7 (gdb) bt #0 0x0000000800d7b07c in poll () from /lib/libc.so.7 #1 0x0000000800ac180e in poll () from /lib/libthr.so.3 #2 0x00000000004098f4 in vca_acct (arg=Variable "arg" is not available. ) at cache_acceptor.c:215 #3 0x0000000800abf4d1 in pthread_getprio () from /lib/libthr.so.3 #4 0x0000000000000000 in ?? () Error accessing memory address 0x7ffffeff9000: Bad address. 
(gdb) thread 1916 [Switching to thread 1916 (Thread 0x187aa80ba0 (LWP 103321))]#0 0x0000000800db67ec in writev () from /lib/libc.so.7 (gdb) bt #0 0x0000000800db67ec in writev () from /lib/libc.so.7 #1 0x0000000800ac103e in writev () from /lib/libthr.so.3 #2 0x000000000042a06f in WRW_Flush (w=0x7ffefabda2c0) at cache_wrw.c:104 #3 0x000000000042a4ae in WRW_FlushRelease (w=0x7ffefabda2c0) at cache_wrw.c:124 #4 0x000000000042246a in RES_WriteObj (sp=0x187de2f008) at cache_response.c:197 #5 0x0000000000411e4a in cnt_deliver (sp=0x187de2f008) at cache_center.c:201 #6 0x0000000000412a43 in CNT_Session (sp=0x187de2f008) at steps.h:42 #7 0x0000000000422121 in wrk_do_cnt_sess (w=0x7ffefabda2c0, priv=Variable "priv" is not available. ) at cache_pool.c:281 #8 0x0000000000421417 in wrk_thread_real (qp=0x80100f7e0, shm_workspace=Variable "shm_workspace" is not available. ) at cache_pool.c:176 #9 0x0000000800abf4d1 in pthread_getprio () from /lib/libthr.so.3 #10 0x0000000000000000 in ?? () Error accessing memory address 0x7ffefabdb000: Bad address. }}} Anything else I can check? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 27 08:32:23 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 27 Aug 2009 08:32:23 -0000 Subject: [Varnish] #545: synthetic screws national characters Message-ID: <050.298ac45e938bdbf50944b8f898279e89@projects.linpro.no> #545: synthetic screws national characters ----------------------+----------------------------------------------------- Reporter: kolo | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Component: varnishd | Version: 2.0 Severity: normal | Keywords: ----------------------+----------------------------------------------------- part of vlc: {{{ set obj.http.Content-Type = "text/html; charset=utf-8"; synthetic {"
Děkujeme za pochopení, Tým

"}; }}} returned from browser: {{{
D?77777704?77777633kujeme za pochopen?77777703?77777655, T?77777703?77777675m

}}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 27 09:14:19 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 27 Aug 2009 09:14:19 -0000 Subject: [Varnish] #546: Varnish eating up my memory Message-ID: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> #546: Varnish eating up my memory ----------------------+----------------------------------------------------- Reporter: hp197 | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Keywords: ----------------------+----------------------------------------------------- Varnish is eating up my memory for unknown reason. I told it to not use more than 2G malloc, but top shows my it is using 5.7G TOP Output: {{{ PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 29529 nobody 18 0 6277m 5.7g 80m S 16.4 24.3 215:40.63 /usr/local/varnish/sbin/varnishd ....... -s malloc,2G ....... }}} Varnishstats -1: {{{ uptime 71321 . Child uptime client_conn 79040520 1108.24 Client connections accepted client_req 79020353 1107.95 Client requests received cache_hit 78868507 1105.82 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 141920 1.99 Cache misses backend_conn 141924 1.99 Backend connections success backend_unhealthy 0 0.00 Backend connections not attempted backend_busy 0 0.00 Backend connections too many backend_fail 0 0.00 Backend connections failures backend_reuse 131429 1.84 Backend connections reuses backend_recycle 134474 1.89 Backend connections recycles backend_unused 0 0.00 Backend connections unused n_srcaddr 2 . N struct srcaddr n_srcaddr_act 18446744073709551317 . N active struct srcaddr n_sess_mem 172 . N struct sess_mem n_sess 5085 . N struct sess n_object 14007 . N struct object n_objecthead 14012 . N struct objecthead n_smf 0 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 0 . N large free smf n_vbe_conn 4 . 
N struct vbe_conn n_bereq 79 . N struct bereq n_wrk 10 . N worker threads n_wrk_create 115 0.00 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 165 0.00 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 1694 0.02 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 4 . N backends n_expired 127947 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 15422829 . N LRU moved objects n_deathrow 0 . N objects on deathrow losthdr 472 0.01 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 55193830 773.88 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 79040520 1108.24 Total Sessions s_req 79040401 1108.23 Total Requests s_pipe 4 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 141859 1.99 Total fetch s_hdrbytes 21286131093 298455.31 Total header bytes s_bodybytes 111229645966 1559563.75 Total body bytes sess_closed 79040484 1108.24 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 0 0.00 Session Linger sess_herd 0 0.00 Session herd shm_records 3260786558 45719.87 SHM records shm_writes 395398785 5543.93 SHM writes shm_flushes 10374 0.15 SHM flushes due to overflow shm_cont 2903710 40.71 SHM MTX contention shm_cycles 1209 0.02 SHM cycles through buffer sm_nreq 0 0.00 allocator requests sm_nobj 0 . outstanding allocations sm_balloc 0 . bytes allocated sm_bfree 0 . bytes free sma_nreq 284408 3.99 SMA allocator requests sma_nobj 27834 . SMA outstanding allocations sma_nbytes 160583048 . SMA outstanding bytes sma_balloc 1495571135 . SMA bytes allocated sma_bfree 1334988087 . SMA bytes free sms_nreq 533 0.01 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 221593 . SMS bytes allocated sms_bfree 221593 . 
SMS bytes freed backend_req 141920 1.99 Backend requests made n_vcl 3 0.00 N vcl total n_vcl_avail 3 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 73540 . N total active purges n_purge_add 94376 1.32 N new purges added n_purge_retire 20836 0.29 N old purges deleted n_purge_obj_test 16450270 230.65 N objects tested n_purge_re_test 1229907219 17244.67 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 0 0.00 HCB Lookups without lock hcb_lock 0 0.00 HCB Lookups with lock hcb_insert 0 0.00 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) }}} My memory python program says this: {{{ 5.8 GiB + 1.1 MiB = 5.8 GiB /usr/local/varnish/sbin/varnishd ... -s ... (2) Private + Shared = RAM used Program }}} Varnish version: {{{ varnishd (varnish-2.0.4) Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS }}} Compile options: {{{ ./configure --enable-static --enable-kqueue --enable-epoll --enable-ports --disable-jemalloc }}} varnish maked with -O2 --march & --mtype optimalizations. Any ideas why varnish uses this amount of memory? 
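A note on reading the figures above: the `-s malloc,2G` limit governs only the SMA storage allocator, and the `sma_nbytes` counter shows storage is nowhere near that cap, while the 5.7G RES figure from top covers the whole process (thread stacks, session workspaces, allocator overhead, and so on). A minimal sanity check on the quoted numbers (a sketch, not a diagnosis):

```python
# Figures copied from the varnishstat -1 and top output quoted above.
sma_outstanding = 160_583_048      # sma_nbytes: SMA outstanding bytes
malloc_limit = 2 * 1024**3         # -s malloc,2G
process_rss = int(5.7 * 1024**3)   # RES column from top (approximate)

# Storage is using only a small fraction of its configured cap ...
fraction = sma_outstanding / malloc_limit
print(f"SMA uses {fraction:.1%} of the 2G limit")

# ... so almost all of the resident memory lies outside what -s governs.
print(f"{(process_rss - sma_outstanding) / 1024**3:.1f} GiB not accounted for by -s")
```

Whatever is consuming the extra gigabytes, it is not the object storage that the `-s` limit caps.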
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 27 09:18:32 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 27 Aug 2009 09:18:32 -0000 Subject: [Varnish] #546: Varnish eating up my memory In-Reply-To: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> References: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> Message-ID: <060.aa184988910b516c9a98fb7609fe0c07@projects.linpro.no> #546: Varnish eating up my memory ----------------------+----------------------------------------------------- Reporter: hp197 | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by hp197): Note: On the same servers runs another instance of varnish (on another port). That one is limited to 10G malloc. Maybe they interfere? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 27 09:22:51 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 27 Aug 2009 09:22:51 -0000 Subject: [Varnish] #547: Varnish core dumps frequently Message-ID: <053.bb3ab0292d73880f2e8477a3aa3e3c6f@projects.linpro.no> #547: Varnish core dumps frequently ---------------------+------------------------------------------------------ Reporter: victori | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: trunk | Severity: normal Keywords: | ---------------------+------------------------------------------------------ OS: OpenSolaris SNV98 Params used for starting varnish: newtask -p highfile /opt/extra/sbin/varnishd -f /opt/extra/etc/varnish/default.vcl -a 0.0.0.0:3000 -p listen_depth=8192 -p thread_pool_max=2000 -p thread_pool_min=12 -p thread_pools=4 -p cc_command='cc -Kpic -G -m64 -o %o %s' -s file,/sessions/varnish_cache.bin,2G -p 
sess_timeout=10s -p max_restarts=12 -p send_timeout=60s -p session_linger=50s -p connect_timeout=0s -p obj_workspace=16384 -p sess_workspace=32768 -T 0.0.0.0:8086 -u webservd {{{ # /opt/startvarnish.sh Usage: nm [-APvChlnV] [-efox] [-r | -R] [-g | -u] [-t format] file ... storage_file: filename: /sessions/varnish_cache.bin size 2048 MB. Using old SHMFILE child (10165) Started Child (10165) said Closed fds: 3 5 6 9 10 12 13 Child (10165) said Child starts Child (10165) said managed to mmap 2147483648 bytes of 2147483648 Child (10165) said Ready Child (10165) died signal=6 Child (10165) Panic message: Assert error in TCP_linger(), tcp.c line 246: Condition((setsockopt(sock, 0xffff, 0x0080, &lin, sizeof lin)) == 0) not true. errno = 9 (Bad file number) thread = (cache-worker) Backtrace: sp = 91f018 { fd = 29, id = 29, xid = 0, client = 127.0.0.1:30078, step = STP_DONE, handling = deliver, err_code = 200, err_reason = (null), restarts = 0, esis = 0 ws = 91f088 { id = "sess", {s,f,r,e} = {91f918,+740,0,+32768}, }, http[req] = { ws = 91f088[sess] "GET", "/group/66/thumbx120/pills-loose.jpg", "HTTP/1.0", "X-Real-IP: 220.244.39.82", "Host: images.fabulously40.com", "Connection: close", "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/530.5 (KHTML, like Gecko) Chrome/2.0.172.39 Safari/530.5", "Referer: http://fabulously40.com/fabulously/vasilia", "Accept: */*", "Accept-Language: en-US,en;q=0.8", "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3", "X-Forwarded-For: 127.0.0.1", }, worker = fffffd7f764125b0 { ws = fffffd7f76412780 { id = "wrk", {s,f,r,e} = {fffffd7f76408570,+420,0,+32768}, }, }, }, Child cleanup complete child (10430) Started Child (10430) said Closed fds: 3 5 6 9 10 12 13 Child (10430) said Child starts Child (10430) said managed to mmap 2147483648 bytes of 2147483648 Child (10430) said Ready Child (10430) died signal=6 Child (10430) Panic message: Assert error in TCP_linger(), tcp.c line 246: Condition((setsockopt(sock, 0xffff, 0x0080, 
&lin, sizeof lin)) == 0) not true. errno = 9 (Bad file number) thread = (cache-worker) Backtrace: sp = 7cd018 { fd = 12, id = 12, xid = 0, client = 127.0.0.1:64129, step = STP_DONE, handling = deliver, err_code = 200, err_reason = (null), restarts = 0, esis = 0 ws = 7cd088 { id = "sess", {s,f,r,e} = {7cd918,+943,0,+32768}, }, http[req] = { ws = 7cd088[sess] "GET", "/article/2057/thumbx128/model1.jpg", "HTTP/1.0", "X-Real-IP: 75.80.141.237", "Host: images.fabulously40.com", "Connection: close", "User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_8; en- us) AppleWebKit/531.9 (KHTML, like Gecko) Version/4.0.3 Safari/531.9", "Referer: http://fabulously40.com/topic/id/67", "Accept: */*", "Accept-Language: en-us", "X-Forwarded-For: 127.0.0.1", }, worker = fffffd7f72a2e5b0 { ws = fffffd7f72a2e780 { id = "wrk", {s,f,r,e} = {fffffd7f72a24570,+420,0,+32768}, }, }, }, Child cleanup complete child (10441) Started Child (10441) said Closed fds: 3 5 6 9 10 12 13 Child (10441) said Child starts Child (10441) said managed to mmap 2147483648 bytes of 2147483648 Child (10441) said Ready }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 27 09:23:01 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 27 Aug 2009 09:23:01 -0000 Subject: [Varnish] #546: Varnish eating up my memory In-Reply-To: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> References: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> Message-ID: <060.6055cc9d5778c93f95b2de2e73114378@projects.linpro.no> #546: Varnish eating up my memory ----------------------+----------------------------------------------------- Reporter: hp197 | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by phk): First, Varnish needs other kinds of 
memory than the backstore, and the exact ratio depends on many factors. For instance, ban/purge entries which contain regular expressions will allocate memory for the parsing tables etc. I have spent a lot of time looking for memory leaks, and have found none, nor any way to reproduce the memory loads you and others have reported. I guess what we need is for somebody to run 2.0.4 under Valgrind to see if it can spot anything... Poul-Henning -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 27 09:28:54 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 27 Aug 2009 09:28:54 -0000 Subject: [Varnish] #546: Varnish eating up my memory In-Reply-To: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> References: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> Message-ID: <060.5dca11c9c6f629ab4d60ea97ea78b41d@projects.linpro.no> #546: Varnish eating up my memory ----------------------+----------------------------------------------------- Reporter: hp197 | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by hp197): Update: I checked, and it does appear to have to do with interference. My other (physical) varnish server is running one 1G and one 2G instance. If I set them both to 1G they obey the 1G maximum, but if the limits differ, varnish takes the higher value. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 27 09:32:16 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 27 Aug 2009 09:32:16 -0000 Subject: [Varnish] #546: Varnish eating up my memory In-Reply-To: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> References: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> Message-ID: <060.767c0830f6dc5c84729ef40ff1fdcdc4@projects.linpro.no> #546: Varnish eating up my memory ----------------------+----------------------------------------------------- Reporter: hp197 | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by phk): That sounds like a problem in the script that starts them. The individual varnish processes have no way of even knowing about each other's existence. 
Check the command line arguments actually given to varnishd (ps -axlwwww on FreeBSD, something else on Linux) Poul-Henning -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 27 09:44:22 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 27 Aug 2009 09:44:22 -0000 Subject: [Varnish] #546: Varnish eating up my memory In-Reply-To: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> References: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> Message-ID: <060.8ac8d919cc96640133168bca8b23d7ee@projects.linpro.no> #546: Varnish eating up my memory ----------------------+----------------------------------------------------- Reporter: hp197 | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by hp197): {{{ 5 0 29306 1 15 0 106036 1032 587981 Ss ? 0:01 /usr/local/varnish/sbin/varnishd -P /var/run/varnish_xxx.pid -a :xxx -f /usr/local/varnish/etc/varnish/xxx.xxx.xxx.vcl -T :xxx -s malloc,10G -i xxx -n /usr/local/varnish/var/varnish/xxx 5 99 29307 29306 15 0 6666872 6219692 300904 Sl ? 36:58 /usr/local/varnish/sbin/varnishd -P /var/run/varnish_xxx.pid -a :xxx -f /usr/local/varnish/etc/varnish/xxx.xxx.xxx.vcl -T :xxx -s malloc,10G -i xxx -n /usr/local/varnish/var/varnish/xxx 5 0 29528 1 15 0 106036 1032 587981 Ss ? 0:01 /usr/local/varnish/sbin/varnishd -P /var/run/varnish_yyy.pid -a :yyy -f /usr/local/varnish/etc/varnish/yyy.yyy.yyy.vcl -T :yyy -s malloc,2G -i yyy -n /usr/local/varnish/var/varnish/yyy 5 99 29529 29528 18 0 6494140 6130848 392359 Sl ? 223:38 /usr/local/varnish/sbin/varnishd -P /var/run/varnish_yyy.pid -a :yyy -f /usr/local/varnish/etc/varnish/yyy.yyy.yyy.vcl -T :yyy -s malloc,2G -i yyy -n /usr/local/varnish/var/varnish/yyy }}} sorry for the xxx & yyy but we are very discreet. 
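Assuming this is Linux `ps axl`-style output with VSZ and RSS in KiB (an assumption, since the columns are unlabeled here), the RSS figures for the two child processes can be converted directly, and both come out near 6 GiB despite their different `-s malloc` settings:

```python
# RSS values (KiB) of the two varnishd children from the ps output above.
rss_kib = {
    "-s malloc,10G child (pid 29307)": 6219692,
    "-s malloc,2G child (pid 29529)": 6130848,
}

for name, kib in rss_kib.items():
    # 1 GiB = 1024**2 KiB
    print(f"{name}: {kib / 1024**2:.2f} GiB resident")
```

Both children sit around 5.8-5.9 GiB resident, consistent with the earlier top and Python memory-tool readings, so the resident size is not tracking the `-s malloc` argument of either instance.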
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 27 11:13:43 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 27 Aug 2009 11:13:43 -0000 Subject: [Varnish] #546: Varnish eating up my memory In-Reply-To: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> References: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> Message-ID: <060.b087ae367025d3bfa06edba4cecbd90f@projects.linpro.no> #546: Varnish eating up my memory ----------------------+----------------------------------------------------- Reporter: hp197 | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by hp197): Do you see a problem in my startup command phk? -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 27 11:21:10 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 27 Aug 2009 11:21:10 -0000 Subject: [Varnish] #546: Varnish eating up my memory In-Reply-To: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> References: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> Message-ID: <060.f013a4a5bae3947cbd528a4b8486061a@projects.linpro.no> #546: Varnish eating up my memory ----------------------+----------------------------------------------------- Reporter: hp197 | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by phk): No, nothing obvious. But as I said, apart from competing about the kernels resources, the varnish instances have no way of even knowing about each other. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 27 11:27:45 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 27 Aug 2009 11:27:45 -0000 Subject: [Varnish] #546: Varnish eating up my memory In-Reply-To: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> References: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> Message-ID: <060.e13cafb7251534805e41c6ddca36731e@projects.linpro.no> #546: Varnish eating up my memory ----------------------+----------------------------------------------------- Reporter: hp197 | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by hp197): So... Could you give me a explanation why varnishd uses so much memory? This is a bit blocking for me right now. As i intended to run a 5 or 6 varnish instances on this server (Server has 24G total). And as you can maybe noticed, i dont manage one of the less busiest sites in the world. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 27 11:30:21 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 27 Aug 2009 11:30:21 -0000 Subject: [Varnish] #546: Varnish eating up my memory In-Reply-To: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> References: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> Message-ID: <060.862cafc6aaa8952fbcf47cdef1153ca4@projects.linpro.no> #546: Varnish eating up my memory ----------------------+----------------------------------------------------- Reporter: hp197 | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by hp197): bla bla bla...... "you can maybe noticed" My grammar is perfect today -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at projects.linpro.no Thu Aug 27 11:34:35 2009 From: varnish-bugs at projects.linpro.no (Varnish) Date: Thu, 27 Aug 2009 11:34:35 -0000 Subject: [Varnish] #546: Varnish eating up my memory In-Reply-To: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> References: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no> Message-ID: <060.006d788fd2112d7a6fcb0f4ba82a31df@projects.linpro.no> #546: Varnish eating up my memory ----------------------+----------------------------------------------------- Reporter: hp197 | Owner: phk Type: defect | Status: new Priority: normal | Milestone: Varnish 2.1 release Component: varnishd | Version: trunk Severity: normal | Resolution: Keywords: | ----------------------+----------------------------------------------------- Comment (by phk): As I said above, I have no obvious clues to help you... 
From varnish-bugs at projects.linpro.no  Thu Aug 27 11:56:46 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Thu, 27 Aug 2009 11:56:46 -0000
Subject: [Varnish] #546: Varnish eating up my memory
In-Reply-To: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no>
References: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no>
Message-ID: <060.0959a49b5398712633d6bdc9a08a4730@projects.linpro.no>

#546: Varnish eating up my memory

Comment (by perbu):

If it is of any interest, we might be able to put you in touch with the
commercial side of Varnish. They have consultants who can help track down
and possibly solve the problem. If interested, send a mail to varnish at
linpro.no.
From varnish-bugs at projects.linpro.no  Thu Aug 27 12:17:41 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Thu, 27 Aug 2009 12:17:41 -0000
Subject: [Varnish] #546: Varnish eating up my memory
In-Reply-To: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no>
References: <051.ebec840b24471793276b5ba73ddc9ec2@projects.linpro.no>
Message-ID: <060.0d0d2e559919c3f17d4da5a14ae18021@projects.linpro.no>

#546: Varnish eating up my memory

Comment (by hp197):

Thank you very much, we'll consider it as an option.

From varnish-bugs at projects.linpro.no  Sat Aug 29 16:32:09 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Sat, 29 Aug 2009 16:32:09 -0000
Subject: [Varnish] #534: Threads stuck in trunk
In-Reply-To: <052.0e28aca60f88a160e1d7f78d585e88f5@projects.linpro.no>
References: <052.0e28aca60f88a160e1d7f78d585e88f5@projects.linpro.no>
Message-ID: <061.ebfffc9c149d237024fa1ce3c031e427@projects.linpro.no>

#534: Threads stuck in trunk
---------------------------+------------------------------------------------
 Reporter:  anders         |       Owner:  phk
     Type:  defect         |      Status:  new
 Priority:  high           |   Milestone:
Component:  varnishd       |     Version:  trunk
 Severity:  critical       |  Resolution:
 Keywords:  threads stuck  |
---------------------------+------------------------------------------------

Comment (by anders):

PHK indicated this might be a critbit bug. My experience so far is that
this is right.
I've switched to classic hashing on three out of four cache servers for
finn.no, and only the last server, which still uses critbit, gets these
thread pile-ups. How can I help find this critbit bug? With critbit the
number of VM faults (context switches) is lower, so I would prefer to go
back to critbit.

Cheers,
Anders.

From varnish-bugs at projects.linpro.no  Sun Aug 30 17:15:21 2009
From: varnish-bugs at projects.linpro.no (Varnish)
Date: Sun, 30 Aug 2009 17:15:21 -0000
Subject: [Varnish] #548: Sig 11 crash in trunk 4199
Message-ID: <052.8d61fb47426264944c0cfe079ff0aa0c@projects.linpro.no>

#548: Sig 11 crash in trunk 4199
----------------------+-----------------------------------------------------
 Reporter:  anders    |      Owner:  phk
     Type:  defect    |     Status:  new
 Priority:  normal    |  Milestone:
Component:  varnishd  |    Version:  trunk
 Severity:  normal    |   Keywords:  sig 11 segmentation fault trunk
----------------------+-----------------------------------------------------

I'm running Varnish trunk 4199 on FreeBSD/amd64 7.2-RELEASE-p3 and got a
sig 11 crash:

Aug 30 16:01:23 cache11 kernel: pid 845 (varnishd), uid 0: exited on
signal 11 (core dumped)

Backtrace:

{{{
(gdb) bt
#0  0x0000000800899d8f in getframeaddr (level=Variable "level" is not available.
) at execinfo.c:324
#1  0x00000008008a022f in backtrace (buffer=Variable "buffer" is not available.
) at execinfo.c:65
#2  0x0000000000420305 in pan_backtrace () at cache_panic.c:273
#3  0x00000000004206b7 in pan_ic (func=Variable "func" is not available.
) at cache_panic.c:327
#4  0x000000000041b2b5 in HSH_Lookup (sp=0x822acf008, poh=0x7fffbe5ee1e0) at cache_hash.c:506
#5  0x0000000000410c22 in cnt_lookup (sp=0x822acf008) at cache_center.c:739
#6  0x0000000000412c83 in CNT_Session (sp=0x822acf008) at steps.h:38
#7  0x0000000000422391 in wrk_do_cnt_sess (w=0x7fffbe5f42c0, priv=Variable "priv" is not available.
) at cache_pool.c:281
#8  0x0000000000421687 in wrk_thread_real (qp=0x80100f8d0, shm_workspace=Variable "shm_workspace" is not available.
) at cache_pool.c:176
#9  0x0000000800ac14d1 in pthread_getprio () from /lib/libthr.so.3
#10 0x0000000000000000 in ?? ()
Cannot access memory at address 0x7fffbe5f5000
(gdb) frame 4
#4  0x000000000041b2b5 in HSH_Lookup (sp=0x822acf008, poh=0x7fffbe5ee1e0) at cache_hash.c:506
506             CHECK_OBJ_NOTNULL(o, OBJECT_MAGIC);
(gdb) print *sp
$1 = {magic = 741317722, fd = 1360, id = 1360, xid = 1041139975,
  restarts = 0, esis = 0, disable_esi = 0, wrk = 0x7fffbe5f42c0,
  sockaddrlen = 16, mysockaddrlen = 128, sockaddr = 0x822acf708,
  mysockaddr = 0x822acf788, mylsock = 0x80101bb80,
  addr = 0x822acf808 "80.213.122.235", port = 0x822acf817 "7416",
  doclose = 0x0, http = 0x822acf250, http0 = 0x822acf4a0,
  ws = {{magic = 905626964, id = 0x443828 "sess",
      s = 0x822acf808 "80.213.122.235", f = 0x822acfb5e "", r = 0x0,
      e = 0x822ad3808 "", overflow = 0}}, ws_ses = 0x822acf81c "GET",
  ws_req = 0x822acfb5e "", htc = {{magic = 1041886673, fd = 1360,
      ws = 0x822acf078, rxbuf = {b = 0x822acf81c "GET", e = 0x822acfb5e ""},
      pipeline = {b = 0x0, e = 0x0}}}, t_open = 1251626952.7275271,
  t_req = 1251626952.745923, t_resp = nan(0x8000000000000),
  t_end = 1251626952.7275271, connect_timeout = 0.40000000000000002,
  first_byte_timeout = 60, between_bytes_timeout = 60, grace = 300,
  step = STP_LOOKUP, cur_method = 0, handling = 3, sendbody = 0 '\0',
  wantbody = 1 '\001', err_code = 0, err_reason = 0x0,
  list = {vtqe_next = 0x0, vtqe_prev = 0x0}, director = 0x80bae8f68,
  vbe = 0x0, obj = 0x0, objcore = 0x0, objhead = 0x0, vcl = 0x80baed108,
  mem = 0x822acf000, workreq = {list = {vtqe_next = 0x0, vtqe_prev = 0x0},
    func = 0x4222d0 <wrk_do_cnt_sess>, priv = 0x822acf008},
  acct = {first = 1251626952.7275271, sess = 0, req = 0, pipe = 0,
    pass = 0, fetch = 0, hdrbytes = 0, bodybytes = 0},
  acct_req = {first = 0, sess = 1, req = 1, pipe = 0, pass = 0, fetch = 0,
    hdrbytes = 0, bodybytes = 0}, nhashptr = 0,
  ihashptr = 0, lhashptr = 0, hashptr = 0x0}
(gdb) print *hp
}}}

Varnish is started with these parameters:

{{{
-p sess_workspace=16384 -p obj_workspace=2048 -p lru_interval=3600
-h classic,500009 -p ping_interval=0 -p cli_timeout=30 -p auto_restart=on
-p thread_pools=10 -p thread_pool_min=70 -p thread_pool_max=4000
-p session_linger=40 -p listen_depth=4096 -p default_ttl=604800
-T localhost:8080 -f /usr/local/etc/varnish.vcl -s malloc,60G
-P /var/run/varnishd.pid
}}}
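For anyone wanting to reproduce this kind of analysis: a sketch, under the assumption of hypothetical binary and core paths (they are not given in the ticket), of the gdb session that yields a backtrace like the one above.

```shell
# Load the varnishd binary together with the core dump the kernel wrote
# ("core dumped" in the syslog line above). Both paths are assumptions.
gdb /usr/local/sbin/varnishd /usr/local/etc/varnish/varnishd.core

# Inside the gdb session:
#   (gdb) bt          # backtrace of the crashing thread
#   (gdb) frame 4     # select the frame of interest (HSH_Lookup here)
#   (gdb) print *sp   # dump the session struct that frame's pointer holds
```

Frames #0 through #3 (`getframeaddr`, `backtrace`, `pan_backtrace`, `pan_ic`) are Varnish's own panic machinery; the first frame of real interest is usually the one below `pan_ic`, here `HSH_Lookup`, which is why `frame 4` is selected before printing.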