From jhalfmoon at milksnot.com Fri Apr 1 12:04:45 2011 From: jhalfmoon at milksnot.com (Johnny Halfmoon) Date: Fri, 01 Apr 2011 14:04:45 +0200 Subject: Mailing list archives inaccessible In-Reply-To: References: Message-ID: <20110401140445.t4myh0uum88s488c@webmail.milksnot.com> To the maintainers of this list: The archives of this mailing list of October 2010 and newer are not accessible at the following location and give a Varnish Guru meditation Error 503: http://www.varnish-cache.org/lists/pipermail/varnish-misc/ Just thought you'd like to know & maybe fix. Cheers, Johnny From kristian at varnish-software.com Fri Apr 1 12:16:23 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Fri, 1 Apr 2011 14:16:23 +0200 Subject: Mailing list archives inaccessible In-Reply-To: <20110401140445.t4myh0uum88s488c@webmail.milksnot.com> References: <20110401140445.t4myh0uum88s488c@webmail.milksnot.com> Message-ID: <20110401121623.GA3743@localhost.localdomain> On Fri, Apr 01, 2011 at 02:04:45PM +0200, Johnny Halfmoon wrote: > The archives of this mailing list of October 2010 and newer are not > accessible at the following location and give a Varnish Guru > meditation Error 503: > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/ Seems to be up now: http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-April/005979.html Most likely you experienced a glitch as we are testing the master-branch of Varnish on varnish-cache.org. - Kristian From jhalfmoon at milksnot.com Fri Apr 1 13:22:11 2011 From: jhalfmoon at milksnot.com (Johnny Halfmoon) Date: Fri, 01 Apr 2011 15:22:11 +0200 Subject: Mailing list archives inaccessible In-Reply-To: <20110401121623.GA3743@localhost.localdomain> References: <20110401140445.t4myh0uum88s488c@webmail.milksnot.com> <20110401121623.GA3743@localhost.localdomain> Message-ID: <20110401152211.c4473vewyok4wos4@webmail.milksnot.com> Hmm, I was not specific enough in my previous post. 
I should have said "The compressed archives of October 2010 and newer yield a 503 error." Here, this link will get there: http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-March.txt.gz Cheers, Johnny ----- Message from kristian at varnish-software.com --------- Date: Fri, 1 Apr 2011 14:16:23 +0200 From: Kristian Lyngstol Reply-To: Kristian Lyngstol Subject: Re: Mailing list archives inaccessible To: Johnny Halfmoon Cc: varnish-misc at varnish-cache.org > On Fri, Apr 01, 2011 at 02:04:45PM +0200, Johnny Halfmoon wrote: >> The archives of this mailing list of October 2010 and newer are not >> accessible at the following location and give a Varnish Guru >> meditation Error 503: >> >> http://www.varnish-cache.org/lists/pipermail/varnish-misc/ > > Seems to be up now: > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-April/005979.html > > Most likely you experienced a glitch as we are testing the master-branch > of Varnish on varnish-cache.org. > > - Kristian > ----- End message from kristian at varnish-software.com ----- From perbu at varnish-software.com Fri Apr 1 13:31:33 2011 From: perbu at varnish-software.com (Per Buer) Date: Fri, 1 Apr 2011 15:31:33 +0200 Subject: Mailing list archives inaccessible In-Reply-To: <20110401152211.c4473vewyok4wos4@webmail.milksnot.com> References: <20110401140445.t4myh0uum88s488c@webmail.milksnot.com> <20110401121623.GA3743@localhost.localdomain> <20110401152211.c4473vewyok4wos4@webmail.milksnot.com> Message-ID: Hi Johnny, Top-posting isn't too popular on this list, you know. :-) On Fri, Apr 1, 2011 at 3:22 PM, Johnny Halfmoon wrote: > Hmm, I was not specific enough in my previous post. I should have said "The > compressed archives of October 2010 and newer yield a 503 error." Here, this > link will get there: > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-March.txt.gz Right. I think you've found a bug in Varnish. We're running trunk on the site to iron out the remaining bugs. 
Thank you so much for letting us know. If you want, I could zip stuff up and mail it to you. Let me know. Regards, Per. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From diego.roccia at subito.it Fri Apr 1 13:58:06 2011 From: diego.roccia at subito.it (Diego Roccia) Date: Fri, 01 Apr 2011 15:58:06 +0200 Subject: Varnish stuck on most served content In-Reply-To: <4D944C08.7040804@uplex.de> References: <4D944C08.7040804@uplex.de> Message-ID: <4D95D9EE.7060201@subito.it> On 03/31/2011 11:40 AM, Geoff Simmons wrote: > On 03/31/11 11:15 AM, Hettwer, Marian wrote: >>>> >>>> I'm running Centos 5.5 64bit and here's my varnish startup parameters: >>>> >>>> DAEMON_OPTS=" -a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ >>>> -f ${VARNISH_VCL_CONF} \ >>>> -T 0.0.0.0:6082 \ >>>> -t 604800 \ >>>> -u varnish -g varnish \ >>>> -s malloc,54G \ >>>> -p thread_pool_add_delay=2 \ >>>> -p thread_pools=16 \ >>>> -p thread_pool_min=50 \ >>>> -p thread_pool_max=4000 \ >>>> -p listen_depth=4096 \ >>>> -p lru_interval=600 \ >>>> -hclassic,500009 \ >>>> -p log_hashstring=off \ >>>> -p shm_workspace=16384 \ >>>> -p ping_interval=2 \ >>>> -p default_grace=3600 \ >>>> -p pipe_timeout=10 \ >>>> -p sess_timeout=6 \ >>>> -p send_timeout=10" >> >> Hu. What are all those "-p" parameters? Looks like some heavy tweaking to >> me. >> Perhaps some varnish gurus might chime in, but to me tuning like that >> sounds like trouble. >> Unless you really know what you did there. >> >> I wouldn't (not without the documentation at hand). > > Um. Many of those -p's are roughly in the ranges recommended on the Wiki > performance page, and on Kristian Lyngstol's blog. 
> > http://www.varnish-cache.org/trac/wiki/Performance > http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/ > > Perhaps one of the settings is causing a problem, but it isn't wrong to > be doing it all -- and it's quite necessary on a high-traffic site. > > > Best, > Geoff In fact, this is one of my problems. I found a heavily optimized configuration, and I'm not yet confident enough to remove lines without fear :) From jnerin+varnish at gmail.com Fri Apr 1 14:25:45 2011 From: jnerin+varnish at gmail.com (=?UTF-8?B?Sm9yZ2UgTmVyw61u?=) Date: Fri, 1 Apr 2011 16:25:45 +0200 Subject: Mailing list archives inaccessible In-Reply-To: References: <20110401140445.t4myh0uum88s488c@webmail.milksnot.com> <20110401121623.GA3743@localhost.localdomain> <20110401152211.c4473vewyok4wos4@webmail.milksnot.com> Message-ID: On Fri, Apr 1, 2011 at 15:31, Per Buer wrote: > Hi Johnny, > > Top-posting isn't too popular on this list, you know. :-) > > On Fri, Apr 1, 2011 at 3:22 PM, Johnny Halfmoon > wrote: > > Hmm, I was not specific enough in my previous post. I should have said > "The > > compressed archives of October 2010 and newer yield a 503 error." Here, > this > > link will get there: > > > > > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-March.txt.gz > > Right. I think you've found a bug in Varnish. We're running trunk > on the site to iron out the remaining bugs. Thank you so much for > letting us know. If you want, I could zip stuff up and mail it to > you. Let me know. > > Regards, > Per. > > Well, they are accessible if you strip the .gz suffix like this: http://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-March.txt Regards > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? 
> http://www.varnish-software.com/whitepapers > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Jorge Nerín -------------- next part -------------- An HTML attachment was scrubbed... URL: From junxian.yan at gmail.com Fri Apr 1 18:31:52 2011 From: junxian.yan at gmail.com (Junxian Yan) Date: Fri, 1 Apr 2011 11:31:52 -0700 Subject: Is there a stable release supporting customized log format Message-ID: I want to use varnishncsa to store logs in a customized format. I'm now using 2.1.5, and it seems -F is still not supported. Can I use this feature without compiling trunk code? R -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Fri Apr 1 18:39:19 2011 From: perbu at varnish-software.com (Per Buer) Date: Fri, 1 Apr 2011 20:39:19 +0200 Subject: Is there a stable release supporting customized log format In-Reply-To: References: Message-ID: On Fri, Apr 1, 2011 at 8:31 PM, Junxian Yan wrote: > I want to use varnishncsa to store logs in a customized format. I'm now using > 2.1.5, and it seems -F is still not supported. Can I use this feature > without compiling trunk code? No. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From marino.pl at gmail.com Mon Apr 4 09:21:17 2011 From: marino.pl at gmail.com (p.marino) Date: Mon, 4 Apr 2011 11:21:17 +0200 Subject: Varnish acceleration for Liferay/Alfresco combo Message-ID: Hi everyone, a colleague of mine is working on a corporate intranet portal based on Liferay (for the front end) and Alfresco (CMS). The system has been working for a bit more than a year, but its performance could use some speeding up. 
I suggested that he investigate Varnish as a way to speed up the whole stack, especially as a cache between Liferay and Alfresco. Even if the front end can be customized it still relies on querying the CMS for stuff like "today's news", "corporate manuals", "personal announcements" and so on... and these are dynamic, but rarely change more than once a day... so I suspect that intercepting the calls to Alfresco and caching the content of the CMS-backed portlets would work. Can anyone offer some pointers to this? Or any suggestions/hints? What kind of calls can be managed by Varnish? (HTTP, webservices, XML-RPC...? I am asking because the Liferay/Alfresco connection has been implemented using custom Java code so we may have to do something on that side, too) Thanks in advance, Paolo. -------------- next part -------------- An HTML attachment was scrubbed... URL: From isharov at yandex-team.ru Mon Apr 4 11:35:25 2011 From: isharov at yandex-team.ru (Iliya Sharov) Date: Mon, 04 Apr 2011 15:35:25 +0400 Subject: Is there a stable release supporting customized log format In-Reply-To: References: Message-ID: <4D99ACFD.7050903@yandex-team.ru> You may edit the log format in varnishncsa.c and recompile it; see the h_ncsa structure. 01.04.2011 22:31, Junxian Yan wrote: > I want to use varnishncsa to store logs in a customized format. I'm now > using 2.1.5, and it seems -F is still not supported. Can I use this > feature without compiling trunk code? > > R > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -- Iliya Sharov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ronan at iol.ie Tue Apr 5 09:09:23 2011 From: ronan at iol.ie (Ronan Mullally) Date: Tue, 5 Apr 2011 10:09:23 +0100 (IST) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: After further tuning and tweaking I've managed to reduce the incidence of this problem to about 1/1500 POSTs: Hour GETs Fails POSTs Fails 01:00 39750 0 (0.00%) 530 0 (0.00%) 02:00 30733 0 (0.00%) 419 0 (0.00%) 03:00 28696 0 (0.00%) 361 0 (0.00%) 04:00 25687 0 (0.00%) 348 0 (0.00%) 05:00 27207 0 (0.00%) 310 0 (0.00%) 06:00 31298 0 (0.00%) 344 0 (0.00%) 07:00 35533 0 (0.00%) 324 1 (0.31%) 08:00 41602 0 (0.00%) 360 0 (0.00%) 09:00 49797 0 (0.00%) 441 0 (0.00%) 10:00 55202 0 (0.00%) 521 0 (0.00%) 11:00 65108 0 (0.00%) 729 1 (0.14%) 12:00 70108 0 (0.00%) 684 1 (0.15%) 13:00 76969 0 (0.00%) 739 1 (0.14%) 14:00 73088 0 (0.00%) 781 0 (0.00%) 15:00 73698 0 (0.00%) 798 0 (0.00%) 16:00 80874 0 (0.00%) 912 0 (0.00%) 17:00 109908 0 (0.00%) 1203 0 (0.00%) 18:00 113348 0 (0.00%) 1374 2 (0.15%) 19:00 97369 0 (0.00%) 1059 1 (0.09%) 20:00 90987 0 (0.00%) 950 0 (0.00%) 21:00 88719 0 (0.00%) 1084 0 (0.00%) 22:00 79641 0 (0.00%) 943 2 (0.21%) 23:00 67361 0 (0.00%) 815 0 (0.00%) Increasing the Keepalive time on apache on the backends from 1 to 5 seconds made the biggest impact. I suspect this suggests that the problem occurs when Varnish tries to direct a POST to a connection which apache has just closed. -Ronan On Fri, 25 Mar 2011, Ronan Mullally wrote: > I am still encountering this problem - about 1% on average of POSTs are > failing with a 503 when there is no problem apparent on the back-ends. 
> GETs are not affected: > > Hour GETs Fails POSTs Fails > 00:00 38060 0 (0.00%) 480 2 (0.42%) > 01:00 34051 0 (0.00%) 412 0 (0.00%) > 02:00 29881 0 (0.00%) 383 2 (0.52%) > 03:00 25741 0 (0.00%) 374 1 (0.27%) > 04:00 22296 0 (0.00%) 326 2 (0.61%) > 05:00 22594 0 (0.00%) 349 20 (5.73%) > 06:00 31422 0 (0.00%) 408 6 (1.47%) > 07:00 58746 0 (0.00%) 656 6 (0.91%) > 08:00 74307 0 (0.00%) 870 4 (0.46%) > 09:00 87386 0 (0.00%) 1280 8 (0.62%) > 10:00 51744 0 (0.00%) 741 8 (1.08%) > 11:00 50060 0 (0.00%) 825 1 (0.12%) > 12:00 58573 0 (0.00%) 664 5 (0.75%) > 13:00 60548 0 (0.00%) 735 7 (0.95%) > 14:00 60242 0 (0.00%) 875 8 (0.91%) > 15:00 61427 0 (0.00%) 778 3 (0.39%) > 16:00 66480 0 (0.00%) 810 4 (0.49%) > 17:00 65749 0 (0.00%) 836 12 (1.44%) > 18:00 64312 0 (0.00%) 732 3 (0.41%) > 19:00 60930 0 (0.00%) 652 5 (0.77%) > 20:00 59646 0 (0.00%) 626 1 (0.16%) > 21:00 61218 0 (0.00%) 674 3 (0.45%) > 22:00 55908 0 (0.00%) 598 3 (0.50%) > 23:00 45173 0 (0.00%) 560 1 (0.18%) > > There was another poster on this thread with the same problem which > suggests a possible varnish problem rather than anything specific to > my setup. > > Does anybody have any ideas? > From samcrawford at gmail.com Tue Apr 5 09:20:59 2011 From: samcrawford at gmail.com (Sam Crawford) Date: Tue, 5 Apr 2011 10:20:59 +0100 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: Ronan, I suspect you are right - Apache may be discarding the connection as you are sending the POST request, resulting in the 503. Incidentally, what are your Apache keep-alive settings? If I remember rightly, there is a KeepAliveTimeout (which it sounds like you've been using) and also a KeepAlive parameter (which controls the maximum number of requests on one TCP connection, irrespective of timeouts). A couple of other questions/suggestions: 1) If you disable keepalives completely at Apache, does the problem disappear? 
2) If the above is true and you want to keep keepalives enabled on Apache for performance reasons, could you perhaps instruct Varnish to inject a "Connection: close" header in the backend request when you encounter a POST? Thanks, Sam On 5 April 2011 10:09, Ronan Mullally wrote: > After further tuning and tweaking I've managed to reduce the incidence > of this problem to about 1/1500 POSTs: > > ?Hour ? GETs ? ? Fails ? POSTs ? ? Fails > ?01:00 ?39750 ? 0 (0.00%) ? 530 ? 0 (0.00%) > ?02:00 ?30733 ? 0 (0.00%) ? 419 ? 0 (0.00%) > ?03:00 ?28696 ? 0 (0.00%) ? 361 ? 0 (0.00%) > ?04:00 ?25687 ? 0 (0.00%) ? 348 ? 0 (0.00%) > ?05:00 ?27207 ? 0 (0.00%) ? 310 ? 0 (0.00%) > ?06:00 ?31298 ? 0 (0.00%) ? 344 ? 0 (0.00%) > ?07:00 ?35533 ? 0 (0.00%) ? 324 ? 1 (0.31%) > ?08:00 ?41602 ? 0 (0.00%) ? 360 ? 0 (0.00%) > ?09:00 ?49797 ? 0 (0.00%) ? 441 ? 0 (0.00%) > ?10:00 ?55202 ? 0 (0.00%) ? 521 ? 0 (0.00%) > ?11:00 ?65108 ? 0 (0.00%) ? 729 ? 1 (0.14%) > ?12:00 ?70108 ? 0 (0.00%) ? 684 ? 1 (0.15%) > ?13:00 ?76969 ? 0 (0.00%) ? 739 ? 1 (0.14%) > ?14:00 ?73088 ? 0 (0.00%) ? 781 ? 0 (0.00%) > ?15:00 ?73698 ? 0 (0.00%) ? 798 ? 0 (0.00%) > ?16:00 ?80874 ? 0 (0.00%) ? 912 ? 0 (0.00%) > ?17:00 109908 ? 0 (0.00%) ?1203 ? 0 (0.00%) > ?18:00 113348 ? 0 (0.00%) ?1374 ? 2 (0.15%) > ?19:00 ?97369 ? 0 (0.00%) ?1059 ? 1 (0.09%) > ?20:00 ?90987 ? 0 (0.00%) ? 950 ? 0 (0.00%) > ?21:00 ?88719 ? 0 (0.00%) ?1084 ? 0 (0.00%) > ?22:00 ?79641 ? 0 (0.00%) ? 943 ? 2 (0.21%) > ?23:00 ?67361 ? 0 (0.00%) ? 815 ? 0 (0.00%) > > Increasing the Keepalive time on apache on the backends from 1 to 5 > seconds made the biggest impact. ?I suspect this suggests that the > problem occurs when Varnish tries to direct a POST to a connection > which apache has just closed. > > > -Ronan > > > On Fri, 25 Mar 2011, Ronan Mullally wrote: > >> I am still encountering this problem - about 1% on average of POSTs are >> failing with a 503 when there is no problem apparent on the back-ends. >> GETs are not affected: >> >> ? Hour ? GETs ? 
? Fails ? POSTs ? ? Fails >> ?00:00 ?38060 ? 0 (0.00%) ? 480 ? 2 (0.42%) >> ?01:00 ?34051 ? 0 (0.00%) ? 412 ? 0 (0.00%) >> ?02:00 ?29881 ? 0 (0.00%) ? 383 ? 2 (0.52%) >> ?03:00 ?25741 ? 0 (0.00%) ? 374 ? 1 (0.27%) >> ?04:00 ?22296 ? 0 (0.00%) ? 326 ? 2 (0.61%) >> ?05:00 ?22594 ? 0 (0.00%) ? 349 ?20 (5.73%) >> ?06:00 ?31422 ? 0 (0.00%) ? 408 ? 6 (1.47%) >> ?07:00 ?58746 ? 0 (0.00%) ? 656 ? 6 (0.91%) >> ?08:00 ?74307 ? 0 (0.00%) ? 870 ? 4 (0.46%) >> ?09:00 ?87386 ? 0 (0.00%) ?1280 ? 8 (0.62%) >> ?10:00 ?51744 ? 0 (0.00%) ? 741 ? 8 (1.08%) >> ?11:00 ?50060 ? 0 (0.00%) ? 825 ? 1 (0.12%) >> ?12:00 ?58573 ? 0 (0.00%) ? 664 ? 5 (0.75%) >> ?13:00 ?60548 ? 0 (0.00%) ? 735 ? 7 (0.95%) >> ?14:00 ?60242 ? 0 (0.00%) ? 875 ? 8 (0.91%) >> ?15:00 ?61427 ? 0 (0.00%) ? 778 ? 3 (0.39%) >> ?16:00 ?66480 ? 0 (0.00%) ? 810 ? 4 (0.49%) >> ?17:00 ?65749 ? 0 (0.00%) ? 836 ?12 (1.44%) >> ?18:00 ?64312 ? 0 (0.00%) ? 732 ? 3 (0.41%) >> ?19:00 ?60930 ? 0 (0.00%) ? 652 ? 5 (0.77%) >> ?20:00 ?59646 ? 0 (0.00%) ? 626 ? 1 (0.16%) >> ?21:00 ?61218 ? 0 (0.00%) ? 674 ? 3 (0.45%) >> ?22:00 ?55908 ? 0 (0.00%) ? 598 ? 3 (0.50%) >> ?23:00 ?45173 ? 0 (0.00%) ? 560 ? 1 (0.18%) >> >> There was another poster on this thread with the same problem which >> suggests a possible varnish problem rather than anything specific to >> my setup. >> >> Does anybody have any ideas? >> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From ionathan at gmail.com Tue Apr 5 15:20:03 2011 From: ionathan at gmail.com (Jonathan Leibiusky) Date: Tue, 5 Apr 2011 12:20:03 -0300 Subject: vcl to avoid caching 404 Message-ID: Hi! I am trying to configure varnish to avoid caching some specific http statuses sent from the backend. Since I can't control 100% the headers they are sending, I should do it in varnish as they all go through it. 
What I came up with is: sub vcl_fetch { if (obj.status == 404 || obj.status == 503 || obj.status == 500) { set obj.http.Cache-Control = "max-age=0"; return (pass); } return (deliver); } But when I try to run varnish it complains about not being able to compile vcl: Message from VCC-compiler: Variable 'obj.status' not accessible in method 'vcl_fetch'. At: (input Line 80 Pos 9) if (obj.status == 404 || obj.status == 503 || obj.status == 500) { --------##########---------------------------------------------------- Running VCC-compiler failed, exit 1 VCL compilation failed I saw in the documentation that obj.status is valid. What am I doing wrong? I am using varnish-2.1.3 Thanks! Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From geoff at uplex.de Tue Apr 5 15:38:40 2011 From: geoff at uplex.de (Geoff Simmons) Date: Tue, 05 Apr 2011 17:38:40 +0200 Subject: vcl to avoid caching 404 In-Reply-To: References: Message-ID: <4D9B3780.4090003@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 04/ 5/11 05:20 PM, Jonathan Leibiusky wrote: > Hi! > I am trying to configure varnish to avoid caching some specific http > statuses sent from the backend. Since I can't control 100% the headers they > are sending, I should do it in varnish as they all go through it. > What I came up with is: > > sub vcl_fetch { > if (obj.status == 404 || obj.status == 503 || obj.status == 500) { > set obj.http.Cache-Control = "max-age=0"; > return (pass); > } > return (deliver); > } > > But when I try to run varnish it complains about not being able to compile > vcl: > > Message from VCC-compiler: > Variable 'obj.status' not accessible in method 'vcl_fetch'. Try 'beresp.status' instead of 'obj.status'. And you'll need to assign to beresp.http.Cache-Control as well. 'obj' in vcl_fetch() was changed to 'beresp' in Varnish 2.1 or thereabouts. 
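[Putting that advice together, a corrected version of the snippet for Varnish 2.1.x would look something like the following; an untested sketch, renaming 'obj' to 'beresp' as described:

```vcl
sub vcl_fetch {
    # In vcl_fetch on Varnish 2.1+ the backend response is 'beresp', not 'obj'
    if (beresp.status == 404 || beresp.status == 503 || beresp.status == 500) {
        set beresp.http.Cache-Control = "max-age=0";
        return (pass);
    }
    return (deliver);
}
```

The return (pass) also keeps the error response out of the cache entirely, independent of the Cache-Control override.]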
Best, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Schwanenwik 24 22087 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (SunOS) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJNmzeAAAoJEOUwvh9pJNURGSQP/1KuagJMYRPPszQJwB2nuWL7 U6ccUxK2cyznsVlvL5SF7JDE6GTBv4bTy50INBHPIV0z7d6JI3IQ1XBPpt1EDIZ4 IuQRZXKc2XuLdK7sfhCUiKQxrOYJOjmrhiASCUJdT7Waj73oP4cfNMJVzwcuZwZI ZGm6zwjGRreohdELuXeaxF1+fTCxDkH7PkKVYSZKuzF0u0uU/q78V7moGctqU9t0 U6TJjEQO2cO0qHLMJtwh28tAjepKQNs5FO0zR2VwlgwEW4AvQzgjx4t5YnWouXXM VJQSL9MljvsQYjBYth9A59F7ZHXQ9Fbvii+v3Fzay7S8SzBB3Rr+a+zefj5Ep0nR wm5TAYoXfZ5XmV30uhFxKKE6SXGW1v/ibtUiLaNAemjRdjBk+ozwpkjAepHRNk4x olLz0lepJSC04ZsMD77bzmA+7dFWYs8/SdoOCZrUgkjUo2lHOqyz+cRPoNFZApBG RRBvIZdBO+cGS1eesu05QabtE6VSWkuf30quh/rxkFmh3GsDsbftwQDcq1OOyDWW cBOK1eemNQJCgykAIiVSeWKsKYsi+2MlR+sX1chJE6Fi5aiH9VjN/7oRAV/1GnLW TDdZ9pCAJd5UxNKKp5LEC+XH38ZD9EHfvxMK9JfJDPxgBqfOj5Yzk9tBlygTSpqk +K6Cb/7V8qLzv7NS8gYA =uXwP -----END PGP SIGNATURE----- From ronan at iol.ie Tue Apr 5 15:49:13 2011 From: ronan at iol.ie (Ronan Mullally) Date: Tue, 5 Apr 2011 16:49:13 +0100 (IST) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: Hi Sam, On Tue, 5 Apr 2011, Sam Crawford wrote: > I suspect you are right - Apache may be discarding the connection as you > are sending the POST request, resulting in the 503. Incidentally, what > are your Apache keep-alive settings? If I remember rightly, there is a > KeepAliveTimeout (which it sounds like you've been using) and also a > KeepAlive parameter (which controls the maximum number of requests on > one TCP connection, irrespective of timeouts). I've got the number of requests per session set high (thousands). The number of actual back-end requests is modest (10-20 per back-end per second at most) so it won't hit that limit often. 
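[Sam's second suggestion from earlier in the thread — injecting "Connection: close" into backend POST requests so Varnish never reuses a connection Apache may be tearing down — could be sketched in VCL along these lines; an untested sketch for Varnish 2.x, since POSTs are passed:

```vcl
sub vcl_pass {
    # Assumed workaround, not a confirmed fix: ask the backend to close
    # the connection after each POST so a half-closed keepalive
    # connection is never reused for a non-idempotent request.
    if (req.request == "POST") {
        set bereq.http.Connection = "close";
    }
}
```

This trades backend connection reuse for robustness on POSTs only; GETs keep their keepalive connections.]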
> 1) If you disable keepalives completely at Apache, does the problem > disappear? Not entirely. I did this earlier in the process but still saw occurrences (not as many as I saw with the Apache Keepalive timeout = 1, but not significantly fewer than when it was = 5 either). I've disabled keepalives again and will run with it that way overnight, so far it looks like more of the same - I've seen one occurrence in about 2000 POSTs. -Ronan From samcrawford at gmail.com Tue Apr 5 16:14:34 2011 From: samcrawford at gmail.com (samcrawford at gmail.com) Date: Tue, 5 Apr 2011 16:14:34 +0000 Subject: Varnish 503ing on ~1/100 POSTs Message-ID: <502918899-1302020074-cardhu_decombobulator_blackberry.rim.net-1633648465-@b11.c2.bise7.blackberry> Hi Ronan, It'd also be worth getting output from varnishlog for that broken request if possible. I realise it could be difficult to isolate it though. Thanks, Sam ------Original Message------ From: Ronan Mullally To: Sam Crawford Cc: varnish-misc at varnish-cache.org Subject: Re: Varnish 503ing on ~1/100 POSTs Sent: 5 Apr 2011 16:49 Hi Sam, On Tue, 5 Apr 2011, Sam Crawford wrote: > I suspect you are right - Apache may be discarding the connection as you > are sending the POST request, resulting in the 503. Incidentally, what > are your Apache keep-alive settings? If I remember rightly, there is a > KeepAliveTimeout (which it sounds like you've been using) and also a > KeepAlive parameter (which controls the maximum number of requests on > one TCP connection, irrespective of timeouts). I've got the number of requests per session set high (thousands). The number of actual back-end requests is modest (10-20 per back-end per second at most) so it won't hit that limit often. > 1) If you disable keepalives completely at Apache, does the problem > disappear? Not entirely. I did this earlier in the process but still saw occurrences (not as many as I saw with the Apache Keepalive timeout = 1, but not significantly fewer than when it was = 5 either). 
I've disabled keepalives again and will run with it that way overnight, so far it looks like more of the same - I've seen one occurance in about 2000 POSTs. -Ronan Sent from my BlackBerry? wireless device From jonathan.hursey at adrevolution.com Tue Apr 5 16:35:45 2011 From: jonathan.hursey at adrevolution.com (Jonathan Hursey) Date: Tue, 5 Apr 2011 11:35:45 -0500 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: Is there any good documentation out there on how to write inline C for varnish ? On Tue, Apr 5, 2011 at 10:49 AM, Ronan Mullally wrote: > Hi Sam, > > On Tue, 5 Apr 2011, Sam Crawford wrote: > > > I suspect you are right - Apache may be discarding the connection as you > > are sending the POST request, resulting in the 503. Incidentally, what > > are your Apache keep-alive settings? If I remember rightly, there is a > > KeepAliveTimeout (which it sounds like you've been using) and also a > > KeepAlive parameter (which controls the maximum number of requests on > > one TCP connection, irrespective of timeouts). > > I've got the number of requests per session set high (thousands). The > number of actual back-end requests is modest (10-20 per back-end per > second at most) so it won't hit that limit often. > > > 1) If you disable keepalives completely at Apache, does the problem > > disappear? > > Not entirely. I did this eariler in the process but still saw occurances > (not as many as I saw with the Apache Keepalive timeout = 1, but not > significantly fewer than when it was = 5 either). > > I've disabled keepalives again and will run with it that way overnight, so > far it looks like more of the same - I've seen one occurance in about 2000 > POSTs. 
> > > -Ronan > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ksorensen at nordija.com Tue Apr 5 18:42:42 2011 From: ksorensen at nordija.com (Kristian =?ISO-8859-1?Q?Gr=F8nfeldt_S=F8rensen?=) Date: Tue, 05 Apr 2011 20:42:42 +0200 Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: References: <871v2fwizs.fsf@qurzaw.varnish-software.com> Message-ID: <1302028962.13564.35.camel@kriller.nordija.dk> On Tue, 2011-04-05 at 10:09 +0100, Ronan Mullally wrote: > Increasing the Keepalive time on apache on the backends from 1 to 5 > seconds made the biggest impact. I suspect this suggests that the > problem occurs when Varnish tries to direct a POST to a connection > which apache has just closed. > That indicates to me that the hack that was implemented to fix http://www.varnish-cache.org/trac/ticket/749 is not doing what it was supposed to do. The earlier varnishlog snippet from your original post includes a restart, which I assume is the restart added by the fix for #749 - unless you are doing a manual restart in your VCL. It seems that the backend connection that you get when the restart is done is also closed before Varnish sends the request. I had a similar issue (on 2.1.3 which does not include the fix for #749), and "solved" it by setting the keepalive-timeout of my backends insanely high (= 2 days - default was 20 seconds). This of course only works well if you do not have anything other than Varnish talking directly to your backend server since that would allow those clients to hog resources on your backend for longer time - making it easier for anyone to launch a denial of service attack on your backend. We saw the issue when we had two load-spikes after each other closely matching the keepalive-timeout. 
The first spike would make varnish create a lot of backend-connections, the second spike would use all the available connections until it got a connection that had been idle very close to its timeout value, which would then be closed just as Varnish tried to use it. So if you have load-spikes at regular intervals, you will want to adjust your keepalive-settings on the backend, so that they are different from the interval between the load-spikes. I think the best way to solve this would be a configurable keepalive-timeout of Varnish's backend connections, enabling you to set it slightly lower than the keepalive-timeout of your backend. This would ensure that Varnish would always be the one closing the connection. This issue was actually discussed at VUG3 and I added a wishlist entry on PostTwoShoppingList for the feature a couple of weeks ago. (http://www.varnish-cache.org/trac/wiki/PostTwoShoppingList#Keepalivetimeoutonbackendconnections) Best regards Kristian From ronan at iol.ie Tue Apr 5 21:00:44 2011 From: ronan at iol.ie (Ronan Mullally) Date: Tue, 5 Apr 2011 22:00:44 +0100 (IST) Subject: Varnish 503ing on ~1/100 POSTs In-Reply-To: <1302028962.13564.35.camel@kriller.nordija.dk> References: <871v2fwizs.fsf@qurzaw.varnish-software.com> <1302028962.13564.35.camel@kriller.nordija.dk> Message-ID: Hej Kristian, On Tue, 5 Apr 2011, Kristian Grønfeldt Sørensen wrote: > That indicates to me that the hack that was implemented to fix > http://www.varnish-cache.org/trac/ticket/749 is not doing what it was > supposed to do. The earlier varnishlog snippet from your original post > includes a restart, which I assume is the restart added by the fix for > #749 - unless you are doing a manual restart in your VCL. It seems that > the backend connection that you get when the restart is done is also > closed before Varnish sends the request. Correct. I've since implemented a manual restart to make a few extra requests and the varnishlog for a typical incident is below. 
Note that this occured with Keepalive *disabled* on the apache backends (and sess_timeout=5 on varnish). -Ronan 46 ReqStart c 1.2.3.4 2902 1066443305 46 RxRequest c POST 46 RxURL c /ajax.php?do=verifyusername 46 RxProtocol c HTTP/1.1 46 RxHeader c Accept: */* 46 RxHeader c Accept-Language: ko 46 RxHeader c Referer: http://www.redcafe.net/register.php?do=register 46 RxHeader c x-requested-with: XMLHttpRequest 46 RxHeader c Content-Type: application/x-www-form-urlencoded; charset=UTF-8 46 RxHeader c Accept-Encoding: gzip, deflate 46 RxHeader c User-Agent: Mozilla/4.0 (compatible; .... 46 RxHeader c Host: www.redcafe.net 46 RxHeader c Content-Length: 48 46 RxHeader c Connection: Keep-Alive 46 RxHeader c Cache-Control: no-cache 46 RxHeader c Cookie: bbsessionhash=.... 46 VCL_call c recv 46 VCL_return c pass 46 VCL_call c hash 46 VCL_return c hash 46 VCL_call c pass 46 VCL_return c pass 48 BackendOpen b redcafe2 193.27.1.46 62143 193.27.1.45 80 46 Backend c 48 redcafe redcafe2 48 TxRequest b POST 48 TxURL b /ajax.php?do=verifyusername 48 TxProtocol b HTTP/1.1 48 TxHeader b Accept: */* 48 TxHeader b Accept-Language: ko 48 TxHeader b Referer: http://www.redcafe.net/register.php?do=register 48 TxHeader b x-requested-with: XMLHttpRequest 48 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 48 TxHeader b User-Agent: Mozilla/4.0 (.... 48 TxHeader b Host: www.redcafe.net 48 TxHeader b Content-Length: 48 48 TxHeader b Cache-Control: no-cache 48 TxHeader b Cookie: bbsessionhash=.... 
48 TxHeader b Accept-Encoding: gzip 48 TxHeader b X-Forwarded-For: 1.2.3.4 48 TxHeader b X-Varnish: 1066443305 * 46 FetchError c backend write error: 104 (Connection reset by peer) 48 BackendClose b redcafe2 46 VCL_call c error 46 VCL_return c restart 46 VCL_call c recv 46 VCL_return c pass 46 VCL_call c hash 46 VCL_return c hash 46 VCL_call c pass 46 VCL_return c pass 48 BackendOpen b redcafe2 193.27.1.46 62147 193.27.1.45 80 46 Backend c 48 redcafe redcafe2 48 TxRequest b POST 48 TxURL b /ajax.php?do=verifyusername 48 TxProtocol b HTTP/1.1 48 TxHeader b Accept: */* 48 TxHeader b Accept-Language: ko 48 TxHeader b Referer: http://www.redcafe.net/register.php?do=register 48 TxHeader b x-requested-with: XMLHttpRequest 48 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 48 TxHeader b User-Agent: Mozilla/4.0 (... 48 TxHeader b Host: www.redcafe.net 48 TxHeader b Content-Length: 48 48 TxHeader b Cache-Control: no-cache 48 TxHeader b Cookie: bbsessionhash=... 48 TxHeader b X-Forwarded-For: 1.2.3.4 48 TxHeader b Accept-Encoding: gzip 48 TxHeader b X-Varnish: 1066443305 * 46 FetchError c backend write error: 0 (Success) 48 BackendClose b redcafe2 46 VCL_call c error 46 VCL_return c restart 46 VCL_call c recv 46 VCL_return c pass 46 VCL_call c hash 46 VCL_return c hash 46 VCL_call c pass 46 VCL_return c pass 48 BackendOpen b redcafe1 193.27.1.46 24825 193.27.1.44 80 46 Backend c 48 redcafe redcafe1 48 TxRequest b POST 48 TxURL b /ajax.php?do=verifyusername 48 TxProtocol b HTTP/1.1 48 TxHeader b Accept: */* 48 TxHeader b Accept-Language: ko 48 TxHeader b Referer: http://www.redcafe.net/register.php?do=register 48 TxHeader b x-requested-with: XMLHttpRequest 48 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 48 TxHeader b User-Agent: Mozilla/4.0 (... 48 TxHeader b Host: www.redcafe.net 48 TxHeader b Content-Length: 48 48 TxHeader b Cache-Control: no-cache 48 TxHeader b Cookie: bbsessionhash=... 
48 TxHeader b X-Forwarded-For: 1.2.3.4 48 TxHeader b Accept-Encoding: gzip 48 TxHeader b X-Varnish: 1066443305 * 46 FetchError c backend write error: 0 (Success) 48 BackendClose b redcafe1 46 VCL_call c error 46 VCL_return c restart 46 VCL_call c recv 46 VCL_return c pass 46 VCL_call c hash 46 VCL_return c hash 46 VCL_call c pass 46 VCL_return c pass 48 BackendOpen b redcafe2 193.27.1.46 62149 193.27.1.45 80 46 Backend c 48 redcafe redcafe2 48 TxRequest b POST 48 TxURL b /ajax.php?do=verifyusername 48 TxProtocol b HTTP/1.1 48 TxHeader b Accept: */* 48 TxHeader b Accept-Language: ko 48 TxHeader b Referer: http://www.redcafe.net/register.php?do=register 48 TxHeader b x-requested-with: XMLHttpRequest 48 TxHeader b Content-Type: application/x-www-form-urlencoded; charset=UTF-8 48 TxHeader b User-Agent: Mozilla/4.0 (... 48 TxHeader b Host: www.redcafe.net 48 TxHeader b Content-Length: 48 48 TxHeader b Cache-Control: no-cache 48 TxHeader b Cookie: bbsessionhash=... 48 TxHeader b X-Forwarded-For: 1.2.3.4 48 TxHeader b Accept-Encoding: gzip 48 TxHeader b X-Varnish: 1066443305 * 46 FetchError c backend write error: 0 (Success) 48 BackendClose b redcafe2 46 SessionClose c remote closed 46 VCL_call c error 46 VCL_return c restart 46 VCL_call c recv 46 VCL_return c pass 46 VCL_call c error 46 VCL_return c restart 46 VCL_call c deliver 46 VCL_return c deliver 46 TxProtocol c HTTP/1.1 46 TxStatus c 503 46 TxResponse c Service Unavailable 46 TxHeader c Server: Varnish 46 TxHeader c Retry-After: 0 46 TxHeader c Date: Tue, 05 Apr 2011 17:20:00 GMT 46 TxHeader c X-Varnish: 1066443305 46 TxHeader c Age: 0 46 TxHeader c Via: 1.1 varnish 46 TxHeader c Connection: close 46 Debug c "Write error, retval = -1, len = 174, errno = Broken pipe" -1 Length - 0 46 ReqEnd c 1066443305 1302024000.284072638 1302024000.408756971 0.000043154 0.124658108 0.000026226 46 StatSess c 1.2.3.4 2902 0 1 1 0 4 0 174 0 -Ronan From hieubka50 at gmail.com Wed Apr 6 08:48:23 2011 From: hieubka50 at gmail.com 
(Hieu DuongTrung) Date: Wed, 6 Apr 2011 15:48:23 +0700 Subject: Save cookie &session with web load balacer using varnish Message-ID: Hi, I'm configuring Varnish load balancing for 3 application servers. I used a round-robin director, but I don't know how to configure it to preserve sessions and cookies. Please help me! Thanks a lot! Duong Hieu From bjorn at ruberg.no Wed Apr 6 09:02:51 2011 From: bjorn at ruberg.no (=?ISO-8859-1?Q?Bj=F8rn_Ruberg?=) Date: Wed, 06 Apr 2011 11:02:51 +0200 Subject: Save cookie &session with web load balacer using varnish In-Reply-To: References: Message-ID: <4D9C2C3B.7050702@ruberg.no> On 06. april 2011 10:48, Hieu DuongTrung wrote: > Hi, > I'm configuring Varnish load balancing for 3 application servers. I > used a round-robin director, but I don't know how to configure it to > preserve sessions and cookies. Several options are available. If your application servers can share session information between them, make sure to let session cookies through to the backend where required. If sessions are not used often, you could dedicate one application server to handle sessions. Finally, you could use the client director and (somehow) add the session cookie to the client identity. There might be other options as well; these are off the top of my head. Hope this helps, -- Bjørn From andrea.campi at zephirworks.com Wed Apr 6 13:21:08 2011 From: andrea.campi at zephirworks.com (Andrea Campi) Date: Wed, 6 Apr 2011 15:21:08 +0200 Subject: ETA for Varnish 3.0 Message-ID: Hi all, I know this question comes up from time to time, but I have to ask... What is the current feeling: when do you expect 3.0 to come out? I guess the previous answer ("when we feel it's been tested enough") still holds, but I'm looking for a generic timeframe, i.e. this quarter or later. FWIW, we have been stress-testing ESI with gzip, including replicating some actual traffic via em-proxy, and things look really good. Andrea -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pwlazy at gmail.com Thu Apr 7 08:51:37 2011 From: pwlazy at gmail.com (=?GB2312?B?xe3OsA==?=) Date: Thu, 7 Apr 2011 16:51:37 +0800 Subject: a problem about obj.cacheable Message-ID: Hi all, I am learning from the documents on the official Varnish website. In the documents there is a configuration as below: sub vcl_fetch { if (!obj.cacheable) { obj.ttl = 10s; pass; } } I wonder: after Varnish fetches a page from the backend server, what does Varnish base its decision on as to whether the page is cacheable? Please help me! Thanks! Regards! pwlazy -------------- next part -------------- An HTML attachment was scrubbed... URL: From traian.bratucu at eea.europa.eu Thu Apr 7 08:56:22 2011 From: traian.bratucu at eea.europa.eu (Traian Bratucu) Date: Thu, 7 Apr 2011 10:56:22 +0200 Subject: a problem about obj.cacheable In-Reply-To: References: Message-ID: Varnish will decide if the object is cacheable based on the caching headers sent by the backend (Cache-Control etc.) Traian From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of 彭伟 Sent: Thursday, April 07, 2011 10:52 AM To: varnish-misc at varnish-cache.org Subject: a problem about obj.cacheable Hi all, I am learning from the documents on the official Varnish website. In the documents there is a configuration as below: sub vcl_fetch { if (!obj.cacheable) { obj.ttl = 10s; pass; } } I wonder: after Varnish fetches a page from the backend server, what does Varnish base its decision on as to whether the page is cacheable? Please help me! Thanks! Regards! pwlazy -------------- next part -------------- An HTML attachment was scrubbed... URL: From pwlazy at gmail.com Thu Apr 7 09:25:01 2011 From: pwlazy at gmail.com (=?GB2312?B?xe3OsA==?=) Date: Thu, 7 Apr 2011 17:25:01 +0800 Subject: a possible dead cycle?
Message-ID: > > > Hi all, > > at the bottom of the url > http://www.varnish-cache.org/docs/2.1/faq/general.html > > there is a configuration as below: > > > sub vcl_fetch { > if (!obj.cacheable) { > # Limit the lifetime of all 'hit for pass' objects to 10 seconds > obj.ttl = 10s; > pass; > } > } > > From the configuration above: when Varnish fetches a page from the backend server and finds it is not cacheable, does it go to fetch the page from the backend server again, find it is not cacheable again, and so on, again and again? Please help me! Thanks! Regards! pwlazy > -------------- next part -------------- An HTML attachment was scrubbed... URL: From traian.bratucu at eea.europa.eu Thu Apr 7 09:31:14 2011 From: traian.bratucu at eea.europa.eu (Traian Bratucu) Date: Thu, 7 Apr 2011 11:31:14 +0200 Subject: a possible dead cycle? In-Reply-To: References: Message-ID: No, if an object is not cacheable, it will simply not be cached (stored in memory and later served from there instead of being fetched from the backend), but IT WILL BE DELIVERED to the client. The only time Varnish will re-fetch the object from the backend is on a new client request. Traian From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of 彭伟 Sent: Thursday, April 07, 2011 11:25 AM To: varnish-misc at varnish-cache.org Subject: a possible dead cycle? When Varnish fetches a page from the backend server and finds it is not cacheable, does it go to fetch the page from the backend server again, find it is not cacheable again, and so on, again and again? Please help me! Thanks! Regards! pwlazy -------------- next part -------------- An HTML attachment was scrubbed... URL: From pwlazy at gmail.com Thu Apr 7 10:04:31 2011 From: pwlazy at gmail.com (=?GB2312?B?xe3OsA==?=) Date: Thu, 7 Apr 2011 18:04:31 +0800 Subject: a possible dead cycle?
In-Reply-To: References: Message-ID: based on the configuration as below: sub vcl_fetch { if (!obj.cacheable) { # Limit the lifetime of all 'hit for pass' objects to 10 seconds obj.ttl = 10s; pass; } } when the object is not cacheable, then go to "pass" which means going to fetch the object from the backend server again? 2011/4/7 Traian Bratucu > No, if an object is not cacheable, it will simply not be cached (stored in > memory and later served from here instead of fetching from the backend), but > IT WILL BE DELIVERED to the client. > > The only time varnish will re-fetch the object from the backend is on a new > client request. > > > > Traian > > > > *From:* varnish-misc-bounces at varnish-cache.org [mailto: > varnish-misc-bounces at varnish-cache.org] *On Behalf Of *?? > *Sent:* Thursday, April 07, 2011 11:25 AM > *To:* varnish-misc at varnish-cache.org > *Subject:* a possible dead cycle? > > > when varnish fetch a page from backend server, and find it 's not > cacheable, so go to fetch the page from backend sever again? > > > and find it 's not cacheable again , so again and agian .......? > > please help me! thanks ! > > > > Regards! > pwlazy > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From traian.bratucu at eea.europa.eu Thu Apr 7 10:10:46 2011 From: traian.bratucu at eea.europa.eu (Traian Bratucu) Date: Thu, 7 Apr 2011 12:10:46 +0200 Subject: a possible dead cycle? In-Reply-To: References: Message-ID: Please read the documentation you are referring to. You are talking about ?hit for pass? behaviour, it is clearly explained there, in English. Since I cannot explain again in another language, please read it more carefully. From: ?? 
[mailto:pwlazy at gmail.com] Sent: Thursday, April 07, 2011 12:05 PM To: Traian Bratucu Cc: varnish-misc at varnish-cache.org Subject: Re: a possible dead cycle? based on the configuration as below: sub vcl_fetch { if (!obj.cacheable) { # Limit the lifetime of all 'hit for pass' objects to 10 seconds obj.ttl = 10s; pass; } } when the object is not cacheable, then go to "pass" which means going to fetch the object from the backend server again? 2011/4/7 Traian Bratucu > No, if an object is not cacheable, it will simply not be cached (stored in memory and later served from here instead of fetching from the backend), but IT WILL BE DELIVERED to the client. The only time varnish will re-fetch the object from the backend is on a new client request. Traian From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of ?? Sent: Thursday, April 07, 2011 11:25 AM To: varnish-misc at varnish-cache.org Subject: a possible dead cycle? when varnish fetch a page from backend server, and find it 's not cacheable, so go to fetch the page from backend sever again? and find it 's not cacheable again , so again and agian .......? please help me! thanks ! Regards! pwlazy _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-francois.laurens at rts.ch Thu Apr 7 10:38:46 2011 From: jean-francois.laurens at rts.ch (Jean-Francois Laurens) Date: Thu, 07 Apr 2011 12:38:46 +0200 Subject: Vanrish 2.1.5 eating memory, hit % decrease Message-ID: Hi there ! I recently updated varnish from version 2.0.6 to 2.1.5 on a centos 5.4 server. After pretty much a day running 2.1.5, the system went out of memory. The server has 8G of memory and we specified a file of 50Gb as varnish cache file. 
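The storage model in question is selected by varnishd's -s flag; for readers following along, a sketch of the two common forms (paths and sizes here are illustrative, not the actual command line from this server):

```sh
# File-backed storage: the cache lives in a 50G file on disk, and the
# kernel's page cache decides which parts of it stay resident -- so the
# varnishd process can appear to consume nearly all available RAM.
varnishd -f /etc/varnish/default.vcl -s file,/var/lib/varnish/storage.bin,50G

# Memory-only storage: caps the cache at an explicit size (per-object
# overhead comes on top of the 6G).
varnishd -f /etc/varnish/default.vcl -s malloc,6G
```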
I changed some options yesterday following Kristian Lyngstol's recommendations ( http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/) and did a restart, so the system is now running. I decreased the size of the cache file to 40Gb too. Here is a top output where you can see that varnish is consuming pretty much all the memory available: top -cbn 1 top - 11:54:24 up 625 days, 19:51, 1 user, load average: 0.52, 0.41, 0.36 Tasks: 110 total, 1 running, 109 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.1%id, 0.5%wa, 0.0%hi, 0.1%si, 0.0%st Mem: 8177764k total, 8132712k used, 45052k free, 13100k buffers Swap: 4096564k total, 263324k used, 3833240k free, 7337568k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3733 varnish 15 0 44.6g 7.1g 6.7g S 2.0 91.4 9:56.07 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 0.0.0.0:6082 -t 120 -w 100,3000,120 -u varnish -g varn .... 3732 root 15 0 106m 720 492 S 0.0 0.0 0:00.72 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 0.0.0.0:6082 -t 120 -w 100,3000,120 -u varnish -g varn Regularly it's consuming more and more swap as well: From 15mn ago: free -m total used free shared buffers cached Mem: 7986 7941 44 0 12 7163 -/+ buffers/cache: 765 7220 Swap: 4000 258 3742 From 5mn ago: free -m total used free shared buffers cached Mem: 7986 7940 45 0 15 7163 -/+ buffers/cache: 761 7224 Swap: 4000 260 3739 From 1mn ago: free -m total used free shared buffers cached Mem: 7986 7940 45 0 17 7160 -/+ buffers/cache: 763 7222 Swap: 4000 262 3737 I understood that varnish relies on the system itself for memory allocation. Could we say that the bigger your cache file size is, the more memory varnish needs? Or am I facing an issue with my VCL that could have this side effect? In addition to this I've seen the hit percentage decreasing from 65% average with 2.0.6 to 40% with 2.1.5.
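One thing the 2.0.6-to-2.1.5 comparison spans is a VCL syntax change: in vcl_fetch the fetched object is now named beresp, and terminating actions need an explicit return(). A minimal 2.1-style sketch of the common not-cacheable branch (the 10s TTL is illustrative):

```vcl
sub vcl_fetch {
    if (!beresp.cacheable) {
        # was "obj.ttl" and a bare "pass;" in 2.0.x
        set beresp.ttl = 10s;
        return (pass);
    }
}
```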
Apart from changing obj to beresp in the vcl_fetch, I had to specify return(deliver) or return(pass) instead of only deliver or pass. Would it be a reason for this ? Would you be able to give me pointers for me to dig a bit more ? I attached a plot from cacti which would show the difference between 2.0.6 and 2.1.5 (the gap is due to the time to update the server; the traffic was sent to another server ...) Thanks in advance for your advice ! Jean-Francois Laurens Ingénieur Système Unix Resources et Développement Secteur Backend RTS - Radio Télévision Suisse Quai Ernest-Ansermet 20 Case postale 234 CH - 1211 Genève 8 T +41 (0)58 236 81 63 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graph_vanrish01.png Type: application/octet-stream Size: 27804 bytes Desc: not available URL: From weipeng.pengw at alibaba-inc.com Thu Apr 7 10:39:10 2011 From: weipeng.pengw at alibaba-inc.com (=?GB2312?B?xe3OsA==?=) Date: Thu, 7 Apr 2011 18:39:10 +0800 Subject: a possible dead cycle? In-Reply-To: References: Message-ID: I don't express myself well, so let me describe in detail the whole process as I understand it; please help me check it. Based on the configuration below: sub vcl_fetch { if (!obj.cacheable) { # Limit the lifetime of all 'hit for pass' objects to 10 seconds obj.ttl = 10s; pass; } } 1) after Varnish gets the object from the backend server, the process enters the vcl_fetch hook according to the configuration code above 2) if the obj is not cacheable, Varnish switches to "pass mode" according to the configuration code above 3) does entering pass mode mean Varnish will get the object from the backend server again? On 7 April 2011 at 18:10, Traian Bratucu wrote: Please read the documentation you are referring to. You are talking about "hit for pass"
Since I cannot explain again in another language, please read it more carefully. From: ?? [mailto:pwlazy at gmail.com] Sent: Thursday, April 07, 2011 12:05 PM To: Traian Bratucu Cc: varnish-misc at varnish-cache.org Subject: Re: a possible dead cycle? based on the configuration as below: sub vcl_fetch { if (!obj.cacheable) { # Limit the lifetime of all 'hit for pass' objects to 10 seconds obj.ttl = 10s; pass; } } when the object is not cacheable, then go to "pass" which means going to fetch the object from the backend server again? 2011/4/7 Traian Bratucu > No, if an object is not cacheable, it will simply not be cached (stored in memory and later served from here instead of fetching from the backend), but IT WILL BE DELIVERED to the client. The only time varnish will re-fetch the object from the backend is on a new client request. Traian From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of ?? Sent: Thursday, April 07, 2011 11:25 AM To: varnish-misc at varnish-cache.org Subject: a possible dead cycle? when varnish fetch a page from backend server, and find it 's not cacheable, so go to fetch the page from backend sever again? and find it 's not cacheable again , so again and agian .......? please help me! thanks ! Regards! pwlazy _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc ________________________________ This email (including any attachments) is confidential and may be legally privileged. If you received this email in error, please delete it immediately and do not copy it or use it for any purpose or disclose its contents to any other person. Thank you. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From traian.bratucu at eea.europa.eu Thu Apr 7 11:27:09 2011 From: traian.bratucu at eea.europa.eu (Traian Bratucu) Date: Thu, 7 Apr 2011 13:27:09 +0200 Subject: a possible dead cycle? In-Reply-To: References: Message-ID: It works like this: 1. Varnish retrieves the object and determines it is not cacheable 2. Through the "hook" in fetch, you tell Varnish that THIS object should be marked as not cacheable for the next 10 seconds 3. The object is delivered to the client 4. At some other point, within the next 10 seconds, another client request for the same object is received 5. Varnish knows that the object is not in the cache, and not cacheable, so it fetches it directly from the backend (pass) The hit-for-pass behaviour doesn't do much else than avoid an unnecessary lookup for the object in the internal cache. Hope this is clear now. Traian From: 彭伟 [mailto:weipeng.pengw at alibaba-inc.com] Sent: Thursday, April 07, 2011 12:39 PM To: Traian Bratucu Cc: varnish-misc at varnish-cache.org Subject: Re: a possible dead cycle? I don't express myself well, so let me describe in detail the whole process as I understand it; please help me check it. Based on the configuration below: sub vcl_fetch { if (!obj.cacheable) { # Limit the lifetime of all 'hit for pass' objects to 10 seconds obj.ttl = 10s; pass; } } 1) after Varnish gets the object from the backend server, the process enters the vcl_fetch hook according to the configuration code above 2) if the obj is not cacheable, Varnish switches to "pass mode" according to the configuration code above 3) does entering pass mode mean Varnish will get the object from the backend server again? On 7 April 2011 at 18:10, Traian Bratucu wrote: Please read the documentation you are referring to. You are talking about "hit for pass"
behaviour, it is clearly explained there, in English. Since I cannot explain again in another language, please read it more carefully. From: ?? [mailto:pwlazy at gmail.com] Sent: Thursday, April 07, 2011 12:05 PM To: Traian Bratucu Cc: varnish-misc at varnish-cache.org Subject: Re: a possible dead cycle? based on the configuration as below: sub vcl_fetch { if (!obj.cacheable) { # Limit the lifetime of all 'hit for pass' objects to 10 seconds obj.ttl = 10s; pass; } } when the object is not cacheable, then go to "pass" which means going to fetch the object from the backend server again? 2011/4/7 Traian Bratucu > No, if an object is not cacheable, it will simply not be cached (stored in memory and later served from here instead of fetching from the backend), but IT WILL BE DELIVERED to the client. The only time varnish will re-fetch the object from the backend is on a new client request. Traian From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of ?? Sent: Thursday, April 07, 2011 11:25 AM To: varnish-misc at varnish-cache.org Subject: a possible dead cycle? when varnish fetch a page from backend server, and find it 's not cacheable, so go to fetch the page from backend sever again? and find it 's not cacheable again , so again and agian .......? please help me! thanks ! Regards! pwlazy _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc ________________________________ This email (including any attachments) is confidential and may be legally privileged. If you received this email in error, please delete it immediately and do not copy it or use it for any purpose or disclose its contents to any other person. Thank you. 
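Traian's five steps all hang off the snippet quoted throughout this thread; the same 2.1-era FAQ configuration, annotated step by step as a sketch:

```vcl
sub vcl_fetch {
    if (!obj.cacheable) {
        # Step 2: remember "not cacheable" for the next 10 seconds.
        # Steps 4-5: a request arriving inside that window skips the
        # cache and is passed straight to the backend, instead of
        # waiting behind another fetch of the same object.
        obj.ttl = 10s;
        pass;
    }
}
```

The object fetched in step 1 is still delivered to the client (step 3); only the "hit for pass" marker, not the object body, is kept for the 10 seconds.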
-------------- next part -------------- An HTML attachment was scrubbed... URL: From kbrownfield at google.com Thu Apr 7 23:14:33 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Thu, 7 Apr 2011 16:14:33 -0700 Subject: Vanrish 2.1.5 eating memory, hit % decrease In-Reply-To: References: Message-ID: This is expected behavior with "-sfile" -- Varnish will use all available RAM to cache its much larger on-disk cache component. If you can get away with memory-only caching, you can use "-smalloc" to utilize a specific amount of RAM. If you do want to use "-sfile" but are having issues with swap or memory allocation, increasing /proc/sys/vm/min_free_kbytes (or vm.min_free_kbytes via sysctl) to something like 131072 will stabilize performance /in my experience/. Hope it helps, -- kb On Thu, Apr 7, 2011 at 03:38, Jean-Francois Laurens < jean-francois.laurens at rts.ch> wrote: > Hi there ! > > I recently updated varnish from version 2.0.6 to 2.1.5 on a centos 5.4 > server. > > After pretty much a day running 2.1.5, the system went out of memory. The > server has 8G of memory and we specified a file of 50Gb as varnish cache > file. > > I changed some options yesterday following Kristian Lyngstol's > recommendations ( > http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/ ) and did a restart, so the system is now running. > > I decreased the size of the cache file to 40Gb too.
> > Here is a top output where you can see that varnish is consuming pretty > much all the memory available: > top -cbn 1 > top - 11:54:24 up 625 days, 19:51, 1 user, load average: 0.52, 0.41, 0.36 > Tasks: 110 total, 1 running, 109 sleeping, 0 stopped, 0 zombie > Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.1%id, 0.5%wa, 0.0%hi, 0.1%si, > 0.0%st > Mem: 8177764k total, 8132712k used, 45052k free, 13100k buffers > Swap: 4096564k total, 263324k used, 3833240k free, 7337568k cached > > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND > > 3733 varnish 15 0 44.6g 7.1g 6.7g S 2.0 91.4 9:56.07 > /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f > /etc/varnish/default.vcl -T 0.0.0.0:6082 -t 120 -w 100,3000,120 -u varnish > -g varn > .... > 3732 root 15 0 106m 720 492 S 0.0 0.0 0:00.72 > /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f > /etc/varnish/default.vcl -T 0.0.0.0:6082 -t 120 -w 100,3000,120 -u varnish > -g varn > > Regularly it?s consuming more and more swap aswell: > From 15mn ago: > free -m > total used free shared buffers cached > Mem: 7986 7941 44 0 12 7163 > -/+ buffers/cache: 765 7220 > Swap: 4000 258 3742 > > From 5 mn ago: > free -m > total used free shared buffers cached > Mem: 7986 7940 45 0 15 7163 > -/+ buffers/cache: 761 7224 > Swap: 4000 260 3739 > > From 1mn ago: > free -m > total used free shared buffers cached > Mem: 7986 7940 45 0 17 7160 > -/+ buffers/cache: 763 7222 > Swap: 4000 262 3737 > > I understood that varnish relies on the system itself for memory > allocation. > Could we say that the bigger you file cache size is the more varnish need > memory ? > Or Am I facing an issue with my vcl that could have this side effect or ?? > > In addition to this I've seen the percentage hit decreasing from 65% > average with 2.0.6 to 40% with 2.1.5. > Apart from changing obj to beresp in the vcl_fetch, I had to specify > return(deliver) or return(pass) instead of only deliver or pass. > Would it be a reason for this ? 
> Would you be able to give me pointers for me to dig a bit more ? > > I attached a plot from cacti which would show the difference between 2.0.6 > and 2.1.5 (the gap is due to the time to update the server; the traffic was > sent to another server ...) > > > Thanks in advance for your advice ! > > Jean-Francois Laurens > Ingénieur Système Unix > Resources et Développement > Secteur Backend > RTS - Radio Télévision Suisse > Quai Ernest-Ansermet 20 > Case postale 234 > CH - 1211 Genève 8 > T +41 (0)58 236 81 63 > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-francois.laurens at rts.ch Fri Apr 8 08:55:10 2011 From: jean-francois.laurens at rts.ch (Jean-Francois Laurens) Date: Fri, 08 Apr 2011 10:55:10 +0200 Subject: Vanrish 2.1.5 eating memory, hit % decrease In-Reply-To: Message-ID: Hi Ken, Thanks for the hint ! You're setting 128Mb here; how did you get to this number ? I read somewhere that this value can be set to 10% of the actual memory size, which would be in my case 800Mb; does that make sense to you ? I read as well that setting this value too high would crash the system immediately. Yesterday evening, the system was under heavy load but varnish did not hang ! Instead it dropped all its objects ! Then the load went back to normal. It seems setting -sfile to 40Gb better suits the memory capability of this server. A question remains though ... Why were all the objects dropped ? Attached is a plot from cacti regarding the number of objects.
The only thing I could get form the messages log is this : Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (3733) died signal=3 Apr 7 19:00:29 server-01-39 varnishd[3732]: Child cleanup complete Apr 7 19:00:29 server-01-39 varnishd[3732]: child (29359) Started Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said Child starts Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said managed to mmap 42949672960 bytes of 42949672960 How could I get to know what is realy happening that could explain this behaviour ? Cheers, Jef Le 08/04/11 01:14, ??Ken Brownfield?? a ?crit?: > This is expected behavior with "-sfile" -- Varnish will use all available RAM > to cache its much larger on-disk cache component. ?If you can get away with > memory-only caching, you can use "-smalloc" to utilize a specific amount of > RAM. > > If you do want to use "-sfile" but are having issue with swap or memory > allocation, increasing /proc/sys/vm/min_free_kbytes (or vm.min_free_kbytes via > sysctl) to something like 131072 will stabilize performance /in my > experience/. > > Hope it helps, > --? > kb > > > > On Thu, Apr 7, 2011 at 03:38, Jean-Francois Laurens > wrote: >> Hi there ! >> >> I recently updated varnish from version 2.0.6 to 2.1.5 on a centos 5.4 >> server. >> >> After pretty much a day running 2.1.5, the system went out of memory. The >> server has 8G of memory and we specified a file of 50Gb as varnish cache >> file. >> >> I changed some options yesterday following Kristian Lyngstol recommendations >> ( http://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/) and >> did a restart so the system is now running. >> >> I decreased the size of the cache file to 40Gb to. 
>> >> Here is a top output where you can see that varnish is consuming pretty much >> all the memory available: >> top -cbn 1 >> top - 11:54:24 up 625 days, 19:51, ?1 user, ?load average: 0.52, 0.41, 0.36 >> Tasks: 110 total, ??1 running, 109 sleeping, ??0 stopped, ??0 zombie >> Cpu(s): ?0.1%us, ?0.2%sy, ?0.0%ni, 99.1%id, ?0.5%wa, ?0.0%hi, ?0.1%si, >> ?0.0%st >> Mem: ??8177764k total, ?8132712k used, ???45052k free, ???13100k buffers >> Swap: ?4096564k total, ??263324k used, ?3833240k free, ?7337568k cached >> >> ??PID USER ?????PR ?NI ?VIRT ?RES ?SHR S %CPU %MEM ???TIME+ ?COMMAND >> ????????????????????????????????????????????????????????????????????????????? >> ?????????????????????????????????????????????????? >> ?3733 varnish ??15 ??0 44.6g 7.1g 6.7g S ?2.0 91.4 ??9:56.07 >> /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl >> -T 0.0.0.0:6082 -t 120 -w 100,3000,120 -u varnish -g >> varn >> .... >> 3732 root ?????15 ??0 ?106m ?720 ?492 S ?0.0 ?0.0 ??0:00.72 >> /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl >> -T 0.0.0.0:6082 -t 120 -w 100,3000,120 -u varnish -g >> varn >> >> Regularly it?s consuming more and more swap aswell: >> From 15mn ago: >> free -m >> ?????????????total ??????used ??????free ????shared ???buffers ????cached >> Mem: ?????????7986 ??????7941 ????????44 ?????????0 ????????12 ??????7163 >> -/+ buffers/cache: ???????765 ??????7220 >> Swap: ????????4000 ???????258 ??????3742 >> >> From 5 mn ago: >> free -m >> ?????????????total ??????used ??????free ????shared ???buffers ????cached >> Mem: ?????????7986 ??????7940 ????????45 ?????????0 ????????15 ??????7163 >> -/+ buffers/cache: ???????761 ??????7224 >> Swap: ????????4000 ???????260 ??????3739 >> >> From 1mn ago: >> free -m >> ?????????????total ??????used ??????free ????shared ???buffers ????cached >> Mem: ?????????7986 ??????7940 ????????45 ?????????0 ????????17 ??????7160 >> -/+ buffers/cache: ???????763 ??????7222 >> Swap: 
????????4000 ???????262 ??????3737 >> >> I understood that varnish relies on the system itself for memory allocation. >> Could we say that the bigger you file cache size is the more varnish need >> memory ? >> Or Am I facing an issue with my vcl that could have this side effect or ?? >> >> In addition to this I've seen the percentage hit decreasing from 65% average >> with 2.0.6 to 40% with 2.1.5. >> Apart from changing obj to beresp in the vcl_fetch, I had to specify >> return(deliver) or return(pass) instead of only deliver or pass. >> Would it be a reason for this ? >> Would you be able to give me pointers for me to dig a bit more ? >> >> I Attached a plot from cacti wich would show the difference between 2.0.6 and >> 2.1.5 (the gap is due to the time to update the server, the trafic was sent >> to an other server ...) >> >> >> Thanks in advance for your advices ! >> >> Jean-Francois Laurens >> Ing?nieur Syst?me Unix >> Resources et D?veloppement >> Secteur Backend >> RTS - Radio T?l?vision Suisse >> Quai Ernest-Ansermet 20 ??????????????????????? >> Case postale 234 ??????????????????????????????????? >> CH - 1211 Gen?ve 8 >> T +41 (0)58 236 81 63 >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> Jean-Francois Laurens >> Ing?nieur Syst?me Unix >> Resources et D?veloppement >> Secteur Backend >> RTS - Radio T?l?vision Suisse >> Quai Ernest-Ansermet 20 >> Case postale 234 >> CH - 1211 Gen?ve 8 >> T +41 (0)58 236 81 63 >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: varnish_data_structure_size_cacti.png Type: application/octet-stream Size: 52759 bytes Desc: not available URL:

From kbrownfield at google.com Fri Apr 8 20:55:29 2011
From: kbrownfield at google.com (Ken Brownfield)
Date: Fri, 8 Apr 2011 13:55:29 -0700
Subject: Varnish 2.1.5 eating memory, hit % decrease
In-Reply-To: References: Message-ID:

This means the child process died and restarted (the reason for this should appear earlier in the log; perhaps your cli_timeout is too low under a heavily loaded system -- try 20s).

"-sfile" is not persistent storage, so when the child process restarts it uses a new, empty storage structure. You should have luck with "-spersistent" on the latest Varnish or trunk, at least for child process restarts.

FWIW,
-- kb

On Fri, Apr 8, 2011 at 01:55, Jean-Francois Laurens <jean-francois.laurens at rts.ch> wrote:
> Hi Ken,
>
> Thanks for the hint!
> You're setting 128Mb here -- how did you get to this number? I read
> somewhere that this value can be set to 10% of the actual memory size,
> which would be 800Mb in my case; does that make sense to you?
> I read as well that setting this value too high would crash the system
> immediately.
>
> Yesterday evening, the system was under heavy load but varnish did not hang!
> Instead it dropped all its objects! Then the load went back to normal.
> It seems setting -sfile to 40Gb suits the memory capability of this server better.
> A question remains though ... Why were all the objects dropped?
> Attached is a plot from Cacti regarding the number of objects.
>
> The only thing I could get from the messages log is this:
> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (3733) died signal=3
> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child cleanup complete
> Apr 7 19:00:29 server-01-39 varnishd[3732]: child (29359) Started
> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said
> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said Child starts
> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said managed to
> mmap 42949672960 bytes of 42949672960
>
> How could I get to know what is really happening that could explain this
> behaviour?
>
> Cheers,
> Jef
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From kbrownfield at google.com Fri Apr 8 21:09:23 2011
From: kbrownfield at google.com (Ken Brownfield)
Date: Fri, 8 Apr 2011 14:09:23 -0700
Subject: Varnish 2.1.5 eating memory, hit % decrease
In-Reply-To: References: Message-ID:

I forgot about your min_free_kbytes question:

While I would personally recommend 131072 as *a starting point*, this value does not translate directly to what is actually retained as free RAM. In my experience, the kernel's behavior is non-linear, non-deterministic, and very delicate. Usually the kernel will keep much more free RAM than specified (2-3x), and modifying this value too often under load will cause permanent behavior problems in the kernel.

Setting it to 10% is a terrible idea under any circumstance I can imagine. The goal with this setting in the context of a backing-store cache is to set it high enough that you have 5-15 seconds of read/write I/O throughput available for bursts. For example, if Varnish is committing 5MB/s to/from disk, make sure you have 25-75MB of RAM free at a minimum. This might only translate to a min_free_kbytes of 12000-30000.

I'd strongly suggest modifying the value slowly and carefully, ideally only once after a reboot via sysctl.conf.
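As a concrete sketch of that one-time change (the 131072 figure is the starting point suggested above, not a universal recommendation -- size it to your own burst I/O rate):

```shell
# /etc/sysctl.conf -- picked up at boot, so the value is set exactly once:
#   vm.min_free_kbytes = 131072

# To apply it immediately, once, as root (not repeatedly under load):
sysctl -w vm.min_free_kbytes=131072

# Verify what the kernel actually took:
cat /proc/sys/vm/min_free_kbytes
```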
But once done, my 1TB -spersistent Varnish instances became very stable.
-- kb

On Fri, Apr 8, 2011 at 13:55, Ken Brownfield wrote:
> This means the child process died and restarted (the reason for this should
> appear earlier in the log; perhaps your cli_timeout is too low under a
> heavily loaded system -- try 20s).
>
> "-sfile" is not persistent storage, so when the child process restarts it
> uses a new, empty storage structure. You should have luck with
> "-spersistent" on the latest Varnish or trunk, at least for child process
> restarts.
>
> FWIW,
> -- kb
>
> On Fri, Apr 8, 2011 at 01:55, Jean-Francois Laurens
> <jean-francois.laurens at rts.ch> wrote:
>> Hi Ken,
>>
>> Thanks for the hint!
>> You're setting 128Mb here -- how did you get to this number? I read
>> somewhere that this value can be set to 10% of the actual memory size,
>> which would be 800Mb in my case; does that make sense to you?
>> I read as well that setting this value too high would crash the system
>> immediately.
>>
>> Yesterday evening, the system was under heavy load but varnish did not hang!
>> Instead it dropped all its objects! Then the load went back to normal.
>> It seems setting -sfile to 40Gb suits the memory capability of this server better.
>> A question remains though ... Why were all the objects dropped?
>> Attached is a plot from Cacti regarding the number of objects.
>>
>> The only thing I could get from the messages log is this:
>> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (3733) died signal=3
>> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child cleanup complete
>> Apr 7 19:00:29 server-01-39 varnishd[3732]: child (29359) Started
>> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said
>> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said Child starts
>> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said managed to
>> mmap 42949672960 bytes of 42949672960
>>
>> How could I get to know what is really happening that could explain this
>> behaviour?
>>
>> Cheers,
>> Jef
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From krjeschke at omniti.com Fri Apr 8 14:51:29 2011
From: krjeschke at omniti.com (Katherine Jeschke)
Date: Fri, 8 Apr 2011 10:51:29 -0400
Subject: Surge 2011 CFP Deadline Extended
Message-ID:

OmniTI is pleased to announce that the CFP deadline for Surge 2011, the Scalability and Performance Conference (Baltimore: Sept 28-30, 2011), has been extended to 23:59:59 EDT, April 17, 2011. The event focuses on case studies that demonstrate successes (and failures) in Web applications and Internet architectures.

New this year: Hack Day and Unconference on September 28th. For information about topics: http://omniti.com/surge/2011. Get inspired by the 2010 sessions, now online at http://omniti.com/surge/2010

2010 attendees compared Surge to the early days of Velocity, and our speakers received 3.5-4 out of 4 stars for quality of presentation and quality of content! Nearly 90% of first-year attendees are planning to come again in 2011.

For more information about the CFP or sponsorship of the event, please contact us: surge (AT) omniti (DOT) com.

-- Katherine Jeschke
Marketing Director
OmniTI Computer Consulting, Inc.
7070 Samuel Morse Drive, Ste. 150
Columbia, MD 21046
O: 410/872-4910, 222 C: 443/643-6140
omniti.com circonus.com
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jean-francois.laurens at rts.ch Tue Apr 12 15:05:57 2011
From: jean-francois.laurens at rts.ch (Jean-Francois Laurens)
Date: Tue, 12 Apr 2011 17:05:57 +0200
Subject: Varnish 2.1.5 eating memory, hit % decrease
In-Reply-To: Message-ID:

Thanks for the advice, I'll try with 131072 and see if I can get better behavior already.

Jef

On 08/04/11 23:09, "Ken Brownfield" wrote:

> I forgot about your min_free_kbytes question:
>
> While I would personally recommend 131072 as a starting point, this value does
> not translate directly to what is actually retained as free RAM. In my
> experience, the kernel's behavior is non-linear, non-deterministic, and very
> delicate. Usually the kernel will keep much more free RAM than specified
> (2-3x), and modifying this value too often under load will cause permanent
> behavior problems in the kernel.
>
> Setting it to 10% is a terrible idea under any circumstance I can imagine.
> The goal with this setting in the context of a backing-store cache is to set
> it high enough that you have 5-15 seconds of read/write I/O throughput
> available for bursts. For example, if Varnish is committing 5MB/s to/from
> disk, make sure you have 25-75MB of RAM free at a minimum. This might only
> translate to a min_free_kbytes of 12000-30000.
>
> I'd strongly suggest modifying the value slowly and carefully, ideally only
> once after a reboot via sysctl.conf. But once done, my 1TB -spersistent
> Varnish instances became very stable.
> --
> kb
>
> On Fri, Apr 8, 2011 at 13:55, Ken Brownfield wrote:
>> This means the child process died and restarted (the reason for this should
>> appear earlier in the log; perhaps your cli_timeout is too low under a
>> heavily loaded system -- try 20s).
>> >> "-sfile" is not persistent storage, so when the child process restarts it >> uses a new, empty storage structure. ?You should have luck with >> "-spersistent" on the latest Varnish or trunk, at least for child process >> restarts. >> >> FWIW, >> --? >> kb >> >> >> >> On Fri, Apr 8, 2011 at 01:55, Jean-Francois Laurens >> wrote: >>> Hi Ken, >>> >>> Thanks for the hint ! >>> You?re affecting here 128Mb, how did you get to this munber ? I read >>> somewhere that this value can be set to 10% of the actual memory size which >>> would be in my case 800Mb, does it make sense for you ? >>> I read aswell that setting this value to high would crash the system >>> immediately. >>> >>> >>> Yesterday evening, the system was in heavy load but varnish did not hang ! >>> Instead it dropped all its objects ! Then the load went back fine. >>> It seems setting ?sfile to 40Gb suits better the memory capability for this >>> server. >>> A question remains though ... Why all the objects were dropped ? >>> Attached is a plot from cacti regarding the number of objects. >>> >>> The only thing I could get form the messages log is this : >>> Apr ?7 19:00:29 server-01-39 varnishd[3732]: Child (3733) died signal=3 >>> Apr ?7 19:00:29 server-01-39 varnishd[3732]: Child cleanup complete >>> Apr ?7 19:00:29 server-01-39 varnishd[3732]: child (29359) Started >>> Apr ?7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said >>> Apr ?7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said Child starts >>> Apr ?7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said managed to >>> mmap 42949672960 bytes of 42949672960 >>> >>> >>> How could I get to know what is realy happening that could explain this >>> behaviour ? 
>>>
>>> Cheers,
>>> Jef

> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
> Jean-Francois Laurens
> Ingénieur Système Unix
> Ressources et Développement
> Secteur Backend
> RTS - Radio Télévision Suisse
> Quai Ernest-Ansermet 20
> Case postale 234
> CH - 1211 Genève 8
> T +41 (0)58 236 81 63
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ionathan at gmail.com Thu Apr 14 05:46:30 2011
From: ionathan at gmail.com (Jonathan Leibiusky)
Date: Thu, 14 Apr 2011 02:46:30 -0300
Subject: show cached content
Message-ID:

Hi! Is there any way to know which URLs are cached best in varnish? By that I mean the URLs with the best hit ratio.

Thanks!
Jonathan
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From perbu at varnish-software.com Thu Apr 14 06:39:56 2011
From: perbu at varnish-software.com (Per Buer)
Date: Thu, 14 Apr 2011 08:39:56 +0200
Subject: show cached content
In-Reply-To: References: Message-ID:

Hi Jonathan,

On Thu, Apr 14, 2011 at 7:46 AM, Jonathan Leibiusky wrote:
>
> Hi! Is there any way to know which URLs are cached best in varnish? By that I mean the URLs with the best hit ratio.

You could log obj.hits from vcl_deliver, filter the results out of the shmlog, and then crunch them.

-- Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer
Varnish makes websites fly! Whitepapers | Video | Twitter

From perbu at varnish-software.com Thu Apr 14 16:18:00 2011
From: perbu at varnish-software.com (Per Buer)
Date: Thu, 14 Apr 2011 18:18:00 +0200
Subject: web based forum now online
Message-ID:

Hi.

Most of us old farts like 'old school' project discussion tools such as mailing lists and IRC. A bunch of people have told me that they are not entirely comfortable with these tools and would like to have a web based forum.
I've heard the Pfsense people made a web forum some time back and it has worked out rather well for them.

If you like web forums, I invite you to come on over and hang out. If not, don't. The forum is located at https://www.varnish-cache.org/forum/

Cheers,
Per.

-- Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer
*Varnish makes websites fly!* Whitepapers | Video | Twitter
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ml at luv.guly.org Thu Apr 14 16:28:17 2011
From: ml at luv.guly.org (Sandro guly Zaccarini)
Date: Thu, 14 Apr 2011 18:28:17 +0200
Subject: web based forum now online
In-Reply-To: References: Message-ID: <20110414162817.GA17499@shivaya.guly.org>

On Thu, Apr 14, 2011 at 06:18:00PM +0200, Per Buer wrote:
> Hi.
>
> Most of us old farts like 'old school' project discussion tools such as
> mailing lists and IRC. A bunch of people have told me that they are
> not entirely comfortable with these tools and would like to have a web based
> forum. I've heard the Pfsense people made a web forum some time back and it
> has worked out rather well for them.
>
> If you like web forums, I invite you to come on over and hang out. If not,
> don't. The forum is located at https://www.varnish-cache.org/forum/

Snort recently dropped both the mailing list and the forum for a Google group, which offers both at the same time (read: no duplication of information/problems).

Did you consider it?

sz
-- /"\ taste your favourite IT consultant \ / gpg public key http://www.guly.org/guly.asc X / \

From geoff at uplex.de Thu Apr 14 16:35:28 2011
From: geoff at uplex.de (Geoff Simmons)
Date: Thu, 14 Apr 2011 18:35:28 +0200
Subject: web based forum now online
In-Reply-To: References: Message-ID: <4DA72250.5000908@uplex.de>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 04/14/11 06:18 PM, Per Buer wrote:
>
> If you like web forums, I invite you to come on over and hang out. If not,
> don't.
The forum is located at https://www.varnish-cache.org/forum/ Could the forum be set up so that if you already have a login/password for the Varnish wiki, then you can use the same login for the forum? Best, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Schwanenwik 24 22087 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (SunOS) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJNpyJQAAoJEOUwvh9pJNURJWgQAIcV8miZE/XZnxTXDQI3BohE LPTewCaRZIRrvtErp3NN5zSeyV5Y4pzDVKSBenhwgkuItyiRsYmqPTerQ2vaNJEu dY+wuMDBUewL7T918pryY1jgfDdMGrn7nCWyq8Xw02hgw0hMztmrrQCflwvK44We EeohoqqY//Ugj7ulnaohOMt+P/BO8aqdlqLC2I2H+3485xTYYH7eu/gosIBbP+SF pKw8F/lYRkim48aexzM+BPgfKfOhgI05dI0JIDvC1/lJYIaXHbeZydWk0P9hNu/k reVrSqzLBArDELn912hhd8VtHBmpOVDicsjOjYzFdLJFJTXbHv+vtkVweYxk+Gut i3vR13AA+WiN0P8tJAV8gtxku86SPc+bojAVwNCZhkyfiSw1kYouEtFMfRyJ24Qg rDPcVFpfaBPTGtzyGKkTCbIQqhqZ70SiMspJy7NMgwRVqsUk6mylMLJbITSDWQNR GBxvorRHb826Rwa0b12oyrRCMnrnGONXyTt9ONNw0Hw4Apy1kYq3ghKBDjY2YBoD Qbf4azmngPmWNYID4BWqA4BZyOfeZfurTfL9cTP9HJCEFCZHooSxq5s1kGKfFRpw GZRRhj2bWb6zd9fH+L3TbljduaV9cTO2bn8F5cJImr6tU/MiNCmWv1j34X56lQrH xoINLUp07spwWP8Me6q5 =D7ED -----END PGP SIGNATURE----- From perbu at varnish-software.com Thu Apr 14 17:11:02 2011 From: perbu at varnish-software.com (Per Buer) Date: Thu, 14 Apr 2011 19:11:02 +0200 Subject: web based forum now online In-Reply-To: <4DA72250.5000908@uplex.de> References: <4DA72250.5000908@uplex.de> Message-ID: On Thu, Apr 14, 2011 at 6:35 PM, Geoff Simmons wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > On 04/14/11 06:18 PM, Per Buer wrote: > > > > If you like web forums, I invite you to come on over and hang out. If > not, > > don't. The forum is located at https://www.varnish-cache.org/forum/ > > Could the forum be set up so that if you already have a login/password > for the Varnish wiki, then you can use the same login for the forum? 
> Good question. Our Drupal authenticates against LDAP and an internal authentication database. It might also try against a htpasswd file and auto-create users if it succeeds - but it would not work the other way. I'll ask my Drupal experts. -- Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer *Varnish makes websites fly!* Whitepapers | Video | Twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Thu Apr 14 17:21:53 2011 From: perbu at varnish-software.com (Per Buer) Date: Thu, 14 Apr 2011 19:21:53 +0200 Subject: web based forum now online In-Reply-To: <20110414162817.GA17499@shivaya.guly.org> References: <20110414162817.GA17499@shivaya.guly.org> Message-ID: On Thu, Apr 14, 2011 at 6:28 PM, Sandro guly Zaccarini wrote: > On Thu, Apr 14, 2011 at 06:18:00PM +0200, Per Buer wrote: > > Hi. > > > > Most of us old farts like 'old school' project discussion tools such as > > mailing lists and IRC. A bunch of people have told me that they are > > not entirely comfortable with these tools and would like to have a web > based > > forum. I've heard the Pfsense people made a web forum some time back and > it > > has worked out rather well for them. > > > > If you like web forums, I invite you to come on over and hang out. If > not, > > don't. The forum is located at https://www.varnish-cache.org/forum/ > > snort recently dropped both ml and forum for a google group, which > offers both at the same times (read: no duplication of > information/problem). > > did you considered it? > Yes. I decided not to touch the existing mailing lists. People are happy with the way they work and not everyone like Google groups. I am more or less the local Google fanboy at the office but not everyone seem to be as enthusiastic and trusting as me. :-) Controlling the actual content gives us a bit of freedom so we decided to use something we control. 
-- Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer
*Varnish makes websites fly!* Whitepapers | Video | Twitter
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From mhettwer at team.mobile.de Mon Apr 18 11:44:59 2011
From: mhettwer at team.mobile.de (Hettwer, Marian)
Date: Mon, 18 Apr 2011 12:44:59 +0100
Subject: cache warming strategies
Message-ID:

Hi All,

I'm thinking about how to auto-warm my varnish caches. I have a list of cacheable URLs which I'd like to refresh once in a while, to make sure that whenever somebody hits such a page, it's really a cache hit.

I'm wondering about best practices with regard to keeping a cache warm. I remember an email on this list where somebody used a curl-based script with a specific HTTP header. Whenever that HTTP header was set, the VCL logic said "go and fetch from the backend". However, I can't find that mail anymore and I can't remember how it was accomplished.

I also remember something about the reaper thread which clears the cache: whenever it's about to remove an object from the cache, there was a way to tell varnish not to delete the object but rather to re-fetch it. Again, I can't find any documentation about it. Is my mind playing tricks on me?! :)

Long story short: any advice on how to keep the cache warm?

Thanks in advance,
Marian

-- Marian Hettwer | Site Ops Engineer | mobile.international GmbH
Fon: + 49-(0)30-8109 - 7332 Fax: + 49-(0)30-8109 - 7131
Mail: mhettwer at team.mobile.de Web: www.mobile.de
Marktplatz 1 | 14532 Europarc Dreilinden | Germany

From james at jamesthornton.com Mon Apr 18 12:36:31 2011
From: james at jamesthornton.com (James Thornton)
Date: Mon, 18 Apr 2011 07:36:31 -0500
Subject: Benefits of Varnish vs nginx/ncache?
Message-ID:

When you have nginx as a reverse proxy to your application servers, what are the benefits of adding Varnish as the caching layer between nginx and the app servers (as depicted here: http://www.heroku.com/how/architecture) vs having nginx perform the caching, now that ncache is built into nginx?

For example, why is (nginx -> Varnish -> uWSGI) better than (nginx w/caching -> uWSGI)?

I have read http://www.varnish-cache.org/trac/wiki/ArchitectNotes, but I'm still trying to understand the differences in this context.

Thank you.

- James

From mail at danielbruessler.de Mon Apr 18 14:02:07 2011
From: mail at danielbruessler.de (=?ISO-8859-15?Q?Daniel_Br=FC=DFler?=)
Date: Mon, 18 Apr 2011 16:02:07 +0200
Subject: symlink for debian-installation, trac
Message-ID: <4DAC445F.2030602@danielbruessler.de>

Hi varnish team,

please set up a symlink from http://repo.varnish-cache.org/ubuntu/dists/maverick to http://repo.varnish-cache.org/ubuntu/dists/lucid, and the same for http://repo.varnish-cache.org/ubuntu/dists/natty, so that apt can download the Packages.gz for the sources.list in Ubuntu Maverick (10.10) and in the coming Natty Narwhal (11.04), which is out in 10 days.

Thanks!

---

I tried to file this as a ticket in your Trac, but I couldn't register. The form field "Please enter the name of a VCL function" accepted neither "purge" nor "lookup". Is this not open to the public?

Greetings from Germany,
Daniel Brüßler

From cosmih at gmail.com Mon Apr 18 19:45:31 2011
From: cosmih at gmail.com (cosmih)
Date: Mon, 18 Apr 2011 21:45:31 +0200
Subject: is the POST content reforwarded in case of a restart ?
Message-ID:

Hello,

Because I haven't found this information anywhere, I would like to ask you whether the POST content will be forwarded again in case a restart is involved, as in the example below:

sub vcl_recv {
  set req.http.ORIGINAL-REQUEST = req.url;
  if (req.url ~ "^/path1/something/else") {
    set req.url = regsub(req.url, "/path1/(.*)$", "/path2/\1");
    set req.backend = round_robin_director;
  }
}

sub vcl_fetch {
  if (beresp.status == 500 || beresp.status == 400) {
    set beresp.saintmode = 5s;
    set req.url = req.http.ORIGINAL-REQUEST;
    restart;
  }
}

sub vcl_error {
  if (obj.status == 503 && req.restarts < 6) {
    set req.url = req.http.ORIGINAL-REQUEST;
    restart;
  }
}

Regards,

-- Cosmih

From kbrownfield at google.com Mon Apr 18 19:56:35 2011
From: kbrownfield at google.com (Ken Brownfield)
Date: Mon, 18 Apr 2011 12:56:35 -0700
Subject: is the POST content reforwarded in case of a restart ?
In-Reply-To: References: Message-ID:

Yes, this has been confirmed a few times on varnish-misc. I'd suggest not restarting POST in the case of a 500 status. A 503 status (no available Varnish backend) is debatable. Personally, I restart POSTs that receive 503s because my app(s) don't return 503.
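For anyone who prefers the conservative policy instead, a minimal sketch of restricting restarts to non-POST requests, based on the vcl_error snippet above (VCL 2.x syntax; untested, and the only extra assumption is that `req.request` holds the HTTP method):

```vcl
sub vcl_error {
  # Only retry automatically when no request body is at stake;
  # a restart re-forwards the POST body to the backend.
  if (obj.status == 503 && req.request != "POST" && req.restarts < 6) {
    set req.url = req.http.ORIGINAL-REQUEST;
    restart;
  }
}
```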
FWIW,
-- kb

On Mon, Apr 18, 2011 at 12:45, cosmih wrote:
> Hello,
>
> Because I haven't found this information anywhere, I would like to ask
> you whether the POST content will be forwarded again in case a restart is
> involved, as in the example below:
>
> sub vcl_recv {
>   set req.http.ORIGINAL-REQUEST = req.url;
>   if (req.url ~ "^/path1/something/else") {
>     set req.url = regsub(req.url, "/path1/(.*)$", "/path2/\1");
>     set req.backend = round_robin_director;
>   }
> }
>
> sub vcl_fetch {
>   if (beresp.status == 500 || beresp.status == 400) {
>     set beresp.saintmode = 5s;
>     set req.url = req.http.ORIGINAL-REQUEST;
>     restart;
>   }
> }
>
> sub vcl_error {
>   if (obj.status == 503 && req.restarts < 6) {
>     set req.url = req.http.ORIGINAL-REQUEST;
>     restart;
>   }
> }
>
> Regards,
>
> --
> Cosmih
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From cosmih at gmail.com Mon Apr 18 20:29:06 2011
From: cosmih at gmail.com (cosmih)
Date: Mon, 18 Apr 2011 22:29:06 +0200
Subject: is the POST content reforwarded in case of a restart ?
In-Reply-To: References: Message-ID:

I am asking because in my running varnish setup (something like the example from my previous email, but with lots of routing/rewriting/caching rules) the second POST request (after a restart) sent to the backend doesn't contain valid content, and the backend returns a 400. I should mention that the first POST request is "restarted" because the backend doesn't respond within 30 sec.

Please find below a sample of the apache (backend) error log containing the POST data.
[Mon Apr 18 14:36:32 2011] [debug] mod_dumpio.c(51): mod_dumpio: dumpio_in (data-HEAP): 21 bytes [Mon Apr 18 14:36:32 2011] [debug] mod_dumpio.c(67): mod_dumpio: dumpio_in (data-HEAP): Content-Length: 249\r\n [Mon Apr 18 14:36:32 2011] [debug] mod_dumpio.c(103): mod_dumpio: dumpio_in [getline-blocking] 0 readbytes [Mon Apr 18 14:36:32 2011] [debug] mod_dumpio.c(51): mod_dumpio: dumpio_in (data-HEAP): 23 bytes [Mon Apr 18 14:36:32 2011] [debug] mod_dumpio.c(67): mod_dumpio: dumpio_in (data-HEAP): X-Varnish: 1464392690\r\n [Mon Apr 18 14:36:32 2011] [debug] mod_dumpio.c(103): mod_dumpio: dumpio_in [getline-blocking] 0 readbytes [Mon Apr 18 14:36:32 2011] [debug] mod_dumpio.c(51): mod_dumpio: dumpio_in (data-HEAP): 2 bytes [Mon Apr 18 14:36:32 2011] [debug] mod_dumpio.c(67): mod_dumpio: dumpio_in (data-HEAP): \r\n [Mon Apr 18 14:36:32 2011] [debug] mod_dumpio.c(103): mod_dumpio: dumpio_in [readbytes-blocking] 249 readbytes [Mon Apr 18 14:36:32 2011] [debug] mod_dumpio.c(51): mod_dumpio: dumpio_in (metadata-EOS): 0 bytes [Mon Apr 18 14:36:32 2011] [debug] mod_deflate.c(602): [client 172.16.16.13] Zlib: Compressed 0 to 2 : URL /path2/something/else, referer: http://www.example.net/path1/something/else [Mon Apr 18 14:36:32 2011] [debug] mod_dumpio.c(129): mod_dumpio: dumpio_out [Mon Apr 18 14:36:32 2011] [debug] mod_dumpio.c(51): mod_dumpio: dumpio_out (data-HEAP): 212 bytes [Mon Apr 18 14:36:32 2011] [debug] mod_dumpio.c(67): mod_dumpio: dumpio_out (data-HEAP): HTTP/1.1 400 Bad Request\r\nDate: Mon, 18 Apr 2011 12:36:32 GMT\r\nServer: Apache\r\nVary: Accept-Encoding\r\nContent-Encoding: gzip\r\nContent-Length: 20\r\nConnection: close\r\nContent-Type: text/html; charset=iso-8859-1\r\n\r\n -- Cosmih On Mon, Apr 18, 2011 at 9:56 PM, Ken Brownfield wrote: > Yes, this has been confirmed a few times on varnish-misc. ?I'd suggest not > restarting POST in the case of a 500 status. ?A 503 status (no available > Varnish backend) is debatable. 
Personally, I restart POSTs that receive
> 503s because my app(s) don't return 503.
>
> FWIW,
> --
> kb
>
> On Mon, Apr 18, 2011 at 12:45, cosmih wrote:
>> Hello,
>>
>> Because I haven't found this information anywhere, I would like to ask
>> you whether the POST content will be forwarded again in case a restart is
>> involved, as in the example below:
>>
>> sub vcl_recv {
>>   set req.http.ORIGINAL-REQUEST = req.url;
>>   if (req.url ~ "^/path1/something/else") {
>>     set req.url = regsub(req.url, "/path1/(.*)$", "/path2/\1");
>>     set req.backend = round_robin_director;
>>   }
>> }
>>
>> sub vcl_fetch {
>>   if (beresp.status == 500 || beresp.status == 400) {
>>     set beresp.saintmode = 5s;
>>     set req.url = req.http.ORIGINAL-REQUEST;
>>     restart;
>>   }
>> }
>>
>> sub vcl_error {
>>   if (obj.status == 503 && req.restarts < 6) {
>>     set req.url = req.http.ORIGINAL-REQUEST;
>>     restart;
>>   }
>> }
>>
>> Regards,
>>
>> --
>> Cosmih
>>
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc at varnish-cache.org
>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From yanghatespam at gmail.com Tue Apr 19 00:46:41 2011
From: yanghatespam at gmail.com (Yang Zhang)
Date: Mon, 18 Apr 2011 17:46:41 -0700
Subject: Understanding persistent storage
In-Reply-To: References: Message-ID:

Hi, we're reconsidering using Varnish again, and this question I posted 4 months ago is the biggest blocker for us. We've been using Squid for a while, but it doesn't have collapsed forwarding, which we need (and Varnish provides). But we also need a persistent cache. We're ready to roll our own caching proxy, but if Varnish already provides persistence, then that would be a huge boon.
We tried playing around with Varnish -s persistent 4 months ago but it didn't seem to be what the name implied. On Thu, Jan 27, 2011 at 8:25 PM, Yang Zhang wrote: > I've been playing around with the experimental persistent storage in > varnish-2.1.5 SVN 0843d7a, but I'm finding that the cache doesn't seem > to survive across restarts. > > This matches up with hints like "When storage is full, Varnish should > restart, cleaning storage" from > http://www.varnish-cache.org/trac/wiki/changelog_2.0.6-2.1.0. > > Can anyone clarify what's persistent about persistent storage, how it > differs from -sfile, etc.? I tried looking up info but didn't find > much beyond implementation details in > http://www.varnish-cache.org/trac/wiki/ArchitecturePersistentStorage. > Thanks in advance. > > -- > Yang Zhang > http://yz.mit.edu/ > -- Yang Zhang http://yz.mit.edu/ From traian.bratucu at eea.europa.eu Tue Apr 19 06:53:35 2011 From: traian.bratucu at eea.europa.eu (Traian Bratucu) Date: Tue, 19 Apr 2011 08:53:35 +0200 Subject: Understanding persistent storage In-Reply-To: References: Message-ID: There is no persistent cache in varnish. At least, not yet. -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Yang Zhang Sent: Tuesday, April 19, 2011 2:47 AM To: varnish-misc at varnish-cache.org Subject: Re: Understanding persistent storage Hi, we're reconsidering using Varnish again, and this question I posted 4 months ago is the biggest blocker for us. We've been using Squid for a while, but it doesn't have collapsed forwarding, which we need (and Varnish provides). But we also need a persistent cache. We're ready to roll our own caching proxy, but if Varnish already provides persistence, then that would be a huge boon. We tried playing around with Varnish -s persistent 4 months ago but it didn't seem to be what the name implied. 
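For reference, the storage backend under discussion is chosen with varnishd's -s argument; the two invocations differ only in the stevedore name (the paths and sizes below are illustrative, and the persistent stevedore was still experimental in the 2.1 series):

```shell
# mmap-backed file storage: fast, but the cache structure is rebuilt
# empty whenever the child process restarts
varnishd -a :80 -f /etc/varnish/default.vcl -s file,/var/lib/varnish/storage.bin,10G

# experimental persistent stevedore: cached objects are intended to
# survive child restarts
varnishd -a :80 -f /etc/varnish/default.vcl -s persistent,/var/lib/varnish/storage.bin,10G
```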
From jean-francois.laurens at rts.ch Tue Apr 19 07:04:42 2011
From: jean-francois.laurens at rts.ch (Jean-Francois Laurens)
Date: Tue, 19 Apr 2011 09:04:42 +0200
Subject: Understanding persistent storage
In-Reply-To: Message-ID:

I guess you mean it's present in the options, but not yet working correctly enough for production use?

On 19/04/11 08:53, "Traian Bratucu" wrote:

> There is no persistent cache in varnish. At least, not yet.
>
> -----Original Message-----
> From: varnish-misc-bounces at varnish-cache.org
> [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Yang Zhang
> Sent: Tuesday, April 19, 2011 2:47 AM
> To: varnish-misc at varnish-cache.org
> Subject: Re: Understanding persistent storage
>
> Hi, we're reconsidering using Varnish again, and this question I posted 4
> months ago is the biggest blocker for us. We've been using Squid for a while,
> but it doesn't have collapsed forwarding, which we need (and Varnish
> provides). But we also need a persistent cache.
> We're ready to roll our own caching proxy, but if Varnish already provides
> persistence, then that would be a huge boon. We tried playing around with
> Varnish -s persistent 4 months ago but it didn't seem to be what the name
> implied.
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
> Jean-Francois Laurens
> Ingénieur Système Unix
> Ressources et Développement
> Secteur Backend
> RTS - Radio Télévision Suisse
> Quai Ernest-Ansermet 20
> Case postale 234
> CH - 1211 Genève 8
> T +41 (0)58 236 81 63
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From yanghatespam at gmail.com Tue Apr 19 07:07:33 2011
From: yanghatespam at gmail.com (Yang Zhang)
Date: Tue, 19 Apr 2011 00:07:33 -0700
Subject: Understanding persistent storage
In-Reply-To: References: Message-ID:

Thanks.
Out of curiosity, the file stevedore is documented with: "file: mmap's a file and uses it for storage" - why can't this be made persistent? And what's the difference from -s persistent? On Mon, Apr 18, 2011 at 11:53 PM, Traian Bratucu wrote: > There is no persistent cache in varnish. At least, not yet. > > -----Original Message----- > From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Yang Zhang > Sent: Tuesday, April 19, 2011 2:47 AM > To: varnish-misc at varnish-cache.org > Subject: Re: Understanding persistent storage > > Hi, we're reconsidering using Varnish again, and this question I posted 4 months ago is the biggest blocker for us. We've been using Squid for a while, but it doesn't have collapsed forwarding, which we need (and Varnish provides). But we also need a persistent cache. > We're ready to roll our own caching proxy, but if Varnish already provides persistence, then that would be a huge boon. We tried playing around with Varnish -s persistent 4 months ago but it didn't seem to be what the name implied. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Yang Zhang http://yz.mit.edu/ From traian.bratucu at eea.europa.eu Tue Apr 19 07:16:14 2011 From: traian.bratucu at eea.europa.eu (Traian Bratucu) Date: Tue, 19 Apr 2011 09:16:14 +0200 Subject: Understanding persistent storage In-Reply-To: References: Message-ID: Well, varnish documentation tends to kind of suck. The "-s file" does not mean persistent storage, but simply that the file will be mmapped, so even if you have 2Gb of RAM, you can use a mmapped file of 10Gb. There is a new "-s persistence" which is documented for now as "new, shiny, better", you can make whatever you want out of that (http://www.varnish-cache.org/docs/2.1/reference/varnishd.html - Storage Types).
I am not a varnish developer, just using varnish. Perhaps one of the developers may explain more. Traian -----Original Message----- From: Yang Zhang [mailto:yanghatespam at gmail.com] Sent: Tuesday, April 19, 2011 9:08 AM To: Traian Bratucu Cc: varnish-misc at varnish-cache.org Subject: Re: Understanding persistent storage Thanks. Out of curiosity, the file stevedore is documented with: "file: mmap's a file and uses it for storage" - why can't this be made persistent? And what's the difference from -s persistent? -- Yang Zhang http://yz.mit.edu/ From yanghatespam at gmail.com Tue Apr 19 07:34:12 2011 From: yanghatespam at gmail.com (Yang Zhang) Date: Tue, 19 Apr 2011 00:34:12 -0700 Subject: Understanding persistent storage In-Reply-To: References: Message-ID: On Tue, Apr 19, 2011 at 12:16 AM, Traian Bratucu wrote: > Well, varnish documentation tends to kind of suck. The "-s file" does not mean persistent storage, but simply that the file will be mmapped, so even if you have 2Gb of RAM, you can use a mmapped file of 10Gb. There is a new "-s persistence" which is documented for now as "new, shiny, better", you can make whatever you want out of that (http://www.varnish-cache.org/docs/2.1/reference/varnishd.html - Storage Types). > I am not a varnish developer, just using varnish. Perhaps one of the developers may explain more. This was precisely my limited understanding as well. I'm still curious whether -s file could be hacked to be persistent and what exactly -s persistence is. > > Traian > > -----Original Message----- > From: Yang Zhang [mailto:yanghatespam at gmail.com] > Sent: Tuesday, April 19, 2011 9:08 AM > To: Traian Bratucu > Cc: varnish-misc at varnish-cache.org > Subject: Re: Understanding persistent storage > > Thanks. Out of curiosity, the file stevedore is documented with: > "file: mmap's a file and uses it for storage" - why can't this be made persistent? And what's the difference from -s persistent?
> -- > Yang Zhang > http://yz.mit.edu/ > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Yang Zhang http://yz.mit.edu/ From perbu at varnish-software.com Tue Apr 19 08:01:06 2011 From: perbu at varnish-software.com (Per Buer) Date: Tue, 19 Apr 2011 10:01:06 +0200 Subject: Understanding persistent storage In-Reply-To: References: Message-ID: On Tue, Apr 19, 2011 at 9:34 AM, Yang Zhang wrote: > On Tue, Apr 19, 2011 at 12:16 AM, Traian Bratucu > wrote: > > Well, varnish documentation tends to kind of suck. The "-s file" does not > mean persistent storage, but simply that the file will be mmapped, so even > if you have 2Gb of RAM, you can use a mmapped file of 10Gb. There is a new > "-s persistence" which is documented for now as "new, shiny, better", you > can make whatever you want out of that ( > http://www.varnish-cache.org/docs/2.1/reference/varnishd.html - Storage > Types). > > I am not a varnish developer, just using varnish. Perhaps one of the > developers may explain more. > > This was precisely my limited understanding as well. I'm still > curious whether -s file could be hacked to be persistent and what > exactly -s persistence is. > -sfile cannot be hacked into persistence. Varnish has no idea whatsoever what the file contains. It just writes to memory and the kernel takes care of the rest. It's the best way to allocate truly virtual memory. You can probably get a thorough explanation in a C book or something. -spersistence is experimental and I have intentionally left the documentation blank. -spersistence hasn't got any working LRU code yet. I expect -spersistence to be ready in the autumn. > > -- Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer *Varnish makes websites fly!* Whitepapers | Video | Twitter -------------- next part -------------- An HTML attachment was scrubbed...
URL: From phk at phk.freebsd.dk Tue Apr 19 08:14:53 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 19 Apr 2011 08:14:53 +0000 Subject: Understanding persistent storage In-Reply-To: Your message of "Tue, 19 Apr 2011 08:53:35 +0200." Message-ID: <5594.1303200893@critter.freebsd.dk> In message , Traian Bratucu writes: >There is no persistent cache in varnish. At least, not yet. Uhm, you are not up to date. Varnish does have an -spersistent storage function, and it should work better in 3.0 than it did in 2.x, but it is far from perfect yet. I don't know what the original poster found wanting last time, I can only suggest that he gives -trunk a spin and see if he likes it better now. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ruben at varnish-software.com Tue Apr 19 09:09:26 2011 From: ruben at varnish-software.com (=?ISO-8859-1?Q?Rub=E9n_Romero?=) Date: Tue, 19 Apr 2011 11:09:26 +0200 Subject: VUG4 - Call for Papers Message-ID: Dear community, It is that time of the year where we plan the details of the next Varnish User Group meeting, which is the fourth of its kind, hence VUG4. VUG4 will be held at the Hack day of the Surge Conference, held in Baltimore, Maryland, USA. Key information: Place: Baltimore, Maryland, USA Date: September 28, from 9 am to 5 pm Price: Free on the VUG4 Hack day @ Surge (the un/conference the following 2 days has early bird pricing until July 31st) Expect: User Presentations, Varnish 3.x information, a Highly technical meeting If you want to hold a presentation on how Varnish helped you solve your challenges, please send us an abstract to with "VUG4 proposal" as the subject and we will add it to the agenda. 
For VUG4 detailed information and registration please visit the following wiki page: * http://www.varnish-cache.org/trac/wiki/VUG4 For information on Surge: * http://omniti.com/surge/2011 Last but not least: Hope to see you at the VUG4 in Baltimore! :-) -- Best wishes, -- Rubén Romero Self-Appointed Secretary for VUG4 | Varnish Software e-mail ruben at varnish-software.com / skype: ruben_varnish P: +47 21 98 92 62 / M: +47 95 96 40 88 Online Sales Chat: http://www.varnish-software.com/contact-us ====================== Varnish makes websites fly! ====================== www.varnish-software.com twitter.com/varnishsoftware linkedin.com/companies/varnish-software Want to learn more about Varnish, its features, get tips and news? http://www.varnish-software.com/whitepapers http://www.varnish-software.com/about/newsletter -------------- next part -------------- An HTML attachment was scrubbed... URL: From yanghatespam at gmail.com Tue Apr 19 16:44:10 2011 From: yanghatespam at gmail.com (Yang Zhang) Date: Tue, 19 Apr 2011 09:44:10 -0700 Subject: Understanding persistent storage In-Reply-To: <5594.1303200893@critter.freebsd.dk> References: <5594.1303200893@critter.freebsd.dk> Message-ID: On Tue, Apr 19, 2011 at 1:14 AM, Poul-Henning Kamp wrote: > In message , > Traian Bratucu writes: > >>There is no persistent cache in varnish. At least, not yet. > > Uhm, you are not up to date. > > Varnish does have an -spersistent storage function, and it should work > better in 3.0 than it did in 2.x, but it is far from perfect yet. > > I don't know what the original poster found wanting last time, I can > only suggest that he gives -trunk a spin and see if he likes it better > now. What I found last time was: (1) Fetch a page. Miss first time, hit subsequent times. (2) Restart Varnish. (3) Fetch same page. Miss. Also, docs mention things like "When storage is full, Varnish should restart, cleaning storage." > > -- > Poul-Henning Kamp
| UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Yang Zhang http://yz.mit.edu/ From indranilc at rediff-inc.com Wed Apr 20 12:57:14 2011 From: indranilc at rediff-inc.com (Indranil Chakravorty) Date: 20 Apr 2011 12:57:14 -0000 Subject: =?utf-8?B?VmFybmlzaGxvZyBhbmQgdmFybmlzaG5jc2EgdW5hYmxlIHRvIGRldGVjdCBhbnl0aGluZw==?= Message-ID: <20110420125714.27867.qmail@pro236-134.mxout.rediffmailpro.com> I am facing a weird problem. I have varnish running and it's serving everything fine. However, varnishlog is not logging anything. It shows a blank screen. Would anyone have any idea? varnishlog and varnishd are installed under the same location. Thanks, Neel -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Wed Apr 20 13:02:11 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 20 Apr 2011 13:02:11 +0000 Subject: =?utf-8?B?VmFybmlzaGxvZyBhbmQgdmFybmlzaG5jc2EgdW5hYmxlIHRvIGRldGVjdCBhbnl0aGluZw==?= In-Reply-To: Your message of "20 Apr 2011 12:57:14 GMT." <20110420125714.27867.qmail@pro236-134.mxout.rediffmailpro.com> Message-ID: <17485.1303304531@critter.freebsd.dk> In message <20110420125714.27867.qmail at pro236-134.mxout.rediffmailpro.com>, "Indranil Chakravorty" writes: >I am facing a weird problem. I have varnish running and it's serving >everything fine. However, varnishlog is not logging anything. It >shows a blank screen. Would anyone have any idea? varnishlog and >varnishd are installed under the same location. Make sure they have the same -n argument.
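The -n argument phk refers to is the instance name: varnishd writes its log to a shared-memory segment identified by that name, and varnishlog/varnishncsa attach to it by the same name, so the daemon and the tools must agree. A sketch (the instance name "mycache" and the addresses are made up for illustration):

```sh
# Start the daemon under an explicit instance name...
varnishd -n mycache -a :80 -b localhost:8080

# ...and point the log readers at the same name; with a different or
# missing -n they attach to another (possibly empty) log segment.
varnishlog -n mycache
varnishncsa -n mycache
```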
-- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From jean-francois.laurens at rts.ch Wed Apr 20 15:17:51 2011 From: jean-francois.laurens at rts.ch (Jean-Francois Laurens) Date: Wed, 20 Apr 2011 17:17:51 +0200 Subject: Varnish 2.1.5 eating memory, hit % decrease In-Reply-To: Message-ID: Hi Ken, others, Just some feedback for the record about the persistent option; the varnish process failed at some point with the following error in the logs: Apr 15 00:16:47 server-01-39 varnishtsr[3726]: Child (13026) Panic message: Assert error in smp_open_segs(), storage_persistent.c line 1026: Condition(sg1->p.offset != sg->p.offset) not true. errno = 9 (Bad file descriptor) thread = (cache-main) ident = Linux,2.6.18-194.32.1.el5,x86_64,-spersistent,-hcritbit,no_waiter Backtrace: 0x424446: /usr/sbin/varnishd [0x424446] 0x43e505: /usr/sbin/varnishd [0x43e505] 0x43e6eb: /usr/sbin/varnishd [0x43e6eb] 0x439abe: /usr/sbin/varnishd(STV_open+0x1e) [0x439abe] 0x4234ef: /usr/sbin/varnishd(child_main+0xbf) [0x4234ef] 0x432630: /usr/sbin/varnishd [0x432630] 0x432e59: /usr/sbin/varnishd [0x432e59] 0x39316084f7: /usr/lib64/libvarnish.so.1 [0x39316084f7] 0x3931608b88: /usr/lib64/libvarnish.so.1(vev_schedule+0x88) [0x3931608b88] 0x432893: /usr/sbin/varnishd(MGT_Run+0x143) [0x432893] I just stopped using the persistent cache as I'm just unable to understand and investigate the root cause of the problem (where is this "errno = 9 (Bad file descriptor)" error coming from?). Using it for production seems to me just not reasonable at the moment. Certainly version 3 will handle it properly! Nevertheless your suggestion about setting vm.min_free_kbytes did the trick, I guess. I'm testing it right now with 64M and will see over time if the system remains stable.
What I see now is that the load remains pretty equal no matter how heavy the traffic is. The number of objects seems to stay stable, meaning no child process gets killed and no objects are lost. On 08/04/11 22:55, Ken Brownfield wrote: > This means the child process died and restarted (the reason for this should > appear earlier in the log; perhaps your cli_timeout is too low under a heavily > loaded system -- try 20s). > > "-sfile" is not persistent storage, so when the child process restarts it uses > a new, empty storage structure. You should have luck with "-spersistent" on > the latest Varnish or trunk, at least for child process restarts. > > FWIW, > -- > kb > > > > On Fri, Apr 8, 2011 at 01:55, Jean-Francois Laurens > wrote: >> Hi Ken, >> >> Thanks for the hint! >> You're allocating 128Mb here; how did you get to this number? I read >> somewhere that this value can be set to 10% of the actual memory size, which >> would be in my case 800Mb; does that make sense to you? >> I read as well that setting this value too high would crash the system >> immediately. >> >> >> Yesterday evening, the system was under heavy load but varnish did not hang! >> Instead it dropped all its objects! Then the load went back fine. >> It seems setting -sfile to 40Gb suits the memory capability of this >> server better. >> A question remains though: why were all the objects dropped? >> Attached is a plot from cacti regarding the number of objects.
>> >> The only thing I could get from the messages log is this: >> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (3733) died signal=3 >> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child cleanup complete >> Apr 7 19:00:29 server-01-39 varnishd[3732]: child (29359) Started >> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said >> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said Child starts >> Apr 7 19:00:29 server-01-39 varnishd[3732]: Child (29359) said managed to >> mmap 42949672960 bytes of 42949672960 >> >> >> How could I get to know what is really happening that could explain this >> behaviour? >> >> Cheers, >> Jef > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > Jean-Francois Laurens > Ingénieur Système Unix > Resources et Développement > Secteur Backend > RTS - Radio Télévision Suisse > Quai Ernest-Ansermet 20 > Case postale 234 > CH - 1211 Genève 8 > T +41 (0)58 236 81 63 > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 50979 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 29315 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 31669 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 33211 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image.png Type: image/png Size: 38491 bytes Desc: not available URL: From indranilc at rediff-inc.com Wed Apr 20 17:10:29 2011 From: indranilc at rediff-inc.com (Indranil Chakravorty) Date: 20 Apr 2011 17:10:29 -0000 Subject: =?utf-8?B?UmU6IFZhcm5pc2hsb2cgYW5kIHZhcm5pc2huY3NhIHVuYWJsZSB0byBkZXRlY3QgYW55dGhpbmc=?= Message-ID: <1303304498.S.3623.39077.H.TlBvdWwtSGVubmluZyBLYW1wAFJlOiBWYXJuaXNobG9nIGFuZCB2YXJuaXNobmNzYSB1bmFibGUgdG8gZGU_.pro-237-56.old.1303319429.21024@webmail.rediffmail.com> I knew someone was pointing at something wrong but was not able to nail the culprit. Thanks a ton Poul. Thanks, Neel On Wed, 20 Apr 2011 18:31:38 +0530 "Poul-Henning Kamp" <phk at phk.freebsd.dk> wrote >In message <20110420125714.27867.qmail at pro236-134.mxout.rediffmailpro.com>, "Indranil Chakravorty" writes: > > >I am facing a weird problem. I have varnish running and its serving > >everything fine. However, varnishlog is not logging anything. It > >shows blank screen. Would any one have any idea? varnishlog and > >varnishd are installed under same location. > > Make sure they have the same -n argument. > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Wed Apr 20 17:21:53 2011 From: perbu at varnish-software.com (Per Buer) Date: Wed, 20 Apr 2011 19:21:53 +0200 Subject: Understanding persistent storage In-Reply-To: References: <5594.1303200893@critter.freebsd.dk> Message-ID: On Tue, Apr 19, 2011 at 6:44 PM, Yang Zhang wrote: > > What I found last time was: > > (1) Fetch a page. Miss first time, hit subsequent times. > > (2) Restart Varnish. > > (3) Fetch same page. Miss. Right. This is actually valid behavior.
The goal of the persistent storage isn't to salvage _all_ of the data across a restart, only to salvage _most_ of the data. Salvaging all of the data would require database type storage semantics, which is more or less impossible to achieve without sacrificing a lot of performance. As -spersistent is implemented, the storage silo which is open at the time of the crash/restart is discarded completely, so most of the objects inserted into the cache the last few seconds before the crash will certainly be lost. Remember, we're a cache, not a data store. -- Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Whitepapers | Video | Twitter From yanghatespam at gmail.com Wed Apr 20 17:26:02 2011 From: yanghatespam at gmail.com (Yang Zhang) Date: Wed, 20 Apr 2011 10:26:02 -0700 Subject: Understanding persistent storage In-Reply-To: References: <5594.1303200893@critter.freebsd.dk> Message-ID: On Wed, Apr 20, 2011 at 10:21 AM, Per Buer wrote: > On Tue, Apr 19, 2011 at 6:44 PM, Yang Zhang wrote: >> >> What I found last time was: >> >> (1) Fetch a page. Miss first time, hit subsequent times. >> >> (2) Restart Varnish. >> >> (3) Fetch same page. Miss. > > Right. This is actually valid behavior. The goal of the persistent > storage isn't to salvage _all_ of the data across a restart, only to > salvage _most_ of the data. Salvaging all of the data would require > database type storage semantics which is more or less impossible to > achieve without sacrificing a lot of performance. > > As -spersistent is implemented, the storage silo which is open at the > time of the crash/restart is discarded completely, so most of the > objects inserted into the cache the last few seconds before the crash > will certainly be lost. Remember, we're a cache, not a data store. Those are the semantics I'm interested in. That's perfect, thanks for your explanation - sounds like -spersistence may work for us.
-- Yang Zhang http://yz.mit.edu/ From varnishlist at realvideosite.com Wed Apr 20 20:24:52 2011 From: varnishlist at realvideosite.com (Varnish List) Date: Wed, 20 Apr 2011 16:24:52 -0400 Subject: Varnish Purging with URL in browser Message-ID: I am trying to figure out if it is at all possible to purge objects in the varnish cache through the URL in the browser. I have been able to purge using curl -X and the CLI, but say you have an object called image.jpg and you want to open your browser and type: http://www.domain.com/images/image.jpg?purge=true in order to purge it (TTL=0). Is this possible to do? Can it be implemented with VCL? If you know of a way to handle this please let me know. I'm somewhat convinced it can be done because it was mentioned here: http://kristianlyng.wordpress.com/2010/07/15/varnish-crash-course-for-sysadmins Thanks!!! -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.fernandezcrisial at gmail.com Wed Apr 20 20:43:56 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Wed, 20 Apr 2011 17:43:56 -0300 Subject: Varnish Purging with URL in browser In-Reply-To: References: Message-ID: Hi! I think you're looking for something like this, in your VCL (vcl_recv): if (req.url ~ ".*\?purge=true$") { set req.http.X-Purge = regsub( req.url, "(.*)\?purge=true$", "\1" ); purge_url(req.http.X-Purge); } I hope this helps you! Regards, Roberto O. Fernández Crisial. @rofc On Wed, Apr 20, 2011 at 5:24 PM, Varnish List wrote: > I am trying to figure out if it is at all possible to purge objects in the > varnish cache through the URL in the browser. > > I have been able to purge using curl -X and the CLI but, say you have an > object called image.jpg and you want to open your browser and type: > > http://www.domain.com/images/image.jpg?purge=true > > In order to purge it ( TTL=0 ). Is this possible to do? Can it be > implemented with VCL?
If you know of a way to handle this please let me > know. > > I'm somewhat convinced it can be done because it was mentioned here: > http://kristianlyng.wordpress.com/2010/07/15/varnish-crash-course-for-sysadmins > > Thanks!!! > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sime at sime.net.au Thu Apr 21 05:54:27 2011 From: sime at sime.net.au (Simon Males) Date: Thu, 21 Apr 2011 15:54:27 +1000 Subject: Multiple backends for same host with varying timeouts Message-ID: Hello, I'm interested to hear what is considered best practice in terms of setting different timeouts for different URLs. Example: /reports takes a long while to generate and much longer than the default first/between_bytes_timeouts run time parameters. In my (simple) mind, a second reporting backend can be specified to the same host with a custom timeout parameter. A simple vcl example below: backend default { .host = "127.0.0.1"; .port = "8080"; } backend reports { .host = "127.0.0.1"; .port = "8080"; .first_byte_timeout = 300s; } sub vcl_recv { if (req.url ~ "^/reports") { set req.backend = reports; } } On the right track? Out of interest, does varnish consider these two backends completely separate? -- Simon Males From perbu at varnish-software.com Thu Apr 21 07:55:56 2011 From: perbu at varnish-software.com (Per Buer) Date: Thu, 21 Apr 2011 09:55:56 +0200 Subject: Benefits of Varnish vs nginx/ncache?
In-Reply-To: References: Message-ID: On Mon, Apr 18, 2011 at 2:36 PM, James Thornton wrote: > When you have nginx as a reverse proxy to your application servers, > what are the benefits of adding Varnish as the caching layer between > nginx and the app servers (as depicted here > http://www.heroku.com/how/architecture) vs having nginx perform the > caching, now that ncache is built into nginx? > Varnish has VCL, which makes it more flexible than the cache built into Nginx. The Nginx cache is a "look we can do caching as well" whereas Varnish is only built for caching. I don't know that much about nginx, but I would guess features such as "saint mode", grace, etc. are hard to implement with nginx. The simple stuff should be reasonably comparable. -- Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer *Varnish makes websites fly!* Whitepapers | Video | Twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-francois.laurens at rts.ch Thu Apr 21 08:51:23 2011 From: jean-francois.laurens at rts.ch (Jean-Francois Laurens) Date: Thu, 21 Apr 2011 10:51:23 +0200 Subject: Varnish child killed Message-ID: Hi there, We've been running varnish 2.1.5 for some weeks now and we still do not understand some behavior regarding the shared memory activity. We specified a -sfile,/var/lib/varnish/varnish_storage.bin,50G in the configuration but it's impossible to go higher than 25G used by varnish. Please see the following cacti graph: In addition I can see varnish doesn't seem to be able to handle more than 1 million objects: When the child process got killed, the load of the system was very high: Apr 20 21:46:44 server-01-39 varnishd[21087]: Child (5372) not responding to CLI, killing it. .... Apr 20 21:49:57 server-01-39 nrpe[18101]: Command completed with return code 2 and output: CRITICAL - load average: 159.00, 159.32, 77.02|load1=159.000;15.000;30.000;0; load5=159.320;10.000;25.000;0; load15=77.020;5.000;20.000;0; ....
Apr 20 21:48:43 server-01-39 varnishd[21087]: Child (5372) not responding to CLI, killing it. All this makes me believe we have an issue with some kernel parameters that do not allow varnish to handle as many objects as we configured it for. Would anybody have any advice for this problem? Jef Jean-Francois Laurens Ingénieur Système Unix Resources et Développement Secteur Backend RTS - Radio Télévision Suisse Quai Ernest-Ansermet 20 Case postale 234 CH - 1211 Genève 8 T +41 (0)58 236 81 63 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 31560 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 39527 bytes Desc: not available URL: From michal.taborsky at nrholding.com Thu Apr 21 09:12:46 2011 From: michal.taborsky at nrholding.com (Michal Taborsky) Date: Thu, 21 Apr 2011 11:12:46 +0200 Subject: Varnish child killed In-Reply-To: References: Message-ID: <4DAFF50E.5000403@nrholding.com> Hello Jean-Francois, we have seen similar behavior. You did not specify what platform you use so I assume Linux. After some studying and experimentation my recommendation is: a) always make sure varnish uses only memory, never disc, if you expect good performance. So specify the cache size smaller than the available memory you have (some memory should be reserved for other processes and some varnish control structures, we use 14G cache size on 16G box) b) use malloc storage type Hope this helps, Michal On 21.4.2011 10:51, Jean-Francois Laurens wrote: > Hi there, > > We've been running varnish 2.1.5 for some weeks now and we still do not > understand some behavior regarding the shared memory activity. > We specified a -sfile,/var/lib/varnish/varnish_storage.bin,50G in the > configuration but it's impossible to go higher than 25G used by > varnish.
Please see the following cacti graph: > > > > In addition I can see varnish doesn't seem to be able to handle more > than 1 million objects: > > > When the child process got killed, the load of the system was very high: > Apr 20 21:46:44 server-01-39 varnishd[21087]: Child (5372) not > responding to CLI, killing it. > .... > Apr 20 21:49:57 server-01-39 nrpe[18101]: Command completed with > return code 2 and output: CRITICAL - load average: 159.00, 159.32, > 77.02|load1=159.000;15.000;30.000;0; load5=159.320;10.000;25.000;0; > load15=77.020;5.000;20.000;0; > .... > Apr 20 21:48:43 server-01-39 varnishd[21087]: Child (5372) not > responding to CLI, killing it. > > All this makes me believe we have an issue with some kernel parameters > that do not allow varnish to handle as many objects as we configured it. > > Would anybody have any advice for this problem? > > Jef > > Jean-Francois Laurens > Ingénieur Système Unix > Resources et Développement > Secteur Backend > RTS - Radio Télévision Suisse > Quai Ernest-Ansermet 20 > Case postale 234 > CH - 1211 Genève 8 > T +41 (0)58 236 81 63 > -- Michal Táborský chief systems architect Netretail Holding, B.V. http://www.nrholding.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-francois.laurens at rts.ch Thu Apr 21 09:27:58 2011 From: jean-francois.laurens at rts.ch (Jean-Francois Laurens) Date: Thu, 21 Apr 2011 11:27:58 +0200 Subject: Varnish child killed In-Reply-To: <4DAFF50E.5000403@nrholding.com> Message-ID: Hi Michal, Thanks for your advice! It is kind of drastic though! We have another varnish instance in version 2.0.6 which is using memory (hmm, even if we specified a -sfile ;-) and actually you're right, it's pretty stable and fast. Thus I would be interested in having some feedback from folks running the cache on a disk file! The thing is that the websites behind do get a lot of traffic and have a lot of content, as we're media broadcasters.
The ugly thing is that the servers running varnish instances cannot get more than 8G of memory, which is clearly not enough. Cheers, Jef On 21/04/11 11:12, Michal Taborsky wrote: > Hello Jean-Francois, > > we have seen similar behavior. You did not specify what platform you use so I > assume Linux. After some studying and experimentation my recommendation is: > a) always make sure varnish uses only memory, never disc, if you expect good > performance. So specify the cache size smaller than the available memory you > have (some memory should be reserved for other processes and some varnish > control structures, we use 14G cache size on 16G box) > b) use malloc storage type > > Hope this helps, > Michal > > > On 21.4.2011 10:51, Jean-Francois Laurens wrote: >> Varnish child killed Hi there, >> >> We've been running varnish 2.1.5 for some weeks now and we still do not understand >> some behavior regarding the shared memory activity. >> We specified a -sfile,/var/lib/varnish/varnish_storage.bin,50G in the >> configuration but it's impossible to go higher than 25G used by varnish. >> Please see the following cacti graph: >> >> >> >> In addition I can see varnish doesn't seem to be able to handle more than 1 >> million objects: >> >> >> When the child process got killed, the load of the system was very high: >> Apr 20 21:46:44 server-01-39 varnishd[21087]: Child (5372) not responding to >> CLI, killing it. >> .... >> Apr 20 21:49:57 server-01-39 nrpe[18101]: Command completed with return code >> 2 and output: CRITICAL - load average: 159.00, 159.32, >> 77.02|load1=159.000;15.000;30.000;0; load5=159.320;10.000;25.000;0; >> load15=77.020;5.000;20.000;0; >> .... >> Apr 20 21:48:43 server-01-39 varnishd[21087]: Child (5372) not responding to >> CLI, killing it. >> >> All this makes me believe we have an issue with some kernel parameters that >> do not allow varnish to handle as many objects as we configured it. >> >> Would anybody have any advice for this problem?
>> >> Jef >> >> Jean-Francois Laurens >> Ingénieur Système Unix >> Resources et Développement >> Secteur Backend >> RTS - Radio Télévision Suisse >> Quai Ernest-Ansermet 20 >> Case postale 234 >> CH - 1211 Genève 8 >> T +41 (0)58 236 81 63 >> >> > > Jean-Francois Laurens Ingénieur Système Unix Resources et Développement Secteur Backend RTS - Radio Télévision Suisse Quai Ernest-Ansermet 20 Case postale 234 CH - 1211 Genève 8 T +41 (0)58 236 81 63 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cosmih at gmail.com Thu Apr 21 10:17:41 2011 From: cosmih at gmail.com (cosmih) Date: Thu, 21 Apr 2011 12:17:41 +0200 Subject: req.hash using req.url without query string ? Message-ID: Hi, By default the cache key is made using req.http.host and req.url, and this means that for the same resource requested with a different query string we have a different cache key and object. However, I need a few specific URLs (static content) to have only one cache object per URL and to serve it no matter what query string I have. Something like in the example below: Let's suppose that I have the following URLs: www.example.com/a_specific_path/image.png?parameter1=value1 www.example.com/a_specific_path/image.png?parameter2=value2 www.example.com/a_specific_path/image.png?parameter3=value3 : : www.example.com/a_specific_path/image.png?parameterN=valueN I want to use only /a_specific_path/image.png + req.http.host for creating/checking the cache key, and to serve this cache object no matter what query string "parameterN=valueN" I have on this specific URL. Is it possible? Thanks, -- Cosmih -------------- next part -------------- An HTML attachment was scrubbed... URL: From l at lrowe.co.uk Thu Apr 21 11:36:11 2011 From: l at lrowe.co.uk (Laurence Rowe) Date: Thu, 21 Apr 2011 12:36:11 +0100 Subject: req.hash using req.url without query string ?
In-Reply-To: References: Message-ID: On 21 April 2011 11:17, cosmih wrote: > Hi, > By default the cache key is made using req.http.host and req.url and this > means that for the same resource requested with a different query string we > have a different cache key, object. > However I need that for a few specific URLs (static content) to have only one > cache object per URL and serve it no matter what query string I have. > Something like in the below example: > Lets suppose that I have the following URLs: > www.example.com/a_specific_path/image.png?parameter1=value1 > www.example.com/a_specific_path/image.png?parameter2=value2 > www.example.com/a_specific_path/image.png?parameter3=value3 > : > : > www.example.com/a_specific_path/image.png?parameterN=valueN > I want to use only > /a_specific_path/image.png + req.http.host > for creating/checking the cache key and to serve this cache object no > matter what query string "parameterN=valueN" I have on this specific URL. > Is it possible? > Thanks, You can normalize the url on the way in; something like this should do the trick: sub vcl_recv { set req.url = regsub(req.url, "\?.*$", ""); } Laurence From geoff at uplex.de Thu Apr 21 11:42:03 2011 From: geoff at uplex.de (Geoff Simmons) Date: Thu, 21 Apr 2011 13:42:03 +0200 Subject: Varnish child killed In-Reply-To: References: Message-ID: <4DB0180B.8080208@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 4/21/11 10:51 AM, Jean-Francois Laurens wrote: > > We've been running varnish 2.1.5 for some weeks now and we still do not understand > some behavior regarding the shared memory activity. There's not enough information here for anything better than guesses about what's going on. > We specified a -sfile,/var/lib/varnish/varnish_storage.bin,50G in the > configuration but it's impossible to go higher than 25G used by varnish. [...]
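[Editor's note: Laurence's one-liner strips the query string from every request, while the original question asked for this on only a few specific URLs. A scoped variant might look like the following sketch; the path prefix is taken from the example URLs and is purely illustrative:

```vcl
sub vcl_recv {
    # Drop the query string only for the static path from the example;
    # all other URLs keep their query string and thus their own cache object.
    if (req.url ~ "^/a_specific_path/") {
        set req.url = regsub(req.url, "\?.*$", "");
    }
}
```

Because the rewrite happens in vcl_recv, before the hash is computed, every query-string variant of a matching URL looks up (and stores) the same cache object.]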
> > In addition I can see varnish doesn?t seem to be able to handle more > than 1 million objects: It's not uncommon for Varnish to use significantly less memory than what was allocated, but not because Varnish can't "handle" it, but just because it works out that way. Due to a combination of factors like usage patterns, TTLs, your command line settings and your VCL, Varnish may decide that it doesn't need more than that. What do your cache hit ratios say? Do the logs or varnishstat give any indication that objects are not being cached when you think they should be? Do you have objects that, semantically, could be cached, but aren't because, for example, they are unnecessarily setting cookies? You might be able to get more into the cache more by tweaking VCL, but as I said, that's just a guess. > When the child process get killed, the load of the system was very high: > Apr 20 21:46:44 server-01-39 varnishd[21087]: Child (5372) not > responding to CLI, killing it. > .... > Apr 20 21:49:57 server-01-39 nrpe[18101]: Command completed with return > code 2 and output: CRITICAL -*load average: 159.00, 159.32, > 77.02*|load1=159.000;15.000;30.000;0; load5=159.320;10.000;25.000;0; > load15=77.020;5.000;20.000;0; > .... > Apr 20 21:48:43 server-01-39 varnishd[21087]: Child (5372) not > responding to CLI, killing it. It looks like the message about high load came after the Varnish processes died, and that might have happened, at least in part, because Varnish was restarted and was getting nothing but cache misses. Unless the high load was caused by something else. Which processes were showing the highest CPU usage? The real question is why the Varnish child was no longer responding to pings. Do you have any panic messages from Varnish in your syslog, or anything else indicating the error? If the load was that high *before* the processes died, your system might have been under so much stress that the child processes just couldn't answer pings in time. 
In which case your real problem might be something other than Varnish. > All this makes me believe we have an issue with some kernel parameters > that do not allow varnish to handle as many objects as we configured. It could be that, it could be another process that was causing heavy load, it could be your VCL or your command line settings. Too many open questions here. Best, Geoff - -- UPLEX Systemoptimierung Schwanenwik 24 22087 Hamburg http://uplex.de/ Mob: +49-176-63690917 -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.14 (Darwin) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJNsBgKAAoJEOUwvh9pJNURzS0QAJOvVWr3Yi4DsA2x0Ck+/HTa pkL69dRhUskq5Ll6Ny+e0DBB9I3Dx48ZT9ZxzRcvIZQn4shPl1GPdQQRHCB0ek82 o8lLCdS/ta2HZhQI96FSUBj5RYDrPd3B78cAlvDLYzHsZIUbg90WmizHE/x9vPOi z5TOS/0S3Ao7JIuqkMpkWYyVs4AH6aKIX1L9er9jYLbHp5s8R2ilzs3USeLdC8Kl spGAaSn4mcCVHmhR+ZQ2XQjaf2nxN7oXEIviGOZOWfZ1XX1hQpDtjhp1D9BoInBW oNZmamt6Hd+m00LCu88YhTiBMRDD7zbom9C0NWLf6n7LaCIQteM/KEo1z9tPLAS6 qmQzv+EvBKG5Dpcp81v5TqiUyVDzsYFegoKR6FKCCXvTlCI6avBlik1AlXRhecsF 27da7zMVvoDC44Wo+zqRkwMrtzpmE/Y55wdkP3YBUg/m4nzvci1VYTy3W436NfMe ypjWJ+bQEL9erSURNVDZLl6+I/J4cdcRxPEn96/7vaoDnq9HlvSI9SbAGWj4TDhA ksyvDB2VBGyfaVPnmPy/4CdjbDFXB5lzF2PezUhChrehKoJXeKXPNqegKV89VAo9 EH298HuxKO+xZkVMfO9g0kHdFp6VGSCU8Y+ddU2/tMhxHGMCoXOC/sdcuCHl5HRW G6cSzXYum2Y1ootALk7U =OksF -----END PGP SIGNATURE----- From jean-francois.laurens at rts.ch Thu Apr 21 13:20:47 2011 From: jean-francois.laurens at rts.ch (Jean-Francois Laurens) Date: Thu, 21 Apr 2011 15:20:47 +0200 Subject: Varnish child killed In-Reply-To: <4DB0180B.8080208@uplex.de> Message-ID: Hi Geoff, 1. You're completely right about the guesses! In fact it's very hard for me to get anything valuable from the logs in order to point to a culprit! I can just tell what I can see from the cacti graphs and correlate it with the behavior of varnish. What is, in your opinion, the way to get a better idea of what's going on? At this stage I'm just open to any idea!
Please don't tell me to run varnish in debug mode in production ;-) I am just trying to explain what I can see, as there were some obvious patterns (out of the cacti graphs): Impossible to go higher than 25GB of cache used. You can clearly see that the shared memory used increases gradually until 25G and can't go higher. I believe there are more objects to cache but varnish can't cache them for some reason. Impossible to go above 1 million objects cached. Same observation as for the shared memory. 2. hitratio My hit ratio is pretty low, around 50%. It used to be around 70% with version 2.0.6 and dropped down to 50%. I can't explain it, as all that was changed was: In vcl_fetch, replace obj by beresp. Replace calls to pass or deliver by return(deliver) or return(pass). Attached is a graph to give you an idea of the hit ratio: on the left side of the graph you see the hit rate when using 2.0.6, and on the right side after upgrading. Here is my vcl: backend default { .host = "172.20.102.55"; .port = "8080"; .connect_timeout = 60s; .first_byte_timeout = 60s; .between_bytes_timeout = 600s; } sub vcl_recv { unset req.http.Cookie; remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; set req.grace = 1m; if (req.url ~ "^/is-alive") { error 750 "Up"; } if (req.http.Authenticate) { return (pass); } if (req.url ~ "^http://") { set req.url = regsub(req.url, "http://[^/]*", ""); } if (req.http.host ~ "static.ece.tsr.ch") { return (pass); } if (req.http.host == "ece.tsr.ch" && req.url !~ "^/robots.txt") { set req.http.host = "www.tsr.ch"; } if (req.url ~ "^/tsr/.+/index\.html") { set req.url = regsub(req.url, "^/tsr/.+/index\.html", "/tsr/index.html"); } if (req.url ~ "&amp;") { set req.url = regsuball(req.url, "&amp;", "&"); } if (req.request == "GET" || req.request == "HEAD") { return (lookup); } } sub vcl_error { if (obj.status == 750) { set obj.http.Content-Type = "text/plain; charset=utf-8"; synthetic {"escenic-up"}; return (deliver); } } sub vcl_fetch { unset
beresp.http.x-ece-cache; set beresp.http.X-ece-cache = server.hostname; set beresp.grace = 1m; unset beresp.http.X-ece-was-cached; # respect no-cache from the backend if (beresp.http.Cache-Control ~ "no-cache") { set beresp.http.X-ece-was-cached = "backend no-cache"; return (pass); } #respect custom cache from the backend if (beresp.http.X-Ece-Cache-Control ~ "custom") { unset beresp.http.X-Ece-Cache-Control; set beresp.http.X-ece-was-cached = "backend custom cache"; return (deliver); } unset beresp.http.cache-control; unset beresp.http.expires; unset beresp.http.pragma; unset beresp.http.Set-Cookie; # errors cache 5 min ttl + 2 min akamai if (beresp.status == 403 || beresp.status == 404 || beresp.status == 500 || beresp.status == 503) { set beresp.http.Cache-Control = "max-age=120"; set beresp.http.X-ece-was-cached = "errors: 5m ttl, max-age=120, age 0"; set beresp.ttl = 5m; # short cache 60 sec ttl + 60 sec akamai } else if ( req.url == "/" || req.url == "/info/" || req.url == "/sport/" ) { set beresp.http.Cache-Control = "max-age=60"; set beresp.http.X-ece-was-cached = "short: 60s, max-age=60, age 0"; set beresp.ttl = 60s; # very long cache 90 days ttl + 1 week akamai } else if ( beresp.status == 301 || # redirects req.url ~ "^/[0-9]{4}/[0-9]{2}/.+\.image($|\?)" || # images, see CMS-4354 req.url ~ "\?format=(css|js)&.*cKey=" # javascript or css with cache key ) { set beresp.http.Cache-Control = "max-age=604800"; set beresp.http.X-ece-was-cached = "very-long: 90d ttl, max-age=604800, age 0"; set beresp.ttl = 90d; } else if (req.url ~ "\?.*page=[0-9]+") { set beresp.http.Cache-Control = "max-age=7200"; set beresp.http.X-ece-was-cached = "intermediate: 22h ttl, max-age=7200, age 0"; set beresp.ttl = 22h; # medium cache 24h ttl + 24h akamai } else if (req.url ~ "\.(image|gif|jpg|jpe|css|js|png|swf|ico)($|\?)") { set beresp.http.Cache-Control = "max-age=86400"; set beresp.http.X-ece-was-cached = "medium: 1d ttl, max-age=86400, age 0"; set beresp.ttl = 1d; # default 
cache 10 min ttl + 5 min akamai } else { set beresp.http.Cache-Control = "max-age=300"; set beresp.http.X-ece-was-cached = "default: 10m ttl, max-age 300, age 0"; set beresp.ttl = 10m; } return (deliver); } Here are my varnish startup options: /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 0.0.0.0:6082 -t 120 -w 100,3000,120 -u varnish -g varnish -s file,/var/lib/varnish/varnish_storage.bin,50G -p thread_pools 4 -p thread_pool_add_delay 2 -p cli_timeout 20 -p session_linger 50/100/150 3. high load situation: * only varnish is running on this server, no apache or whatever; there is no other process using resources the way varnish does. * varnish was not restarted before: you would have seen it in the graph sent in my previous email, because the number of objects would have dropped to zero, and there would certainly have been a message in the logs * command line settings: as specified earlier, I only have a 20s timeout; should I specify something else or more parameters? * there is no panic message in the logs You're still completely right about this statement: The real question is why the Varnish child was no longer responding to pings. In the end my real question is: how could I trace child activity? Cheers, Jef Le 21/04/11 13:42, « Geoff Simmons » a écrit : > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > On 4/21/11 10:51 AM, Jean-Francois Laurens wrote: >> > >> > We've been running varnish 2.1.5 for some weeks now and we still do not understand >> > some behavior regarding the shared memory activity. > > There's not enough information here for anything better than guesses > about what's going on. > >> > We specified a -sfile,/var/lib/varnish/varnish_storage.bin,50G in the >> > configuration but it's impossible to go higher than 25G used by varnish. > [...]
>> > >> > In addition I can see varnish doesn?t seem to be able to handle more >> > than 1 million objects: > > It's not uncommon for Varnish to use significantly less memory than what > was allocated, but not because Varnish can't "handle" it, but just > because it works out that way. Due to a combination of factors like > usage patterns, TTLs, your command line settings and your VCL, Varnish > may decide that it doesn't need more than that. > > What do your cache hit ratios say? Do the logs or varnishstat give any > indication that objects are not being cached when you think they should > be? Do you have objects that, semantically, could be cached, but aren't > because, for example, they are unnecessarily setting cookies? You might > be able to get more into the cache more by tweaking VCL, but as I said, > that's just a guess. > >> > When the child process get killed, the load of the system was very high: >> > Apr 20 21:46:44 server-01-39 varnishd[21087]: Child (5372) not >> > responding to CLI, killing it. >> > .... >> > Apr 20 21:49:57 server-01-39 nrpe[18101]: Command completed with return >> > code 2 and output: CRITICAL -*load average: 159.00, 159.32, >> > 77.02*|load1=159.000;15.000;30.000;0; load5=159.320;10.000;25.000;0; >> > load15=77.020;5.000;20.000;0; >> > .... >> > Apr 20 21:48:43 server-01-39 varnishd[21087]: Child (5372) not >> > responding to CLI, killing it. > > It looks like the message about high load came after the Varnish > processes died, and that might have happened, at least in part, because > Varnish was restarted and was getting nothing but cache misses. Unless > the high load was caused by something else. Which processes were showing > the highest CPU usage? > > The real question is why the Varnish child was no longer responding to > pings. Do you have any panic messages from Varnish in your syslog, or > anything else indicating the error? 
If the load was that high *before* > the processes died, your system might have been under so much stress > that the child processes just couldn't answer pings in time. In which > case your real problem might be something other than Varnish. > >> > All this makes me believe we have an issue with some kernel parameters >> > that do not allow varnish to handle as many objects as we configured it. > > It could be that, it could be another process that was causing heavy > load, it could be your VCL or your command line settings. Too many open > questions here. > > > Best, > Geoff > - -- > UPLEX Systemoptimierung > Schwanenwik 24 > 22087 Hamburg > http://uplex.de/ > Mob: +49-176-63690917 > -----BEGIN PGP SIGNATURE----- > Version: GnuPG/MacGPG2 v2.0.14 (Darwin) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iQIcBAEBCAAGBQJNsBgKAAoJEOUwvh9pJNURzS0QAJOvVWr3Yi4DsA2x0Ck+/HTa > pkL69dRhUskq5Ll6Ny+e0DBB9I3Dx48ZT9ZxzRcvIZQn4shPl1GPdQQRHCB0ek82 > o8lLCdS/ta2HZhQI96FSUBj5RYDrPd3B78cAlvDLYzHsZIUbg90WmizHE/x9vPOi > z5TOS/0S3Ao7JIuqkMpkWYyVs4AH6aKIX1L9er9jYLbHp5s8R2ilzs3USeLdC8Kl > spGAaSn4mcCVHmhR+ZQ2XQjaf2nxN7oXEIviGOZOWfZ1XX1hQpDtjhp1D9BoInBW > oNZmamt6Hd+m00LCu88YhTiBMRDD7zbom9C0NWLf6n7LaCIQteM/KEo1z9tPLAS6 > qmQzv+EvBKG5Dpcp81v5TqiUyVDzsYFegoKR6FKCCXvTlCI6avBlik1AlXRhecsF > 27da7zMVvoDC44Wo+zqRkwMrtzpmE/Y55wdkP3YBUg/m4nzvci1VYTy3W436NfMe > ypjWJ+bQEL9erSURNVDZLl6+I/J4cdcRxPEn96/7vaoDnq9HlvSI9SbAGWj4TDhA > ksyvDB2VBGyfaVPnmPy/4CdjbDFXB5lzF2PezUhChrehKoJXeKXPNqegKV89VAo9 > EH298HuxKO+xZkVMfO9g0kHdFp6VGSCU8Y+ddU2/tMhxHGMCoXOC/sdcuCHl5HRW > G6cSzXYum2Y1ootALk7U > =OksF > -----END PGP SIGNATURE----- > > Jean-Francois Laurens > Ing?nieur Syst?me Unix > Resources et D?veloppement > Secteur Backend > RTS - Radio T?l?vision Suisse > Quai Ernest-Ansermet 20 > Case postale 234 > CH - 1211 Gen?ve 8 > T +41 (0)58 236 81 63 > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 28088 bytes Desc: not available URL: From simon at darkmere.gen.nz Thu Apr 21 17:41:03 2011 From: simon at darkmere.gen.nz (Simon Lyall) Date: Fri, 22 Apr 2011 05:41:03 +1200 (NZST) Subject: Varnish child killed In-Reply-To: References: Message-ID: On Thu, 21 Apr 2011, Jean-Francois Laurens wrote: > 2. hitratio > My hit ratio is pretty low around 50%. > It used to be around 70% with the version 2.0.6 and dropped down to 50%. > I can't explain it as what was changed was only: > In vcl_fetch, replace obj by beresp > Replace calls to pass or deliver by return(deliver) or return(pass) Is this for http://www.tsr.ch/ ? If your hit ratio is 60% with a million objects then something is very wrong. You have enough images there that you should be well over 90% hits. Run "varnishlog -b" and see what your cache misses are (or look in your backend logs). -- Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT. From jean-francois.laurens at rts.ch Thu Apr 21 22:59:20 2011 From: jean-francois.laurens at rts.ch (Jean-Francois Laurens) Date: Fri, 22 Apr 2011 00:59:20 +0200 Subject: Varnish child killed In-Reply-To: Message-ID: Please do not overlook the subject of the email, which is actually "Varnish child killed". For the explanation of the hit ratio: some tests are running and half of the misses are requests with cache-killers on purpose. Would anybody please help with understanding why this child process gets killed? Le 21/04/11 19:41, « Simon Lyall » a écrit : > On Thu, 21 Apr 2011, Jean-Francois Laurens wrote: >> > 2. hitratio >> > My hit ratio is pretty low around 50%. >> > It used to be around 70% with the version 2.0.6 and dropped down to 50%.
>> > I can?t explain it as what was changed was only : >> > In vcl_fetch, replace obj by beresp >> > Replace calls to pass or deliver by return(deliver) or return(pass) > > Is this for http://www.tsr.ch/ ? > > If your hit ratio is 60% with a million objects then something is very > wrong. You have enough images there you should be well over 90% hits. > > Run "varnishlog -b" > > and see what your cache misses are ( or do look in your backend logs ). > > > Jean-Francois Laurens > Ing?nieur Syst?me Unix > Resources et D?veloppement > Secteur Backend > RTS - Radio T?l?vision Suisse > Quai Ernest-Ansermet 20 > Case postale 234 > CH - 1211 Gen?ve 8 > T +41 (0)58 236 81 63 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbrownfield at google.com Fri Apr 22 01:06:22 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Thu, 21 Apr 2011 18:06:22 -0700 Subject: Varnish child killed In-Reply-To: References: Message-ID: It was likely killed because the child process didn't respond to the parent process within cli_timeout seconds. Two suggestions: * increase vm.min_free_kbytes to 64M or 128M (I use the latter) * increase cli_timeout to 40-60 seconds Then record 'vmstat' and 'ps' output, so you can go back and see what was happening on the machine at the time the child is killed. For example, did something start swapping 1G of RAM, and swap thrash caused the child to be slow to respond)? Was there an ECC error? Etc. My guess is that the two settings above will put you in a better place. Also, good to make sure your lru isn't too high, but it looks like you're using the (fine) default. -- kb On Thu, Apr 21, 2011 at 15:59, Jean-Francois Laurens < jean-francois.laurens at rts.ch> wrote: > Please do not overlook the object of the email which is actually ?Varnish > child killed?. > > For the explanation of the hitratio: some tests are running and half of the > misses are requests with cachekillers on purpose. 
> > Would anybody please help with understanding why this child process gets > killed? > > > Le 21/04/11 19:41, « Simon Lyall » a écrit : > On Thu, 21 Apr 2011, Jean-Francois Laurens wrote: > > 2. hitratio > > My hit ratio is pretty low around 50%. > > It used to be around 70% with the version 2.0.6 and dropped down to 50%. > > I can't explain it as what was changed was only: > > In vcl_fetch, replace obj by beresp > > Replace calls to pass or deliver by return(deliver) or return(pass) > > Is this for http://www.tsr.ch/ ? > > If your hit ratio is 60% with a million objects then something is very > wrong. You have enough images there you should be well over 90% hits. > > Run "varnishlog -b" > > and see what your cache misses are ( or do look in your backend logs ). > > > Jean-Francois Laurens > Ingénieur Système Unix > Resources et Développement > Secteur Backend > *RTS - Radio Télévision Suisse > *Quai Ernest-Ansermet 20 > Case postale 234 > CH - 1211 Genève 8 > T +41 (0)58 236 81 63 > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shib4u at gmail.com Mon Apr 25 15:12:29 2011 From: shib4u at gmail.com (Shibashish) Date: Mon, 25 Apr 2011 20:42:29 +0530 Subject: CDN subdomain handling (hiding) in Varnish Message-ID: Hi All, My website is being served by varnish. I also have a CDN setup from where I serve the static content for my websites (jpg, js, css). Before CDN, the static content on the site was being served as www.example.com/images/abc.jpg, www.example.com/css/xyz.css, etc. After CDN, the static files are being served through a new domain as cdn.example.com/images/abc.jpg, cdn.example.com/css/xyz.css. How do I stop my site from being visible on cdn.example.com?
I want to let the static content be served out of CDN and the origin pull be happening from Varnish. ShiB. while ( ! ( succeed = try() ) ); -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.fernandezcrisial at gmail.com Mon Apr 25 15:26:04 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Mon, 25 Apr 2011 12:26:04 -0300 Subject: CDN subdomain handling (hiding) in Varnish In-Reply-To: References: Message-ID: Hi, I think what you need to do is to code your website (www.example.com) with img/css/js/etc source code from "cdn.example.com/images/abc.jpg" instead "/images/abc.jpg". All you need to do is to update your IMG/CSS/JS (and all your static files) liks from your source code and point them to "cdn.example.com". Roberto @rofc On Mon, Apr 25, 2011 at 12:12 PM, Shibashish wrote: > Hi All, > > My website is being served by varnish. I also have a CDN setup from where i > serve the static content for my websites (jpg, js, css). > > Before CDN, the static content on the site was being served as www/ > example.com/images/abc.jpg, www.example.com/css/xyz.css, etc. After CDN, > the static files are being served through a new domain as > cdn.example.com/images/abc.jpg, cdn.example.com/css/xyz.css. > > How do I stop my site from being visible on cdn.example.com? I want to let > the static content be served out of CDN and the origin pull be happening > from Varnish. > > ShiB. > while ( ! ( succeed = try() ) ); > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shib4u at gmail.com Mon Apr 25 15:53:42 2011 From: shib4u at gmail.com (Shibashish) Date: Mon, 25 Apr 2011 21:23:42 +0530 Subject: CDN subdomain handling (hiding) in Varnish In-Reply-To: References: Message-ID: 2011/4/25 Roberto O. Fernández Crisial > Hi, > > I think what you need to do is to code your website (www.example.com) with > img/css/js/etc source code from "cdn.example.com/images/abc.jpg" instead > of "/images/abc.jpg". > > All you need to do is to update your IMG/CSS/JS (and all your static files) > links in your source code and point them to "cdn.example.com". > > Roberto > @rofc > > On Mon, Apr 25, 2011 at 12:12 PM, Shibashish wrote: > >> Hi All, >> >> My website is being served by varnish. I also have a CDN setup from where >> I serve the static content for my websites (jpg, js, css). >> >> Before CDN, the static content on the site was being served as >> www.example.com/images/abc.jpg, www.example.com/css/xyz.css, etc. After CDN, >> the static files are being served through a new domain as >> cdn.example.com/images/abc.jpg, cdn.example.com/css/xyz.css. >> >> How do I stop my site from being visible on cdn.example.com? I want to >> let the static content be served out of CDN and the origin pull be happening >> from Varnish. >> >> ShiB. >> while ( ! ( succeed = try() ) ); >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > Thanks, Roberto, for explaining. My website source is already modified to serve content from the cdn domain, i.e. cdn.example.com/images/abc.jpg is live and being served from the CDN. Now, along with my images/css/js, the main website is also seen when I go to cdn.example.com. Since the codebase is the same and the html/php files are present, the main site is also visible if someone types in the cdn domain. I don't want this and do not want to serve the site on the cdn domain.
Can I do an exact match on the cdn domain and redirect to the main domain? I.e. "cdn.example.com" gets redirected to www.example.com, but cdn.example.com/images/abc.jpg should not be redirected. Or is there a better way? -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at varnish-software.com Tue Apr 26 07:45:20 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Tue, 26 Apr 2011 09:45:20 +0200 Subject: symlink for debian-installation, trac In-Reply-To: <4DAC445F.2030602@danielbruessler.de> ("Daniel Brüßler"'s message of "Mon, 18 Apr 2011 16:02:07 +0200") References: <4DAC445F.2030602@danielbruessler.de> Message-ID: <87d3k9v4kv.fsf@qurzaw.varnish-software.com> ]] Daniel Brüßler Hi, | please do a symlink from | http://repo.varnish-cache.org/ubuntu/dists/maverick | to http://repo.varnish-cache.org/ubuntu/dists/lucid | and to http://repo.varnish-cache.org/ubuntu/dists/natty | | so that apt can download the Packages.gz for the sources.list in Ubuntu maverick (10.10) | and in the coming Natty Narwhal (11.04), which is out in 10 days. No, we only provide packages for 10.04, as noted on the Ubuntu installation page. Regards, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From tfheen at varnish-software.com Tue Apr 26 07:51:16 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Tue, 26 Apr 2011 09:51:16 +0200 Subject: Varnish child killed In-Reply-To: (Jean-Francois Laurens's message of "Fri, 22 Apr 2011 00:59:20 +0200") References: Message-ID: <878vuxv4az.fsf@qurzaw.varnish-software.com> ]] Jean-Francois Laurens Hi, | Would anybody please help on understanding why this child process gets | killed Because your system load is too high, probably because your hard drives are too slow. Add more memory or use SSDs if you have high request rates.
Regards, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From ahooper at bmjgroup.com Tue Apr 26 09:29:21 2011 From: ahooper at bmjgroup.com (Alex Hooper) Date: Tue, 26 Apr 2011 10:29:21 +0100 Subject: CDN subdomain handling (hiding) in Varnish In-Reply-To: References: Message-ID: <4DB69071.10000@bmjgroup.com> Shibashish uttered: > 2011/4/25 Roberto O. Fern?ndez Crisial > > > [snip] > > Can I do a exact match on the cdn domain and redirect to the main > domain? i.e. "cdn.example.com " gets directed to > www.example.com , > but cdn.example.com/images/abc.jpg > should not be redirected. Or is > there a better way? Hi, What I do (with no claim that it's the canonical, or even a better, solution) is to put a rewrite rule on the origin server that says "if the request comes from the CDN caches and is not for (jpg|gif|js|css|whatever) then issue a permanent redirect to the canonical site". So the CDN ends up storing pointers back to the canonical site for those resources you do not wish served from CDN. The result being that, even if a user does go to http://cdn.example.com/ they are bounced straight back to http://www.example.com/). HTH, etc, Alex. -- Alex Hooper Operations Team Leader, BMJ Group, BMA House, London WC1H 9JR Tel: +44 (0) 20 7383 6049 http://group.bmj.com/ _______________________________________________________________________ The BMJ Group is one of the world's most trusted providers of medical information for doctors, researchers, health care workers and patients group.bmj.com. This email and any attachments are confidential. If you have received this email in error, please delete it and kindly notify us. If the email contains personal views then the BMJ Group accepts no responsibility for these statements. The recipient should check this email and attachments for viruses because the BMJ Group accepts no liability for any damage caused by viruses. 
Emails sent or received by the BMJ Group may be monitored for size, traffic, distribution and content. BMJ Publishing Group Limited trading as BMJ Group. A private limited company, registered in England and Wales under registration number 03102371. Registered office: BMA House, Tavistock Square, London WC1H 9JR, UK. _______________________________________________________________________ From armdan20 at gmail.com Tue Apr 26 13:11:40 2011 From: armdan20 at gmail.com (andan andan) Date: Tue, 26 Apr 2011 15:11:40 +0200 Subject: Question about first_byte_timeout and ESI Message-ID: Hi there. I've been playing with the first_byte_timeout parameter. Suppose the following script: When I request http://test/test.html, the 503 is thrown after 2400 milliseconds. As you can see, that is exactly double the first_byte_timeout. If another esi:include is added, the elapsed time goes to 3600 milliseconds, and so on. My question: is this the expected behaviour? Thanks in advance. Kind Regards. From TFigueiro at au.westfield.com Tue Apr 26 22:26:10 2011 From: TFigueiro at au.westfield.com (Thiago Figueiro) Date: Wed, 27 Apr 2011 08:26:10 +1000 Subject: Varnish child killed In-Reply-To: References: Message-ID: From: Jean-Francois Laurens > Hi there, > Apr 20 21:48:43 server-01-39 varnishd[21087]: Child (5372) not responding > to CLI, killing it. Hi! I'm a bit late to the discussion (happy Easter everyone!) but we came across this issue on Linux earlier this year. You didn't send charts for your disk IO but I'm betting it looks busy. If this is so, it may be the kernel pre-empting writing of dirty mmap pages to disk. I investigated the issue at the time and found that on the RHEL 5 vanilla kernel the Varnish child was being blocked by kernel IO. This caused the parent pings to time out: http://microrants.blogspot.com/2010/07/varnish-and-linux-io-bottleneck.html TL;DR: echo 0 > /proc/sys/vm/flush_mmap_pages ? Good luck, Thiago.
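[Editor's note: the flush_mmap_pages tunable in Thiago's TL;DR exists only on certain RHEL kernels, so echoing into it blindly fails elsewhere. A defensive sketch, assuming only the /proc path from his post; everything else is illustrative:

```shell
#!/bin/sh
# flush_mmap_pages is a RHEL-specific vm tunable; probe before writing.
KNOB=/proc/sys/vm/flush_mmap_pages
if [ -f "$KNOB" ]; then
    echo "flush_mmap_pages present, current value: $(cat "$KNOB")"
    # Apply the workaround from the thread (requires root):
    # echo 0 > "$KNOB"
else
    echo "flush_mmap_pages not available on this kernel"
fi
```

On kernels without the knob the script just reports its absence instead of failing with a write error.]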
______________________________________________________ CONFIDENTIALITY NOTICE This electronic mail message, including any and/or all attachments, is for the sole use of the intended recipient(s), and may contain confidential and/or privileged information, pertaining to business conducted under the direction and supervision of the sending organization. All electronic mail messages, which may have been established as expressed views and/or opinions (stated either within the electronic mail message or any of its attachments), are left to the sole responsibility of that of the sender, and are not necessarily attributed to the sending organization. Unauthorized interception, review, use, disclosure or distribution of any such information contained within this electronic mail message and/or its attachment(s), is (are) strictly prohibited. If you are not the intended recipient, please contact the sender by replying to this electronic mail message, along with the destruction all copies of the original electronic mail message (along with any attachments). ______________________________________________________ From moseleymark at gmail.com Wed Apr 27 00:25:40 2011 From: moseleymark at gmail.com (Mark Moseley) Date: Tue, 26 Apr 2011 17:25:40 -0700 Subject: Avoiding big objects Message-ID: I was working on something in my quest to keep big (eventually uncacheable) objects from wreaking havoc on my cache. Even if I employ a scheme to call "restart" from vcl_fetch, after adding a header that tells vcl_recv to call 'pipe', the object still gets fetched from the origin server. And if it's 1.5 gig, it can be pretty painful. So I was hoping to throw this by you guys, esp the Varnish devs. Mainly I wanted to hear if anyone thought this was a tremendously bad idea. I wrote this about 45 minutes ago, so it's not particularly well-tested out, but if you guys said this was the worst idea ever, then I might reconsider putting a lot more time into perfecting it. 
Thus there are likely to be big corner cases here. There was another recent thread about this subject, so I know there are some other people looking for a similar solution, so I thought I'd throw this out there too. This doesn't protect me from 1.5 gig JPEG files but it does most of the job. and a further comment is that, yes, I'm ok with all the extra backend reqs, providing their HEADs. Mainly what it's doing is this: 1. Huge files won't ever be HITs in my environment, since I'll have piped them. 2. If a MISS (as it should be), rewrite backend method from GET (I don't do POSTs on varnish) to HEAD in vcl_miss if it's a file extension likely to be a biggish file and matches other conditions. 3. In vcl_fetch, if it's a rewritten HEAD, do size check. If it's too big, add the header that indicates to vcl_fetch to drop immediately to 'pipe' 4. In either case, in vcl_fetch, rewrite the method back to GET and call 'restart'. Here's the essence of the VCL (imagine regularly-working VCL alongside it). I typed this out so ignore dumb typos: sub vcl_fetch { .... # If we've got the header that says to pipe this request, pipe it (thanks Tollef) if ( req.http.X-PIPEME && req.restarts > 0 ) { return( pipe ); } .... } # The URLs in this regex are some sample ones that are often huge in size; the eventual list would be bigger and have others like 'mpg' etc. Note that I don't send POSTs over varnish, so ignore lack of POST sub vcl_miss { # If no headcheck header and GET and type is on big list, rewrite to HEAD if ( ! req.http.X-HEADCHECK && bereq.request == "GET" && req.url ~ "\.(gz|wmv|zip|flv|avi)$" && req.restarts == 0 ) { set req.http.X-HEADCHECK = "1"; set bereq.request = "HEAD"; set bereq.http.User-Agent = "HEAD Check"; log "DEBUG: Rewriting to HEAD"; } } sub vcl_fetch { # If this used to be a GET request that we changed to HEAD, do length check. But try to avoid restart loops. 
if ( req.http.X-HEADCHECK && req.request == "GET" && bereq.request == "HEAD" && req.url ~ "\.(gz|wmv|zip|flv|avi)$" && req.restarts < 1) { unset req.http.X-HEADCHECK; set bereq.request = "GET"; log "DEBUG: [fetch] Rewriting to HEAD"; # If content is over 10 meg, pipe it if ( beresp.http.Content-Length ~ "[0-9]{8,}" ) { set req.http.X-PIPEME = "1"; } restart; } .... } Mainly I'm just looking for whether the Varnish devs think that this would cause something to completely explode and/or melt down or this is the worst security hole ever. It seems to work ok so far. For reqs that match 'beresp.http.Content-Length ~ "[0-9]{8,}"', the "SMA bytes allocated" counter never budges, where it normally does for anything fetched (memory backend). Thanks! Hope someone else can benefit from this too. If someone else uses this (after thorough testing), be sure to remove the 'log' calls in production. From jean-francois.laurens at rts.ch Wed Apr 27 07:57:20 2011 From: jean-francois.laurens at rts.ch (Jean-Francois Laurens) Date: Wed, 27 Apr 2011 09:57:20 +0200 Subject: Varnish child killed In-Reply-To: Message-ID: Thanks Ken, I updated the vm.min_free_kbytes to 128M and increased the cli_timeout to 40. No there was no errors from any parts of the HW (ram,disk ...) Jef Le 22/04/11 03:06, ??Ken Brownfield?? a ?crit?: > It was likely killed because the child process didn't respond to the parent > process within cli_timeout seconds. > > Two suggestions: > ?* increase vm.min_free_kbytes to 64M or 128M (I use the latter) > ?* increase cli_timeout to 40-60 seconds > > Then record 'vmstat' and 'ps' output, so you can go back and see what was > happening on the machine at the time the child is killed. ?For example, did > something start swapping 1G of RAM, and swap thrash caused the child to be > slow to respond)? ?Was there an ECC error? ?Etc. > > My guess is that the two settings above will put you in a better place. 
?Also, > good to make sure your lru isn't too high, but it looks like you're using the > (fine) default. > --? > kb > > > > On Thu, Apr 21, 2011 at 15:59, Jean-Francois Laurens > wrote: >> Please do not overlook the object of the email which is actually ?Varnish >> child killed?. >> >> For the explanation of the hitratio: some tests are running and half of the >> misses are requests with cachekillers on purpose. >> >> Would anybody please help on understanding why this child process get killed >> ? >> >> >> Le 21/04/11 19:41, ??Simon Lyall?? a ?crit?: >> >>> On Thu, 21 Apr 2011, Jean-Francois Laurens wrote: >>>> > 2. hitratio >>>> > My hit ratio is pretty low around 50%. >>>> > It used to be around 70% with the version 2.0.6 and dropped down to 50%. >>>> > I can?t explain it as what was changed was only : >>>> > In vcl_fetch, replace obj by beresp >>>> > Replace calls to pass or deliver by return(deliver) or return(pass) >>> >>> Is this for http://www.tsr.ch/ ? >>> >>> If your hit ratio is 60% with a million objects then something is very >>> wrong. You have enough images there you should be well over 90% hits. >>> >>> Run "varnishlog -b" >>> >>> and see what your cache misses are ( or do look in your backend logs ). >>> >>> >>> Jean-Francois Laurens >>> Ing?nieur Syst?me Unix >>> Resources et D?veloppement >>> Secteur Backend >>> RTS - Radio T?l?vision Suisse >>> Quai Ernest-Ansermet 20 ??????????????????????? >>> Case postale 234 ??????????????????????????????????? 
>>> CH - 1211 Gen?ve 8 >>> T +41 (0)58 236 81 63 >>> >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >>> >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >>> Jean-Francois Laurens >>> Ing?nieur Syst?me Unix >>> Resources et D?veloppement >>> Secteur Backend >>> RTS - Radio T?l?vision Suisse >>> Quai Ernest-Ansermet 20 >>> Case postale 234 >>> CH - 1211 Gen?ve 8 >>> T +41 (0)58 236 81 63 >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-francois.laurens at rts.ch Wed Apr 27 07:59:33 2011 From: jean-francois.laurens at rts.ch (Jean-Francois Laurens) Date: Wed, 27 Apr 2011 09:59:33 +0200 Subject: Varnish child killed In-Reply-To: Message-ID: HI Thiago ! Thank you a lot for your very interesting comment ! In fact I get the same limitation with a 8GB memory server. Please correct me if I?m wrong : My system has only 8GB of memory. Thus I can assume that the kernel should have stared to write dirty mmap'ed pages longer in advance than your server does. But the very high loads I?ve seen are only occuring once the cache became close to 20GB+. Why is that ? I still don?t understand this 20GB+ limitation. I addition of the vmstat and ps that I collect regularly, I will add an iostat to watch the disk activity. Jef Le 27/04/11 00:26, ??Thiago Figueiro?? a ?crit?: > From: Jean-Francois Laurens > >> > Hi there, > >> > Apr 20 21:48:43 server-01-39 varnishd[21087]: Child (5372) not responding >> > to CLI, killing it. > > Hi! I'm a bit late to the discussion (happy Easter everyone!) but we came > across this issue on Linux earlier this year. > > You didn't send charts for your disk IO but I'm betting it looks busy. 
If > this is so, it may be the kernel pre-empting writing of dirty mmap pages to > disk. > > I investigated the issue at the time and found that on the RHEL 5 vanilla > kernel the Varnish child was being blocked by kernel IO. This caused the > parent pings to time-out: > > http://microrants.blogspot.com/2010/07/varnish-and-linux-io-bottleneck.html > > > TL;DR: > echo 0 > /proc/sys/vm/flush_mmap_pages > > > ? > Good luck, > Thiago. > > > ______________________________________________________ > CONFIDENTIALITY NOTICE > This electronic mail message, including any and/or all attachments, is for the > sole use of the intended recipient(s), and may contain confidential and/or > privileged information, pertaining to business conducted under the direction > and supervision of the sending organization. All electronic mail messages, > which may have been established as expressed views and/or opinions (stated > either within the electronic mail message or any of its attachments), are left > to the sole responsibility of that of the sender, and are not necessarily > attributed to the sending organization. Unauthorized interception, review, > use, disclosure or distribution of any such information contained within this > electronic mail message and/or its attachment(s), is (are) strictly > prohibited. If you are not the intended recipient, please contact the sender > by replying to this electronic mail message, along with the destruction all > copies of the original electronic mail message (along with any attachments). > ______________________________________________________ > > Jean-Francois Laurens > Ing?nieur Syst?me Unix > Resources et D?veloppement > Secteur Backend > RTS - Radio T?l?vision Suisse > Quai Ernest-Ansermet 20 > Case postale 234 > CH - 1211 Gen?ve 8 > T +41 (0)58 236 81 63 > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From straightflush at gmail.com Thu Apr 28 02:16:45 2011 From: straightflush at gmail.com (AD) Date: Wed, 27 Apr 2011 22:16:45 -0400 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: Just re-surfacing an old thread. Could you do this another way by creating a director of both the real backend and "failapp". Then, when you restart you should be out of backend choices, no ? On Tue, Mar 8, 2011 at 7:23 PM, Drew Smathers wrote: > On Tue, Mar 8, 2011 at 3:51 PM, Per Buer > wrote: > > Hi Drew, list. > > On Tue, Mar 8, 2011 at 9:34 PM, Drew Smathers > > wrote: > >> > >> Sorry to bump my own thread, but does anyone know of a way to set > >> saintmode if a backend is down, vs. up and misbehaving (returning 500, > >> etc)? > >> > >> Also, I added a backend probe and this indeed caused grace to kick in > >> once the probe determined the backend as sick.I think the docs should > >> be clarified if this isn't a bug (grace not working without probe): > >> > >> > http://www.varnish-cache.org/docs/2.1/tutorial/handling_misbehaving_servers.html#tutorial-handling-misbehaving-servers > > > > Check out the trunk version of the docs. Committed some earlier today. > > > > Thanks, I see a lot is getting > > >> > >> Finally it's somewhat disconcerting that in the interim between a > >> cache expiry and before varnish determines a backend as down (sick) it > >> will 503 - so this could affect many clients during that window. > >> Ideally, I'd like to successfully service requests if there's an > >> object in the cache - period - but I guess this isn't possible now > >> with varnish? > > > > Actually it is. In the docs there is a somewhat dirty trick where set a > > marker in vcl_error, restart and pick up on the error and switch backend > to > > one that is permanetly down. Grace kicks in and serves the stale content. 
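The "dirty trick" described above — set a marker in vcl_error, restart, and route the restarted request to a permanently-down backend so grace serves the stale object — can be sketched as a small decision function. The names below are illustrative (they mirror the example VCL in this thread), not Varnish API:

```python
def pick_backend_and_grace(error_marker_set: bool, webapp_healthy: bool):
    """Toy model of the marker-and-restart trick: vcl_error sets a marker
    and restarts; the restarted request is sent to 'failapp', a backend
    whose probe always fails, so Varnish falls back to the long grace
    period and serves the stale cached object. Illustrative names only."""
    backend = "failapp" if error_marker_set else "webapp"
    # Mirror the VCL: long grace whenever the chosen backend is unhealthy.
    backend_healthy = webapp_healthy and backend == "webapp"
    grace = "1m" if backend_healthy else "24h"
    return backend, grace

print(pick_backend_and_grace(False, True))  # normal path: webapp, short grace
print(pick_backend_and_grace(True, True))   # after restart: failapp, 24h grace
```

The point of the second case is that the unhealthy "failapp" backend never receives traffic; it exists only so that grace mode kicks in on the restarted request.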
> > Sometime post 3.0 there will be a refactoring of the whole vcl_error > > handling and we'll end up with something a bit more elegant. > > > > Well a dirty trick is good enough if makes a paying customer for me. :P > > This is working perfectly now. I would suggest giving an example of > "magic marker" mentioned in the document which mentions the trick > ( > http://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html > ). > Here's a stripped down version of my VCL incorporating the trick: > > backend webapp { > .host = "127.0.0.1"; > .port = "8000"; > .probe = { > .url = "/hello/"; > .interval = 5s; > .timeout = 1s; > .window = 5; > .threshold = 3; > } > } > > /* A backend that will always fail. */ > backend failapp { > .host = "127.0.0.1"; > .port = "9000"; > .probe = { > .url = "/hello/"; > .interval = 12h; > .timeout = 1s; > .window = 1; > .threshold = 1; > } > } > > sub vcl_recv { > > if (req.http.X-Varnish-Error == "1") { > set req.backend = failapp; > unset req.http.X-Varnish-Error; > } else { > set req.backend = webapp; > } > > if (! req.backend.healthy) { > set req.grace = 24h; > } else { > set req.grace = 1m; > } > } > > sub vcl_error { > if ( req.http.X-Varnish-Error != "1" ) { > set req.http.X-Varnish-Error = "1"; > return (restart); > } > > } > > sub vcl_fetch { > set beresp.grace = 24h; > } > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From timball at gmail.com Thu Apr 28 04:40:49 2011 From: timball at gmail.com (Timothy Ball) Date: Thu, 28 Apr 2011 00:40:49 -0400 Subject: HELP ! Message-ID: i can't figure out why but all the sudden whenever my varnish server talks to one of it's configured backends it's adding a trailing '/' after every request . 
none the less this is breaking many of my sites and i'm at a total loss as to why . i've tried to comment out as much as i can and now am at nearly the bare minimum and it's still broken . i've tested my backend nginx server and it's serving things correctly but in my logs whenever varnish connects to a backend i get log entries that look like this : "184.73.176.218 - - [28/Apr/2011:04:33:32 +0000] "GET /css/main.css/ HTTP/1.1" 404 18 "http://congrelate.org/" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_7; en-US) AppleWebKit/534.3 (KHTML, like Gecko) Chrome/6.0.472.33 Safari/534.3" not it's asking for /css/main.css/ which is totally wrong ! i don't want to have to put some crazy logic into my nginx configs to fix this , but i have no idea why . find below all of my complete vcl any help would be much appreciated --timball # # default.vcl # --timball at sunlighfoundation.com # # main varnish default.vcl for # # Wed Apr 27 14:45:21 EDT 2011 # $Id$ # # Default backend definition. Set this to point to your content # server. backend default { .host = "10.122.223.11"; .port = "80"; } backend live { .host = "10.203.43.106"; .port = "80"; } backend bdad { .host = "10.242.213.188"; .port = "80"; } backend cupcake { .host = "10.196.241.236"; .port = "80"; } backend capwords { .host = "10.126.35.19"; .port = "80"; } backend committeewatch { .host = "67.207.135.137"; .port = "80"; } backend congrelate { .host = "67.207.135.137"; .port = "80"; } backend natdatcat { .host = "66.135.42.55"; .port = "80"; } backend earmarkwatch { .host = "127.0.0.1"; .port = "80"; } backend fara { .host = "67.207.135.137"; .port = "80"; } #backend default { # .host = "127.0.0.1"; # .port = "8080"; #} # # Below is a commented-out copy of the default VCL logic. If you # redefine any of these subroutines, the built-in logic will be # appended to your code. 
# sub vcl_recv { if (req.restarts == 0) { # remove req.http.X-Forwarded-For; # set req.http.X-Forwarded-For = req.http.rlnclientipaddr; if (req.http.x-forwarded-for) { set req.http.X-Forwarded-For = req.http.X-Forwarded-For ", " client.ip; } else { set req.http.X-Forwarded-For = client.ip; } } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* Non-RFC2616 or CONNECT which is weird. */ return (pipe); } if (req.request != "GET" && req.request != "HEAD") { /* We only deal with GET and HEAD by default */ return (pass); } # LOGIC TO ROUTE HOSTS # remove req.http.X-Forwarded-For; # set req.http.X-Forwarded-For = req.http.rlnclientipaddr; set req.grace = 30s; if (req.request == "GET" && req.url ~ "\.(js)") { remove req.http.Cookie; remove req.http.Authorization; return(lookup); } ## images if (req.request == "GET" && req.url ~ "\.(gif|jpg|jpeg|bmp|png|tiff|tif|ico|img|tga|wmf)$") { remove req.http.Cookie; remove req.http.Authorization; return(lookup); } ## various other content pages if (req.request == "GET" && req.url ~ "\.(css|html)$") { remove req.http.Cookie; remove req.http.Authorization; return(lookup); } if ( req.http.host ~ "^sunlightfoundation\.com") { set req.backend = default; set req.http.host = "sunlightfoundation.com"; } elsif ( req.http.host ~ "(www)\.sunlightfoundation\.com") { error 750 "http://sunlightfoundation.com"; # sunlight live } elsif ( req.http.host ~ "live\.sunlightlabs\.com") { set req.backend = live; set req.http.host = "live.sunlightlabs.com"; # this host's external ip } elsif ( req.http.host ~ "184\.73\.176\.218") { error 750 "http://sunlightfoundation.com"; # old blog links } elsif ( req.http.host ~ "blog\.sunlightfoundation\.com") { error 750 "http://sunlightfoundation.com/blog"; # SUNLGIHT CAMPAIGN AD MONITORING } elsif ( req.http.host ~ "(www\.)?sunlightcam\.(net|org)") { error 750 
"http://sunlightcam.com"; } elsif ( req.http.host ~ "www\.sunlightcam\.com") { error 750 "http://sunlightcam.com"; } elsif ( req.http.host ~ "^sunlightcam\.com") { set req.backend = cupcake; set req.http.host = "sunlightcam.com"; } elsif ( req.http.host ~ "(www\.)?campaignadmonitor\.(net|com|org)") { error 750 "http://sunlightcam.com"; # clearspending } elsif ( req.http.host ~ "(www\.)?clearspending\.(org|net|com)") { error 750 "http://sunlightfoundation.com/clearspending/"; # poligraft #} elsif ( req.http.host ~ "(www\.)?poligraft\.net") { # error 750 "http://poligraft.com"; } elsif ( req.http.host ~ "(www\.)?poligraft\.(net|org|com)") { error 750 "http://poligraft.com"; # transparencyjobs } elsif ( req.http.host ~ "(www\.)?transparencyjobs\.(net|org|com)") { error 750 "http://transparencyjobs.com"; # congrelate } elsif ( req.http.host ~ "(www\.)?congrelate\.(net|com)") { error 750 "http://congrelate.org"; } elsif ( req.http.host ~ "www\.congrelate\.org") { error 750 "http://congrelate.org"; } elsif ( req.http.host ~ "^congrelate\.org") { set req.backend = congrelate; set req.http.host = "congrelate.org"; # capitolwords } elsif ( req.http.host ~ "(www\.)?capitalwords\.(net|com)") { error 750 "http://capitolwords.org"; } elsif ( req.http.host ~ "www\.capitolwords\.org") { error 750 "http://capitolwords.org"; } elsif ( req.http.host ~ "capitolwords\.org") { set req.backend = capwords; set req.http.host = "capitolwords.org"; # sunlightmediaservices } elsif ( req.http.host ~ "(www\.)?sunlightmediaservices\.(net|org|com)") { error 750 "http://sunlightmediaservices.com"; # transparencycaucus } elsif ( req.http.host ~ "(www\.)?transparencycaucus.(net|org|com|us)") { error 750 "http://transparencycaucus.org"; # transparencycaucus } elsif ( req.http.host ~ "(www\.)?subsidyscope\.(net|org|com)") { error 750 "http://subsidyscope.org"; # oxtail } elsif ( req.http.host ~ "inbox\.influenceexplorer\.com") { error 750 "https://inbox.influenceexplorer.com"; # this gets delt w/ via 
pound # publicmarkup } elsif ( req.http.host ~ "(www\.)?publicmarkup\.(net|com)") { error 750 "http://publicmarkup.org"; # politicalpartytime } elsif ( req.http.host ~ "(www\.)?politicalpartytime\.(net|org|com)") { error 750 "http://politicalpartytime.org"; } elsif ( req.http.host ~ "partytime\.sunlightfoundation\.com") { error 750 "http://politicalpartytime.org"; # sunlightlabs } elsif ( req.http.host ~ "(www\.)?sunlightlabs\.(net|org|com)") { error 750 "http://sunlightlabs.com"; # wiki.sunlightlabs.org # XXX think we can axe this XXX } elsif ( req.http.host ~ "wiki\.sunlightlabs\.(net|org|com)") { error 750 "http://wiki.sunlightlabs.com"; # XXX think we can axe this XXX } elsif ( req.http.host ~ "blog\.sunlightlabs\.(net|org|com)") { error 750 "http://sunlightlabs.com/blog"; # publicequalsonline } elsif ( req.http.host ~ "(www\.)?publicequalsonline\.(net|org|com)") { error 750 "http://publicequalsonline.com"; } elsif ( req.http.host ~ "(www\.)?publicmeansonline\.(net|org|com)") { error 750 "http://publicequalsonline.com"; # thepoia } elsif ( req.http.host ~ "(www\.)?thepoia\.(net|org|com)") { error 750 "http://sunlightfoundation.com/policy/poia/"; # realtime.sunlightprojects } elsif ( req.http.host ~ "(www\.)?realtime.sunlightprojects\.(net|org|com)") { error 750 "http://reporting.sunlightfoundation.com/"; # benefitwiki } elsif ( req.http.host ~ "(www\.)?benefitwiki\.(net|org|com)") { error 750 "http://www.opencongress.org/wiki/Project:Benefit_Wiki"; # fara } elsif ( req.http.host ~ "(www\.)?foreignlobbying.org\.(net|org|com)") { error 750 "http://foreignlobbying.org/"; } elsif ( req.http.host ~ "fara(db)?\.sunlightfoundation\.com") { error 750 "http://foreignlobbying.org/"; # readthebill } elsif ( req.http.host ~ "(www\.)?readthebill\.(net|org|com|info)") { error 750 "http://readthebill.org"; # congresspedia XXX NEED TO FIXME w/ correct redirects # rewrite ^/[wW]iki(.*) http://www.opencongress.org/wiki$1 permanent; } elsif ( req.http.host ~ 
"(www\.)?congresspedia\.(net|org|com|info)") { error 750 "http://www.opencongress.org/wiki"; # transparencycorps } elsif ( req.http.host ~ "(www\.)?transparencycorps\.(net|org|com)") { error 750 "http://transparencycorps.org"; # pass482 } elsif ( req.http.host ~ "(www\.)?pass482\.(net|org|com)") { error 750 "http://sunlightfoundation.com/pass482/"; } elsif ( req.http.host ~ "(www\.)?pass223\.(net|org|com)") { error 750 "http://sunlightfoundation.com/pass482/"; } elsif ( req.http.host ~ "(www\.)?72hourrule\.(net|org|com)") { error 750 "http://sunlightfoundation.com/pass482/"; # fortune535 } elsif ( req.http.host ~ "(www\.)?fortune535\.(net|org|com)") { error 750 "http://sunlightfoundation.com/projects/2007/fortune535/"; } elsif ( req.http.host ~ "fortune535\.sunlightprojects\.org") { error 750 "http://sunlightfoundation.com/projects/2007/fortune535/"; # letourcongresstweet } elsif ( req.http.host ~ "(www\.)?letourcongresstweet\.(net|org|com)") { error 750 "http://www.sunlightfoundation.com/capitoltweets/"; } elsif ( req.http.host ~ "(www\.)?capitoltweets\.(net|org|com)") { error 750 "http://www.sunlightfoundation.com/capitoltweets/"; # sunlightalinazinosec } elsif ( req.http.host ~ "(www\.)?sunlightalinazinosec\.(net|org|com)") { error 750 "http://www.youtube.com/watch?v=dtiMa_xcPLY"; # TransparencyHub } elsif ( req.http.host ~ "(www\.)?TransparencyHub\.(net|org|com)") { error 750 "http://www.opencongress.org/wiki/Project:Transparency_Hub"; # m.transparencycamp.org } elsif ( req.http.host ~ "m.transparencycamp\.(net|org|com)") { error 750 "http://transparencycamp.org/mobile/"; } elsif ( req.http.host ~ "m.tcamp\.(net|org|com|us)") { error 750 "http://transparencycamp.org/mobile/"; # old api site } elsif ( req.http.host ~ "api\.sunlightlabs\.(net|org|com)") { error 750 "http://services.sunlightlabs.com/api/"; # punchclockmap } elsif ( req.http.host ~ "^punchclock(map)?\.sunlight(s|projects)\.(net|org|com)") { error 750 
"http://sunlightfoundation.com/projects/2007/punchclockmap/"; # superdelegateinfo } elsif ( req.http.host ~ "(www\.)?superdelegateinfo\.(net|org|com)") { error 750 "http://www.sourcewatch.org/index.php?title=Portal:Superdelegate_Transparency_Project"; # transparencycamp.com } elsif ( req.http.host ~ "(www\.)?transparencycamp\.(net|org|com)") { error 750 "http://transparencycamp.org"; } elsif ( req.http.host ~ "(www\.)?tcamp\.(net|org|com|us)") { error 750 "http://transparencycamp.org"; # littlesis } elsif ( req.http.host ~ "(www\.)?littlesis\.(net|org|com)") { error 750 "http://littlesis.org"; # appsforamerica } elsif ( req.http.host ~ "(www\.)?appsforamerica\.(net|org|com|us)") { error 750 "http://sunlightlabs.com/contests/appsforamerica/"; # bdad -- labs olympics 2010 } elsif ( req.http.host ~ "(www\.)?betterdrawadistrict\.(net|org|com)") { error 750 "http://betterdrawadistrict.com"; } elsif ( req.http.host ~ "www\.betterdrawadistrict\.com") { error 750 "http://betterdrawadistrict.com"; } elsif ( req.http.host ~ "betterdrawadistrict\.com") { set req.backend = bdad; set req.http.host = "http://betterdrawadistrict.com"; # committeewatch } elsif ( req.http.host ~ "(www\.)?committeewatch\.(net|com)") { error 750 "http://committeewatch.org"; } elsif ( req.http.host ~ "www\.committeewatch\.org") { error 750 "http://committeewatch.org"; } elsif ( req.http.host ~ "(www\.)?committeewatch\.org") { set req.backend = committeewatch; set req.http.host = "committeewatch.org"; # natdatcat } elsif ( req.http.host ~ "(www\.)?datacatalog\.(net|org|com|us)") { error 750 "http://nationaldatacatalog.com/"; } elsif ( req.http.host ~ "(www\.)?nationaldatacatalog\.(net|org)") { error 750 "http://nationaldatacatalog.com/"; } elsif ( req.http.host ~ "www\.nationaldatacatalog\.com") { error 750 "http://nationaldatacatalog.com/"; } elsif ( req.http.host ~ "nationaldatacatalog\.com") { set req.backend = natdatcat; set req.http.host = "nationaldatacatalog.com"; # earmarkwatch } elsif ( 
req.http.host ~ "(db|www)?(\.)?earmarkwatch\.(net|com)") { error 750 "http://earmarkwatch.org"; } elsif ( req.http.host ~ "^www\.earmarkwatch\.org") { error 750 "http://earmarkwatch.org"; } elsif ( req.http.host ~ "^earmarkwatch\.org") { set req.backend = earmarkwatch; set req.http.host = "earmarkwatch.org"; # elenasinbox } elsif ( req.http.host ~ "(www\.)?elenasinbox\.(net|org|com)") { error 750 "http://sunlightfoundation.com/blog/2010/06/25/top-25-viewed-pages-in-elenas-inbox/"; # foreignlobbying } elsif ( req.http.host ~ "(www\.)?foreignlobbying\.(net|com)") { error 750 "http://foreignlobbying.org"; } elsif ( req.http.host ~ "www\.foreignlobbying\.org") { error 750 "http://foreignlobbying.org"; } elsif ( req.http.host ~ "foreignlobbying\.org") { set req.backend = fara; set req.http.host = "foreignlobbying.org"; # sunlightlive } elsif ( req.http.host ~ "(www\.)?sunlightlive\.(net|org|com)") { error 750 "http://sunlightfoundation.com/live"; # XXXX } elsif ( req.http.host ~ "(www\.)?XXXX\.(net|org|com)") { error 750 "http://XXXX.com"; # everything else goes to foundation site } else { # greatjobbobbauer.org # greatamericanhackathon.(net|com|org) # fedsubsidywatch.(net|com|org) # fedsubsidy.(net|com|org) # fedsubsidieswatch.(net|com|org) # fedsubsidies.(net|com|org) # datatransparency.org # datajam.org # crooknotes.(net|com|org) # congresscommons.(com|org) # bearsareawesome.com # sunlightprojects.org # research.sunlightprojects.org # thesunlightfoundation.com; # news.sunlightfoundtion.com # press.sunlightfoundation.com # set req.backend = default; # set req.http.host = "sunlightfoundation.com"; # error 404 "NONE SHALL PASS! Unknown virtual host ... srykbyenow"; error 750 "http://sunlightfoundation.com"; } if (req.http.Authorization || req.http.Cookie) { /* Not cacheable by default */ return (pass); } return (lookup); } # sub vcl_pipe { # Note that only the first request to the backend will have # X-Forwarded-For set. 
If you use X-Forwarded-For and want to # have it set for all requests, make sure to have: # set bereq.http.connection = "close"; # here. It is not set by default as it might break some broken web # applications, like IIS with NTLM authentication. return (pipe); } # sub vcl_pass { return (pass); } #sub vcl_hash { # # Normally it hashes on URL and Host but we rewrite the host # # into a VirtualHostBase URL. Therefore we can hash on URL alone. # set req.hash += req.url; # # # One needs to include compression state normalised above # if (req.http.Accept-Encoding) { # set req.hash += req.http.Accept-Encoding; # } # # # Differentiate based on login cookie too # #set req.hash += req.http.cookie; # # return (hash); #} #sub vcl_hash { # set req.hash += req.url; # if (req.http.host) { # set req.hash += req.http.host; # } else { # set req.hash += server.ip; # } # return (hash); #} sub vcl_hit { if (!obj.cacheable) { return (pass); } return (deliver); } # sub vcl_miss { return (fetch); } # sub vcl_fetch { set req.grace = 30s; if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") { unset beresp.http.set-cookie; } #if ( req.url ~ "^/admin") { # return(pass); #} return(pass); } # sub vcl_fetch { # if (!beresp.cacheable) { # return (pass); # } # if (beresp.http.Set-Cookie) { # return (pass); # } # return (deliver); # } # sub vcl_deliver { return (deliver); } # ## deals w/ errors .. 750 from above means trigger this redirect sub vcl_error { if (obj.status == 750) { set obj.http.Location = obj.response; set obj.status = 301; return(deliver); } } # sub vcl_error { # set obj.http.Content-Type = "text/html; charset=utf-8"; # synthetic {" # # # # # "} obj.status " " obj.response {" # # #
# <h1>Error "} obj.status " " obj.response {"</h1>
# <p>"} obj.response {"</p>
# <h3>Guru Meditation:</h3>
# <p>XID: "} req.xid {"</p>
# <hr>
# <p>Varnish cache server</p>
# </body>
# </html>
# # # "}; # return (deliver); # } # vim:set background=dark:expandtab:shiftwidth=5:tabstop=5: -- ? ? ? ? GPG key available on pgpkeys.mit.edu pub? 1024D/511FBD54 2001-07-23 Timothy Lu Hu Ball Key fingerprint = B579 29B0 F6C8 C7AA 3840? E053 FE02 BB97 511F BD54 From timball at gmail.com Thu Apr 28 05:22:17 2011 From: timball at gmail.com (Timothy Ball) Date: Thu, 28 Apr 2011 01:22:17 -0400 Subject: HELP ! In-Reply-To: References: Message-ID: holy crap i just found what was the problem : 121 # 122 # ## various other content pages 123 # if (req.request == "GET" && req.url ~ "\.(css|html)$") { 124 # return(lookup); 125 # } 126 those lines caused the trailing / to be added to all requests ... i don't know why , if someone has an explanation that would rule , but as is i'm now getting over 90% hit rate ! boy i sure do love it when software works . --timball On Thu, Apr 28, 2011 at 12:40 AM, Timothy Ball wrote: > i can't figure out why but all the sudden whenever my varnish server > talks to one of it's configured backends it's adding a trailing '/' > after every request . none the less this is breaking many of my sites > and i'm at a total loss as to why . i've tried to comment out as much > as i can and now am at nearly the bare minimum and it's still broken . > > i've tested my backend nginx server and it's serving things correctly > but in my logs whenever varnish connects to a backend i get log > entries that look like this : > "184.73.176.218 - - [28/Apr/2011:04:33:32 +0000] ?"GET /css/main.css/ > HTTP/1.1" 404 18 "http://congrelate.org/" "Mozilla/5.0 (Macintosh; U; > Intel Mac OS X 10_6_7; en-US) AppleWebKit/534.3 (KHTML, like Gecko) > Chrome/6.0.472.33 Safari/534.3" > > not it's asking for /css/main.css/ which is totally wrong ! i don't > want to have to put some crazy logic into my nginx configs to fix this > , but i have no idea why . > > find below all of my complete vcl > > any help would be much appreciated > > --timball > > # > # default.vcl > # ? ? ? ? ? 
?--timball at sunlighfoundation.com > # > # main varnish default.vcl for > # > # Wed Apr 27 14:45:21 EDT 2011 > # $Id$ > # > > > # Default backend definition. ?Set this to point to your content > # server. > backend default { > ? ?.host = "10.122.223.11"; > ? ?.port = "80"; > } > > backend live { > ? ?.host = "10.203.43.106"; > ? ?.port = "80"; > } > > backend bdad { > ? ?.host = "10.242.213.188"; > ? ?.port = "80"; > } > > backend cupcake { > ? ?.host = "10.196.241.236"; > ? ?.port = "80"; > } > > backend capwords { > ? ?.host = "10.126.35.19"; > ? ?.port = "80"; > } > > backend committeewatch { > ? ?.host = "67.207.135.137"; > ? ?.port = "80"; > } > > backend congrelate { > ? ?.host = "67.207.135.137"; > ? ?.port = "80"; > } > > backend natdatcat { > ? ?.host = "66.135.42.55"; > ? ?.port = "80"; > } > > backend earmarkwatch { > ? ?.host = "127.0.0.1"; > ? ?.port = "80"; > } > > backend fara { > ? ?.host = "67.207.135.137"; > ? ?.port = "80"; > } > > #backend default { > # ? ?.host = "127.0.0.1"; > # ? ?.port = "8080"; > #} > # > # Below is a commented-out copy of the default VCL logic. ?If you > # redefine any of these subroutines, the built-in logic will be > # appended to your code. > # > sub vcl_recv { > > ? ?if (req.restarts == 0) { > ? ? ? ?# remove req.http.X-Forwarded-For; > ? ? ? ?# set ? ?req.http.X-Forwarded-For = req.http.rlnclientipaddr; > > ? ? ? ?if (req.http.x-forwarded-for) { > ? ? ? ? ? ?set req.http.X-Forwarded-For = > ? ? ? ? ? ?req.http.X-Forwarded-For ", " client.ip; > ? ? ? ?} else { > ? ? ? ? ? ?set req.http.X-Forwarded-For = client.ip; > ? ? ? ?} > ? ?} > ? ?if (req.request != "GET" ? ? && > ? ? ? ?req.request != "HEAD" ? ?&& > ? ? ? ?req.request != "PUT" ? ? && > ? ? ? ?req.request != "POST" ? ?&& > ? ? ? ?req.request != "TRACE" ? && > ? ? ? ?req.request != "OPTIONS" && > ? ? ? ?req.request != "DELETE") { > ? ? ? ?/* Non-RFC2616 or CONNECT which is weird. */ > ? ? ? ? ? ?return (pipe); > ? ?} > ? 
?if (req.request != "GET" && req.request != "HEAD") { > ? ? ? ?/* We only deal with GET and HEAD by default */ > ? ? ? ?return (pass); > ? ?} > ? ? # LOGIC TO ROUTE HOSTS > ? ? # remove req.http.X-Forwarded-For; > ? ? # set ? ?req.http.X-Forwarded-For = req.http.rlnclientipaddr; > ? ? set req.grace = 30s; > > ? ? if (req.request == "GET" && req.url ~ "\.(js)") { > ? ? ? ?remove req.http.Cookie; > ? ? ? ?remove req.http.Authorization; > ? ? ? ?return(lookup); > ? ? } > > ? ? ## images > ? ? if (req.request == "GET" && req.url ~ > "\.(gif|jpg|jpeg|bmp|png|tiff|tif|ico|img|tga|wmf)$") { > ? ? ? ?remove req.http.Cookie; > ? ? ? ?remove req.http.Authorization; > ? ? ? ?return(lookup); > ? ? } > > ? ? ## various other content pages > ? ? if (req.request == "GET" && req.url ~ "\.(css|html)$") { > ? ? ? ?remove req.http.Cookie; > ? ? ? ?remove req.http.Authorization; > ? ? ? ?return(lookup); > ? ? } > > ? ? if ( req.http.host ~ "^sunlightfoundation\.com") { > ? ? ? ?set req.backend = default; > ? ? ? ?set req.http.host = "sunlightfoundation.com"; > ? ? } elsif ( req.http.host ~ "(www)\.sunlightfoundation\.com") { > ? ? ? ?error 750 "http://sunlightfoundation.com"; > > ? ? # sunlight live > ? ? } elsif ( req.http.host ~ "live\.sunlightlabs\.com") { > ? ? ? ?set req.backend = live; > ? ? ? ?set req.http.host = "live.sunlightlabs.com"; > > ? ? # this host's external ip > ? ? } elsif ( req.http.host ~ "184\.73\.176\.218") { > ? ? ? ?error 750 "http://sunlightfoundation.com"; > > ? ? # old blog links > ? ? } elsif ( req.http.host ~ "blog\.sunlightfoundation\.com") { > ? ? ? ?error 750 "http://sunlightfoundation.com/blog"; > > ? ? # SUNLGIHT CAMPAIGN AD MONITORING > ? ? } elsif ( req.http.host ~ "(www\.)?sunlightcam\.(net|org)") { > ? ? ? ?error 750 "http://sunlightcam.com"; > ? ? } elsif ( req.http.host ~ "www\.sunlightcam\.com") { > ? ? ? ?error 750 "http://sunlightcam.com"; > ? ? } elsif ( req.http.host ~ "^sunlightcam\.com") { > ? ? ? ?set req.backend = cupcake; > ? ? ? 
?set req.http.host = "sunlightcam.com"; > > ? ? } elsif ( req.http.host ~ "(www\.)?campaignadmonitor\.(net|com|org)") { > ? ? ? ?error 750 "http://sunlightcam.com"; > > ? ? # clearspending > ? ? } elsif ( req.http.host ~ "(www\.)?clearspending\.(org|net|com)") { > ? ? ? ?error 750 "http://sunlightfoundation.com/clearspending/"; > > ? ? # poligraft > ? ? #} elsif ( req.http.host ~ "(www\.)?poligraft\.net") { > ? ? # ? error 750 "http://poligraft.com"; > ? ? } elsif ( req.http.host ~ "(www\.)?poligraft\.(net|org|com)") { > ? ? ? ?error 750 "http://poligraft.com"; > > ? ? # transparencyjobs > ? ? } elsif ( req.http.host ~ "(www\.)?transparencyjobs\.(net|org|com)") { > ? ? ? ?error 750 "http://transparencyjobs.com"; > > ? ? # congrelate > ? ? } elsif ( req.http.host ~ "(www\.)?congrelate\.(net|com)") { > ? ? ? ?error 750 "http://congrelate.org"; > ? ? } elsif ( req.http.host ~ "www\.congrelate\.org") { > ? ? ? ?error 750 "http://congrelate.org"; > ? ? } elsif ( req.http.host ~ "^congrelate\.org") { > ? ? ? ?set req.backend = congrelate; > ? ? ? ?set req.http.host = "congrelate.org"; > > ? ? # capitolwords > ? ? } elsif ( req.http.host ~ "(www\.)?capitalwords\.(net|com)") { > ? ? ? ?error 750 "http://capitolwords.org"; > ? ? } elsif ( req.http.host ~ "www\.capitolwords\.org") { > ? ? ? ?error 750 "http://capitolwords.org"; > ? ? } elsif ( req.http.host ~ "capitolwords\.org") { > ? ? ? ?set req.backend = capwords; > ? ? ? ?set req.http.host = "capitolwords.org"; > > ? ? # sunlightmediaservices > ? ? } elsif ( req.http.host ~ "(www\.)?sunlightmediaservices\.(net|org|com)") { > ? ? ? ?error 750 "http://sunlightmediaservices.com"; > > ? ? # transparencycaucus > ? ? } elsif ( req.http.host ~ "(www\.)?transparencycaucus.(net|org|com|us)") { > ? ? ? ?error 750 "http://transparencycaucus.org"; > > ? ? # transparencycaucus > ? ? } elsif ( req.http.host ~ "(www\.)?subsidyscope\.(net|org|com)") { > ? ? ? ?error 750 "http://subsidyscope.org"; > > ? ? # oxtail > ? ? 
} elsif ( req.http.host ~ "inbox\.influenceexplorer\.com") { > ? ? ? ?error 750 "https://inbox.influenceexplorer.com"; > ? ? # this gets delt w/ via pound > > ? ? # publicmarkup > ? ? } elsif ( req.http.host ~ "(www\.)?publicmarkup\.(net|com)") { > ? ? ? ?error 750 "http://publicmarkup.org"; > > ? ? # politicalpartytime > ? ? } elsif ( req.http.host ~ "(www\.)?politicalpartytime\.(net|org|com)") { > ? ? ? ?error 750 "http://politicalpartytime.org"; > ? ? } elsif ( req.http.host ~ "partytime\.sunlightfoundation\.com") { > ? ? ? ?error 750 "http://politicalpartytime.org"; > > ? ? # sunlightlabs > ? ? } elsif ( req.http.host ~ "(www\.)?sunlightlabs\.(net|org|com)") { > ? ? ? ?error 750 "http://sunlightlabs.com"; > ? ? # wiki.sunlightlabs.org > ? ? # XXX think we can axe this XXX > ? ? } elsif ( req.http.host ~ "wiki\.sunlightlabs\.(net|org|com)") { > ? ? ? ?error 750 "http://wiki.sunlightlabs.com"; > ? ? # XXX think we can axe this XXX > ? ? } elsif ( req.http.host ~ "blog\.sunlightlabs\.(net|org|com)") { > ? ? ? ?error 750 "http://sunlightlabs.com/blog"; > > ? ? # publicequalsonline > ? ? } elsif ( req.http.host ~ "(www\.)?publicequalsonline\.(net|org|com)") { > ? ? ? ?error 750 "http://publicequalsonline.com"; > ? ? } elsif ( req.http.host ~ "(www\.)?publicmeansonline\.(net|org|com)") { > ? ? ? ?error 750 "http://publicequalsonline.com"; > > ? ? # thepoia > ? ? } elsif ( req.http.host ~ "(www\.)?thepoia\.(net|org|com)") { > ? ? ? ?error 750 "http://sunlightfoundation.com/policy/poia/"; > > ? ? # realtime.sunlightprojects > ? ? } elsif ( req.http.host ~ > "(www\.)?realtime.sunlightprojects\.(net|org|com)") { > ? ? ? ?error 750 "http://reporting.sunlightfoundation.com/"; > > ? ? # benefitwiki > ? ? } elsif ( req.http.host ~ "(www\.)?benefitwiki\.(net|org|com)") { > ? ? ? ?error 750 "http://www.opencongress.org/wiki/Project:Benefit_Wiki"; > > ? ? # fara > ? ? } elsif ( req.http.host ~ "(www\.)?foreignlobbying.org\.(net|org|com)") { > ? ? ? 
?error 750 "http://foreignlobbying.org/"; > ? ? } elsif ( req.http.host ~ "fara(db)?\.sunlightfoundation\.com") { > ? ? ? ?error 750 "http://foreignlobbying.org/"; > > ? ? # readthebill > ? ? } elsif ( req.http.host ~ "(www\.)?readthebill\.(net|org|com|info)") { > ? ? ? ?error 750 "http://readthebill.org"; > > ? ? # congresspedia XXX NEED TO FIXME w/ correct redirects > ? ? # rewrite ^/[wW]iki(.*) http://www.opencongress.org/wiki$1 permanent; > ? ? } elsif ( req.http.host ~ "(www\.)?congresspedia\.(net|org|com|info)") { > ? ? ? ?error 750 "http://www.opencongress.org/wiki"; > > ? ? # transparencycorps > ? ? } elsif ( req.http.host ~ "(www\.)?transparencycorps\.(net|org|com)") { > ? ? ? ?error 750 "http://transparencycorps.org"; > > ? ? # pass482 > ? ? } elsif ( req.http.host ~ "(www\.)?pass482\.(net|org|com)") { > ? ? ? ?error 750 "http://sunlightfoundation.com/pass482/"; > ? ? } elsif ( req.http.host ~ "(www\.)?pass223\.(net|org|com)") { > ? ? ? ?error 750 "http://sunlightfoundation.com/pass482/"; > ? ? } elsif ( req.http.host ~ "(www\.)?72hourrule\.(net|org|com)") { > ? ? ? ?error 750 "http://sunlightfoundation.com/pass482/"; > > ? ? # fortune535 > ? ? } elsif ( req.http.host ~ "(www\.)?fortune535\.(net|org|com)") { > ? ? ? ?error 750 "http://sunlightfoundation.com/projects/2007/fortune535/"; > ? ? } elsif ( req.http.host ~ "fortune535\.sunlightprojects\.org") { > ? ? ? ?error 750 "http://sunlightfoundation.com/projects/2007/fortune535/"; > > ? ? # letourcongresstweet > ? ? } elsif ( req.http.host ~ "(www\.)?letourcongresstweet\.(net|org|com)") { > ? ? ? ?error 750 "http://www.sunlightfoundation.com/capitoltweets/"; > ? ? } elsif ( req.http.host ~ "(www\.)?capitoltweets\.(net|org|com)") { > ? ? ? ?error 750 "http://www.sunlightfoundation.com/capitoltweets/"; > > ? ? # sunlightalinazinosec > ? ? } elsif ( req.http.host ~ "(www\.)?sunlightalinazinosec\.(net|org|com)") { > ? ? ? ?error 750 "http://www.youtube.com/watch?v=dtiMa_xcPLY"; > > ? ? # TransparencyHub > ? ? 
} elsif ( req.http.host ~ "(www\.)?TransparencyHub\.(net|org|com)") { > ? ? ? ?error 750 "http://www.opencongress.org/wiki/Project:Transparency_Hub"; > > ? ? # m.transparencycamp.org > ? ? } elsif ( req.http.host ~ "m.transparencycamp\.(net|org|com)") { > ? ? ? ?error 750 "http://transparencycamp.org/mobile/"; > ? ? } elsif ( req.http.host ~ "m.tcamp\.(net|org|com|us)") { > ? ? ? ?error 750 "http://transparencycamp.org/mobile/"; > > ? ? # old api site > ? ? } elsif ( req.http.host ~ "api\.sunlightlabs\.(net|org|com)") { > ? ? ? ?error 750 "http://services.sunlightlabs.com/api/"; > > ? ? # punchclockmap > ? ? } elsif ( req.http.host ~ > "^punchclock(map)?\.sunlight(s|projects)\.(net|org|com)") { > ? ? ? ?error 750 "http://sunlightfoundation.com/projects/2007/punchclockmap/"; > > ? ? # superdelegateinfo > ? ? } elsif ( req.http.host ~ "(www\.)?superdelegateinfo\.(net|org|com)") { > ? ? ? ?error 750 > "http://www.sourcewatch.org/index.php?title=Portal:Superdelegate_Transparency_Project"; > > ? ? # transparencycamp.com > ? ? } elsif ( req.http.host ~ "(www\.)?transparencycamp\.(net|org|com)") { > ? ? ? ?error 750 "http://transparencycamp.org"; > ? ? } elsif ( req.http.host ~ "(www\.)?tcamp\.(net|org|com|us)") { > ? ? ? ?error 750 "http://transparencycamp.org"; > > ? ? # littlesis > ? ? } elsif ( req.http.host ~ "(www\.)?littlesis\.(net|org|com)") { > ? ? ? ?error 750 "http://littlesis.org"; > > ? ? # appsforamerica > ? ? } elsif ( req.http.host ~ "(www\.)?appsforamerica\.(net|org|com|us)") { > ? ? ? ?error 750 "http://sunlightlabs.com/contests/appsforamerica/"; > > ? ? # bdad -- labs olympics 2010 > ? ? } elsif ( req.http.host ~ "(www\.)?betterdrawadistrict\.(net|org|com)") { > ? ? ? ?error 750 "http://betterdrawadistrict.com"; > ? ? } elsif ( req.http.host ~ "www\.betterdrawadistrict\.com") { > ? ? ? ?error 750 "http://betterdrawadistrict.com"; > ? ? } elsif ( req.http.host ~ "betterdrawadistrict\.com") { > ? ? ? ?set req.backend = bdad; > ? ? ? 
?set req.http.host = "http://betterdrawadistrict.com"; > > ? ? # committeewatch > ? ? } elsif ( req.http.host ~ "(www\.)?committeewatch\.(net|com)") { > ? ? ? ?error 750 "http://committeewatch.org"; > ? ? } elsif ( req.http.host ~ "www\.committeewatch\.org") { > ? ? ? ?error 750 "http://committeewatch.org"; > ? ? } elsif ( req.http.host ~ "(www\.)?committeewatch\.org") { > ? ? ? ?set req.backend = committeewatch; > ? ? ? ?set req.http.host = "committeewatch.org"; > > ? ? # natdatcat > ? ? } elsif ( req.http.host ~ "(www\.)?datacatalog\.(net|org|com|us)") { > ? ? ? ?error 750 "http://nationaldatacatalog.com/"; > ? ? } elsif ( req.http.host ~ "(www\.)?nationaldatacatalog\.(net|org)") { > ? ? ? ?error 750 "http://nationaldatacatalog.com/"; > ? ? } elsif ( req.http.host ~ "www\.nationaldatacatalog\.com") { > ? ? ? ?error 750 "http://nationaldatacatalog.com/"; > ? ? } elsif ( req.http.host ~ "nationaldatacatalog\.com") { > ? ? ? ?set req.backend = natdatcat; > ? ? ? ?set req.http.host = "nationaldatacatalog.com"; > > ? ? # earmarkwatch > ? ? } elsif ( req.http.host ~ "(db|www)?(\.)?earmarkwatch\.(net|com)") { > ? ? ? ?error 750 "http://earmarkwatch.org"; > ? ? } elsif ( req.http.host ~ "^www\.earmarkwatch\.org") { > ? ? ? ?error 750 "http://earmarkwatch.org"; > ? ? } elsif ( req.http.host ~ "^earmarkwatch\.org") { > ? ? ? ?set req.backend = earmarkwatch; > ? ? ? ?set req.http.host = "earmarkwatch.org"; > > ? ? # elenasinbox > ? ? } elsif ( req.http.host ~ "(www\.)?elenasinbox\.(net|org|com)") { > ? ? ? ?error 750 > "http://sunlightfoundation.com/blog/2010/06/25/top-25-viewed-pages-in-elenas-inbox/"; > > ? ? # foreignlobbying > ? ? } elsif ( req.http.host ~ "(www\.)?foreignlobbying\.(net|com)") { > ? ? ? ?error 750 "http://foreignlobbying.org"; > ? ? } elsif ( req.http.host ~ "www\.foreignlobbying\.org") { > ? ? ? ?error 750 "http://foreignlobbying.org"; > ? ? } elsif ( req.http.host ~ "foreignlobbying\.org") { > ? ? ? ?set req.backend = fara; > ? ? ? 
?set req.http.host = "foreignlobbying.org"; > > ? ? # sunlightlive > ? ? } elsif ( req.http.host ~ "(www\.)?sunlightlive\.(net|org|com)") { > ? ? ? ?error 750 "http://sunlightfoundation.com/live"; > > > ? ? # XXXX > ? ? } elsif ( req.http.host ~ "(www\.)?XXXX\.(net|org|com)") { > ? ? ? ?error 750 "http://XXXX.com"; > > ? ? # everything else goes to foundation site > ? ? } else { > ? ? ? ?# greatjobbobbauer.org > ? ? ? ?# greatamericanhackathon.(net|com|org) > ? ? ? ?# fedsubsidywatch.(net|com|org) > ? ? ? ?# fedsubsidy.(net|com|org) > ? ? ? ?# fedsubsidieswatch.(net|com|org) > ? ? ? ?# fedsubsidies.(net|com|org) > ? ? ? ?# datatransparency.org > ? ? ? ?# datajam.org > ? ? ? ?# crooknotes.(net|com|org) > ? ? ? ?# congresscommons.(com|org) > ? ? ? ?# bearsareawesome.com > ? ? ? ?# sunlightprojects.org > ? ? ? ?# research.sunlightprojects.org > ? ? ? ?# thesunlightfoundation.com; > ? ? ? ?# news.sunlightfoundtion.com > ? ? ? ?# press.sunlightfoundation.com > > ? ? ? ?# set req.backend = default; > ? ? ? ?# set req.http.host = "sunlightfoundation.com"; > ? ? ? ?# error 404 "NONE SHALL PASS! Unknown virtual host ... srykbyenow"; > ? ? ? ?error 750 "http://sunlightfoundation.com"; > ? ? } > > ? ?if (req.http.Authorization || req.http.Cookie) { > ? ? ? ?/* Not cacheable by default */ > ? ? ? ?return (pass); > ? ?} > ? ?return (lookup); > } > # > sub vcl_pipe { > ? ?# Note that only the first request to the backend will have > ? ?# X-Forwarded-For set. ?If you use X-Forwarded-For and want to > ? ?# have it set for all requests, make sure to have: > ? ?# set bereq.http.connection = "close"; > ? ?# here. ?It is not set by default as it might break some broken web > ? ?# applications, like IIS with NTLM authentication. > ? ?return (pipe); > } > # > sub vcl_pass { > ? ?return (pass); > } > > #sub vcl_hash { > # ? ?# Normally it hashes on URL and Host but we rewrite the host > # ? ?# into a VirtualHostBase URL. Therefore we can hash on URL alone. > # ? 
?set req.hash += req.url; > # > # ? ?# One needs to include compression state normalised above > # ? ?if (req.http.Accept-Encoding) { > # ? ? ? ?set req.hash += req.http.Accept-Encoding; > # ? ?} > # > # ? ?# Differentiate based on login cookie too > # ? ?#set req.hash += req.http.cookie; > # > # ? ?return (hash); > #} > #sub vcl_hash { > # ? ?set req.hash += req.url; > # ? ?if (req.http.host) { > # ? ? ? ?set req.hash += req.http.host; > # ? ?} else { > # ? ? ? ?set req.hash += server.ip; > # ? ?} > # ? ?return (hash); > #} > > sub vcl_hit { > ? ?if (!obj.cacheable) { > ? ? ? ?return (pass); > ? ?} > ? ?return (deliver); > } > # > sub vcl_miss { > ? ?return (fetch); > } > # > sub vcl_fetch { > ? ? ? ?set req.grace = 30s; > ? ? ? ?if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") { > ? ? ? ? ? ?unset beresp.http.set-cookie; > ? ? ? ?} > > ? ?#if ( req.url ~ "^/admin") { > ? ?# ? return(pass); > ? ?#} > ? ?return(pass); > } > # sub vcl_fetch { > # ? ? if (!beresp.cacheable) { > # ? ? ? ? return (pass); > # ? ? } > # ? ? if (beresp.http.Set-Cookie) { > # ? ? ? ? return (pass); > # ? ? } > # ? ? return (deliver); > # } > # > sub vcl_deliver { > ? ?return (deliver); > } > # > > ## deals w/ errors .. 750 from above means trigger this redirect > sub vcl_error { > ? ? ?if (obj.status == 750) { > ? ? ? ? ?set obj.http.Location = obj.response; > ? ? ? ? ?set obj.status = 301; > ? ? ? ? ?return(deliver); > ? ? ?} > } > > # sub vcl_error { > # ? ? set obj.http.Content-Type = "text/html; charset=utf-8"; > # ? ? synthetic {" > # > # # ?"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> > # > # ? > # ? ? "} obj.status " " obj.response {" > # ? > # ? > # ? ?

> #     <h1>Error "} obj.status " " obj.response {"</h1>
> #     <p>"} obj.response {"</p>
> #     <h3>Guru Meditation:</h3>
> #     <p>XID: "} req.xid {"</p>
> #     <hr>
> #     <p>Varnish cache server</p>
> #   </body>
> # </html>
> # ? > # > # "}; > # ? ? return (deliver); > # } > > # vim:set background=dark:expandtab:shiftwidth=5:tabstop=5: > > -- > ? ? ? ? GPG key available on pgpkeys.mit.edu > pub? 1024D/511FBD54 2001-07-23 Timothy Lu Hu Ball > Key fingerprint = B579 29B0 F6C8 C7AA 3840? E053 FE02 BB97 511F BD54 > -- ? ? ? ? GPG key available on pgpkeys.mit.edu pub? 1024D/511FBD54 2001-07-23 Timothy Lu Hu Ball Key fingerprint = B579 29B0 F6C8 C7AA 3840? E053 FE02 BB97 511F BD54 From adam at dberg.org Thu Apr 28 02:03:57 2011 From: adam at dberg.org (Adam Denenberg) Date: Wed, 27 Apr 2011 22:03:57 -0400 Subject: Varnish still 503ing after adding grace to VCL In-Reply-To: References: Message-ID: Just re-surfacing an old thread. Could you do this another way by creating a director of both the real backend and "failapp". Then, when you restart you should be out of backend choices, no ? On Tue, Mar 8, 2011 at 7:23 PM, Drew Smathers wrote: > On Tue, Mar 8, 2011 at 3:51 PM, Per Buer > wrote: > > Hi Drew, list. > > On Tue, Mar 8, 2011 at 9:34 PM, Drew Smathers > > wrote: > >> > >> Sorry to bump my own thread, but does anyone know of a way to set > >> saintmode if a backend is down, vs. up and misbehaving (returning 500, > >> etc)? > >> > >> Also, I added a backend probe and this indeed caused grace to kick in > >> once the probe determined the backend as sick.I think the docs should > >> be clarified if this isn't a bug (grace not working without probe): > >> > >> > http://www.varnish-cache.org/docs/2.1/tutorial/handling_misbehaving_servers.html#tutorial-handling-misbehaving-servers > > > > Check out the trunk version of the docs. Committed some earlier today. > > > > Thanks, I see a lot is getting > > >> > >> Finally it's somewhat disconcerting that in the interim between a > >> cache expiry and before varnish determines a backend as down (sick) it > >> will 503 - so this could affect many clients during that window. 
> >> Ideally, I'd like to successfully service requests if there's an
> >> object in the cache - period - but I guess this isn't possible now
> >> with varnish?
> >
> > Actually it is. In the docs there is a somewhat dirty trick where you set a
> > marker in vcl_error, restart, and pick up on the error and switch backend to
> > one that is permanently down. Grace kicks in and serves the stale content.
> > Sometime post 3.0 there will be a refactoring of the whole vcl_error
> > handling and we'll end up with something a bit more elegant.
>
> Well, a dirty trick is good enough if it makes a paying customer for me. :P
>
> This is working perfectly now. I would suggest giving an example of the
> "magic marker" mentioned in the document that describes the trick
> (http://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html).
> Here's a stripped-down version of my VCL incorporating the trick:
>
> backend webapp {
>     .host = "127.0.0.1";
>     .port = "8000";
>     .probe = {
>         .url = "/hello/";
>         .interval = 5s;
>         .timeout = 1s;
>         .window = 5;
>         .threshold = 3;
>     }
> }
>
> /* A backend that will always fail. */
> backend failapp {
>     .host = "127.0.0.1";
>     .port = "9000";
>     .probe = {
>         .url = "/hello/";
>         .interval = 12h;
>         .timeout = 1s;
>         .window = 1;
>         .threshold = 1;
>     }
> }
>
> sub vcl_recv {
>
>     if (req.http.X-Varnish-Error == "1") {
>         set req.backend = failapp;
>         unset req.http.X-Varnish-Error;
>     } else {
>         set req.backend = webapp;
>     }
>
>     if (!req.backend.healthy) {
>         set req.grace = 24h;
>     } else {
>         set req.grace = 1m;
>     }
> }
>
> sub vcl_error {
>     if (req.http.X-Varnish-Error != "1") {
>         set req.http.X-Varnish-Error = "1";
>         return (restart);
>     }
> }
>
> sub vcl_fetch {
>     set beresp.grace = 24h;
> }
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rrauenza at gmail.com Fri Apr 29 01:34:11 2011
From: rrauenza at gmail.com (Rich Rauenzahn)
Date: Thu, 28 Apr 2011 18:34:11 -0700
Subject: varnish not bundling simultaneous requests?
Message-ID:

We're thinking of switching from squid primarily because squid 3.x
doesn't bundle simultaneous requests into a single backend request. We
use squid as a cache for accelerating the distribution of large files
via http off of NFS. We have a central one at corp, and then leaf ones
at remote sites. The lack of bundling is killing our performance: when
these files are made available, many automated processes start
simultaneously and begin consuming the new files. Some sites also try
to use axel to accelerate the download, which means the file doesn't
get cached in squid.

I just ran a test where I requested (via curl) the same resource in
parallel from varnish, and I got multiple backend requests (I checked
my apache log).

Could it be that there is a small race condition that is possible to
hit such that two backend requests get initiated if they are close
enough in time? (My test was a curl ... & curl ... & -- if I do it by
hand, which is slower, they seem to get bundled.)

I'm hoping it is just something subtle I need to adjust in my vcl?
backend download1 {
    .host = "download1";
    .port = "80";
}

sub vcl_fetch {
    set beresp.ttl = 1d;
    set beresp.grace = 1h;
    set beresp.keep = 365d;
}

sub vcl_recv {
    set req.ttl = 1d;
    set req.grace = 1h;
    set req.keep = 365d;
}

sub vcl_hit {
    set obj.ttl = 1d;
    set obj.grace = 1h;
    set obj.keep = 365d;
}

From straightflush at gmail.com Fri Apr 29 02:20:39 2011
From: straightflush at gmail.com (AD)
Date: Thu, 28 Apr 2011 22:20:39 -0400
Subject: replace URL in req.hash
Message-ID:

Hello,

Is there a way to define the host that gets used in req.hash without
modifying req.http.host in vcl_recv? I have a situation where I have
several hostnames that use the same backend, but I don't want to force
req.http.host to change in order to accomplish this. Is there a way to
remove req.http.host from the hash and let me define the host to be
the "cache key"?

Cheers
AD
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kristian at varnish-software.com Fri Apr 29 08:31:23 2011
From: kristian at varnish-software.com (Kristian Lyngstol)
Date: Fri, 29 Apr 2011 10:31:23 +0200
Subject: varnish not bundling simultaneous requests?
In-Reply-To:
References:
Message-ID: <20110429083123.GA2862@localhost.localdomain>

On Thu, Apr 28, 2011 at 06:34:11PM -0700, Rich Rauenzahn wrote:
> I just ran a test where I requested (via curl) the same resource in
> parallel from varnish, and I got multiple backend requests (I checked
> my apache log).
>
> Could it be that there is a small race condition that is possible to
> hit such that two backend requests get initiated if they are close
> enough in time? (my test was a curl ... & curl ... & -- if I do it by
> hand, which is slower, they seem to get bundled.)

That sounds unlikely. It's more likely that the object isn't being
cached at all. Can you provide varnishlog output from a test like that?

Also, setting obj.ttl to 1d in vcl_hit means never expiring objects, as
you reset the ttl every time they are accessed...
So the only way they are expired is if they aren't requested for a day. - Kristian From rrauenza at gmail.com Fri Apr 29 14:20:47 2011 From: rrauenza at gmail.com (Rich Rauenzahn) Date: Fri, 29 Apr 2011 07:20:47 -0700 Subject: varnish not bundling simultaneous requests? In-Reply-To: <20110429083123.GA2862@localhost.localdomain> References: <20110429083123.GA2862@localhost.localdomain> Message-ID: On Fri, Apr 29, 2011 at 1:31 AM, Kristian Lyngstol wrote: > On Thu, Apr 28, 2011 at 06:34:11PM -0700, Rich Rauenzahn wrote: >> I just ran a test where I requested (via curl) the same resource in >> parallel from varnish, and I got multiple backend requests (I checked >> my apache log). >> >> Could it be that there is a small race condition that is possible to >> hit such that two backend requests get initiated if they are close >> enough in time? ?(my test was a curl ... & curl ... & -- if I do it by >> hand, which is slower, they seem to get bundled.) > > That sounds unlikely. It's more likely that the object isn't being > cached at all. Can you provide varnishlog output from a test like that? Sure... although I've had no luck so far getting output from it. On the other hand I have two versions of varnish installed (the latest official release, and the latest git) -- perhaps I was using the wrong varnishlog executable. > Also, setting obj.ttl to 1d in vcl_hit means never expiring objects, as > you reset the ttl every time they are accessed... So the only way they > are expired is if they aren't requested for a day. Ah, yes -- thanks -- I see what you mean. I need to study the vcl_* subs in more detail. What I want is to have squid behavior as much as possible -- always check if-modified-since at the end of the ttl and never throw the object out due to age (rather than LRU) unless it has changed on the backend (which should rarely happen in our environment) or unless it has been deleted (this is more likely in our environment.) 
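[Editorial note: Kristian's warning about resetting obj.ttl on every hit is worth pinning down with a sketch. The idea is to set lifetimes once in vcl_fetch, when the object enters the cache, and to leave obj.ttl alone in vcl_hit. Varnish 2.x syntax as used elsewhere in this thread; the values are illustrative, not a recommendation.]

```vcl
# Set the object's lifetime once, at fetch time.
sub vcl_fetch {
    set beresp.ttl   = 1d;   # normal lifetime
    set beresp.grace = 1h;   # how long a stale copy may still be served
}

# Do NOT set obj.ttl here: doing so on every cache hit pushes the
# expiry forward each time, so a frequently requested object never
# expires and never triggers a backend revalidation.
sub vcl_hit {
    return (deliver);
}
```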
quick unrelated question -- I saw the RFC for streaming fetch/pass (or something named like that) -- is that implemented yet, or does the client still need to wait for the object to get cached? Rich From howardr at gmail.com Fri Apr 29 16:26:26 2011 From: howardr at gmail.com (Howard Rauscher) Date: Fri, 29 Apr 2011 11:26:26 -0500 Subject: JSONP cache busting varnish Message-ID: My company had an interactive widget posted on the homepage of a very high traffic media company during the Royal Wedding. We were hosting most of the widget on Amazon S3, but every 5 seconds we polled our server via JSONP to get updated stats about the event. A couple of weeks ago we realized that JSONP effectively cache busts varnish. I searched around the internet and found a novel way to use our varnish server to mitigate this situation. Here is an example of our VCL: https://gist.github.com/f1d91b64acdb3f1d7769 At around 5:00am US CST (maybe earlier), about a 1/3 of our api through varnish returned 503's. We ended up just removing the relevant JSONP solution and let varnish to be cache busted. Now that the situation has cooled off a little, I was wondering if I could get some advice on this config. Is this a good solution to the JSONP problem? Our setup: - 6 EC2 servers (not sure which setup) - Each server had one varnish and one webapp - 7188 megs of ram each - 2 GB allocated to varnish - ~1500 req/s - 99.2% hit rate - And example backend success were ~600/s and backend failures were ~300/sec An example url would be: /username/streamname.json?jsonp=callback8&items=item1,item2 The 503 response would contained text "Guru Meditation:XID: 1518618356". Not sure what that means though. thanks, Howard -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From howardr at gmail.com Fri Apr 29 16:29:30 2011 From: howardr at gmail.com (Howard Rauscher) Date: Fri, 29 Apr 2011 11:29:30 -0500 Subject: JSONP cache busting varnish In-Reply-To: References: Message-ID: Using varnish-2.1.5 SVN On Fri, Apr 29, 2011 at 11:26 AM, Howard Rauscher wrote: > My company had an interactive widget posted on the homepage of a very high > traffic media company during the Royal Wedding. We were hosting most of the > widget on Amazon S3, but every 5 seconds we polled our server via JSONP to > get updated stats about the event. > > A couple of weeks ago we realized that JSONP effectively cache busts > varnish. I searched around the internet and found a novel way to use our > varnish server to mitigate this situation. > > Here is an example of our VCL: > https://gist.github.com/f1d91b64acdb3f1d7769 > > At around 5:00am US CST (maybe earlier), about a 1/3 of our api through > varnish returned 503's. We ended up just removing the relevant JSONP > solution and let varnish to be cache busted. Now that the situation has > cooled off a little, I was wondering if I could get some advice on this > config. > > Is this a good solution to the JSONP problem? > > Our setup: > > - 6 EC2 servers (not sure which setup) > - Each server had one varnish and one webapp > - 7188 megs of ram each > - 2 GB allocated to varnish > - ~1500 req/s > - 99.2% hit rate > - And example backend success were ~600/s and backend failures were > ~300/sec > > An example url would be: > /username/streamname.json?jsonp=callback8&items=item1,item2 > > The 503 response would contained text "Guru Meditation:XID: 1518618356". > Not sure what that means though. > > thanks, > Howard > -------------- next part -------------- An HTML attachment was scrubbed... 
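[Editorial note: one common way to keep a rotating JSONP callback parameter from busting the cache, in the spirit of what Howard describes (his gist is not reproduced here, so this is a generic sketch rather than his config): normalize the URL in vcl_recv so every callback variant hashes to the same object. The parameter name `jsonp` is taken from his example URL; this only helps if the backend can serve a response that doesn't embed the per-request callback name.]

```vcl
sub vcl_recv {
    # Strip the ever-changing jsonp=callbackN parameter so that
    # /user/stream.json?jsonp=callback8&items=a and
    # /user/stream.json?jsonp=callback9&items=a become one cache object.
    if (req.url ~ "[?&]jsonp=") {
        set req.url = regsuball(req.url, "[?&]jsonp=[^&]*", "");
        # If the query string lost its leading "?", restore it.
        if (req.url ~ "^[^?]*&") {
            set req.url = regsub(req.url, "&", "?");
        }
    }
}
```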
URL: From ed at themesforge.com Fri Apr 29 23:14:14 2011 From: ed at themesforge.com (Ed Bloom) Date: Sat, 30 Apr 2011 00:14:14 +0100 Subject: Not sure I've got varnish configured correctly Message-ID: Hi folks, Hope I'm on the right list. I have Varnish setup and running correctly with a good hitrate (i think) 1011 Cache hits - 13 misses I've setup a test Linode VPS configured as follows: Ubuntu 10.10 512MB RAM PHP-FPM NGINX 1.0 (8080) Varnish (80) I've got WordPress running the latest W3 Total Cache development version with Page Caching (enhanced) working correctly. My default.vcl is posted below. I've assigned 128MB memory to Varnish. Prior to running Varnish I was getting about 40 reqs/sec running the follow bench test ab -n 1000 -c 100 http://staging2.themesforge.com/ Post Varnish install I'm getting the same or slightly less. I've tried the bench from a few different servers and getting similar results. Has anyone any ideas where I might be going wrong? Thanks, Ed default.vcl backend default { .host = "127.0.0.1"; .port = "8080"; } acl purge { "127.0.0.1"; } sub vcl_recv { # Add a unique header containing the client address remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Let's make sure we aren't compressing already compressed formats. 
if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|mp3|mp4|m4v)(\?.*|)$") { remove req.http.Accept-Encoding; } elsif (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } elsif (req.http.Accept-Encoding ~ "deflate") { set req.http.Accept-Encoding = "deflate"; } else { remove req.http.Accept-Encoding; } } if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } if (req.url ~ "^/$") { unset req.http.cookie; } } sub vcl_hit { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged."; } } sub vcl_miss { if (req.request == "PURGE") { error 404 "Not in cache."; } if (!(req.url ~ "wp-(login|admin)")) { unset req.http.cookie; } if (req.url ~ "^/[^?]+.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.|)$") { unset req.http.cookie; set req.url = regsub(req.url, "\?.$", ""); } if (req.url ~ "^/$") { unset req.http.cookie; } } sub vcl_fetch { if (req.url ~ "^/$") { unset beresp.http.set-cookie; } if (!(req.url ~ "wp-(login|admin)")) { unset beresp.http.set-cookie; } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon at darkmere.gen.nz Fri Apr 29 23:24:04 2011 From: simon at darkmere.gen.nz (Simon Lyall) Date: Sat, 30 Apr 2011 11:24:04 +1200 (NZST) Subject: SPDY in Varnish ? Message-ID: Just wondering what the varnish devs thoughts are on the SPDY protocol [1] and it's likelihood of going into varnish alongside http? [1] http://www.chromium.org/spdy -- Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT. From checker at d6.com Fri Apr 29 23:44:37 2011 From: checker at d6.com (Chris Hecker) Date: Fri, 29 Apr 2011 16:44:37 -0700 Subject: Not sure I've got varnish configured correctly In-Reply-To: References: Message-ID: <4DBB4D65.2080708@d6.com> Run varnishstat and see if you're actually getting cache hits during the test. 
Chris

On 2011/04/29 16:14, Ed Bloom wrote:
> Hi folks,
>
> Hope I'm on the right list. I have Varnish set up and running correctly
> with a good hit rate (I think): 1011 cache hits - 13 misses.
>
> I've set up a test Linode VPS configured as follows:
>
> Ubuntu 10.10
> 512MB RAM
> PHP-FPM
> NGINX 1.0 (8080)
> Varnish (80)
>
> I've got WordPress running the latest W3 Total Cache development version
> with Page Caching (enhanced) working correctly.
>
> My default.vcl is posted below. I've assigned 128MB of memory to Varnish.
>
> Prior to running Varnish I was getting about 40 reqs/sec running the
> following bench test:
>
> ab -n 1000 -c 100 http://staging2.themesforge.com/
>
> Post Varnish install I'm getting the same or slightly less. I've tried
> the bench from a few different servers and got similar results.
>
> Has anyone any ideas where I might be going wrong?
>
> Thanks,
>
> Ed
>
> default.vcl
>
> backend default {
>   .host = "127.0.0.1";
>   .port = "8080";
> }
>
> acl purge {
>   "127.0.0.1";
> }
>
> sub vcl_recv {
>   # Add a unique header containing the client address
>   remove req.http.X-Forwarded-For;
>   set req.http.X-Forwarded-For = client.ip;
>
>   # Let's make sure we aren't compressing already compressed formats.
>   if (req.http.Accept-Encoding) {
>     if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|mp3|mp4|m4v)(\?.*|)$") {
>       remove req.http.Accept-Encoding;
>     } elsif (req.http.Accept-Encoding ~ "gzip") {
>       set req.http.Accept-Encoding = "gzip";
>     } elsif (req.http.Accept-Encoding ~ "deflate") {
>       set req.http.Accept-Encoding = "deflate";
>     } else {
>       remove req.http.Accept-Encoding;
>     }
>   }
>
>   if (req.request == "PURGE") {
>     if (!client.ip ~ purge) {
>       error 405 "Not allowed.";
>     }
>     return(lookup);
>   }
>   if (req.url ~ "^/$") {
>     unset req.http.cookie;
>   }
> }
>
> sub vcl_hit {
>   if (req.request == "PURGE") {
>     set obj.ttl = 0s;
>     error 200 "Purged.";
>   }
> }
>
> sub vcl_miss {
>   if (req.request == "PURGE") {
>     error 404 "Not in cache.";
>   }
>   if (!(req.url ~ "wp-(login|admin)")) {
>     unset req.http.cookie;
>   }
>   if (req.url ~ "^/[^?]+\.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.*|)$") {
>     unset req.http.cookie;
>     set req.url = regsub(req.url, "\?.*$", "");
>   }
>   if (req.url ~ "^/$") {
>     unset req.http.cookie;
>   }
> }
>
> sub vcl_fetch {
>   if (req.url ~ "^/$") {
>     unset beresp.http.set-cookie;
>   }
>   if (!(req.url ~ "wp-(login|admin)")) {
>     unset beresp.http.set-cookie;
>   }
> }
>
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From james at jamesthornton.com  Sat Apr 30 00:00:31 2011
From: james at jamesthornton.com (James Thornton)
Date: Fri, 29 Apr 2011 19:00:31 -0500
Subject: SPDY in Varnish ?
In-Reply-To: 
References: 
Message-ID: 

I'm interested in this too. A few weeks ago Roberto De Ioris, the creator
of uWSGI (http://projects.unbit.it/uwsgi/), told me his company is working
on a uWSGI module for both Varnish and HAProxy. uWSGI is already in nginx,
and, as he said, HTTP parsing is 300% slower than uwsgi parsing, and
fastcgi parsing is 80% slower.
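Most of that difference comes from uwsgi's fixed binary framing: per the uwsgi protocol docs, every request starts with a 4-byte header (modifier1, then the payload size as a little-endian uint16, then modifier2), so a parser reads the length up front instead of scanning byte-by-byte for HTTP's CRLF delimiters. A self-contained sketch of the header for a 300-byte payload (nothing here talks to a real uwsgi server; it only shows the byte layout):

```shell
# uwsgi header: modifier1 (1 byte) | datasize (uint16, little-endian) | modifier2 (1 byte)
# A 300-byte var block has datasize 0x012c, sent on the wire as the bytes 2c 01.
# (printf uses octal escapes: \054 = 0x2c = 44 decimal, \001 = 1.)
printf '\000\054\001\000' | od -An -tu1
# The four bytes in decimal are 0, 44, 1, 0.
```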
- James

On Fri, Apr 29, 2011 at 6:24 PM, Simon Lyall wrote:
>
> Just wondering what the varnish devs' thoughts are on the SPDY protocol [1]
> and its likelihood of going into varnish alongside http?
>
> [1] http://www.chromium.org/spdy
>
> --
> Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/
> "To stay awake all night adds a day to your life" - Stilgar | eMT.
>
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>

--
Latest Blog: http://jamesthornton.com/blog/how-to-get-to-genius

From perbu at varnish-software.com  Sat Apr 30 07:06:52 2011
From: perbu at varnish-software.com (Per Buer)
Date: Sat, 30 Apr 2011 09:06:52 +0200
Subject: SPDY in Varnish ?
In-Reply-To: 
References: 
Message-ID: 

Hi.

We've discussed SPDY several times. SPDY is interesting, but it would be
premature to implement it this early. SPDY isn't even an official IETF
draft yet and Google is still experimenting with it.

I think, for us to consider implementing it, we would need at least two
things to happen: 1) an IETF standard and 2) browser adoption. Even then it
might not make sense. The earlier drafts of SPDY weren't very
proxy-friendly. I'm not familiar with Google's infrastructure, but they
might not use generic web proxies in their web applications, so it wasn't
relevant for them.

Until then I think it's better to sit tight and see who wins. HTTP-MPLEX,
Waka and SCTP are other technologies that operate in the same problem
space. We have no idea who might win.

Per.

On Sat, Apr 30, 2011 at 1:24 AM, Simon Lyall wrote:
>
> Just wondering what the varnish devs' thoughts are on the SPDY protocol [1]
> and its likelihood of going into varnish alongside http?
>
> [1] http://www.chromium.org/spdy
>
> --
> Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/
> "To stay awake all night adds a day to your life" - Stilgar | eMT.
>
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>

--
Per Buer, CEO
Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer
*Varnish makes websites fly!*
Whitepapers | Video | Twitter

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 