From thebog at gmail.com Mon Apr 5 14:48:49 2010 From: thebog at gmail.com (thebog) Date: Mon, 5 Apr 2010 16:48:49 +0200 Subject: Regards to freshing up the website Message-ID: Hi, during VUG2 I volunteered to "do something" to our website :) I wanted a way to contribute to Varnish again, and I saw it as a good opportunity. I have written down what I think is a good approach to this. Please let me know if you have any thoughts about it. From experience, a webpage is something that some people take really personally, hence the attached document so there are no surprises :) If no feedback is given I will slowly start the work in the document and keep posting updates on this list. P.S. Sorry for the big file. Don't ask me how it's possible for so little text to make a 500 KB PDF file. It's beyond me... YS Anders Berg -------------- next part -------------- A non-text attachment was scrubbed... Name: Proposal_for_new_webpagelayout.pdf Type: application/pdf Size: 505960 bytes Desc: not available URL: From glen at delfi.ee Mon Apr 5 16:58:53 2010 From: glen at delfi.ee (Elan Ruusamäe) Date: Mon, 5 Apr 2010 19:58:53 +0300 Subject: vim syntax In-Reply-To: <20100122140158.GA9672@kjeks.varnish-cache.com> References: <200911241957.54742.glen@delfi.ee> <20100122140158.GA9672@kjeks.varnish-cache.com> Message-ID: <201004051958.53424.glen@delfi.ee> On Friday 22 January 2010 16:01:58 Kristian Lyngstol wrote: > On Tue, Nov 24, 2009 at 07:57:54PM +0200, Elan Ruusamäe wrote: > > @phk mentioned in the irc, that somebody has written varnish syntax for > > vim. i tried to search the archives but found no info about it. > > > > anyone knows? would be a pity to start writing one from scratch if somebody > > already spent some of their time on it. > > I've heard rumours too, but never seen it. Since it's close to C, I usually > get by with C-syntax. > > There are some differences (ie: set foo = bar, not foo = bar, and ~...)
but > I suppose using C as a base will save you a lot of time. If you do write > one, drop it here and I wouldn't mind testing it, I do work with VCL from > time to time ;) publishing the url of my efforts: http://cvs.pld-linux.org/cgi-bin/cvsweb.cgi/packages/vim-syntax-vcl/vcl.vim i'd love to get feedback and improvements on it :) also i added a link to the vim syntax i've written so far, i hope it's the right place to refer to its existence: http://www.varnish-cache.org/wiki/VCL#Resources -- glen From stewsnooze at gmail.com Mon Apr 5 22:31:08 2010 From: stewsnooze at gmail.com (Stewart Robinson) Date: Mon, 5 Apr 2010 23:31:08 +0100 Subject: Regards to freshing up the website In-Reply-To: References: Message-ID: Hi, I think Pressflow is a good choice to build a new site. I have good experience of creating drupal/pressflow sites behind varnish cache and could offer advice and assistance should we get into trouble. Stew On 5 April 2010 15:48, thebog wrote: > Hi, > > during VUG2 I volunteered to "do something" to our website :) I wanted > a way to contribute to Varnish again, and I saw it as a good > opportunity. > > I have written down what I think is a good approach to this. > > Please let my know if you have any thoughts about it. Out of > experience a webpage is something that some take really personal, > hence the attached document so there are no surprises :) > > If no feedback is given I will slowly start the work in the document > and keep posting updates on this list. > > P.S Sorry for the big file. Don't ask me how it's possible for so > little text to make a 500 KB PDF file. It's beyond me... > > YS > Anders Berg > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc > From kb+varnish at slide.com Mon Apr 5 22:33:51 2010 From: kb+varnish at slide.com (Ken Brownfield) Date: Mon, 5 Apr 2010 15:33:51 -0700 Subject: Release schedule for saint mode.
In-Reply-To: References: <7ae911871001181247l6626f29dl1c32d4509d5d3b50@mail.gmail.com> <878wbtqipn.fsf@qurzaw.linpro.no> <7ae911871001211057u56e71dbye60ede47c5beb0c1@mail.gmail.com> <87wrz6v8lh.fsf@qurzaw.linpro.no> <877C0798-DB47-4D8B-9E2F-76BF2F14E69E@slide.com> <87aavznbkb.fsf@qurzaw.linpro.no> Message-ID: FYI, continuing in Varnish 2.1 persist does not offer persistence across a parent restart (via process crash or host crash). I do understand that there is some benefit to persistent storage in the case of the child crashing. But in any situation where I could not survive the child crashing (origin spike), I couldn't survive the parent bouncing, either. To me the difference between file and persist is academic; am I missing something either in the implementation or the intent of this feature? Thanks, -- kb On Jan 27, 2010, at 12:08 PM, Ken Brownfield wrote: > Right, -spersistent. Child restarts are persistent, parent process stop/start isn't. > > Maybe there's a graceful, undocumented method of stopping the parent that I'm not aware of? > -- > kb > > On Jan 27, 2010, at 1:26 AM, Tollef Fog Heen wrote: > >> ]] Ken Brownfield >> >> | I'd love to test persistent under production load, but right now it's >> | not persistent. :-( (Storage doesn't persist through a parent restart) >> >> That sounds like a real bug. Just to be sure, you're testing with >> -spersistent, not -smalloc or -sfile? >> >> -- >> Tollef Fog Heen >> Redpill Linpro -- Changing the game! >> t: +47 21 54 41 73 >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at projects.linpro.no >> http://projects.linpro.no/mailman/listinfo/varnish-misc > From kb+varnish at slide.com Mon Apr 5 22:41:43 2010 From: kb+varnish at slide.com (Ken Brownfield) Date: Mon, 5 Apr 2010 15:41:43 -0700 Subject: Release schedule for saint mode. 
In-Reply-To: References: <7ae911871001181247l6626f29dl1c32d4509d5d3b50@mail.gmail.com> <878wbtqipn.fsf@qurzaw.linpro.no> <7ae911871001211057u56e71dbye60ede47c5beb0c1@mail.gmail.com> <87wrz6v8lh.fsf@qurzaw.linpro.no> <877C0798-DB47-4D8B-9E2F-76BF2F14E69E@slide.com> <87aavznbkb.fsf@qurzaw.linpro.no> Message-ID: <0F300887-358C-4F65-A59C-DF73076283FD@slide.com> FYI, continuing in Varnish 2.1 persist does not offer persistence across a parent restart (via process restart, process crash, or host crash). I do understand that there is some benefit to persistent storage in the case of the child crashing. But in any situation where I could not survive the child crashing (origin spike), I couldn't survive the parent bouncing, either. To me the difference between file and persist is academic; am I missing something either in the implementation or the intent of this feature? Thanks, -- kb On Jan 27, 2010, at 12:08 PM, Ken Brownfield wrote: > Right, -spersistent. Child restarts are persistent, parent process stop/start isn't. > > Maybe there's a graceful, undocumented method of stopping the parent that I'm not aware of? > -- > kb > > On Jan 27, 2010, at 1:26 AM, Tollef Fog Heen wrote: > >> ]] Ken Brownfield >> >> | I'd love to test persistent under production load, but right now it's >> | not persistent. :-( (Storage doesn't persist through a parent restart) >> >> That sounds like a real bug. Just to be sure, you're testing with >> -spersistent, not -smalloc or -sfile? >> >> -- >> Tollef Fog Heen >> Redpill Linpro -- Changing the game! 
>> t: +47 21 54 41 73 >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at projects.linpro.no >> http://projects.linpro.no/mailman/listinfo/varnish-misc > _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://lists.varnish-cache.org/mailman/listinfo/varnish-misc From pablort+varnish at gmail.com Tue Apr 6 22:04:09 2010 From: pablort+varnish at gmail.com (pablort) Date: Tue, 6 Apr 2010 19:04:09 -0300 Subject: Varnish 2.1 RPM from Redhat In-Reply-To: References: <345BD8B3F8775748A4676A625EADA22417D87665@DURAN.firechaser.local> Message-ID: FYI, Just noticed the spec file still says 2.0.7 Cheers, On Wed, Mar 31, 2010 at 9:06 AM, Vladimir wrote: > I just built those myself. All you need to do is install the rpm-build > package, then put varnish-2.1.tar.gz in > /usr/src/redhat/SOURCES/ then > > tar -xvf varnish-2.1.tar.gz varnish-2.1/redhat/varnish.spec > > edit the varnish.spec file to say 2.1 instead of 2.0.7. Then > > rpmbuild -bb varnish-2.1/redhat/varnish.spec > > That should build the packages for you. One note is that for whatever > reason make check fails on any machine I have tried this on, so the only way I > could get it going was to take out everything from %check to %install and > rebuild. Anyone else encounter this issue ? > > > > > On Wed, 31 Mar 2010, David Murphy wrote: > > Could someone kindly point me to the Varnish 2.1 rpm for RHEL5, 64bit >> please? >> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From schmidt at ze.tum.de Wed Apr 7 12:47:00 2010 From: schmidt at ze.tum.de (Gerhard Schmidt) Date: Wed, 07 Apr 2010 14:47:00 +0200 Subject: varnish with ssl Message-ID: <4BBC7EC4.5010108@ze.tum.de> Hi, I have a problem using Varnish and SSL. I'm trying to set up Varnish to act as a reverse proxy for our website. I need both unencrypted requests and requests via SSL. I know that Varnish cannot accept SSL connections itself, so I tried to set up stunnel to accept connections. That's not the problem. The problem I have is that I lose the information about which IP the request originated from. Are there plans to include SSL in Varnish directly, or is there a setup to retain this information? Regards Estartu -- ---------------------------------------------------------- Gerhard Schmidt | E-Mail: schmidt at ze.tum.de Technische Universität München | WWW & Online Services | Tel: +49 89 289-25270 | PGP-PublicKey Fax: +49 89 289-25257 | on request -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 544 bytes Desc: OpenPGP digital signature URL: From svein-listmail at stillbilde.net Wed Apr 7 13:07:36 2010 From: svein-listmail at stillbilde.net (Svein Skogen (Listmail Account)) Date: Wed, 07 Apr 2010 15:07:36 +0200 Subject: varnish with ssl In-Reply-To: <4BBC7EC4.5010108@ze.tum.de> References: <4BBC7EC4.5010108@ze.tum.de> Message-ID: <4BBC8398.1070202@stillbilde.net> On 07.04.2010 14:47, Gerhard Schmidt wrote: > Hi, > > I've a Problem using varnish and ssl. I trying to setup varnish to act as > reverse proxy for our website. > > I need both unencrypted requests and requests via ssl. > > I know that varnish can not accept ssl connections itself. So I tried to setup > stunnel to accept connections. That's not the problem. The problem I have is > that I loose information from which IP the request originated.
> Are there plans to include ssl in varnish directly or is there a setup to > retain this information. > > Regards > Estartu I have a ... similar setup, with the need to serve both SSL and non-SSL pages. My "solution" was to place APSIS's Pound bound to http and https, set Pound to contact the varnish (via plain http), and varnish to ask the two apaches in my back end. Not as simple as it could've been, but on the other hand it means that only Pound needs to know anything about SSL. Pound can be set up to log for you. ;) //Svein -- --------+-------------------+------------------------------- /"\ |Svein Skogen | svein at d80.iso100.no \ / |Solberg Østli 9 | PGP Key: 0xE5E76831 X |2020 Skedsmokorset | svein at jernhuset.no / \ |Norway | PGP Key: 0xCE96CE13 | | svein at stillbilde.net ascii | | PGP Key: 0x58CD33B6 ribbon |System Admin | svein-listmail at stillbilde.net Campaign|stillbilde.net | PGP Key: 0x22D494A4 +-------------------+------------------------------- |msn messenger: | Mobile Phone: +47 907 03 575 |svein at jernhuset.no | RIPE handle: SS16503-RIPE --------+-------------------+------------------------------- If you really are in a hurry, mail me at svein-mobile at stillbilde.net This mailbox goes directly to my cellphone and is checked even when I'm not in front of my computer. ------------------------------------------------------------ Picture Gallery: https://gallery.stillbilde.net/v/svein/ ------------------------------------------------------------ -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 196 bytes Desc: OpenPGP digital signature URL: From perbu at varnish-software.com Wed Apr 7 14:00:59 2010 From: perbu at varnish-software.com (Per Buer) Date: Wed, 7 Apr 2010 16:00:59 +0200 Subject: varnish with ssl In-Reply-To: <4BBC7EC4.5010108@ze.tum.de> References: <4BBC7EC4.5010108@ze.tum.de> Message-ID: On Wed, Apr 7, 2010 at 2:47 PM, Gerhard Schmidt wrote: > > Are there plans to include ssl in varnish directly or is there a setup to > retain this information. At the moment, no. One problem is that we would need a proper SSL library under an acceptable licence. Such a library is not available, atm. -- Per Buer, Varnish Software Phone: +47 21 54 41 21 / Mobile: +47 958 39 117 / skype: per.buer From kb+varnish at slide.com Wed Apr 7 17:47:22 2010 From: kb+varnish at slide.com (Ken Brownfield) Date: Wed, 7 Apr 2010 10:47:22 -0700 Subject: varnish with ssl In-Reply-To: <4BBC7EC4.5010108@ze.tum.de> References: <4BBC7EC4.5010108@ze.tum.de> Message-ID: <0589D181-A825-4D6D-A3F1-01F3278A5FBC@slide.com> This is a far-ranging problem that isn't unique to Varnish or SSL. What is typical of CDNs, load-balancers, and proxies of all sorts is to set a header with the IP of the request *it* received. That header is then passed down and can be parsed by your upstream. X-Forwarded-For is the standard header for this, but the format and naming of this header can vary (no pun intended).
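The scheme Ken describes (each hop stamping the client IP it saw into its own header, which the origin then parses by precedence) can be sketched as a small lookup. This is a minimal illustration, not Varnish code; the CDN-Client-IP and LB-Client-IP names are hypothetical stand-ins for whatever your particular CDN or load balancer actually sets:

```python
# Hypothetical header names; only X-Forwarded-For is a de facto
# standard -- each CDN/LB uses its own (e.g. True-Client-IP, X-Real-IP).
# Listed highest-level proxy first, since that one saw the real client.
FORWARDING_HEADERS = ("CDN-Client-IP", "LB-Client-IP", "X-Forwarded-For")

def client_ip(headers, peer_addr):
    """Best guess at the original client IP.

    headers: dict of header name -> value as received from the last hop.
    peer_addr: IP of the TCP peer, used when no forwarding header is set.
    """
    for name in FORWARDING_HEADERS:
        value = headers.get(name)
        if value:
            # X-Forwarded-For may carry a comma-separated chain;
            # the left-most entry is the original client.
            return value.split(",")[0].strip()
    return peer_addr
```

Note that none of these headers can be trusted unless the last hop is a proxy you control, since any client can send them itself.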
You can imagine how fun it is to handle IPs for a client request that goes through a CDN's proxy/cache network, through your load-balancer, then Varnish, then the upstream web server: Client = 1.1.1.1 CDN = 2.2.2.2 sets => CDN-Client-IP: 1.1.1.1 LB (e.g., Pound) = 3.3.3.3 sets => LB-Client-IP: 2.2.2.2 Varnish = 4.4.4.4 sets => X-Forwarded-For: 3.3.3.3 Your upstream receives the request from 4.4.4.4 with the following headers: CDN-Client-IP: 1.1.1.1 LB-Client-IP: 2.2.2.2 X-Forwarded-For: 3.3.3.3 You'll care about the highest level one (CDN-Client-IP in this case), something like: IP = CDN-Client-IP or LB-Client-IP or X-Forwarded-For or Hope it helps, -- kb PS: The Pound suggestion is good -- probably a cleaner solution than stunnel given that we're talking HTTP(S). On Apr 7, 2010, at 5:47 AM, Gerhard Schmidt wrote: > Hi, > > I've a Problem using varnish and ssl. I trying to setup varnish to act as > reverse proxy for our website. > > I need both unencrypted requests and requests via ssl. > > I know that varnish can not accept ssl connections itself. So I tried to setup > stunnel to accept connections. That's not the problem. The problem I have is > that I loose information from which IP the request originated. > > Are there plans to include ssl in varnish directly or is there a setup to > retain this information. 
> > Regards > Estartu > > -- > ---------------------------------------------------------- > Gerhard Schmidt | E-Mail: schmidt at ze.tum.de > Technische Universit?t M?nchen | > WWW & Online Services | > Tel: +49 89 289-25270 | PGP-PublicKey > Fax: +49 89 289-25257 | on request > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc From michael at dynamine.net Wed Apr 7 18:01:22 2010 From: michael at dynamine.net (Michael Fischer) Date: Wed, 7 Apr 2010 11:01:22 -0700 Subject: varnish with ssl In-Reply-To: References: <4BBC7EC4.5010108@ze.tum.de> Message-ID: On Wed, Apr 7, 2010 at 7:00 AM, Per Buer wrote: > On Wed, Apr 7, 2010 at 2:47 PM, Gerhard Schmidt wrote: >> >> Are there plans to include ssl in varnish directly or is there a setup to >> retain this information. > > At the moment, no. One problem is that we would need a proper SSL > library under an acceptable licence. Such a library is not available, > atm. What's the incompatibility with OpenSSL? --Michael From perbu at varnish-software.com Wed Apr 7 19:38:17 2010 From: perbu at varnish-software.com (Per Buer) Date: Wed, 7 Apr 2010 21:38:17 +0200 Subject: varnish with ssl In-Reply-To: References: <4BBC7EC4.5010108@ze.tum.de> Message-ID: On Wed, Apr 7, 2010 at 8:01 PM, Michael Fischer wrote: > On Wed, Apr 7, 2010 at 7:00 AM, Per Buer wrote: >> On Wed, Apr 7, 2010 at 2:47 PM, Gerhard Schmidt wrote: >>> >>> Are there plans to include ssl in varnish directly or is there a setup to >>> retain this information. >> >> At the moment, no. One problem is that we would need a proper SSL >> library under an acceptable licence. Such a library is not available, >> atm. > > What's the incompatibility with OpenSSL? The licence is wildly incompatible with the two-clause BSD licence. If I recall correctly they still have this horrid advertising clause (ugh!). 
We have jumped through a number of hoops to get exceptions for each file we have in Varnish that had an advertising clause in it. -- Per Buer, Varnish Software Phone: +47 21 54 41 21 / Mobile: +47 958 39 117 / skype: per.buer From michael at dynamine.net Wed Apr 7 19:42:02 2010 From: michael at dynamine.net (Michael Fischer) Date: Wed, 7 Apr 2010 12:42:02 -0700 Subject: varnish with ssl In-Reply-To: References: <4BBC7EC4.5010108@ze.tum.de> Message-ID: On Wed, Apr 7, 2010 at 12:38 PM, Per Buer wrote: >> What's the incompatibility with OpenSSL? > > The licence is wildly incompatible with the two-clause BSD licence. If > I recall correctly they still have this horrid advertising clause > (ugh!). We have jumped through a number of hoops to get exceptions for > each file we have in Varnish that had an advertising clause in it. But the Varnish team is not distributing OpenSSL, so it's not subject to its terms. Why do you care? --Michael From phk at phk.freebsd.dk Wed Apr 7 21:07:41 2010 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 07 Apr 2010 21:07:41 +0000 Subject: varnish with ssl In-Reply-To: Your message of "Wed, 07 Apr 2010 11:01:22 MST." Message-ID: <73973.1270674461@critter.freebsd.dk> In message , Michael Fischer writes: >What's the incompatibility with OpenSSL? I have two main reservations about SSL in Varnish: 1. OpenSSL is almost 350.000 lines of code, Varnish is only 58.000, Adding such a massive amount of code to the Varnish footprint should result in a very tangible benefit. Compared to running a SSL proxy in front of Varnish, I can see very, very little benefit from integration. Yeah, one process less and only one set of config parameters. But that all sounds like "second systems syndrome" thinking to me, it does not really sound like a genuine "The world would become a better place" feature request.
But I do see some serious drawbacks: The necessary changes to Varnish internal logic will almost certainly hurt varnish performance for the plain HTTP case. We need to add an inordinate amount of overhead code, to configure and deal with the key/cert bits. 2. I have looked at the OpenSSL source code, I think it is a catastrophe waiting to happen. In fact, the only thing that prevents attackers from exploiting problems more actively, is that the source code is fundamentally unreadable and impenetrable. Unless those two issues can be addressed, I don't see SSL in Varnish any time soon. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From svein-listmail at stillbilde.net Wed Apr 7 21:14:00 2010 From: svein-listmail at stillbilde.net (Svein Skogen (Listmail Account)) Date: Wed, 07 Apr 2010 23:14:00 +0200 Subject: varnish with ssl In-Reply-To: <73973.1270674461@critter.freebsd.dk> References: <73973.1270674461@critter.freebsd.dk> Message-ID: <4BBCF598.8020201@stillbilde.net> On 07.04.2010 23:07, Poul-Henning Kamp wrote: > 2. I have looked at the OpenSSL source code, I think it is a catastrophe > waiting to happen. In fact, the only thing that prevents attackers > from exploiting problems more actively, is that the source code is > fundamentally unreadable and impenetrable.
//Svein From michael at dynamine.net Wed Apr 7 21:20:17 2010 From: michael at dynamine.net (Michael Fischer) Date: Wed, 7 Apr 2010 14:20:17 -0700 Subject: varnish with ssl In-Reply-To: <73973.1270674461@critter.freebsd.dk> References: <73973.1270674461@critter.freebsd.dk> Message-ID: On Wed, Apr 7, 2010 at 2:07 PM, Poul-Henning Kamp wrote: > In message , Michael Fischer writes: > >>What's the incompatibility with OpenSSL? > > I have two main reservations about SSL in Varnish: > > 1. OpenSSL is almost 350.000 lines of code, Varnish is only 58.000, > Adding such a massive amount of code to Varnish footprint, should > result in a very tangible benefit. RAM is cheap. Besides, as a shared library the cost is amortized among all processes using it. > > Compared to running a SSL proxy in front of Varnish, I can see >
very, very little benefit from integration. Yeah, one process > less and only one set of config parameters. > > But that all sounds like "second systems syndrome" thinking to me, > it does not really sound lige a genuine "The world would become > a better place" feature request. Well, there are a couple of benefits: (1) stunnel doesn't scale particularly well, and can't scale across multiple CPUs in any event; (2) As someone else pointed out, Varnish can only do effective logging of and access control pertaining to the peer IP if the SSL negotiation is done in-process. stunnel won't spoof the peer IP for Varnish (and arguably no secure kernel should allow it to). > But I do see some some serious drawbacks: The necessary changes > to Varnish internal logic will almost certainly hurt varnish > performance for the plain HTTP case. We need to add an inordinate > about of overhead code, to configure and deal with the key/cert > bits. I defer to your judgment on that issue. > 2. I have looked at the OpenSSL source code, I think it is a catastrophe > waiting to happen. In fact, the only thing that prevents attackers > from exploiting problems more actively, is that the source code is > fundamentally unreadable and impenetrable. Is GNU TLS any better? I've not used it. --Michael From phk at phk.freebsd.dk Wed Apr 7 21:24:22 2010 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 07 Apr 2010 21:24:22 +0000 Subject: varnish with ssl In-Reply-To: Your message of "Wed, 07 Apr 2010 23:14:00 +0200." <4BBCF598.8020201@stillbilde.net> Message-ID: <74132.1270675462@critter.freebsd.dk> In message <4BBCF598.8020201 at stillbilde.net>, "Svein Skogen (Listmail Account)" writes: >> 2. I have looked at the OpenSSL source code, I think it is a catastrophe >> waiting to happen. In fact, the only thing that prevents attackers >> from exploiting problems more actively, is that the source code is >> fundamentally unreadable and impenetrable.
> >You mean to tell me they didn't read style(9)? It is not so much the fact that they certainly didn't read style(9), as the fact that openssl started out as a researcher's tool to play with crypto algorithms, and got a facelift and was suddenly everybody's crypto implementation by default. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Wed Apr 7 21:30:49 2010 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 07 Apr 2010 21:30:49 +0000 Subject: varnish with ssl In-Reply-To: Your message of "Wed, 07 Apr 2010 14:20:17 MST." Message-ID: <74162.1270675849@critter.freebsd.dk> In message , Michael Fischer writes: >On Wed, Apr 7, 2010 at 2:07 PM, Poul-Henning Kamp wrote: >RAM is cheap. Besides, as a shared library the cost is amortized >among all processes using it. You're missing my point by a wide margin here. >> But that all sounds like "second systems syndrome" thinking to me, >> it does not really sound lige a genuine "The world would become >> a better place" feature request. > >Well, there are a couple of benefits: > >(1) stunnel doesn't scale particularly well, and can't scale across >multiple CPUs in any event; There are other SSL proxies than stunnel. >(2) As someone else pointed out, Varnish can only do effective logging >of and access control pertaining to the peer IP if the SSL negotiation >is done in-process. stunnel won't spoof the peer IP for Varnish (and >arguably no secure kernel should allow it to). We're working on that bit, as long as your SSL proxy sends a trustworthy header with the client IP, you will be able to test on it. >> 2. I have looked at the OpenSSL source code, I think it is a catastrophe >> waiting to happen.
In fact, the only thing that prevents attackers >> from exploiting problems more actively, is that the source code is >> fundamentally unreadable and impenetrable. > >Is GNU TLS any better? I've not used it. Not significantly, and furthermore, we try very hard to stay clear of GPL code, in order to not encumber Varnish with multiple incompatible licenses. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From kb+varnish at slide.com Wed Apr 7 22:22:10 2010 From: kb+varnish at slide.com (Ken Brownfield) Date: Wed, 7 Apr 2010 15:22:10 -0700 Subject: varnish with ssl In-Reply-To: <74162.1270675849@critter.freebsd.dk> References: <74162.1270675849@critter.freebsd.dk> Message-ID: <8111179B-3FE7-4528-841B-F9866C0A3926@slide.com> My $0.02 agrees with you. I'd also add that by definition, SSL represents data that is secure between the server and the user. Caching that data (or having it even pass through a cache) is conceptually incompatible. Obviously, SSL pages often contain static content (images), which would be nice to serve from a cache. But there are already easily-applied and scalable SSL solutions to Varnish installations: Varnish's lack of SSL support is essentially irrelevant and entirely /consistent/. Additionally, if you have various backend webservers (say, Apache plus nginx plus varnish) it's nice to have one common SSL layer to configure and maintain, especially if some of those backends also lack built-in SSL. Commercial load balancers typically have SSL support and work as that common SSL layer, and Pound is one OSS example. Varnish is a great caching reverse proxy. If I want SSL+PHP+LDAP+cache+proxy in one executable (and I don't) there's an app for that.
-- kb On Apr 7, 2010, at 2:30 PM, Poul-Henning Kamp wrote: > In message , Michael Fischer writes: >> On Wed, Apr 7, 2010 at 2:07 PM, Poul-Henning Kamp wrote: > >> RAM is cheap. Besides, as a shared library the cost is amortized >> among all processes using it. > > You're missing my point by a wide margin here. > >>> But that all sounds like "second systems syndrome" thinking to me, >>> it does not really sound lige a genuine "The world would become >>> a better place" feature request. >> >> Well, there are a couple of benefits: >> >> (1) stunnel doesn't scale particularly well, and can't scale across >> multiple CPUs in any event; > > There are other SSL proxies than stunnel. > >> (2) As someone else pointed out, Varnish can only do effective logging >> of and access control pertaining to the peer IP if the SSL negotiation >> is done in-process. stunnel won't spoof the peer IP for Varnish (and >> arguably no secure kernel should allow it to). > > We're working on that bit, as long as your SSL proxy sends a trustworthy > header with the client IP, you will be able to test on it. > >>> 2. I have looked at the OpenSSL source code, I think it is a catastrophe >>> waiting to happen. In fact, the only thing that prevents attackers >>> from exploiting problems more actively, is that the source code is >>> fundamentally unreadable and impenetrable. >> >> Is GNU TLS any better? I've not used it. > > Not significantly, and furthermore, we try very hard to stay clear > of GPL code, in order to not encumber Varnish with a multiple incompatible > licenses. > > Poul-Henning > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence.
> > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc From michael at dynamine.net Wed Apr 7 23:11:42 2010 From: michael at dynamine.net (Michael Fischer) Date: Wed, 7 Apr 2010 16:11:42 -0700 Subject: varnish with ssl In-Reply-To: <74162.1270675849@critter.freebsd.dk> References: <74162.1270675849@critter.freebsd.dk> Message-ID: On Wed, Apr 7, 2010 at 2:30 PM, Poul-Henning Kamp wrote: >>(1) stunnel doesn't scale particularly well, and can't scale across >>multiple CPUs in any event; > > There are other SSL proxies than stunnel. I'm not aware of any that both do what stunnel does and are more scalable. Any examples? >>Is GNU TLS any better? I've not used it. > > Not significantly, and furthermore, we try very hard to stay clear > of GPL code, in order to not encumber Varnish with a multiple incompatible > licenses. It's LGPL, for the record. --Michael From michael at dynamine.net Wed Apr 7 23:39:43 2010 From: michael at dynamine.net (Michael Fischer) Date: Wed, 7 Apr 2010 16:39:43 -0700 Subject: varnish with ssl In-Reply-To: <8111179B-3FE7-4528-841B-F9866C0A3926@slide.com> References: <74162.1270675849@critter.freebsd.dk> <8111179B-3FE7-4528-841B-F9866C0A3926@slide.com> Message-ID: On Wed, Apr 7, 2010 at 3:22 PM, Ken Brownfield wrote: > I'd also add that by definition, SSL represents data that is secure between the server and the user. Caching that data (or having it even pass through a cache) is conceptually incompatible. > > Obviously, SSL pages often contain static content (images), which would be nice to serve from a cache. Well, which is it? Is there a use case for SSL support in a caching proxy, or isn't there? I don't follow your argument.
--Michael From kb+varnish at slide.com Thu Apr 8 00:01:05 2010 From: kb+varnish at slide.com (Ken Brownfield) Date: Wed, 7 Apr 2010 17:01:05 -0700 Subject: varnish with ssl In-Reply-To: References: <74162.1270675849@critter.freebsd.dk> <8111179B-3FE7-4528-841B-F9866C0A3926@slide.com> Message-ID: <886F0170-E4C0-4FBA-9439-901E3E7787EB@slide.com> There is a case, it's just not a sound case given the drawbacks and lack of *necessity*, as described in that email. Again, IMHO. -- kb On Apr 7, 2010, at 4:39 PM, Michael Fischer wrote: > On Wed, Apr 7, 2010 at 3:22 PM, Ken Brownfield wrote: > >> I'd also add that by definition, SSL represents data that is secure between the server and the user. Caching that data (or having it even pass through a cache) is conceptually incompatible. >> >> Obviously, SSL pages often contain static content (images), which would be nice to serve from a cache. > > Well, which is it? Is there a use case for SSL support in a caching > proxy, or isn't there? I don't follow your argument. > > --Michael From kb+varnish at slide.com Thu Apr 8 00:05:35 2010 From: kb+varnish at slide.com (Ken Brownfield) Date: Wed, 7 Apr 2010 17:05:35 -0700 Subject: varnish with ssl In-Reply-To: References: <74162.1270675849@critter.freebsd.dk> Message-ID: <99CAE2E9-7162-4083-B597-DA691D69D04A@slide.com> > On Wed, Apr 7, 2010 at 2:30 PM, Poul-Henning Kamp wrote: >>> (1) stunnel doesn't scale particularly well, and can't scale across >>> multiple CPUs in any event; >> >> There are other SSL proxies than stunnel. > > I'm not aware of any that both do what stunnel does and is more > scalable. Any examples? Pound. Maybe eventually in haproxy. Plus a half dozen or so smaller projects that aren't likely production-ready. Plus various commercial solutions. You could drop Apache+mod_ssl+mod_proxy in front of Varnish. You can even choose between prefork or worker. Of course, it would be painful to set up and diagnose, and it scales poorly compared to the single-process model. 
But your ps output will be longer. The single-process model as regards scalability is a red herring. -- kb From michael at dynamine.net Thu Apr 8 00:20:20 2010 From: michael at dynamine.net (Michael Fischer) Date: Wed, 7 Apr 2010 17:20:20 -0700 Subject: varnish with ssl In-Reply-To: <99CAE2E9-7162-4083-B597-DA691D69D04A@slide.com> References: <74162.1270675849@critter.freebsd.dk> <99CAE2E9-7162-4083-B597-DA691D69D04A@slide.com> Message-ID: On Wed, Apr 7, 2010 at 5:05 PM, Ken Brownfield wrote: >> On Wed, Apr 7, 2010 at 2:30 PM, Poul-Henning Kamp wrote: >>>> (1) stunnel doesn't scale particularly well, and can't scale across >>>> multiple CPUs in any event; >>> >>> There are other SSL proxies than stunnel. >> >> I'm not aware of any that both do what stunnel does and is more >> scalable. ?Any examples? > > Pound. ?Maybe eventually in haproxy. ?Plus a half dozen or so smaller projects that aren't likely production-ready. ?Plus various commercial solutions. > > You could drop Apache+mod_ssl+mod_proxy in front of Varnish. ?You can even choose between prefork or worker. ?Of course, it would be painful to set up and diagnose, and it scales poorly compared to the single-process model. ?But your ps output will be longer. None of those do what stunnel does. As a listener, stunnel merely decrypts the data on the SSL socket (which may not necessarily be HTTP) and forwards the decrypted data to the real server. The other solutions parse HTTP and thus incur more expense. > The single-process model as regards scalability is a red herring. It matters a lot with SSL. The handshaking process is very CPU-intensive. You really want something that's SMP-scalable. --Michael From r at roze.lv Thu Apr 8 08:37:35 2010 From: r at roze.lv (Reinis Rozitis) Date: Thu, 8 Apr 2010 11:37:35 +0300 Subject: varnish with ssl In-Reply-To: References: <74162.1270675849@critter.freebsd.dk> Message-ID: From: "Michael Fischer" > I'm not aware of any that both do what stunnel does and is more > scalable. 
Any examples? nginx. Basically now with the few latest modules it ends up as a 'ssl-compressing-cache-highperformance-loadbalancer-with serverincludes(remote ESI) - andwhatnotelse' capable of even doing TCP balancing besides the basic http(s). I still like and use varnish for what it does best.. rr From michael at dynamine.net Thu Apr 8 16:21:45 2010 From: michael at dynamine.net (Michael S. Fischer) Date: Thu, 8 Apr 2010 09:21:45 -0700 Subject: varnish with ssl In-Reply-To: References: <74162.1270675849@critter.freebsd.dk> Message-ID: On Apr 8, 2010, at 1:37 AM, Reinis Rozitis wrote: > Basically now with the few latest modules it ends up as a 'ssl-compressing-cache-highperformance-loadbalancer-with serverincludes(remote ESI) - andwhatnotelse' capable of even doing TCP balancing besides the basic http(s). You're saying that nginx now does connection-layer proxying and SSL stripping? That's news to me. --Michael From kb+varnish at slide.com Thu Apr 8 17:37:41 2010 From: kb+varnish at slide.com (Ken Brownfield) Date: Thu, 8 Apr 2010 10:37:41 -0700 Subject: varnish with ssl In-Reply-To: References: <74162.1270675849@critter.freebsd.dk> <99CAE2E9-7162-4083-B597-DA691D69D04A@slide.com> Message-ID: On Apr 7, 2010, at 5:20 PM, Michael Fischer wrote: >> The single-process model as regards scalability is a red herring. > > It matters a lot with SSL. The handshaking process is very > CPU-intensive. You really want something that's SMP-scalable. Run one single-process model process for each core in your machine. You also get the rather academic bonus of less context-switching and less cache thrash (assuming a decent scheduler and affinity). This is also how you would leverage multiple machines.
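Ken's "one process per core" suggestion can be sketched with stunnel itself; the certificate path, the port numbering, and the idea of spreading port 443 across the instances with a TCP load balancer are my assumptions, not details from the thread:

```
; stunnel-core0.conf -- run one copy per core, each on its own accept port
; (the copy for core 1 gets accept = 8444, core 2 gets 8445, and so on)
cert = /etc/stunnel/site.pem
foreground = yes

[https]
accept = 8443
; forward plaintext HTTP straight into Varnish
connect = 127.0.0.1:80
```

Each instance can then be pinned to a core, e.g. `taskset -c 0 stunnel stunnel-core0.conf`, with a TCP balancer fanning port 443 out across 8443-844N.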
Persistent load balancing involves slightly more human setup than a single process on a single machine that handles the multi-(process|thread)ing itself, but I think discussing the finer points of performance and architecture are dubious if one's entire site runs on a single machine. :-) I am not a proponent of the single-process model, nor any other particular model for that matter -- they can all be made to work and scale, and deciding between them depends on circumstances beyond their scope. But I do know that if I was a proponent of any single one, it would be because I had too much time on my hands. :-P -- kb > --Michael From michael at dynamine.net Thu Apr 8 17:39:48 2010 From: michael at dynamine.net (Michael Fischer) Date: Thu, 8 Apr 2010 10:39:48 -0700 Subject: varnish with ssl In-Reply-To: References: <74162.1270675849@critter.freebsd.dk> <99CAE2E9-7162-4083-B597-DA691D69D04A@slide.com> Message-ID: On Thu, Apr 8, 2010 at 10:37 AM, Ken Brownfield wrote: > On Apr 7, 2010, at 5:20 PM, Michael Fischer wrote: >>> The single-process model as regards scalability is a red herring. >> >> It matters a lot with SSL. ?The handshaking process is very >> CPU-intensive. ?You really want something that's SMP-scalable. > > Run one single-process model process for each core in your machine. ?You also get the rather academic bonus of less context-switching and less cache thrash (assuming a decent scheduler and affinity). ?This is also how you would leverage multiple machines. I don't disagree with you. But stunnel doesn't do that. --Michael From rtshilston at gmail.com Thu Apr 8 19:31:27 2010 From: rtshilston at gmail.com (Rob S) Date: Thu, 08 Apr 2010 20:31:27 +0100 Subject: Reducing backend connections Message-ID: <4BBE2F0F.5000404@gmail.com> Hi, For most sites, we use Varnish to do both caching and load balancing. However, for one or two, where it's not currently practical to cache, we use it solely for load balancing etc. 
What's the best way of using Varnish so that the backends aren't kept holding the connections whilst the client is patiently downloading? Should we PASS, PIPE, or just work as normal and set ttl=0, grace=0 etc? Thanks, Rob From michael at dynamine.net Thu Apr 8 19:38:25 2010 From: michael at dynamine.net (Michael Fischer) Date: Thu, 8 Apr 2010 12:38:25 -0700 Subject: Reducing backend connections In-Reply-To: <4BBE2F0F.5000404@gmail.com> References: <4BBE2F0F.5000404@gmail.com> Message-ID: Make sure your send buffer is large enough to hold the entire response. This will free up your backend server and Varnish as quickly as possible. The kernel will take care of the rest. --Michael On Thu, Apr 8, 2010 at 12:31 PM, Rob S wrote: > Hi, > > For most sites, we use Varnish to do both caching and load balancing. > ?However, for one or two, where it's not currently practical to cache, we > use it solely for load balancing etc. ?What's the best way of using Varnish > so that the backends aren't kept holding the connections whilst the client > is patiently downloading? ?Should we PASS, PIPE, or just work as normal and > set ttl=0, grace=0 etc? > > Thanks, > > > Rob > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc > From perbu at varnish-software.com Thu Apr 8 20:12:10 2010 From: perbu at varnish-software.com (Per Buer) Date: Thu, 8 Apr 2010 22:12:10 +0200 Subject: Reducing backend connections In-Reply-To: <4BBE2F0F.5000404@gmail.com> References: <4BBE2F0F.5000404@gmail.com> Message-ID: On Thu, Apr 8, 2010 at 9:31 PM, Rob S wrote: > Hi, > > For most sites, we use Varnish to do both caching and load balancing. > ?However, for one or two, where it's not currently practical to cache, we > use it solely for load balancing etc. 
?What's the best way of using Varnish > so that the backends aren't kept holding the connections whilst the client > is patiently downloading? If you're sure you don't want _any_ caching (not even 1s); pass. However, if you're clients are downloading ISO images, the whole image must be fetched from the backend and stored before it is given to the client. In such conditions it makes sense to use pipe. -- Per Buer, Varnish Software Phone: +47 21 54 41 21 / Mobile: +47 958 39 117 / skype: per.buer From henrikno at gmail.com Thu Apr 8 20:21:48 2010 From: henrikno at gmail.com (Henrik Nordvik) Date: Thu, 8 Apr 2010 13:21:48 -0700 Subject: Reducing backend connections In-Reply-To: References: <4BBE2F0F.5000404@gmail.com> Message-ID: On a different but related note: What would be best to use for long-polling connections? -- Henrik On Thu, Apr 8, 2010 at 1:12 PM, Per Buer wrote: > On Thu, Apr 8, 2010 at 9:31 PM, Rob S wrote: > > Hi, > > > > For most sites, we use Varnish to do both caching and load balancing. > > However, for one or two, where it's not currently practical to cache, we > > use it solely for load balancing etc. What's the best way of using > Varnish > > so that the backends aren't kept holding the connections whilst the > client > > is patiently downloading? > > If you're sure you don't want _any_ caching (not even 1s); pass. > > However, if you're clients are downloading ISO images, the whole image > must be fetched from the backend and stored before it is given to the > client. In such conditions it makes sense to use pipe. > > > -- > Per Buer, Varnish Software > Phone: +47 21 54 41 21 / Mobile: +47 958 39 117 / skype: per.buer > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From phk at phk.freebsd.dk Thu Apr 8 20:39:09 2010 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 08 Apr 2010 20:39:09 +0000 Subject: Regards to freshing up the website In-Reply-To: Your message of "Mon, 05 Apr 2010 16:48:49 +0200." Message-ID: <48828.1270759149@critter.freebsd.dk> Hi Anders, Sorry for the delay, still digging through my mailbox... I have recently been using the Python docs a fair bit, and I sort of like how they have done it: An installation document, a tutorial to get you started, and a reference document with all the nitty-gritty details. I would like to see the Varnish documentation develop along the same lines, here is a strawman index: Installation manual: on this os, pull this package .. that ..//.. to compile from source how to get help - mailing list - IRC - varnish-software.com - other listed consultants reporting bugs - using varnishtest to reproduce - what data do we need - confidentiality - ... Tutorial starting varnish with -d, seeing a transaction go through explain varnishlog output for a miss and a hit a few simple VCL tricks, including switching VCL on the fly The helpers: varnishstat, varnishhist, varnishtop varnishncsa Now that you know how it works, let's talk planning: - backend, directors and polling - storage - logging - management CLI & security - ESI Real life examples: - A real life varnish explained - A more complex real life varnish explained - Sky's Wikia Setup Varnishtest - What varnishtest does and why - writing simple test-cases - using varnishtest to test your VCL - using varnishtest to reproduce bugs Reference The programs: . varnishd manual page . varnishstat . - counters explained . common filtering options for shmlog tools . varnishlog .. . varnishtop .. . varnishncsa .. . varnishhist .. The CLI: . connections: -T -S -M . varnishadm . CLI commands and what they do . - vcl.load . - stop . - start . - ... VCL language . The functions: . - vcl_recv . - vcl_miss . --- . The things you can do .
- set . - unset . - esi . - rollback Varnishtest . syntax etc. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From r at roze.lv Fri Apr 9 06:23:45 2010 From: r at roze.lv (Reinis Rozitis) Date: Fri, 9 Apr 2010 09:23:45 +0300 Subject: varnish with ssl In-Reply-To: References: <74162.1270675849@critter.freebsd.dk> Message-ID: > You're saying that nginx now does connection-layer proxying and SSL > stripping? That's news to me. Well, isn't SSL stripping a somewhat reverse process to what a proxy should do? :) But http://github.com/yaoweibin/nginx_tcp_proxy_module for example.. rr From phk at phk.freebsd.dk Fri Apr 9 08:59:33 2010 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 09 Apr 2010 08:59:33 +0000 Subject: a post-VUG2 sort of report Message-ID: <51610.1270803573@critter.freebsd.dk> First, a big thanks to Marco Walraven and eBay/Marktplaats.nl for hosting the VUG2 meeting, that was absolutely wonderful and worked out great. Second, an equally big thanks to all the participants: I really hope you get as much out of these meetings as I do, they are supposed to be meetings for the users, I just sneak in there, even though I don't run varnish myself. Third, who and where will VUG3 happen? We had no offers at the VUG2 meeting, but a lot of people seemed to think Iceland would be a great place, although I suspect people just want to see a volcano or buy their own bank or something :-) I think Anders Berg promised to send a sort of official VUG2 summary out, and quite possibly he did and it is stuck in my mailbox and I haven't spotted it yet, but I thought I owed you a report from the development side, so you sort of know what to expect now. As some of you heard me mumble, I got an idea on the train down to Amsterdam, and it transpires that it was a pretty interesting one.
It will probably not improve Varnish performance too much, but it has allowed me to fulfill an old-ish promise to ACM Queue to write an article, where I get to plug Varnish etc. More on that when it is published. I have started picking up various low-hanging fruit from my VUG2 notes, and already have the VCL compiler taken apart on the operating table, in my secret basement development laboratories, all the screws and springs spread out over the place and the secret blue smoke safely stored in a jar. I think I may be able to put it together in a smarter way etc. etc. I have a confession to make: currently I have 177 varnish-related emails stuck in my inbox, and that just doesn't work for any of us. Varnish has too many users now, for me, with my limited knowledge about web-content, to be able to keep track of all issues and questions people raise in email. I run a two-stage Bayesian mail-filter: the first sorts email into "spam" vs "mail", the second sorts my mail into "inbox" vs. "maybe not interesting". Until now, all email with "varnish" in it hit my inbox, but I will have to start letting some varnish subjects hit the second box, and trust the community to help each other out. Eventually, we may want to add a -questions mailing list, but I feel that is too early yet. Please don't be offended if I do not reply to an email; if my answer is required but not forthcoming, send me a direct email, or ask me by name in the email, to make sure it makes it into the right mailbox. Various random notes, in no particular order: --------------------------------------------- We have really tightened our trac-ticket processing: we usually go over the open tickets Monday afternoon (feel free to join us in #varnish-hacking) and the goal is to only have open tickets which represent bugs that have not yet been fixed. Ideas, suggestions, requests and wishes should go into our "Shopping list" on the wiki, from which I will pick as time permits under the VML.
About the VML: A big thanks to the license holders: during Q1/10 217 hours of varnish work pretty much caused 2.1.0 to be our best release ever. (You can follow the bookkeeping here: http://phk.freebsd.dk/VML/) And now, back to work... Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From david.birdsong at gmail.com Fri Apr 9 22:37:09 2010 From: david.birdsong at gmail.com (David Birdsong) Date: Fri, 9 Apr 2010 15:37:09 -0700 Subject: different object hashed for firefox and chrome Message-ID: I'm trying to understand how an identical request to Firefox and Chrome are hashing to a different object in Varnish. Here's the Firefox request: http://pastebin.com/RkA9uhMp Here's the Chrome request: http://pastebin.com/4pAe8Cac There are a few differences in the request between the two browsers, but I've set vcl_hash to only use req.url: sub vcl_hash { set req.hash = req.url; # set req.hash += req.url; # set req.hash += req.http.host; hash; } I'm running varnish from trunk at R4390. From l at lrowe.co.uk Fri Apr 9 23:25:41 2010 From: l at lrowe.co.uk (Laurence Rowe) Date: Sat, 10 Apr 2010 00:25:41 +0100 Subject: different object hashed for firefox and chrome In-Reply-To: References: Message-ID: This is almost certainly the result of a Vary: Accept-Encoding header on your response (Chrome, Firefox and IE all have slightly different Accept-Encoding headers). See http://varnish-cache.org/wiki/FAQ/Compression and add the snippet to normalise the Accept-Encoding header in vcl_recv. You don't normally need to customise vcl_hash. Laurence On 9 April 2010 23:37, David Birdsong wrote: > I'm trying to understand how an identical request to Firefox and > Chrome are hashing to a different object in Varnish. 
> > Here's the Firefox request: > http://pastebin.com/RkA9uhMp > > Here's the Chrome request: > http://pastebin.com/4pAe8Cac > > There are a few differences in the request between the two browsers, > but I've set vcl_hash to only use req.url: > sub vcl_hash { > ? ? ? ?set req.hash = req.url; > # ? ? ? set req.hash += req.url; > # ? ? ? set req.hash += req.http.host; > ? ? ? ?hash; > } > > I'm running varnish from trunk at R4390. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc > From david.birdsong at gmail.com Fri Apr 9 23:31:14 2010 From: david.birdsong at gmail.com (David Birdsong) Date: Fri, 9 Apr 2010 16:31:14 -0700 Subject: different object hashed for firefox and chrome In-Reply-To: References: Message-ID: On Fri, Apr 9, 2010 at 4:25 PM, Laurence Rowe wrote: > This is almost certainly the result of a Vary: Accept-Encoding header > on your response (Chrome, Firefox and IE all have slightly different > Accept-Encoding headers). See > http://varnish-cache.org/wiki/FAQ/Compression and add the snippet to > normalise the Accept-Encoding header in vcl_recv. You don't normally > need to customise vcl_hash. > thanks for that link. i've customized vcl_hash to exclude req.http.host which i dont want as part of the hash. my understanding was that this was included by default. > Laurence > > On 9 April 2010 23:37, David Birdsong wrote: >> I'm trying to understand how an identical request to Firefox and >> Chrome are hashing to a different object in Varnish. >> >> Here's the Firefox request: >> http://pastebin.com/RkA9uhMp >> >> Here's the Chrome request: >> http://pastebin.com/4pAe8Cac >> >> There are a few differences in the request between the two browsers, >> but I've set vcl_hash to only use req.url: >> sub vcl_hash { >> ? ? ? ?set req.hash = req.url; >> # ? ? ? set req.hash += req.url; >> # ? ? ? set req.hash += req.http.host; >> ? ? ? 
?hash; >> } >> >> I'm running varnish from trunk at R4390. >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://lists.varnish-cache.org/mailman/listinfo/varnish-misc >> > From l at lrowe.co.uk Fri Apr 9 23:53:33 2010 From: l at lrowe.co.uk (Laurence Rowe) Date: Sat, 10 Apr 2010 00:53:33 +0100 Subject: different object hashed for firefox and chrome In-Reply-To: References: Message-ID: On 10 April 2010 00:31, David Birdsong wrote: > On Fri, Apr 9, 2010 at 4:25 PM, Laurence Rowe wrote: >> This is almost certainly the result of a Vary: Accept-Encoding header >> on your response (Chrome, Firefox and IE all have slightly different >> Accept-Encoding headers). See >> http://varnish-cache.org/wiki/FAQ/Compression and add the snippet to >> normalise the Accept-Encoding header in vcl_recv. You don't normally >> need to customise vcl_hash. >> > thanks for that link. ?i've customized vcl_hash to exclude > req.http.host which i dont want as part of the hash. my understanding > was that this was included by default. It may be better to set req.host to a canonical value in vcl_recv, that way you can be sure that your backends will always produce consistent responses (so you don't end up with some urls pointing to http://example.com and others to http://example.org). My point was rather that the requests still hash the same, you're getting a different object on Vary. vcl_hash lets you set the cache path based only on the request, vcl_fetch and Vary let you set the cache path further based on the response. It's not yet shown on the flowchart at http://varnish-cache.org/wiki/VCLExampleDefault. 
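The snippet on the Compression FAQ page Laurence points to normalises Accept-Encoding in vcl_recv, so every browser collapses onto at most a gzip, a deflate and an identity variant instead of one cached variant per exact header string; roughly (quoted from memory, so compare with the wiki page before relying on it):

```
sub vcl_recv {
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|zip)$") {
            # Already-compressed content gains nothing from gzip
            unset req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # Unknown algorithm: ask the backend for plain content
            unset req.http.Accept-Encoding;
        }
    }
}
```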
Laurence From david.birdsong at gmail.com Sat Apr 10 00:23:51 2010 From: david.birdsong at gmail.com (David Birdsong) Date: Fri, 9 Apr 2010 17:23:51 -0700 Subject: different object hashed for firefox and chrome In-Reply-To: References: Message-ID: On Fri, Apr 9, 2010 at 4:53 PM, Laurence Rowe wrote: > On 10 April 2010 00:31, David Birdsong wrote: >> On Fri, Apr 9, 2010 at 4:25 PM, Laurence Rowe wrote: >>> This is almost certainly the result of a Vary: Accept-Encoding header >>> on your response (Chrome, Firefox and IE all have slightly different >>> Accept-Encoding headers). See >>> http://varnish-cache.org/wiki/FAQ/Compression and add the snippet to >>> normalise the Accept-Encoding header in vcl_recv. You don't normally >>> need to customise vcl_hash. >>> >> thanks for that link. ?i've customized vcl_hash to exclude >> req.http.host which i dont want as part of the hash. my understanding >> was that this was included by default. > > It may be better to set req.host to a canonical value in vcl_recv, > that way you can be sure that your backends will always produce > consistent responses (so you don't end up with some urls pointing to > http://example.com and others to http://example.org). > > My point was rather that the requests still hash the same, you're > getting a different object on Vary. vcl_hash lets you set the cache > path based only on the request, vcl_fetch and Vary let you set the > cache path further based on the response. It's not yet shown on the > flowchart at http://varnish-cache.org/wiki/VCLExampleDefault. so how is that cache path that is set in vcf_fetch based on response reached after the object is stored and then re-requested? 
> > Laurence > From l at lrowe.co.uk Sun Apr 11 14:59:14 2010 From: l at lrowe.co.uk (Laurence Rowe) Date: Sun, 11 Apr 2010 15:59:14 +0100 Subject: different object hashed for firefox and chrome In-Reply-To: References: Message-ID: On 10 April 2010 01:23, David Birdsong wrote: >> My point was rather that the requests still hash the same, you're >> getting a different object on Vary. vcl_hash lets you set the cache >> path based only on the request, vcl_fetch and Vary let you set the >> cache path further based on the response. It's not yet shown on the >> flowchart at http://varnish-cache.org/wiki/VCLExampleDefault. > > so how is that cache path that is set in vcl_fetch based on response > reached after the object is stored and then re-requested? Based on the request headers specified in the Vary header of the response. e.g. (Response) Vary: Accept-Encoding, Accept-Language will cause different objects to be returned for each variation of req.http.Accept-Encoding * req.http.Accept-Language Laurence From sfoutrel at bcstechno.com Mon Apr 12 08:33:04 2010 From: sfoutrel at bcstechno.com (Sébastien FOUTREL) Date: Mon, 12 Apr 2010 10:33:04 +0200 Subject: File storage problem. Message-ID: Hello, We use Varnish 2.0.6 on a production site that does about 100Mb/s and 1K hits/s per cache. We found a strange phenomenon when we added a 24h TTL to images. The file storage seems to have some garbage collection that deeply impacts performance when freeing a large amount of cache. We initially used an 8GB file storage and encountered some GC at about 80% usage that freed almost all file storage (monitored with cacti using the sm_balloc, sm_bfree values). We tried on one of our caches to test with 32GB and 64GB files, but the same happened, even worse: on the 64GB file it did the GC at 30GB and freed until sm_balloc reached 2GB. We do not understand what this is or how to control it, so does someone have ideas about this problem?
These are some infos about the server : Hardware/OS/updates is same on cache02 and cache05 root at cache02:~# ps auxwww|grep varnishd root 22731 0.0 0.0 110836 1164 ? Ss Apr09 0:00 /usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -f /etc/varnish/default.vcl -T :6082 -t 60 -w 100,1000,120 -s file,/var/lib/varnish/cache02/varnish_storage.bin,8G nobody 22732 11.5 86.9 10972872 7109076 ? Sl Apr09 535:23 /usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -f /etc/varnish/default.vcl -T :6082 -t 60 -w 100,1000,120 -s file,/var/lib/varnish/cache02/varnish_storage.bin,8G root at cache05:~# uname -a Linux cache05 2.6.27-11-generic #1 SMP Thu Jan 29 19:28:32 UTC 2009 x86_64 GNU/Linux root at cache05:~# free total used free shared buffers cached Mem: 8178736 8130800 47936 0 30188 6606748 -/+ buffers/cache: 1493864 6684872 Swap: 0 0 0 Graphs for cache02 and cache05 [cid:image001.png at 01CADA2A.BEDC7140] [cid:image002.png at 01CADA2B.30A09680] [cid:image003.png at 01CADA2B.30A09680] [cid:image004.png at 01CADA2B.8893FFD0] [cid:image005.png at 01CADA2B.8893FFD0] [cid:image006.png at 01CADA2B.8893FFD0] Sorry for the email size. -- S?bastien FOUTREL -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 19730 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 26035 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 23628 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 13307 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image005.png Type: image/png Size: 21852 bytes Desc: image005.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 19196 bytes Desc: image006.png URL: From dirk.taggesell at proximic.com Mon Apr 12 16:18:34 2010 From: dirk.taggesell at proximic.com (Dirk Taggesell) Date: Mon, 12 Apr 2010 18:18:34 +0200 Subject: how storing objects for a given time, regardless of expire info from back-end? Message-ID: <4BC347DA.3000505@proximic.com> Hi all, I am just trying to get familiar with varnish. I installed the latest version 2.1.0 and with the minimal config it works as intended: backend default { .host = "backend.example.com"; .port = "80"; } But the back-end sends an expires-header of "0", thus varnish tries to fetch the item again from the back-end for every client requesting it. So I need a simple rule that tells varnish to just cache every object for say 2h, regardless what the back-end says. I just have to prevent varnish to ask the back-end at every request. Unfortunately the config language is not entirely self-explaining. and most expamples on the web won't work (too old?). Can anyone help me? -- Dirk Taggesell From rtshilston at gmail.com Mon Apr 12 16:22:28 2010 From: rtshilston at gmail.com (Rob S) Date: Mon, 12 Apr 2010 17:22:28 +0100 Subject: how storing objects for a given time, regardless of expire info from back-end? In-Reply-To: <4BC347DA.3000505@proximic.com> References: <4BC347DA.3000505@proximic.com> Message-ID: <4BC348C4.9000008@gmail.com> Dirk, The example at http://varnish-cache.org/wiki/VCLExampleIgnoreCacheHeadersFromBackend should work fine. In vcl_fetch, pop: set obj.ttl = 7200s; and everything will be cached for 2hrs. If that doesn't work, then can you post the full VCL you're using. We'll take a look, and update the website if necessary. Rob Dirk Taggesell wrote: > Hi all, > > I am just trying to get familiar with varnish. 
I installed the latest > version 2.1.0 and with the minimal config it works as intended: > > backend default { > .host = "backend.example.com"; > .port = "80"; > } > > But the back-end sends an expires-header of "0", thus varnish tries to > fetch the item again from the back-end for every client requesting it. > So I need a simple rule that tells varnish to just cache every object > for say 2h, regardless what the back-end says. I just have to prevent > varnish to ask the back-end at every request. > > Unfortunately the config language is not entirely self-explaining. and > most expamples on the web won't work (too old?). > Can anyone help me? > > From richard.chiswell at mangahigh.com Mon Apr 12 16:23:48 2010 From: richard.chiswell at mangahigh.com (Richard Chiswell) Date: Mon, 12 Apr 2010 17:23:48 +0100 Subject: how storing objects for a given time, regardless of expire info from back-end? In-Reply-To: <4BC347DA.3000505@proximic.com> References: <4BC347DA.3000505@proximic.com> Message-ID: <4BC34914.4080303@mangahigh.com> You'll want something like: sub vcl_fetch { unset obj.http.Expires; unset obj.http.Cache-Control; set obj.ttl = 60m; // cache for 60 minutes within Varnish set obj.http.Cache-Control = "public, max-age=2678400"; // cache for 31 days at User end deliver; } However, depending on exactly what you want to cache for who and for how long (do you want to cache things for people with cookies? do you really want to cache a 1Gb .avi file within Varnish? etc), the configuration can get a bit complex. Rich On 12/04/2010 17:18, Dirk Taggesell wrote: > Hi all, > > I am just trying to get familiar with varnish. I installed the latest > version 2.1.0 and with the minimal config it works as intended: > > backend default { > .host = "backend.example.com"; > .port = "80"; > } > > But the back-end sends an expires-header of "0", thus varnish tries to > fetch the item again from the back-end for every client requesting it. 
> So I need a simple rule that tells varnish to just cache every object > for say 2h, regardless of what the back-end says. I just have to prevent > varnish from asking the back-end at every request. > > Unfortunately the config language is not entirely self-explanatory, and > most examples on the web won't work (too old?). > Can anyone help me? > > From dirk.taggesell at proximic.com Mon Apr 12 16:26:39 2010 From: dirk.taggesell at proximic.com (Dirk Taggesell) Date: Mon, 12 Apr 2010 18:26:39 +0200 Subject: how storing objects for a given time, regardless of expire info from back-end? In-Reply-To: <4BC34914.4080303@mangahigh.com> References: <4BC347DA.3000505@proximic.com> <4BC34914.4080303@mangahigh.com> Message-ID: <4BC349BF.5040307@proximic.com> On 12.04.10 18:23, Richard Chiswell wrote: > You'll want something like: Thanks, Richard, will try that. > However, depending on exactly what you want to cache for who and for how > long (do you want to cache things for people with cookies? do you really > want to cache a 1Gb .avi file within Varnish? etc), the configuration > can get a bit complex. This is no issue as the back-end delivers only very small objects, but it is somewhat slow. There's definitely nothing big coming from the back-end. -- With kind regards, Dirk Taggesell Systems Administrator Proximic GmbH From dirk.taggesell at proximic.com Mon Apr 12 16:27:18 2010 From: dirk.taggesell at proximic.com (Dirk Taggesell) Date: Mon, 12 Apr 2010 18:27:18 +0200 Subject: how storing objects for a given time, regardless of expire info from back-end? In-Reply-To: <4BC348C4.9000008@gmail.com> References: <4BC347DA.3000505@proximic.com> <4BC348C4.9000008@gmail.com> Message-ID: <4BC349E6.5010907@proximic.com> On 12.04.10 18:22, Rob S wrote: > The example at > http://varnish-cache.org/wiki/VCLExampleIgnoreCacheHeadersFromBackend > should work fine. > > In vcl_fetch, put: > > set obj.ttl = 7200s; > > and everything will be cached for 2hrs. Thanks Rob, will try that.
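A note for readers following this thread on Varnish 2.1 (the version Dirk reports running): as noted elsewhere in this archive, obj.* inside vcl_fetch was renamed beresp.* in 2.1, so the obj.ttl examples above would translate to something like the following untested sketch:

```vcl
# Hedged sketch for Varnish 2.1: force a 2h lifetime regardless of the
# backend's Expires/Cache-Control. In 2.1 the fetched object is "beresp"
# inside vcl_fetch, not "obj" as in the 2.0-style examples quoted above.
sub vcl_fetch {
    unset beresp.http.Expires;
    unset beresp.http.Cache-Control;
    set beresp.ttl = 7200s;
    return (deliver);
}
```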
-- With kind regards, Dirk Taggesell Systems Administrator Proximic GmbH From rtshilston at gmail.com Mon Apr 12 16:39:04 2010 From: rtshilston at gmail.com (Rob S) Date: Mon, 12 Apr 2010 17:39:04 +0100 Subject: Mobile redirects Message-ID: <4BC34CA8.2060403@gmail.com> Hi, I can see that Alexc's example at http://varnish-cache.org/wiki/VCLExampleAlexc incorporates logic for redirecting users on mobile devices. How have other people solved this? Has anyone got anything to share? Thanks, Rob From A.Hongens at netmatch.nl Mon Apr 12 16:47:18 2010 From: A.Hongens at netmatch.nl (=?iso-8859-1?Q?Angelo_H=F6ngens?=) Date: Mon, 12 Apr 2010 18:47:18 +0200 Subject: server name in config? Message-ID: <2903443B3710364B814B820238DDEF2CA761B62C@TIL-EXCH-01.netmatch.local> Hey, Is there a way to use fields in the VCL like the current server's name? I'd like to output some extra header in the response like 'X-Varnish-host: balancer1.domain.local' or something like that, and use the balancer hostname in the custom error message I've created. (I could of course change the config for each balancer, but I'd like to keep one single config for all balancer nodes, easier to copy around and such.) If this is not possible, please consider it a feature request ;) Thanks in advance. -- With kind regards, Angelo Höngens Systems Administrator ------------------------------------------ NetMatch tourism internet software solutions Ringbaan Oost 2b 5013 CA Tilburg T: +31 (0)13 5811088 F: +31 (0)13 5821239 mailto:A.Hongens at netmatch.nl http://www.netmatch.nl ------------------------------------------ From richard.chiswell at mangahigh.com Mon Apr 12 16:53:25 2010 From: richard.chiswell at mangahigh.com (Richard Chiswell) Date: Mon, 12 Apr 2010 17:53:25 +0100 Subject: server name in config?
In-Reply-To: <2903443B3710364B814B820238DDEF2CA761B62C@TIL-EXCH-01.netmatch.local> References: <2903443B3710364B814B820238DDEF2CA761B62C@TIL-EXCH-01.netmatch.local> Message-ID: <4BC35005.8000903@mangahigh.com> Try something like: sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-MH-Cache = "HIT " obj.hits " " server.hostname " " resp.http.Age; } else { set resp.http.X-MH-Cache = "MISS " server.hostname " " resp.http.Age; } } Richard On 12/04/2010 17:47, Angelo Höngens wrote: > Hey, > > Is there a way to use fields in the VCL like the current server's name? I'd like to output some extra header in the response like 'X-Varnish-host: balancer1.domain.local' or something like that, and use the balancer hostname in the custom error message I've created. > > (I could of course change the config for each balancer, but I'd like to keep one single config for all balancer nodes, easier to copy around and such.) > > If this is not possible, please consider it a feature request ;) Thanks in advance. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.hongens at netmatch.nl Mon Apr 12 17:04:51 2010 From: a.hongens at netmatch.nl (=?ISO-8859-1?Q?Angelo_H=F6ngens?=) Date: Mon, 12 Apr 2010 19:04:51 +0200 Subject: server name in config?
In-Reply-To: <4BC35005.8000903@mangahigh.com> References: <2903443B3710364B814B820238DDEF2CA761B62C@TIL-EXCH-01.netmatch.local> <4BC35005.8000903@mangahigh.com> Message-ID: <4BC352B3.5000008@netmatch.nl> On 12-4-2010 18:53, Richard Chiswell wrote: > Try something like: > sub vcl_deliver { > if (obj.hits > 0) { > set resp.http.X-MH-Cache = "HIT " obj.hits " " > server.hostname " " resp.http.Age; > } else { > set resp.http.X-MH-Cache = "MISS " server.hostname " " > resp.http.Age; > } > } > Thanks, the server.hostname field works like a charm :) -- With kind regards, Angelo Höngens systems administrator MCSE on Windows 2003 MCSE on Windows 2000 MS Small Business Specialist ------------------------------------------ NetMatch tourism internet software solutions Ringbaan Oost 2b 5013 CA Tilburg +31 (0)13 5811088 +31 (0)13 5821239 A.Hongens at netmatch.nl www.netmatch.nl ------------------------------------------ From paul.p.carey at gmail.com Mon Apr 12 18:17:31 2010 From: paul.p.carey at gmail.com (Paul Carey) Date: Mon, 12 Apr 2010 19:17:31 +0100 Subject: Proxying POST body through Varnish Message-ID: Hi I've been trying out Varnish with CouchDB. CouchDB is a document-oriented data store where documents are keyed by id. To retrieve a list of documents from CouchDB, a POST is made to /db_name/_all_docs with a JSON-encoded list of keys as the body. For example: curl -X POST -d '{"keys":["bar","foo"]}' 127.0.0.1/scratch/_all_docs I'd like Varnish to cache these requests. My VCL config defines vcl_recv and doesn't 'pass' on POSTs that match _all_docs. However, CouchDB doesn't like the POST requests proxied through Varnish, returning an error message stating 'invalid UTF-8 JSON'. I suspect Varnish is stripping the POST body. I say this because when I run varnishlog I see a Rx Content-Length header received by Varnish but not a corresponding Tx header proxied to CouchDB.
If it's likely that this is what's happening, is there any way to get Varnish to pass the POST body along? Many thanks Paul From perbu at varnish-software.com Mon Apr 12 19:26:31 2010 From: perbu at varnish-software.com (Per Buer) Date: Mon, 12 Apr 2010 21:26:31 +0200 Subject: how storing objects for a given time, regardless of expire info from back-end? In-Reply-To: <4BC347DA.3000505@proximic.com> References: <4BC347DA.3000505@proximic.com> Message-ID: On Mon, Apr 12, 2010 at 6:18 PM, Dirk Taggesell wrote: > > Unfortunately the config language is not entirely self-explanatory, and > most examples on the web won't work (too old?). > Can anyone help me? I guess setting beresp.ttl = 7200s; in vcl_fetch should do it (untested). -- Per Buer, Varnish Software Phone: +47 21 54 41 21 / Mobile: +47 958 39 117 / skype: per.buer From cosimo at streppone.it Mon Apr 12 19:56:37 2010 From: cosimo at streppone.it (Cosimo Streppone) Date: Mon, 12 Apr 2010 21:56:37 +0200 Subject: File storage problem. In-Reply-To: References: Message-ID: On 12 April 2010 at 10:33:04, Sébastien FOUTREL wrote: > We use Varnish 2.0.6 on a production site that does about 100Mb/s and 1K > Hits/s per cache. > > We found a strange phenomenon when we added a 24h TTL to images. > > The file storage seems to have some garbage collection that deeply > impacts performance when freeing some large amount of cache. > > We initially used an 8GB file storage and encountered GC at about 80% > usage that freed almost all file storage (monitored with cacti using > sm_balloc,sm_bfree values). > > We tried on one of our caches to test with 32GB and 64GB files but the same > happened, even worse: on the 64GB file it did the GC at 30GB and > freed until sm_balloc reached 2GB. > > We do not understand what this is or how to control it, so does anyone have > ideas about this problem? Hi Sébastien, I'm not sure you have the same problem I experienced, but it looks very much the same.
Read this mail from the archives, especially the "CPU IO wait" paragraph, http://www.mail-archive.com/varnish-misc at projects.linpro.no/msg01571.html it's very old as it refers to 1.1.2, but as soon as we switched to the "malloc" storage as suggested there, all iowait/load problems magically disappeared. See also here: http://varnish-cache.org/wiki/Performance We're not using any swap, as we have servers with enough ram to keep a useful share of the file set in memory. I think it's worth a try. -- Cosimo From david.birdsong at gmail.com Mon Apr 12 21:50:42 2010 From: david.birdsong at gmail.com (David Birdsong) Date: Mon, 12 Apr 2010 14:50:42 -0700 Subject: influencing beresp.cacheable on the backend Message-ID: How can I influence beresp.cacheable in a backend such that it will evaluate to False? I set Expires and Cache-Control in the backend, this is what the backend generates: HTTP/1.1 200 OK Server: nginx/0.7.65 Date: Mon, 12 Apr 2010 21:48:25 GMT Content-Type: image/gif Content-Length: 43 Last-Modified: Mon, 28 Sep 1970 06:00:00 GMT Connection: close Expires: Thu, 01 Jan 1970 00:00:01 GMT Cache-Control: no-cache From the man page: """ A response is considered cacheable if it is valid (see above), the HTTP status code is 200, 203, 300, 301, 302, 404 or 410 and it has a non-zero time-to-live when Expires and Cache-Control headers are taken into account. """ And yet this object is getting cached in varnish.
Here is my vcl_fetch: sub vcl_fetch { if (beresp.http.Set-Cookie) { unset beresp.http.Set-Cookie; } if (beresp.cacheable) { unset beresp.http.expires; set beresp.ttl = 1h; if (beresp.status >= 300 && beresp.status <= 399) { set beresp.ttl = 10m; } if (beresp.status >= 399) { set beresp.ttl = 0s; } } remove beresp.http.X-Varnish-IP; remove beresp.http.X-Varnish-Port; } From l at lrowe.co.uk Mon Apr 12 23:49:37 2010 From: l at lrowe.co.uk (Laurence Rowe) Date: Tue, 13 Apr 2010 00:49:37 +0100 Subject: influencing beresp.cacheable on the backend In-Reply-To: References: Message-ID: On 12 April 2010 22:50, David Birdsong wrote: > How can I influence bereps.cacheable in a backend such that it will > evaluate to False? beresp.cacheable is not re-evaluated. Decide whether to cache an object or not by either returning pass or deliver, as is done by the default VCL: sub vcl_fetch { if (!beresp.cacheable) { return (pass); } if (beresp.http.Set-Cookie) { return (pass); } return (deliver); } Laurence From david.birdsong at gmail.com Mon Apr 12 23:58:20 2010 From: david.birdsong at gmail.com (David Birdsong) Date: Mon, 12 Apr 2010 16:58:20 -0700 Subject: influencing beresp.cacheable on the backend In-Reply-To: References: Message-ID: On Mon, Apr 12, 2010 at 4:49 PM, Laurence Rowe wrote: > On 12 April 2010 22:50, David Birdsong wrote: >> How can I influence bereps.cacheable in a backend such that it will >> evaluate to False? > > beresp.cacheable is not re-evaluated. Decide whether to cache an > object or not by either returning pass or deliver, as is done by the > default VCL: > Ok, but what I'm trying to figure out is what does it take in a backend response for beresp.cacheable to be False. The backend set 'Cache-Control: no-cache' and 'Expires: ', shouldn't that response be uncacheable? I'm finding that that response is cached. > sub vcl_fetch { > ? ?if (!beresp.cacheable) { > ? ? ? ?return (pass); > ? ?} > ? ?if (beresp.http.Set-Cookie) { > ? ? ? ?return (pass); > ? ?} > ? 
?return (deliver); > } > > Laurence > From hap at spamproof.nospammail.net Tue Apr 13 00:45:10 2010 From: hap at spamproof.nospammail.net (Hank A. Paulson) Date: Mon, 12 Apr 2010 17:45:10 -0700 Subject: The quest for 1e-6 Message-ID: <4BC3BE96.2040208@spamproof.nospammail.net> I was curious if people see much activity in the 1e-5 to 1e-6 range on varnishhist, I occasionally get one or two vertical bars on the 1e-6 marker, but nothing regular. See the below screenshot. Any parameters I can tune to reduce the response time to the 1e-5 to 1e-6 range? Fedora 12 VM on Fedora Core 8 on E5520/2.27GHz running Varnish 2.0.6 and varnish-2.1 SVN 4640:4641 - Thanks. | | | | | | | | | | | | | | || | | || | | || | | || | | || | | || | | || | | || | | || | # | || | # | |||| # | |||| # || |||| # || |||| ## || ||||| ## || ||||| ## || ||||| ### || ||||| #### || ||||| #### || ||||| #### || ||||| #### || ||||| ##### || |||||| ##### |||||||||| ###### |||||||||| ###### ||||||||||| ####### ||||||||||| | ######## +---------------------+---------------------+---------------------+ |1e-6 |1e-5 |1e-4 |1e-3 From hijinks at gmail.com Tue Apr 13 02:57:52 2010 From: hijinks at gmail.com (Mike) Date: Mon, 12 Apr 2010 22:57:52 -0400 Subject: The quest for 1e-6 In-Reply-To: <4BC3BE96.2040208@spamproof.nospammail.net> References: <4BC3BE96.2040208@spamproof.nospammail.net> Message-ID: Here is mine.. we get a lot more if traffic is high.. this is about 2 minutes of running varnishhist centos 5 dual quad core amd 2374 | || || || || || || || || ||| ||| # ||| # ||| # |||| ## |||| ## |||| ## |||| ## |||| ## |||| ## |||| ## |||| ## ||||| ## |||||| ### | |||||| # #### | ||||||| # #### | ||||||| # #### | ||||||||| # ##### +-------------+-------------+-------------+------------- |1e-6 |1e-5 |1e-4 |1e-3 On Mon, Apr 12, 2010 at 8:45 PM, Hank A. 
Paulson < hap at spamproof.nospammail.net> wrote: > I was curious if people see much activity in the 1e-5 to 1e-6 range on > varnishhist, I occasionally get one or two vertical bars on the 1e-6 marker, > but nothing regular. See the below screenshot. > > Any parameters I can tune to reduce the response time to the 1e-5 to 1e-6 > range? Fedora 12 VM on Fedora Core 8 on E5520/2.27GHz running Varnish 2.0.6 > and varnish-2.1 SVN 4640:4641 - Thanks. > > > | > | > | > | > | | > | | > | | > | | | > | || | > | || | > | || | > | || | > | || | > | || | > | || | > | || | > | || | # > | || | # > | |||| # > | |||| # > || |||| # > || |||| ## > || ||||| ## > || ||||| ## > || ||||| ### > || ||||| #### > || ||||| #### > || ||||| #### > || ||||| #### > || ||||| ##### > || |||||| ##### > |||||||||| ###### > |||||||||| ###### > ||||||||||| ####### > ||||||||||| | ######## > +---------------------+---------------------+---------------------+ > |1e-6 |1e-5 |1e-4 |1e-3 > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mahesho at rediff.co.in Tue Apr 13 06:10:53 2010 From: mahesho at rediff.co.in (Mahesh Ollalwar) Date: Tue, 13 Apr 2010 11:40:53 +0530 Subject: obj.status and obj.cacheable in varnish 2.1 Message-ID: <4BC40AED.20605@rediff.co.in> An HTML attachment was scrubbed... URL: From A.Hongens at netmatch.nl Tue Apr 13 06:20:09 2010 From: A.Hongens at netmatch.nl (=?iso-8859-1?Q?Angelo_H=F6ngens?=) Date: Tue, 13 Apr 2010 08:20:09 +0200 Subject: x-forwarded-for problems since 2.1 Message-ID: <2903443B3710364B814B820238DDEF2CA761B637@TIL-EXCH-01.netmatch.local> In my vcl_recv I have: remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = req.http.rlnclientipaddr; Since I upgraded to 2.1 yesterday, the header is no longer sent to backends.. Any ideas? 
-- With kind regards, Angelo Höngens Systems Administrator ------------------------------------------ NetMatch tourism internet software solutions Ringbaan Oost 2b 5013 CA Tilburg T: +31 (0)13 5811088 F: +31 (0)13 5821239 mailto:A.Hongens at netmatch.nl http://www.netmatch.nl ------------------------------------------ From phk at phk.freebsd.dk Tue Apr 13 06:28:33 2010 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 13 Apr 2010 06:28:33 +0000 Subject: The quest for 1e-6 In-Reply-To: Your message of "Mon, 12 Apr 2010 17:45:10 MST." <4BC3BE96.2040208@spamproof.nospammail.net> Message-ID: <64419.1271140113@critter.freebsd.dk> In message <4BC3BE96.2040208 at spamproof.nospammail.net>, "Hank A. Paulson" writes: >I was curious if people see much activity in the 1e-5 to 1e-6 range on >varnishhist, I occasionally get one or two vertical bars on the 1e-6 marker, >but nothing regular. See the below screenshot. Most likely your 1e-6 are handled by vcl_error. I don't think there is much chance of significantly breaching the 1e-5 barrier on current hardware. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence.
From A.Hongens at netmatch.nl Tue Apr 13 06:31:41 2010 From: A.Hongens at netmatch.nl (=?iso-8859-1?Q?Angelo_H=F6ngens?=) Date: Tue, 13 Apr 2010 08:31:41 +0200 Subject: x-forwarded-for problems since 2.1 In-Reply-To: <2903443B3710364B814B820238DDEF2CA761B637@TIL-EXCH-01.netmatch.local> References: <2903443B3710364B814B820238DDEF2CA761B637@TIL-EXCH-01.netmatch.local> Message-ID: <2903443B3710364B814B820238DDEF2CA761B639@TIL-EXCH-01.netmatch.local> Fixed, found it in http://varnish-cache.org/changeset/4467 -- With kind regards, Angelo Höngens Systems Administrator ------------------------------------------ NetMatch tourism internet software solutions Ringbaan Oost 2b 5013 CA Tilburg T: +31 (0)13 5811088 F: +31 (0)13 5821239 mailto:A.Hongens at netmatch.nl http://www.netmatch.nl ------------------------------------------ > -----Original Message----- > From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc- > bounces at varnish-cache.org] On Behalf Of Angelo Höngens > Sent: Tuesday, 13 April 2010 8:20 > To: 'varnish-misc at varnish-cache.org' > Subject: x-forwarded-for problems since 2.1 > > > In my vcl_recv I have: > > remove req.http.X-Forwarded-For; > set req.http.X-Forwarded-For = req.http.rlnclientipaddr; > > Since I upgraded to 2.1 yesterday, the header is no longer sent to > backends. Any ideas?
> > -- > > > With kind regards, > > > Angelo H?ngens > > Systems Administrator > > ------------------------------------------ > NetMatch > tourism internet software solutions > > Ringbaan Oost 2b > 5013 CA Tilburg > T: +31 (0)13 5811088 > F: +31 (0)13 5821239 > > mailto:A.Hongens at netmatch.nl > http://www.netmatch.nl > ------------------------------------------ > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc From David at firechaser.com Tue Apr 13 06:35:37 2010 From: David at firechaser.com (David Murphy) Date: Tue, 13 Apr 2010 07:35:37 +0100 Subject: obj.status and obj.cacheable in varnish 2.1 In-Reply-To: <4BC40AED.20605@rediff.co.in> References: <4BC40AED.20605@rediff.co.in> Message-ID: <0A0A34BA-BE20-4FBB-8562-A66D6629F062@firechaser.com> Hi Mahesh obj.* is called beresp.* in vcl_fetch in Varnish 2.1. Best, David On 13 Apr 2010, at 08:10, Mahesh Ollalwar wrote: Hi, I'm getting below errors in Varnish 2.1 whereas it was working in the older versions. Message from VCC-compiler: Variable 'obj.cacheable' not accessible in method 'vcl_fetch'. At: (input Line 349 Pos 14) if (!obj.cacheable) { -------------#############--- Running VCC-compiler failed, exit 1 Message from VCC-compiler: Variable 'obj.status' not accessible in method 'vcl_fetch'. At: (input Line 357 Pos 13) if (obj.status == 500 || obj.status == 501 || obj.status == 502 || obj.status == 503 || obj.status == 504 || obj.status == 404){ ------------##########------------------------------------------------------------------------------------------------------------------ Running VCC-compiler failed, exit 1 Can anyone help me ? Thanks, Mahesh. 
_______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://lists.varnish-cache.org/mailman/listinfo/varnish-misc --------- David Murphy Technical Director Firechaser Limited 78 York Street London W1H 1DP 0870 735 0800 www.firechaser.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mahesho at rediff.co.in Tue Apr 13 06:42:01 2010 From: mahesho at rediff.co.in (Mahesh Ollalwar) Date: Tue, 13 Apr 2010 12:12:01 +0530 Subject: obj.status and obj.cacheable in varnish 2.1 In-Reply-To: <0A0A34BA-BE20-4FBB-8562-A66D6629F062@firechaser.com> References: <4BC40AED.20605@rediff.co.in> <0A0A34BA-BE20-4FBB-8562-A66D6629F062@firechaser.com> Message-ID: <4BC41239.6060705@rediff.co.in> An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Tue Apr 13 06:42:58 2010 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 13 Apr 2010 06:42:58 +0000 Subject: x-forwarded-for problems since 2.1 In-Reply-To: Your message of "Tue, 13 Apr 2010 08:20:09 +0200." <2903443B3710364B814B820238DDEF2CA761B637@TIL-EXCH-01.netmatch.local> Message-ID: <64571.1271140978@critter.freebsd.dk> In message <2903443B3710364B814B820238DDEF2CA761B637 at TIL-EXCH-01.netmatch.local >, =?iso-8859-1?Q?Angelo_H=F6ngens?= writes: > >In my vcl_recv I have: > >remove req.http.X-Forwarded-For; >set req.http.X-Forwarded-For = req.http.rlnclientipaddr; > >Since I upgraded to 2.1 yesterday, the header is no longer sent to backends >.. Any ideas? In 2.1 we have moved X-F-F processing to the default VCL; you need to make sure you do not hit that code if you want to do your own X-F-F processing. Poul-Henning PS: I wonder if we should change the default.vcl to not touch an existing X-F-F header by default? Input from the list?
-- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From a.hongens at netmatch.nl Tue Apr 13 06:50:39 2010 From: a.hongens at netmatch.nl (=?ISO-8859-1?Q?Angelo_H=F6ngens?=) Date: Tue, 13 Apr 2010 08:50:39 +0200 Subject: x-forwarded-for problems since 2.1 In-Reply-To: <64571.1271140978@critter.freebsd.dk> References: <64571.1271140978@critter.freebsd.dk> Message-ID: <4BC4143F.9070402@netmatch.nl> On 13-4-2010 8:42, Poul-Henning Kamp wrote: > In message <2903443B3710364B814B820238DDEF2CA761B637 at TIL-EXCH-01.netmatch.local >> , =?iso-8859-1?Q?Angelo_H=F6ngens?= writes: >> >> In my vcl_recv I have: >> >> remove req.http.X-Forwarded-For; >> set req.http.X-Forwarded-For = req.http.rlnclientipaddr; >> >> Since I upgraded to 2.1 yesterday, the header is no longer sent to backends >> .. Any ideas? > > In 2.1 we have moved X-F-F processing to the default VCL, you need > to make sure you do not hit that code if you want to do your own > X-F-F processing. > > Poul-Henning > > PS: I wonder if we should change the default.vcl to not touch an > existing X-F-F header by default ? Input from the list ? > I don't think that was the problem here; I did handle the header in my vcl_recv (as per http://varnish-cache.org/wiki/VCLExampleAlexc), but suddenly the header was missing. Now I changed the value being set from req.http.rlnclientipaddr to client.ip, and now it's setting the header again. I don't want to use the default handling of the header, because I want to specifically remove all XFF headers the client requests come with, not just add mine. However, I think the default behaviour you describe in http://varnish-cache.org/changeset/4467, adding an XFF header to an existing one if there is one, is exactly what I expect a proxy to do (and squid does this in the same way).
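To tie this thread together, here is a hedged, untested sketch of the approach Angelo describes (Varnish 2.1 syntax): strip whatever X-Forwarded-For the client sent and set your own from the connecting address. Returning explicitly from vcl_recv is what keeps the appended default-VCL X-F-F code (which phk mentions above) from also running; an unconditional return (lookup) is shown only for illustration.

```vcl
sub vcl_recv {
    # Drop any client-supplied X-Forwarded-For and set our own from the
    # connecting address (the value Angelo switched to on 2.1).
    remove req.http.X-Forwarded-For;
    set req.http.X-Forwarded-For = client.ip;
    # Returning here keeps the built-in default vcl_recv code, which in
    # 2.1 does its own X-F-F handling, from being reached. A real config
    # would make its normal lookup/pass/pipe decisions before returning.
    return (lookup);
}
```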
-- With kind regards, Angelo Höngens systems administrator MCSE on Windows 2003 MCSE on Windows 2000 MS Small Business Specialist ------------------------------------------ NetMatch tourism internet software solutions Ringbaan Oost 2b 5013 CA Tilburg +31 (0)13 5811088 +31 (0)13 5821239 A.Hongens at netmatch.nl www.netmatch.nl ------------------------------------------ From David at firechaser.com Tue Apr 13 06:52:21 2010 From: David at firechaser.com (David Murphy) Date: Tue, 13 Apr 2010 07:52:21 +0100 Subject: obj.status and obj.cacheable in varnish 2.1 In-Reply-To: <4BC41239.6060705@rediff.co.in> References: <4BC40AED.20605@rediff.co.in> <0A0A34BA-BE20-4FBB-8562-A66D6629F062@firechaser.com>, <4BC41239.6060705@rediff.co.in> Message-ID: <345BD8B3F8775748A4676A625EADA22417D87977@DURAN.firechaser.local> Snippet from my vcl working under 2.1, which includes some debug header responses ( http://varnish-cache.org/wiki/VCLExampleHitMissHeader ) to see what's going on: # Varnish determined the object was not cacheable if (!beresp.cacheable) { set beresp.http.X-Cacheable = "NO:Not Cacheable"; # You don't wish to cache content for logged in users } elsif(req.http.Cookie ~"(UserID|_session)") { set beresp.http.X-Cacheable = "NO:Got Session"; return(pass); # You are respecting the Cache-Control=private header from the backend } elsif ( beresp.http.Cache-Control ~ "private") { set beresp.http.X-Cacheable = "NO:Cache-Control=private"; return(pass); # You are extending the lifetime of the object artificially } elsif ( beresp.ttl < 1s ) { set beresp.ttl = 5s; set beresp.grace = 5s; set beresp.http.X-Cacheable = "YES:FORCED"; # Varnish determined the object was cacheable } else { set beresp.http.X-Cacheable = "YES"; } Best, David ________________________________________ From: Mahesh Ollalwar [mahesho at rediff.co.in] Sent: 13 April 2010 07:42 To: David Murphy; varnish-misc at varnish-cache.org Subject: Re: obj.status and obj.cacheable in varnish 2.1 Thanks David, What about
setting TTL in vcl_fetch Like, set obj.http.Cache-Control = "max-age=432000"; set obj.ttl = 5d; Thanks, Mahesh. On Tuesday 13 April 2010 12:05 PM, David Murphy wrote: Hi Mahesh obj.* is called beresp.* in vcl_fetch in Varnish 2.1. Best, David On 13 Apr 2010, at 08:10, Mahesh Ollalwar wrote: Hi, I'm getting below errors in Varnish 2.1 whereas it was working in the older versions. Message from VCC-compiler: Variable 'obj.cacheable' not accessible in method 'vcl_fetch'. At: (input Line 349 Pos 14) if (!obj.cacheable) { -------------#############--- Running VCC-compiler failed, exit 1 Message from VCC-compiler: Variable 'obj.status' not accessible in method 'vcl_fetch'. At: (input Line 357 Pos 13) if (obj.status == 500 || obj.status == 501 || obj.status == 502 || obj.status == 503 || obj.status == 504 || obj.status == 404){ ------------##########------------------------------------------------------------------------------------------------------------------ Running VCC-compiler failed, exit 1 Can anyone help me ? Thanks, Mahesh. 
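Applying David's rename to Mahesh's follow-up question: since obj.* is called beresp.* inside vcl_fetch under 2.1, Mahesh's 2.0-style TTL lines would become, as an untested sketch:

```vcl
# Mahesh's lines rewritten for Varnish 2.1: in vcl_fetch, "obj" is now
# "beresp", so the 5-day TTL and Cache-Control override look like this.
sub vcl_fetch {
    set beresp.http.Cache-Control = "max-age=432000";
    set beresp.ttl = 5d;
}
```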
_______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://lists.varnish-cache.org/mailman/listinfo/varnish-misc --------- David Murphy Technical Director Firechaser Limited 78 York Street London W1H 1DP 0870 735 0800 www.firechaser.com _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://lists.varnish-cache.org/mailman/listinfo/varnish-misc From cosimo at streppone.it Tue Apr 13 07:09:54 2010 From: cosimo at streppone.it (Cosimo Streppone) Date: Tue, 13 Apr 2010 09:09:54 +0200 Subject: x-forwarded-for problems since 2.1 In-Reply-To: <64571.1271140978@critter.freebsd.dk> References: <64571.1271140978@critter.freebsd.dk> Message-ID: On Tue, 13 Apr 2010 08:42:58 +0200, Poul-Henning Kamp wrote: > In 2.1 we have moved X-F-F processing to the default VCL, you need > to make sure you do not hit that code if you want to do your own > X-F-F processing. Good to know this before upgrading :) > PS: I wonder if we should change the default.vcl to not touch an > existing X-F-F header by default ? Input from the list ? One of the aspects of Varnish that I like the most is that it tries to be as transparent as possible, so: 1) No XFF if no XFF comes from the "client" 2) XFF untouched if client sends one. At that point, it would be nice to have some functions to "massage" the X-Forwarded-For header before it hits the backends. This could be useful if Varnish runs behind BigIP, for example for transparent SSL processing. We're probably going to try something like this in the near future. I might have time to experiment with this. Another solution we're thinking about is nginx listening on :443 and using localhost:6081 (local varnish instance) as "backend".
-- Cosimo From morten at startsiden.no Tue Apr 13 07:49:35 2010 From: morten at startsiden.no (Morten Bekkelund) Date: Tue, 13 Apr 2010 09:49:35 +0200 (CEST) Subject: Mobile redirects In-Reply-To: <4BC34CA8.2060403@gmail.com> Message-ID: <9021170.604251271144975214.JavaMail.root@ms1.startsiden.no> Hi Rob. I used the mobile redirect part of Alexc's example and created a simplified version on our test-servers. It worked well. http://dingleberry.me/2010/03/mobile-redirects-using-varnish/ Morten ----- Original Message ----- From: "Rob S" To: varnish-misc at varnish-cache.org Sent: Monday, April 12, 2010 6:39:04 PM GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna Subject: Mobile redirects Hi, I can see that Alexc's example at http://varnish-cache.org/wiki/VCLExampleAlexc incorporates logic for redirecting users on mobile devices. How have other people solved this? Has anyone got anything to share? Thanks, Rob _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://lists.varnish-cache.org/mailman/listinfo/varnish-misc From audun at ytterdal.net Tue Apr 13 08:06:13 2010 From: audun at ytterdal.net (Audun Ytterdal) Date: Tue, 13 Apr 2010 10:06:13 +0200 Subject: Mobile redirects In-Reply-To: <9021170.604251271144975214.JavaMail.root@ms1.startsiden.no> References: <4BC34CA8.2060403@gmail.com> <9021170.604251271144975214.JavaMail.root@ms1.startsiden.no> Message-ID: On Tue, Apr 13, 2010 at 9:49 AM, Morten Bekkelund wrote: > Hi Rob. > > I used the mobile redirect part of Alexc's example and created a simplified version > on our test-servers. It worked well. 
> > http://dingleberry.me/2010/03/mobile-redirects-using-varnish/ I've used this method instead: sub vcl_recv { <.....> call identify_device; } -- Audun Ytterdal http://audun.ytterdal.net From rtshilston at gmail.com Tue Apr 13 08:09:57 2010 From: rtshilston at gmail.com (Rob S) Date: Tue, 13 Apr 2010 09:09:57 +0100 Subject: Mobile redirects In-Reply-To: References: <4BC34CA8.2060403@gmail.com> <9021170.604251271144975214.JavaMail.root@ms1.startsiden.no> Message-ID: <4BC426D5.3050304@gmail.com> Audun Ytterdal wrote: > call identify_device; > Can you add a little more explanation? Is this a C routine you've written (and if so, can you share either the source, or an overview of what it's doing), or did you just move an Alexc-style user-agent detection into a separate VCL function? Rob From audun at ytterdal.net Tue Apr 13 08:09:50 2010 From: audun at ytterdal.net (Audun Ytterdal) Date: Tue, 13 Apr 2010 10:09:50 +0200 Subject: Mobile redirects In-Reply-To: References: <4BC34CA8.2060403@gmail.com> <9021170.604251271144975214.JavaMail.root@ms1.startsiden.no> Message-ID: Oops. Accidentally hit the send button... On Tue, Apr 13, 2010 at 10:06 AM, Audun Ytterdal wrote: > On Tue, Apr 13, 2010 at 9:49 AM, Morten Bekkelund wrote: >> Hi Rob. >> >> I used the mobile redirect part of Alexc's example and created a simplified version >> on our test-servers. It worked well. >> >> http://dingleberry.me/2010/03/mobile-redirects-using-varnish/ > I've used this method: sub vcl_recv { <.....>
?call identify_device; } sub identify_device { unset req.http.hash-input; set req.http.X-VG-Device = "pc"; if (!req.url ~ "^/[^?]+\.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.*|)$") { if (req.http.Cookie ~ "vg_nomobile") { set req.http.X-VG-Device = "pc-forced"; } elsif (req.http.User-Agent ~ "iP(hone|od)") { set req.http.X-VG-Device = "mobile-iphone"; } elsif (req.http.User-Agent ~ "^HTC" || req.http.User-Agent ~ "IEMobile" || req.http.User-Agent ~ "Android") { set req.http.X-VG-Device = "mobile-smartphone"; } elsif (req.http.User-Agent ~ "SymbianOS" || req.http.User-Agent ~ "^BlackBerry" || req.http.User-Agent ~ "^SonyEricsson" || req.http.User-Agent ~ "^Nokia" || req.http.User-Agent ~ "^SAMSUNG" || req.http.User-Agent ~ "^LG") { set req.http.X-VG-Device = "mobile-dumbphone"; } } if (req.http.X-VG-Device != "pc" && req.http.X-VG-Device != "pc-forced") { set req.http.hash-input = req.http.X-VG-Device; } } sub vcl_hash { set req.hash += req.http.hash-input; } And then the backend apache or the php-scripts can decide what to present to the client... So then you have a few choises 1) Apache sends redirect if X-VG-Devvice = ^mobile 2) Apache rewrites/proxyes 3) Php-code redirects or just presents a different template -- Audun Ytterdal http://audun.ytterdal.net From schmidt at ze.tum.de Tue Apr 13 12:04:06 2010 From: schmidt at ze.tum.de (Gerhard Schmidt) Date: Tue, 13 Apr 2010 14:04:06 +0200 Subject: varnish with ssl In-Reply-To: <0589D181-A825-4D6D-A3F1-01F3278A5FBC@slide.com> References: <4BBC7EC4.5010108@ze.tum.de> <0589D181-A825-4D6D-A3F1-01F3278A5FBC@slide.com> Message-ID: <4BC45DB6.8040209@ze.tum.de> Ken Brownfield wrote: > This is far-ranging problem that isn't unique to Varnish or SSL. What is typical of CDNs, load-balancers, and proxies of all sorts is to set a header with the IP of the request *it* received. That header is then passed down and can be parsed by your upstream. 
> X-Forwarded-For is the standard header for this, but the format and naming of this header can vary (no pun intended).
>
> You can imagine how fun it is to handle IPs for a client request that goes through a CDN's proxy/cache network, through your load-balancer, then Varnish, then the upstream web server:
>
> Client = 1.1.1.1
> CDN = 2.2.2.2
>   sets => CDN-Client-IP: 1.1.1.1
> LB (e.g., Pound) = 3.3.3.3
>   sets => LB-Client-IP: 2.2.2.2
> Varnish = 4.4.4.4
>   sets => X-Forwarded-For: 3.3.3.3
>
> Your upstream receives the request from 4.4.4.4 with the following headers:
> CDN-Client-IP: 1.1.1.1
> LB-Client-IP: 2.2.2.2
> X-Forwarded-For: 3.3.3.3
>
> You'll care about the highest level one (CDN-Client-IP in this case), something like:
>
> IP = CDN-Client-IP or LB-Client-IP or X-Forwarded-For or
>
> Hope it helps,

At least it would be consistent for both if varnish were able to handle both, without having to go through another system. KISS applies here too. Every new program adds new bugs, new security holes, and increases the maintenance work.

Squid does handle https requests, and so do all the other reverse proxies I know. It would make replacing squid with varnish a lot less painful.

I don't see the license problem. It should be optional: use it when OpenSSL is there, leave it out if not.

Regards
   Estartu

-- 
-------------------------------------------------
Gerhard Schmidt       | E-Mail: schmidt at ze.tum.de
TU-München            | WWW & Online Services
                      | Tel: 089/289-25270
                      | Fax: 089/289-25257
                      | PGP public key on request

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 543 bytes
Desc: OpenPGP digital signature
URL: 

From A.Hongens at netmatch.nl  Tue Apr 13 13:09:29 2010
From: A.Hongens at netmatch.nl (=?iso-8859-1?Q?Angelo_H=F6ngens?=)
Date: Tue, 13 Apr 2010 15:09:29 +0200
Subject: keeping varnishstat open will bring down server
Message-ID: <2903443B3710364B814B820238DDEF2CA761B759@TIL-EXCH-01.netmatch.local>

Hey guys,

I've seen something I'd like to share with you; perhaps it could be seen as a bug in varnishstat.

Yesterday I opened ssh sessions to my 4 balancers to run some scripts, and then I opened varnishstat to monitor them. A while later I had to leave in a rush and closed my laptop's lid, and in that process killed my VPN tunnel and ssh sessions. However, the varnishstat process (apparently) keeps running. (FreeBSD 7.2 x64)

Just a few hours ago (so around 16 hours later), I had one balancer die on me (become completely unresponsive, refuse connections to port 80). I immediately restarted varnishd, and I also saw a varnishstat instance eating 100% CPU, which I killed.
Now when I just looked at the other balancers, I saw the varnishstat instance using up a lot of CPU (only one out of 4 cores though):

last pid: 77863;  load averages: 1.40, 1.48, 1.47  up 105+00:24:26  14:56:40
166 processes: 2 running, 164 sleeping
CPU: 27.1% user, 0.0% nice, 4.2% system, 1.9% interrupt, 66.8% idle
Mem: 6430M Active, 550M Inact, 709M Wired, 189M Cache, 399M Buf, 32M Free
Swap: 4096M Total, 228M Used, 3868M Free, 5% Inuse

  PID USERNAME   THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
69587 root         1 112    0 95640K  1044K CPU3   3  19.1H 77.20% varnishstat
76211 haproxy      1   4    0 48928K 18944K kqread 1  16:34  3.17% haproxy
68762 www        116  44    0  8756M  6412M select 0   0:01  0.39% varnishd
31203 root         1  44    0   176M  5476K select 2 439:16  0.00% snmpd
69527 root         1   8    0 94312K 83384K nanslp 0  11:59  0.00% varnishncsa
37934 root         1   4    0 66244K  3164K kqread 0   8:46  0.00% squid
 1912 root         1  44    0 10484K   724K select 0   7:50  0.00% ntpd
 2036 root         1  44    0 85732K  3528K select 1   4:12  0.00% httpd
56664 root         1  44    0  5692K   616K select 2   0:51  0.00% syslogd
 2056 root         1   8    0  6748K   392K nanslp 2   0:33  0.00% cron
 2023 root         1   4    0  5808K   428K kqread 0   0:23  0.00% master
 2031 postfix      1   4    0  5808K   408K kqread 0   0:22  0.00% qmgr
76181 www          1   4    0 85732K  3732K kqread 3   0:01  0.00% httpd
76182 www          1  20    0 85732K  3716K lockf  3   0:01  0.00% httpd
76185 www          1  20    0 85732K  3696K lockf  2   0:01  0.00% httpd
76298 www          1  20    0 85732K  3868K lockf  3   0:01  0.00% httpd

So it seems that when varnishstat runs for a long time, it uses more and more resources, and in my case it even caused varnishd to fail somehow (it could be a coincidence, but I don't think so).

After killing varnishstat, load went back from 1.5 to 0.2, around the usual.
-- 
With kind regards,

Angelo Höngens
Systems Administrator

------------------------------------------
NetMatch
tourism internet software solutions

Ringbaan Oost 2b
5013 CA Tilburg
T: +31 (0)13 5811088
F: +31 (0)13 5821239

mailto:A.Hongens at netmatch.nl
http://www.netmatch.nl
------------------------------------------

From phk at phk.freebsd.dk  Tue Apr 13 13:13:52 2010
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Tue, 13 Apr 2010 13:13:52 +0000
Subject: keeping varnishstat open will bring down server
In-Reply-To: Your message of "Tue, 13 Apr 2010 15:09:29 +0200." <2903443B3710364B814B820238DDEF2CA761B759@TIL-EXCH-01.netmatch.local>
Message-ID: <69451.1271164432@critter.freebsd.dk>

Please open a ticket.

In message <2903443B3710364B814B820238DDEF2CA761B759 at TIL-EXCH-01.netmatch.local>, Angelo Höngens writes:

>Hey guys,
>
>I've seen something I'd like to share with you, perhaps it could be seen
>as a bug in varnishstat.
>
>Yesterday I opened ssh sessions to my 4 balancers, to run some scripts,
>and then I opened varnishstat to monitor them. A while later I had to
>leave in a rush and closed my laptop's lid, and in that process killed
>my vpn tunnel and ssh sessions. However, the varnishstat process
>(apparently) keeps running. (FreeBSD 7.2 x64)
>
>Just a few hours ago (so around 16 hours later), I had one balancer die
>on me (become completely unresponsive, refuse connections to port 80). I
>immediately restarted varnishd, and I also saw a varnishstat instance
>eat 100% cpu, which I killed.
>
>Now when I just looked on the other balancers, I see the varnishstat
>instance using up a lot of CPU (only one out of 4 cores though):
>
>last pid: 77863;  load averages: 1.40, 1.48, 1.47  up 105+00:24:26  14:56:40
>166 processes: 2 running, 164 sleeping
>CPU: 27.1% user, 0.0% nice, 4.2% system, 1.9% interrupt, 66.8% idle
>Mem: 6430M Active, 550M Inact, 709M Wired, 189M Cache, 399M Buf, 32M Free
>Swap: 4096M Total, 228M Used, 3868M Free, 5% Inuse
>
>  PID USERNAME   THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
>69587 root         1 112    0 95640K  1044K CPU3   3  19.1H 77.20% varnishstat
>76211 haproxy      1   4    0 48928K 18944K kqread 1  16:34  3.17% haproxy
>68762 www        116  44    0  8756M  6412M select 0   0:01  0.39% varnishd
>31203 root         1  44    0   176M  5476K select 2 439:16  0.00% snmpd
>69527 root         1   8    0 94312K 83384K nanslp 0  11:59  0.00% varnishncsa
>37934 root         1   4    0 66244K  3164K kqread 0   8:46  0.00% squid
> 1912 root         1  44    0 10484K   724K select 0   7:50  0.00% ntpd
> 2036 root         1  44    0 85732K  3528K select 1   4:12  0.00% httpd
>56664 root         1  44    0  5692K   616K select 2   0:51  0.00% syslogd
> 2056 root         1   8    0  6748K   392K nanslp 2   0:33  0.00% cron
> 2023 root         1   4    0  5808K   428K kqread 0   0:23  0.00% master
> 2031 postfix      1   4    0  5808K   408K kqread 0   0:22  0.00% qmgr
>76181 www          1   4    0 85732K  3732K kqread 3   0:01  0.00% httpd
>76182 www          1  20    0 85732K  3716K lockf  3   0:01  0.00% httpd
>76185 www          1  20    0 85732K  3696K lockf  2   0:01  0.00% httpd
>76298 www          1  20    0 85732K  3868K lockf  3   0:01  0.00% httpd
>
>So it seems running varnishstat for a long time, it will use more and more
>resources, and in my case, even cause varnishd to fail somehow (it could
>be a coincidence, but I don't think so).
>
>After killing varnishstat, load went back from 1.5 to 0.2, around the usual.
>
>-- 
>
>With kind regards,
>
>Angelo Höngens
>
>Systems Administrator
>
>------------------------------------------
>NetMatch
>tourism internet software solutions
>
>Ringbaan Oost 2b
>5013 CA Tilburg
>T: +31 (0)13 5811088
>F: +31 (0)13 5821239
>
>mailto:A.Hongens at netmatch.nl
>http://www.netmatch.nl
>------------------------------------------
>
>_______________________________________________
>varnish-misc mailing list
>varnish-misc at varnish-cache.org
>http://lists.varnish-cache.org/mailman/listinfo/varnish-misc
>

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From dirk.taggesell at proximic.com  Tue Apr 13 15:03:28 2010
From: dirk.taggesell at proximic.com (Dirk Taggesell)
Date: Tue, 13 Apr 2010 17:03:28 +0200
Subject: how storing objects for a given time, regardless of expire info from back-end?
In-Reply-To: <4BC348C4.9000008@gmail.com>
References: <4BC347DA.3000505@proximic.com> <4BC348C4.9000008@gmail.com>
Message-ID: <4BC487C0.6030907@proximic.com>

On 12.04.10 18:22, Rob S wrote:
> In vcl_fetch, pop:
> set obj.ttl = 7200s;
> and everything will be cached for 2hrs.
>
> If that doesn't work, then can you post the full VCL you're using.

The config compiler complains when trying to start varnish. My initial and working config is this:

>> backend default {
>>     .host = "backend.example.com";
>>     .port = "80";
>> }

nothing else, not even a vcl_fetch part, and I start varnish with:

varnishd -a 0.0.0.0:80 -f etc/varnish/myconf.vcl -F

What I want is to fetch items from the back-end and then not contact the back-end again for that particular item for the next X hours, instead caching and delivering it. Adding a custom expires header for delivered contents is the second feature. That's pretty much it.
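(For reference, the two requirements above can be sketched in VCL. This is a rough, untested sketch in Varnish 2.1 syntax; the two-hour value and the client-facing Cache-Control header are illustrative choices, not something stated in this thread:)

```vcl
sub vcl_fetch {
    # Keep every object for two hours, regardless of backend expiry info
    set beresp.ttl = 7200s;
}

sub vcl_deliver {
    # Second feature: a custom freshness hint for clients
    # (the header and value here are assumptions, adjust as needed)
    set resp.http.Cache-Control = "max-age=7200";
}
```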
With the very simple config above, varnish works, but contacts the back-end again every time an item is fetched (maybe with an if-modified-since request). But just adding a vcl_fetch subroutine to the minimal config appears not to be sufficient. This:

sub vcl_fetch {
    set obj.ttl = 7200s;
    return(deliver);
}

throws an error when I try to start varnish:

Message from VCC-compiler:
Variable 'obj.ttl' not accessible in method 'vcl_fetch'.
At: (input Line 13 Pos 7)
set obj.ttl = 7200s;
------#######---------
Running VCC-compiler failed, exit 1
VCL compilation failed

I also uncommented everything in the standard vcl file which is claimed to replicate the default behaviour. That works, but just inserting the sub vcl_fetch above doesn't work. It appears one cannot simply take a minimal config and add rules. A tutorial explaining the concept of varnish config files seems to be missing. If I ever come to understand it, maybe I'll write something like a tutorial.

-- 
Dirk Taggesell

From m.walraven at terantula.com  Tue Apr 13 15:26:44 2010
From: m.walraven at terantula.com (Marco Walraven)
Date: Tue, 13 Apr 2010 17:26:44 +0200
Subject: how storing objects for a given time, regardless of expire info from back-end?
In-Reply-To: <4BC487C0.6030907@proximic.com>
References: <4BC347DA.3000505@proximic.com> <4BC348C4.9000008@gmail.com> <4BC487C0.6030907@proximic.com>
Message-ID: <20100413152644.GB2405@cotton.terantula.com>

On Tue, Apr 13, 2010 at 05:03:28PM +0200, Dirk Taggesell wrote:
> sub vcl_fetch {
>     set obj.ttl = 7200s;
>     return(deliver);
> }
>
> Throws an error when I try to start varnish:
>
> Message from VCC-compiler:
> Variable 'obj.ttl' not accessible in method 'vcl_fetch'.
> At: (input Line 13 Pos 7)
> set obj.ttl = 7200s;
> ------#######---------
> Running VCC-compiler failed, exit 1
> VCL compilation failed

Since you are running 2.1.0 (as you stated in a previous mail) you are running into
'obj.* is now called beresp.* in vcl_fetch, and obj.* is now read-only' So try 'set beresp.ttl =7200s;' Marco -- Terantula - Industrial Strength Open Source phone:+31 64 3232 400 / www: http://www.terantula.com / pgpkey: E7EE7A46 pgp fingerprint: F2EE 122D 964C DE68 7380 6F95 3710 7719 E7EE 7A46 From dirk.taggesell at proximic.com Tue Apr 13 16:31:27 2010 From: dirk.taggesell at proximic.com (Dirk Taggesell) Date: Tue, 13 Apr 2010 18:31:27 +0200 Subject: how storing objects for a given time, regardless of expire info from back-end? In-Reply-To: <20100413152644.GB2405@cotton.terantula.com> References: <4BC347DA.3000505@proximic.com> <4BC348C4.9000008@gmail.com> <4BC487C0.6030907@proximic.com> <20100413152644.GB2405@cotton.terantula.com> Message-ID: <4BC49C5F.3000700@proximic.com> On 13.04.10 17:26, Marco Walraven wrote: > Since you are running 2.1.0 (as you stated in a previous mail) you are running into > some VCL changes. thanks for clarifying this. I already suspected it. > 'obj.* is now called beresp.* in vcl_fetch, and obj.* is now read-only' > > So try 'set beresp.ttl =7200s;' Thanks, this at least doesn't throw errors at me and varnish runs. I also appears to not check the back-ends any more when items are already in the cache :) -- mit freundlichen Gruessen Dirk Taggesell Systemadministrator Proximic GmbH From dirk.taggesell at proximic.com Tue Apr 13 16:33:46 2010 From: dirk.taggesell at proximic.com (Dirk Taggesell) Date: Tue, 13 Apr 2010 18:33:46 +0200 Subject: how storing objects for a given time, regardless of expire info from back-end? 
In-Reply-To: <4BC49C5F.3000700@proximic.com>
References: <4BC347DA.3000505@proximic.com> <4BC348C4.9000008@gmail.com> <4BC487C0.6030907@proximic.com> <20100413152644.GB2405@cotton.terantula.com> <4BC49C5F.3000700@proximic.com>
Message-ID: <4BC49CEA.7050809@proximic.com>

BTW: the config now looks like this:

backend default {
    .host = "backend.example.com";
    .port = "80";
}

sub vcl_fetch {
    set beresp.ttl = 7200s;
    return(deliver);
}

-- 
With kind regards

Dirk Taggesell
Systemadministrator
Proximic GmbH

From niklas.norberg at bahnhof.se  Wed Apr 14 12:28:11 2010
From: niklas.norberg at bahnhof.se (Niklas Norberg)
Date: Wed, 14 Apr 2010 14:28:11 +0200
Subject: Sticky Load Balancing with Varnish
Message-ID: <1271248091.5141.33.camel@app-srv-debian-amdmp2.idni>

Hi,

last week I started writing a sticky load balancer. At that time (2.0.6) the problem was that req.* wasn't available in vcl_deliver, so I had to use some global C variables to share data from vcl_recv to vcl_deliver. Because of this I had to add some thread-guard C code, and all together it worked, but I wasn't fully satisfied. The guard code contained sleep in order to wait for other threads, so that just one thread at a time would go through this "set/get" cycle.

This week I discovered that 2.1.0 was released and that:
- req.* is now available in vcl_deliver.

So I rewrote it, just by removing the thread C code, and here it is. I've tested it with JMeter and it balances correctly (i.e. according to the defined weights).

So I also vote for keeping this as a documented configuration rather than a built-in feature. Unless the planned sticky load balancing will have something above the rudimentary.

Comments?
With kind regards,
Niklas Norberg

LBsubs.vcl (which I also attach for row break sanity):

C{
    // NB: this string is also used hard-coded in pure VCL code
    static const char VARNISH_LB_COOKIE_NAME[] = "VARNISH_LB=";
    // Let STICKY balance everything(/) for four hours at a time:
    static const char VARNISH_LB_ENDING[] =
        "; path=/; Max-Age=14400; Comment=Varnish Sticky Load Balancing Cookie";

    static const int CANDIDATE_CNT = 3;
    static const int MAX_LEN_LB_ID = 2; // covers 1-99

    // Load balancing weights:
    // The first value should be the sum of the others.
    static const int LB_FACTOR[4] = {10, 7, 1, 2};
    // The first is not used, just to keep indexes same in the arrays.
    static int lbStatus[4] = {-1, 0, 0, 0};
    static int lastCandidate = 1;

    /** Load Balancing according to Request Counting Algorithm, see:
        http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html#requests
        or write your own. */
    int getCandidate4LB() {
        /* for each worker in workers
               worker lbstatus += worker lbfactor
               total factor += worker lbfactor
           if worker lbstatus > candidate lbstatus
               candidate = worker
           candidate lbstatus -= total factor */
        int worker = 1;
        // This local copy is really overkill
        int candidate = lastCandidate;
        while ( worker <= CANDIDATE_CNT ) {
            lbStatus[worker] += LB_FACTOR[worker];
            if ( lbStatus[worker] > lbStatus[candidate] )
                candidate = worker;
            worker++;
        }
        //int totalFactor = LB_FACTOR[0];
        //lbStatus[candidate] -= totalFactor;
        lbStatus[candidate] -= LB_FACTOR[0];
        lastCandidate = candidate;
        return candidate;
    }
}C

/** Check if a LB cookie is present.
    If it is
    - Copy the LB cookie value to a marker.
    - set backend
    - unset marker
    If it isn't
    - find candidate for LB and set value to marker
    - set backend
    - don't unset marker so a cookie can be set in vcl_deliver. */
sub recv_loadBalancingOnStickyCookie {
    if (req.http.Cookie ~ "VARNISH_LB=") {
        set req.http.StickyVarnish = regsub( req.http.Cookie, "^.*?VARNISH_LB=([^;]*);*.*$", "\1" );
        call chooseBackend;
        unset req.http.StickyVarnish;
    } else {
        call findCandidate4LB;
        call chooseBackend;
    }
}

/** Get candidate from some fancy smanchy algorithm. Set value to marker. */
sub findCandidate4LB {
    // set req.http.StickyVarnish = "X";:
    C{
        // get index for backend
        int idxBackend = getCandidate4LB();
        // Store index in a header marker so we can use this later
        char strBackendIndex[MAX_LEN_LB_ID + 1];
        sprintf(strBackendIndex, "%d", idxBackend);
        VRT_SetHdr(sp, HDR_REQ, "\016StickyVarnish:", strBackendIndex, vrt_magic_string_end);
    }C
    // C{
    //     //Debug test:
    //     int i=1;
    //     while(i<=100) {
    //         syslog( LOG_INFO, "getCandidate4LB(): %d", getCandidate4LB() );
    //         i++;
    //     }
    // }C
}

/** Choose backend (via director) from the marker ("StickyVarnish").
    Change this as necessary. Directors and Backends are defined elsewhere. */
sub chooseBackend {
    // if else in weight order, switch statements would have been better
    if (req.http.StickyVarnish == "1") {
        set req.backend = Backends1;
    } else if (req.http.StickyVarnish == "3") {
        set req.backend = Backends3;
    } else if (req.http.StickyVarnish == "2") {
        set req.backend = Backends2;
    }
}

/** Set the Sticky Varnish Load Balancing Cookie as: "VARNISH_LB=X; ..." */
sub deliver_setLBCookie {
    if (req.http.StickyVarnish) {
        C{
            // Also send cookie from backend if any
            char* existing_set_cookie = VRT_GetHdr(sp, HDR_OBJ, "\013Set-Cookie:");
            int len;
            if (existing_set_cookie == NULL)
                len = 0;
            else
                len = strlen(existing_set_cookie) + 1; // + 1 for "\r"
            len += strlen(VARNISH_LB_COOKIE_NAME);
            len += MAX_LEN_LB_ID;
            len += strlen(VARNISH_LB_ENDING);
            char set_cookie[len + 1];
            set_cookie[0] = '\0';
            if (existing_set_cookie != 0) {
                strcat( set_cookie, existing_set_cookie );
                strcat( set_cookie, "\r" );
            }
            strcat( set_cookie, VARNISH_LB_COOKIE_NAME );
            char* strBackend = VRT_GetHdr(sp, HDR_REQ, "\016StickyVarnish:");
            strcat( set_cookie, strBackend );
            strcat( set_cookie, VARNISH_LB_ENDING );
            // Send cookie(s)
            VRT_SetHdr(sp, HDR_RESP, "\013Set-Cookie:", set_cookie, vrt_magic_string_end);
        }C
    }
}

/** Call the LB subs from the subs:
    - vcl_recv
    - vcl_deliver */
/*
sub vcl_recv {
    call recv_loadBalancingOnStickyCookie;
}
sub vcl_deliver {
    call deliver_setLBCookie;
}
*/

/** If directors are defined as below we can in practice:
    Load balance to a specific backend by choosing a director and still
    have a fallback function.
    Risk for wrong backend = (1+1)/4294967295 = 5 * 10^-10,
    a price I'm willing to take for this fallback function.
    This way I can restart the backends and still keep the load balancing
    intact when all is up again. */
/*
director Backends1 random {
    { .backend = www1; .weight = 4294967295;}
    { .backend = www2; .weight = 1;}
    { .backend = www3; .weight = 1;}
}
director Backends2 random {
    { .backend = www1; .weight = 1;}
    { .backend = www2; .weight = 4294967295;}
    { .backend = www3; .weight = 1;}
}
director Backends3 random {
    { .backend = www1; .weight = 1;}
    { .backend = www2; .weight = 1;}
    { .backend = www3; .weight = 4294967295;}
}
*/

From tdevelioglu at ebuddy.com  Wed Apr 14 15:46:11 2010
From: tdevelioglu at ebuddy.com (Taylan Develioglu)
Date: Wed, 14 Apr 2010 17:46:11 +0200
Subject: varnish changes HEAD to GET on backend request
Message-ID: <1271259971.28720.86.camel@oasis>

First, hi to all.

I have the following problem I was hoping someone could shed some light on:

The default behavior of varnish 2.1 seems to be changing HEAD requests into GET before sending them to the backend.

I tried changing bereq.request to "HEAD" if req.request is "HEAD" in vcl_pass and vcl_miss:

sub vcl_pass {
    if (req.request == "HEAD") {
        set bereq.request = "HEAD";
    }
}

sub vcl_miss {
    if (req.request == "HEAD") {
        set bereq.request = "HEAD";
    }
}

And it didn't help. What am I doing wrong here?

From phk at phk.freebsd.dk  Wed Apr 14 15:52:22 2010
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 14 Apr 2010 15:52:22 +0000
Subject: varnish changes HEAD to GET on backend request
In-Reply-To: Your message of "Wed, 14 Apr 2010 17:46:11 +0200." <1271259971.28720.86.camel@oasis>
Message-ID: <53300.1271260342@critter.freebsd.dk>

In message <1271259971.28720.86.camel at oasis>, Taylan Develioglu writes:

>The default behavior of varnish 2.1 seems to be changing HEAD requests
>into GET before sending them to the backend.

Yes, in the vcl_miss{} path, varnish currently only uses GET to the backend; the presumption is that if we get a HEAD request, we're also likely to need the object body soon after.
> sub vcl_pass {
>     if (req.request == "HEAD") {
>         set bereq.request = "HEAD";
>     }
> }

A HEAD being pass'ed will go unmodified to the backend, so this should do nothing.

> sub vcl_miss {
>     if (req.request == "HEAD") {
>         set bereq.request = "HEAD";
>     }
> }

This will confuse the varnish program logic, and do nothing good.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From phk at phk.freebsd.dk  Wed Apr 14 16:00:31 2010
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 14 Apr 2010 16:00:31 +0000
Subject: Sticky Load Balancing with Varnish
In-Reply-To: Your message of "Wed, 14 Apr 2010 14:28:11 +0200." <1271248091.5141.33.camel@app-srv-debian-amdmp2.idni>
Message-ID: <53364.1271260831@critter.freebsd.dk>

In message <1271248091.5141.33.camel at app-srv-debian-amdmp2.idni>, Niklas Norberg writes:

>So I also vote for keeping this as a documented configuration rather
>than a built-in feature. Unless the planned sticky load balancing will
>have something above the rudimentary.

Check the "hash" and "client" directors in 2.1, depending on what you want to be "sticky" based on (object or client)

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
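(For reference, the "client" director mentioned above can be sketched roughly as follows. This is an untested sketch for Varnish 2.1; the backend names and weights are hypothetical. The client director is a variant of the random director that keys its choice on the client's identity, so a given client should keep hitting the same backend, giving stickiness without cookies or inline C:)

```vcl
# Hypothetical backends www1..www3 are assumed to be defined elsewhere.
director sticky client {
    { .backend = www1; .weight = 7; }
    { .backend = www2; .weight = 1; }
    { .backend = www3; .weight = 2; }
}

sub vcl_recv {
    set req.backend = sticky;
}
```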
From tdevelioglu at ebuddy.com  Wed Apr 14 16:22:37 2010
From: tdevelioglu at ebuddy.com (Taylan Develioglu)
Date: Wed, 14 Apr 2010 18:22:37 +0200
Subject: varnish changes HEAD to GET on backend request
In-Reply-To: <53300.1271260342@critter.freebsd.dk>
References: <53300.1271260342@critter.freebsd.dk>
Message-ID: <1271262157.28720.110.camel@oasis>

Hello Poul-Henning,

On Wed, 2010-04-14 at 17:52 +0200, Poul-Henning Kamp wrote:
> In message <1271259971.28720.86.camel at oasis>, Taylan Develioglu writes:
>
> >The default behavior of varnish 2.1 seems to be changing HEAD requests
> >into GET before sending them to the backend.
>
> Yes, in the vcl_miss{} path, varnish currently only uses GET to the
> backend, the presumption is that if we get a HEAD request, we're
> also likely to need the object body soon after.

Sadly this presumption is wrong for our application. The HEAD call is performed to make a client aware of the existence of the object and to validate its location. The client only fetches the object when needed. If it is not needed, it never gets fetched.

Always performing a GET puts unnecessary load on the backend in our situation and creates objects cached by varnish that are never requested.

> > sub vcl_pass {
> >     if (req.request == "HEAD") {
> >         set bereq.request = "HEAD";
> >     }
> > }
>
> A HEAD being pass'ed will go unmodified to the backend, so this
> should do nothing

Thank you for clearing that up. Letting HEAD requests pass unmodified is a workable solution, but it would be nice if varnish could cache the HEADs too.

> > sub vcl_miss {
> >     if (req.request == "HEAD") {
> >         set bereq.request = "HEAD";
> >     }
> > }
>
> This will confuse the varnish program logic, and do nothing good.

Really? I'm not familiar with the inner workings of Varnish. I admit it's strange to do a set bereq.request = req.request, but that's only because the default behavior was unexpected (to me at least) and I'm trying to work around it.
I would expect a HEAD to stay a HEAD unless specified otherwise.

From phk at phk.freebsd.dk  Wed Apr 14 16:36:05 2010
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 14 Apr 2010 16:36:05 +0000
Subject: varnish changes HEAD to GET on backend request
In-Reply-To: Your message of "Wed, 14 Apr 2010 18:22:37 +0200." <1271262157.28720.110.camel@oasis>
Message-ID: <53464.1271262965@critter.freebsd.dk>

In message <1271262157.28720.110.camel at oasis>, Taylan Develioglu writes:

>Always performing a GET puts unnecessary load on the backend in our
>situation and creates objects cached by varnish that never are
>requested.

In that case, just always pass the HEAD requests.

Poul-Henning

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From tdevelioglu at ebuddy.com  Wed Apr 14 16:55:59 2010
From: tdevelioglu at ebuddy.com (Taylan Develioglu)
Date: Wed, 14 Apr 2010 18:55:59 +0200
Subject: varnish changes HEAD to GET on backend request
In-Reply-To: <53464.1271262965@critter.freebsd.dk>
References: <53464.1271262965@critter.freebsd.dk>
Message-ID: <1271264159.28720.118.camel@oasis>

I feel incredibly dense, but I can't get it to do what I want. I redefined vcl_recv:

sub vcl_recv {
    if (req.request == "HEAD") {
        return(pass);
    }
}

HEAD is still changed into GET.

On Wed, 2010-04-14 at 18:36 +0200, Poul-Henning Kamp wrote:
> Poul-Henning Kamp

From robertbrogers at gmail.com  Wed Apr 14 17:10:08 2010
From: robertbrogers at gmail.com (Rob Rogers)
Date: Wed, 14 Apr 2010 10:10:08 -0700
Subject: sticky load balancing: a core feature or
Message-ID: <8F54EBA3-7F40-46D7-82FE-961D9FD08494@gmail.com>

"""
So I also vote for keeping this as a documented configuration rather
than a built-in feature.
Unless the planned sticky load balancing will have something above the
rudimentary.
""" i understand varnish is a fast reverse proxy/httpaccelerator not an uber-does-it-all service. but, for many implementations we don't want to stack 3-4 services on top of each other. would having this as a documented config easily solve the problem of sticky load balancing, without a lot of poking around and configuration? (note: i found the default vanilla implementation only working well for images; other pages without the most rudimentary caching config failed to work correctly for me due to two reasons: ignorance, stupidity. In other words, the Vary headers stung me as browser to browser caused different objects to be cached and the default purge implementation could not reach all objects easily) finally, per adding this config to MY config. how do you do that? That is, i don't want to just copy and paste this vcl snippet into my main varnish.vcl. Is there a way to include vcls from vcls? Thanks, Rob Date: Wed, 14 Apr 2010 14:28:11 +0200 From: Niklas Norberg To: varnish-misc at varnish-cache.org Subject: Sticky Load Balancing with Varnish Message-ID: <1271248091.5141.33.camel at app-srv-debian-amdmp2.idni> Content-Type: text/plain; charset="us-ascii" Hi, last week I started writing a sticky load balancer. At that time (2.0.6) I lacked that req.* wasn't available in vcl_deliver so I had to use some global C-variables to share data from vcl_recv to vcl_deliver. Because of this I had to add some thread guard C-code and all together it worked but I wasn't fully satisfied. The guard code contained sleep in order to wait for other thread so that just one thread at a time would go through this "set/get" cycle. This week I discovered that 2.1.0 was released and that: - req.* is now available in vcl_deliver. So I rewrote it, just by removing the thread C-code, and here it is. I've tested it with JMeter and it balances correct (i.e. according to the defined weights). So I also vote for keeping this as a documented configuration rather than a built-in feature. 
Unless the planned sticky load balancing will have something above the rudimentary. Comments? With kind regards, Niklas Norberg -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Wed Apr 14 17:16:05 2010 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 14 Apr 2010 17:16:05 +0000 Subject: varnish changes HEAD to GET on backend request In-Reply-To: Your message of "Wed, 14 Apr 2010 18:55:59 +0200." <1271264159.28720.118.camel@oasis> Message-ID: <53930.1271265365@critter.freebsd.dk> In message <1271264159.28720.118.camel at oasis>, Taylan Develioglu writes: >sub vcl_recv { > if (req.request == "HEAD") { > return(pass); > } >} > >HEAD is still changed into GET. That's a bug, please open a ticket. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From niklas.norberg at bahnhof.se Wed Apr 14 17:27:29 2010 From: niklas.norberg at bahnhof.se (Niklas Norberg) Date: Wed, 14 Apr 2010 19:27:29 +0200 Subject: sticky load balancing: a core feature or In-Reply-To: <8F54EBA3-7F40-46D7-82FE-961D9FD08494@gmail.com> References: <8F54EBA3-7F40-46D7-82FE-961D9FD08494@gmail.com> Message-ID: <1271266049.5141.41.camel@app-srv-debian-amdmp2.idni> ons 2010-04-14 klockan 10:10 -0700 skrev Rob Rogers: > > finally, per adding this config to MY config. how do you do that? That > is, i don't want to just copy and paste this vcl snippet into my main > varnish.vcl. Is there a way to include vcls from vcls? 
> > Include VCL-snips with: include "/etc/varnish/c.vcl"; include "/etc/varnish/subs.vcl"; include "/etc/varnish/defaultBackend.vcl"; include "/etc/varnish/backends.vcl"; include "/etc/varnish/LBsubs.vcl"; Best Regards Niklas Norberg From robertbrogers at gmail.com Wed Apr 14 17:32:28 2010 From: robertbrogers at gmail.com (Rob Rogers) Date: Wed, 14 Apr 2010 10:32:28 -0700 Subject: sticky load balancing: a core feature or In-Reply-To: <1271266049.5141.41.camel@app-srv-debian-amdmp2.idni> References: <8F54EBA3-7F40-46D7-82FE-961D9FD08494@gmail.com> <1271266049.5141.41.camel@app-srv-debian-amdmp2.idni> Message-ID: thanks, would you mind sending or posting the entire LBSubs.vcl as a file? thank you. rob On Apr 14, 2010, at 10:27 AM, Niklas Norberg wrote: > > ons 2010-04-14 klockan 10:10 -0700 skrev Rob Rogers: > > > >> >> finally, per adding this config to MY config. how do you do that? That >> is, i don't want to just copy and paste this vcl snippet into my main >> varnish.vcl. Is there a way to include vcls from vcls? >> >> > > Include VCL-snips with: > > include "/etc/varnish/c.vcl"; > include "/etc/varnish/subs.vcl"; > include "/etc/varnish/defaultBackend.vcl"; > include "/etc/varnish/backends.vcl"; > include "/etc/varnish/LBsubs.vcl"; > > Best Regards > > Niklas Norberg > > > From niklas.norberg at bahnhof.se Wed Apr 14 17:48:24 2010 From: niklas.norberg at bahnhof.se (Niklas Norberg) Date: Wed, 14 Apr 2010 19:48:24 +0200 Subject: sticky load balancing: a core feature or In-Reply-To: References: <8F54EBA3-7F40-46D7-82FE-961D9FD08494@gmail.com> <1271266049.5141.41.camel@app-srv-debian-amdmp2.idni> Message-ID: <1271267304.5141.58.camel@app-srv-debian-amdmp2.idni> You're welcome, here it is. If you find it useful or find anything else please send me feedback. Actually I figured out that it would have been best performance wise to only set the backends i.e. look if there is a LB cookie and... when it's necessary that is in vcl_fetch, vcl_pass, vcl_pipe and so on. 
But I'm lazy and take this performance penalty to be sure to cover all cases without digging too much, at least at this point. So maybe the call: call recv_loadBalancingOnStickyCookie; should have been best placed in vcl_prefetch but that sub was removed in 2.1.0 :( One can also hard code some C string lengths and so on... Regards Niklas ons 2010-04-14 klockan 10:32 -0700 skrev Rob Rogers: > thanks, > > would you mind sending or posting the entire LBSubs.vcl as a file? > > thank you. > > rob > On Apr 14, 2010, at 10:27 AM, Niklas Norberg wrote: > > > > > ons 2010-04-14 klockan 10:10 -0700 skrev Rob Rogers: > > > > > > > >> > >> finally, per adding this config to MY config. how do you do that? That > >> is, i don't want to just copy and paste this vcl snippet into my main > >> varnish.vcl. Is there a way to include vcls from vcls? > >> > >> > > > > Include VCL-snips with: > > > > include "/etc/varnish/c.vcl"; > > include "/etc/varnish/subs.vcl"; > > include "/etc/varnish/defaultBackend.vcl"; > > include "/etc/varnish/backends.vcl"; > > include "/etc/varnish/LBsubs.vcl"; > > > > Best Regards > > > > Niklas Norberg > > > > > > > -------------- next part --------------
C{
// Obs This string is also used hard coded in pure VCL code
static const char VARNISH_LB_COOKIE_NAME[] = "VARNISH_LB=";
// Let STICKY balance everything(/) for four hours a time:
static const char VARNISH_LB_ENDING[] = "; path=/; Max-Age=14400; Comment=Varnish Sticky Load Balancing Cookie";
static const int CANDIDATE_CNT = 3;
static const int MAX_LEN_LB_ID = 2; // covers 1-99
// Load balancing weights:
// The first value should be the sum of the others.
static const int LB_FACTOR[4] = {10, 7, 1, 2};
// The first is not used, just to keep indexes same in the arrays.
static int lbStatus[4] = {-1, 0, 0, 0};
static int lastCandidate = 1;

/**
Load Balancing according to Request Counting Algorithm, see:
http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html#requests
or write your own.
*/
int getCandidate4LB() {
    /*
    for each worker in workers
        worker lbstatus += worker lbfactor
        total factor += worker lbfactor
        if worker lbstatus > candidate lbstatus
            candidate = worker
    candidate lbstatus -= total factor
    */
    int worker=1;
    // This local copy is really overkill
    int candidate = lastCandidate;
    while ( worker<=CANDIDATE_CNT ) {
        lbStatus[worker] += LB_FACTOR[worker];
        if ( lbStatus[worker] > lbStatus[candidate] )
            candidate = worker;
        worker++;
    }
    //int totalFactor = LB_FACTOR[0];
    //lbStatus[candidate] -= totalFactor;
    lbStatus[candidate] -= LB_FACTOR[0];
    lastCandidate = candidate;
    return candidate;
}
}C

/**
Check if a LB cookie is present.
If it is
- Copy the LB cookie value to a marker.
- set backend
- unset marker
If it isn't
- find candidate for LB and set value to marker
- set backend
- don't unset marker so a cookie can be set in vcl_deliver.
*/
sub recv_loadBalancingOnStickyCookie {
    if (req.http.Cookie ~ "VARNISH_LB=") {
        set req.http.StickyVarnish = regsub( req.http.Cookie, "^.*?VARNISH_LB=([^;]*);*.*$", "\1" );
        call chooseBackend;
        unset req.http.StickyVarnish;
    } else {
        call findCandidate4LB;
        call chooseBackend;
    }
}

/**
Get candidate from some fancy smanchy algorithm. Set value to marker.
*/
sub findCandidate4LB {
    // set req.http.StickyVarnish = "X";:
    C{
        // get index for backend
        int idxBackend = getCandidate4LB();
        // Store index in a header marker so we can use this later
        char strBackendIndex[MAX_LEN_LB_ID + 1];
        sprintf(strBackendIndex, "%d", idxBackend);
        VRT_SetHdr(sp, HDR_REQ, "\016StickyVarnish:", strBackendIndex, vrt_magic_string_end);
    }C
    // C{
    //     //Debug test:
    //     int i=1;
    //     while(i<=100) {
    //         syslog( LOG_INFO, "getCandidate4LB(): %d", getCandidate4LB() );
    //         i++;
    //     }
    // }C
}

/**
Choose backend (via director) from the marker ("StickyVarnish").
Change this as necessary. Directors and Backends are defined elsewhere.
*/
sub chooseBackend {
    // if else in weight order, switch statements would have been better
    if (req.http.StickyVarnish == "1") {
        set req.backend = Backends1;
    } else if (req.http.StickyVarnish == "3") {
        set req.backend = Backends3;
    } else if (req.http.StickyVarnish == "2") {
        set req.backend = Backends2;
    }
}

/**
Set the Sticky Varnish Load Balancing Cookie as: "VARNISH_LB=X; ..."
*/
sub deliver_setLBCookie {
    if (req.http.StickyVarnish) {
        C{
            // Also send cookie from backend if any
            char* existing_set_cookie = VRT_GetHdr(sp, HDR_OBJ, "\013Set-Cookie:");
            int len;
            if (existing_set_cookie == NULL)
                len = 0;
            else
                len = strlen(existing_set_cookie) + 1; // + 1 for "\r"
            len += strlen(VARNISH_LB_COOKIE_NAME);
            len += MAX_LEN_LB_ID;
            len += strlen(VARNISH_LB_ENDING);
            char set_cookie[len + 1];
            set_cookie[0] = '\0';
            if (existing_set_cookie != 0) {
                strcat( set_cookie, existing_set_cookie );
                strcat( set_cookie, "\r" );
            }
            strcat( set_cookie, VARNISH_LB_COOKIE_NAME );
            char* strBackend = VRT_GetHdr(sp, HDR_REQ, "\016StickyVarnish:");
            strcat( set_cookie, strBackend );
            strcat( set_cookie, VARNISH_LB_ENDING );
            // Send cookie(s)
            VRT_SetHdr(sp, HDR_RESP, "\013Set-Cookie:", set_cookie, vrt_magic_string_end);
        }C
    }
}

/**
Call the LB subs from the subs:
- vcl_recv
- vcl_deliver
*/
/*
sub vcl_recv {
    call recv_loadBalancingOnStickyCookie;
}
sub vcl_deliver {
    call deliver_setLBCookie;
}
*/

/**
If directors are defined as below we can in practice:
Load balance to a specific backend by choosing a director and still have a fallback function.
Risk for wrong backend = (1+1)/4294967295 = 5 * 10^-10, a price I'm willing to take for this fallback function.
This way I can restart the backends and still keep the load balancing intact when all is up again.
*/
/*
director Backends1 random {
    { .backend = www1; .weight = 4294967295;}
    { .backend = www2; .weight = 1;}
    { .backend = www3; .weight = 1;}
}
director Backends2 random {
    { .backend = www1; .weight = 1;}
    { .backend = www2; .weight = 4294967295;}
    { .backend = www3; .weight = 1;}
}
director Backends3 random {
    { .backend = www1; .weight = 1;}
    { .backend = www2; .weight = 1;}
    { .backend = www3; .weight = 4294967295;}
}
*/
From niklas.norberg at bahnhof.se Wed Apr 14 18:57:09 2010 From: niklas.norberg at bahnhof.se (Niklas Norberg) Date: Wed, 14 Apr 2010 20:57:09 +0200 Subject: Sticky Load Balancing with Varnish In-Reply-To: <1271268819.5141.85.camel@app-srv-debian-amdmp2.idni> References: <53364.1271260831@critter.freebsd.dk> <1271268819.5141.85.camel@app-srv-debian-amdmp2.idni> Message-ID: <1271271429.29611.5.camel@app-srv-debian-amdmp2.idni> I was actually referring, see below, to http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html#busyness but you probably figured that :) -- Niklas > > ons 2010-04-14 klockan 20:13 +0200 skrev Niklas Norberg: > > ons 2010-04-14 klockan 16:00 +0000 skrev Poul-Henning Kamp: > > In message <1271248091.5141.33.camel at app-srv-debian-amdmp2.idni>, Niklas Norberg writes: > > > > >So I also vote for keeping this as a documented configuration rather > > >than a built-in feature. Unless the planned sticky load balancing will > > >have something above the rudimentary. > > > > Check the "hash" and "client" directors in 2.1, depending on what > > you want to be "sticky" based on (object or client) > > > > Thanks (but...), > > Does the hash director balance differently, in the big picture, than the > already existing random director? > > The client director comes close to my intended setup but the problem > with balancing on client ip is, as has been mentioned before, that lots > of clients (real users behind their browsers) can share the same > ip-address.
I worked with a site where the end users were schools and at > 8.30 the traffic always got high and the ip-addresses were too few to > balance on, so we did it in the hardware load balancer on level 7 (i.e. > with a cookie). > > As I see it there are only two cases: > * Either we want to have sticky (ip or cookie) for the traffic that > uses session cookies, because in this case it is costly to hit the > wrong backend (session redundancy costs performance and setup hours). > or > * It's stateless traffic and therefore it can be distributed randomly. In > this case preferably with weights (if the backends differ) and with > traffic memory according to: > http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html#traffic > > It is of course good if they can be combined, as one for example easily > can do with VCL in Varnish. > > As far as I can figure, these two cover all, or? > > So the question is, are there any plans for any traffic based > director? :) > > > Best Regards > > Niklas Norberg > > From niklas.norberg at bahnhof.se Wed Apr 14 18:13:39 2010 From: niklas.norberg at bahnhof.se (Niklas Norberg) Date: Wed, 14 Apr 2010 20:13:39 +0200 Subject: Sticky Load Balancing with Varnish In-Reply-To: <53364.1271260831@critter.freebsd.dk> References: <53364.1271260831@critter.freebsd.dk> Message-ID: <1271268819.5141.85.camel@app-srv-debian-amdmp2.idni> > ons 2010-04-14 klockan 16:00 +0000 skrev Poul-Henning Kamp: > In message <1271248091.5141.33.camel at app-srv-debian-amdmp2.idni>, Niklas Norberg writes: > > >So I also vote for keeping this as a documented configuration rather > >than a built-in feature. Unless the planned sticky load balancing will > >have something above the rudimentary. > > Check the "hash" and "client" directors in 2.1, depending on what > you want to be "sticky" based on (object or client) > Thanks (but...), Does the hash director balance differently, in the big picture, than the already existing random director?
The client director comes close to my intended setup but the problem with balancing on client ip is, as has been mentioned before, that lots of clients (real users behind their browsers) can share the same ip-address. I worked with a site where the end users were schools and at 8.30 the traffic always got high and the ip-addresses were too few to balance on, so we did it in the hardware load balancer on level 7 (i.e. with a cookie). As I see it there are only two cases: * Either we want to have sticky (ip or cookie) for the traffic that uses session cookies, because in this case it is costly to hit the wrong backend (session redundancy costs performance and setup hours). or * It's stateless traffic and therefore it can be distributed randomly. In this case preferably with weights (if the backends differ) and with traffic memory according to: http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html#traffic It is of course good if they can be combined, as one for example easily can do with VCL in Varnish. As far as I can figure, these two cover all, or? So the question is, are there any plans for any traffic based director? :) Best Regards Niklas Norberg From phk at phk.freebsd.dk Wed Apr 14 21:04:23 2010 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 14 Apr 2010 21:04:23 +0000 Subject: Sticky Load Balancing with Varnish In-Reply-To: Your message of "Wed, 14 Apr 2010 20:13:39 +0200." <1271268819.5141.85.camel@app-srv-debian-amdmp2.idni> Message-ID: <68131.1271279063@critter.freebsd.dk> In message <1271268819.5141.85.camel at app-srv-debian-amdmp2.idni>, Niklas Norberg writes: >Does the hash director balance differently, in the big picture, than the >already existing random director? The hash director selects backend based on the object hash value (as produced by vcl_hash{}). This allows you to distribute your content over your backends such that they pool their resources. The client director will send the same client to the same backend all the time.
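A minimal VCL sketch of the two directors just described, for readers following the thread. This assumes the Varnish 2.1 director syntax and that the hash and client directors accept the same member blocks as the random directors shown later in this thread; the backend names (www1, www2) and the /app/ URL prefix are invented placeholders:

```vcl
# Hash director: the backend is picked from the object hash
# (as produced by vcl_hash), so each backend caches its own
# slice of the content and the caches pool their resources.
director pool_hash hash {
    { .backend = www1; .weight = 1; }
    { .backend = www2; .weight = 1; }
}

# Client director: the backend is picked from the client identity,
# so the same client keeps landing on the same backend.
director pool_client client {
    { .backend = www1; .weight = 1; }
    { .backend = www2; .weight = 1; }
}

sub vcl_recv {
    if (req.url ~ "^/app/") {
        set req.backend = pool_client;  # session-bound traffic
    } else {
        set req.backend = pool_hash;    # cacheable content
    }
}
```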
>The client director comes close to my intended setup but the problem >with balancing on client ip is, as has been mentioned before, that lots >of clients (real users behind their browsers) can share the same >ip-address. Yes, I'm aware of this. I have a patch which I hope will be in 2.1.1 where you can control what the client director selects on, somewhat like you do with vcl_hash{} for the hash-key. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From moseleymark at gmail.com Wed Apr 14 23:25:21 2010 From: moseleymark at gmail.com (Mark Moseley) Date: Wed, 14 Apr 2010 16:25:21 -0700 Subject: PID stored in _.vsl doesn't appear to be correct in 2.1 Message-ID: This could easily be a misconfiguration, but I've been playing with 2.1 lately with minor modifications to a long-running 2.0.5/2.0.6 setup. My monit has an annoying tendency to lose track of varnishd and try to start up a new instance, when one is already running. When this happens, the original varnishd keeps running just fine but the _.vsl is trashed, so things like varnishstat and varnishlog are just idle, though I can see varnishd serving up traffic. From strace'ing a subsequent run of varnishd while one is already running, I see it doing the kill( , 0 ) but it's not using the PID that varnishd is running as. The /var/run/varnishd PID is correct for the master process, prior to the second run (gets overwritten by each subsequent start-up). The PID that kill() is trying is the original parent of the master process, i.e. the parent clone()'s the eventual master process and closes itself but when varnishd gets the master PID as recorded in the _.vsl, it's that now-closed parent of the master whose PID has been recorded.
Just a handful of strace lines above the clone() in the master's parent process, I can see it doing the open() on _.vsl and mmap()'ing it -- though I'm not ambitious enough to sift through gigs of ltrace to see what/when it's writing the PID :) With actual PIDs, it looks like this 25084 (parent) -> 25099 (master; 25084 immediately calls exit_group()) -> 25106 (child process) For this example, the subsequent varnishd is calling kill() on 25084 On the command line, this error is generated by the 2nd run: storage_file: filename: /var/cache/varnish/cache size 15000 MB. SHMFILE used by orphan varnishd child process (pid=25106) (We assume that process is busy dying.) Creating new SHMFILE Presumably it's due to this change: "Try to detect the case of two running varnishes with the same shmlog and storage by writing the master and child ids to the shmlog and refusing to start if they are still running." If you guys would like me to try something or send along something, let me know. My varnishd invocation is (ps output doesn't show the ='s for the -p's for some reason): varnishd -a :8099 -T :8100 -f /etc/varnish/my.vcl -t 0 -l 80m -s file,/var/cache/varnish/cache,15000M -u nobody -P /var/run/varnishd -p listen_depth 4096 -p thread_pools 6 -p thread_pool_max 800 -p thread_pool_min 200 -p lru_interval 60 -p cli_timeout 20 -p default_grace 120 -p thread_pool_stack 1048576 From phk at phk.freebsd.dk Thu Apr 15 07:14:26 2010 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 15 Apr 2010 07:14:26 +0000 Subject: PID stored in _.vsl doesn't appear to be correct in 2.1 In-Reply-To: Your message of "Wed, 14 Apr 2010 16:25:21 MST." Message-ID: <87090.1271315666@critter.freebsd.dk> In message , Mark Moseley writes: You're right. Fixed in r4661, will be in 2.1.1 Thanks! Poul-Henning >This could easily be a misconfiguration, but I've been playing with >2.1 lately with minor modifications to a long-running 2.0.5/2.0.6 >setup.
My monit has an annoying tendency to lose track of varnishd and >try to start up a new instance, when one is already running. When this >happens, the original varnishd keeps running just fine but the _.vsl >is trashed, so things like varnishstat and varnishlog are just idle, >though I can see varnishd serving up traffic. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ebe at dmi.dk Thu Apr 15 10:34:03 2010 From: ebe at dmi.dk (Eivind Bengtsson) Date: Thu, 15 Apr 2010 12:34:03 +0200 Subject: Setting custom ttl in two-tier varnish set-up Message-ID: <4BC6EB9B.4010806@dmi.dk> Hi list, We are testing a set-up where our varnish first asks its neighbour and then, if the neighbour also does not have the object, fetches it at the "real" backend. We do not use the expiry headers in the backend response, but set a ttl depending on the page requested. Therefore we wish the second level varnish to set a ttl matching the remainder of the ttl in the first varnish. So in vcl_deliver we do
sub vcl_deliver {
    if (client.ip ~ varnish_acl && resp.status == 200) {
        unset resp.http.Via;
        unset resp.http.X-Varnish;
        set resp.http.varnish-age = resp.http.Age;
    }
}
and in vcl_fetch:
sub vcl_fetch {
    ...
    ...
    if ( req.url ~ "^/dmi/index/danmark/varsler/" ) { set obj.ttl = 60s; }  # 1 min
    if ( req.url ~ "^/dmi/radaranim2.gif") { set obj.ttl = 300s; }  # 5 min
    ...
    ...
    if (req.backend == varnish_neighbour && obj.status == 200) {
        C{
            VRT_l_obj_ttl(sp, (VRT_r_obj_ttl(sp) - atoi(VRT_GetHdr(sp, HDR_OBJ, "\014Varnish-Age:") ) ) );
        }C
    }
}
Is this a viable way to set the ttl? Which special cases can you guys think of, in which the proposed c-code doesn't work as expected? e.g. what happens if obj_ttl is negative? Regards -- Eivind Bengtsson Systemadministrator - Cand.merc.(dat) Danmarks Meteorologiske Institut Lyngbyvej 100 2100 København Ø Direkte tlf.
: 39157544 Email: ebe at dmi.dk echo 'This is not a pipe.' | cat -> /dev/tty From niklas.norberg at bahnhof.se Thu Apr 15 11:10:47 2010 From: niklas.norberg at bahnhof.se (Niklas Norberg) Date: Thu, 15 Apr 2010 13:10:47 +0200 Subject: Sticky Load Balancing with Varnish In-Reply-To: <68131.1271279063@critter.freebsd.dk> References: <68131.1271279063@critter.freebsd.dk> Message-ID: <1271329847.6165.37.camel@app-srv-debian-amdmp2.idni> ons 2010-04-14 klockan 21:04 +0000 Poul-Henning Kamp writes: > The hash director selects backend based on the object hash value > (as produced by vcl_hash{}). This allows you to distribute your > content over your backends such that they pool their resources. Aha, point taken. It seems to follow the main design idea (http://varnish-cache.org/wiki/ArchitectNotes), nice. > The client director will send the same client to the same backend > all the time. > > >The client director comes close to my intended setup but the problem > >with balancing on client ip is, as has been mentioned before, that lots > >of clients (real users behind their browsers) can share the same > >ip-address. > > Yes, I'm aware of this. > > I have a patch which I hope will be in 2.1.1 where you can control > what the client director selects on, somewhat like you do with > vcl_hash{} for the hash-key. Ok, but it would be nice if this load-balancing supports backends that use session-cookies. And that the "control" can be chosen to look at a LB-cookie. In that case non-secure/common traffic can be load-balanced with the hash director and the private with the client director. I think it's vital that the private traffic for each end-client goes to the same backend since backend-session-redundancy is expensive.
Thanks Niklas Norberg From schmidt at ze.tum.de Thu Apr 15 11:41:27 2010 From: schmidt at ze.tum.de (Gerhard Schmidt) Date: Thu, 15 Apr 2010 13:41:27 +0200 Subject: varnish with ssl In-Reply-To: <73973.1270674461@critter.freebsd.dk> References: <73973.1270674461@critter.freebsd.dk> Message-ID: <4BC6FB67.9010708@ze.tum.de> Poul-Henning Kamp schrieb: > In message , Michael Fischer writes: > >> What's the incompatibility with OpenSSL? > > I have two main reservations about SSL in Varnish: > > 1. OpenSSL is almost 350.000 lines of code, Varnish is only 58.000, > Adding such a massive amount of code to the Varnish footprint should > result in a very tangible benefit. > > Compared to running a SSL proxy in front of Varnish, I can see > very, very little benefit from integration. Yeah, one process > less and only one set of config parameters. > > But that all sounds like "second systems syndrome" thinking to me, > it does not really sound like a genuine "The world would become > a better place" feature request. > > But I do see some serious drawbacks: The necessary changes > to Varnish internal logic will almost certainly hurt varnish > performance for the plain HTTP case. We need to add an inordinate > amount of overhead code, to configure and deal with the key/cert > bits. > > 2. I have looked at the OpenSSL source code, I think it is a catastrophe > waiting to happen. In fact, the only thing that prevents attackers > from exploiting problems more actively, is that the source code is > fundamentally unreadable and impenetrable. > > Unless those two issues can be addressed, I don't see SSL in Varnish > any time soon. > I don't see your problem with that. 1. You should not include OpenSSL in varnish. Varnish should use OpenSSL. 2. There are other SSL libraries; maybe others are better suited. 3. It should be off by default and enabled by need. So it's the decision of the admin if he uses SSL and his risk.
But I really think https is a major show stopper for the use of Varnish. Regards Estartu -- ------------------------------------------------- Gerhard Schmidt | E-Mail: schmidt at ze.tum.de TU-München | WWW & Online Services | Tel: 089/289-25270 | Fax: 089/289-25257 | PGP public key on request -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 543 bytes Desc: OpenPGP digital signature URL: From phk at phk.freebsd.dk Thu Apr 15 17:43:15 2010 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 15 Apr 2010 17:43:15 +0000 Subject: Sticky Load Balancing with Varnish In-Reply-To: Your message of "Thu, 15 Apr 2010 13:10:47 +0200." <1271329847.6165.37.camel@app-srv-debian-amdmp2.idni> Message-ID: <1987.1271353395@critter.freebsd.dk> In message <1271329847.6165.37.camel at app-srv-debian-amdmp2.idni>, Niklas Norberg writes: >Ok, but it would be nice if this load-balancing supports backends that >use session-cookies.
And that the "control" can be chosen to look at a >>LB-cookie. > > You mean: > > sub vcl_recv { > set client.id = req.cookie[SESSION_BLAH]; > } > > Yes, that's the idea :-) > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by > incompetence. > Perfect. Thanks, then I'm very satisfied. -- Niklas From felix at seconddrawer.com.au Fri Apr 16 17:07:14 2010 From: felix at seconddrawer.com.au (Felix) Date: Sat, 17 Apr 2010 00:07:14 +0700 Subject: Handling duplicate headers Message-ID: <20100416170714.GA4512@thinkpad> Hi all, Just a question about handling duplicate headers. I am currently using Chrome and trying to get it to pass to the backend on a client forced refresh.
Chrome sends the following headers: > > RxHeader c Cache-Control: max-age=0 > RxHeader c Cache-Control: no-cache > >and I have been trying to catch 'no-cache' but it never gets it with >this rule: > > if (req.http.Cache-Control ~ "no-cache") > >as it seems to fill the 'Cache-Control' header spot with the first one >it finds, 'max-age' > >Is there a way to cope with this or is this a bug? I'll have to re-read the RFC, Chrome may be within spec, but they're certainly breaking tradition... -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From samcrawford at gmail.com Fri Apr 16 19:40:33 2010 From: samcrawford at gmail.com (Sam Crawford) Date: Fri, 16 Apr 2010 20:40:33 +0100 Subject: Handling duplicate headers In-Reply-To: <85097.1271438005@critter.freebsd.dk> References: <20100416170714.GA4512@thinkpad> <85097.1271438005@critter.freebsd.dk> Message-ID: Hrmmm, which version of Chrome is that? 
I'm running the latest 5.0 beta and I'm seeing the following: ** Using F5 only ** GET /Headers/ HTTP/1.1 Host: xxxxxxxxxxx Connection: keep-alive User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/533.5 (KHTML, like Gecko) Chrome/5.0.378.0 Safari/533.5 Cache-Control: no-cache Pragma: no-cache Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-GB,en-US;q=0.8,en;q=0.6 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 ** Using CTRL+F5 ** GET /Headers/ HTTP/1.1 Host: xxxxxxxxxxxxxxxxx Connection: keep-alive User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/533.5 (KHTML, like Gecko) Chrome/5.0.378.0 Safari/533.5 Cache-Control: max-age=0 Pragma: no-cache Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-GB,en-US;q=0.8,en;q=0.6 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 If I remember rightly, the above is also how Firefox handles Cache-Control headers. I've not been able to make Chrome emit two Cache-Control headers on a single request. Thanks, Sam On 16 April 2010 18:13, Poul-Henning Kamp wrote: > In message <20100416170714.GA4512 at thinkpad>, Felix writes: > >>Just a question about handling duplicate headers. I am currently using >>Chrome and trying to get it to pass to the backend on a client forced >>refresh. Chrome sends the following headers: >> >> RxHeader c Cache-Control: max-age=0 >> RxHeader c Cache-Control: no-cache >> >>and I have been trying to catch 'no-cache' but it never gets it with >>this rule: >> >> if (req.http.Cache-Control ~ "no-cache") >> >>as it seems to fill the 'Cache-Control' header spot with the first one >>it finds, 'max-age' >> >>Is there a way to cope with this or is this a bug? > > I'll have to re-read the RFC, Chrome may be within spec, but they're > certainly breaking tradition...
> > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc > From samcrawford at gmail.com Fri Apr 16 19:51:15 2010 From: samcrawford at gmail.com (Sam Crawford) Date: Fri, 16 Apr 2010 20:51:15 +0100 Subject: Handling duplicate headers In-Reply-To: References: <20100416170714.GA4512@thinkpad> <85097.1271438005@critter.freebsd.dk> Message-ID: Gah, what a fool, I got the two muddled - swap F5 and Ctrl+F5 around below. On 16 April 2010 20:40, Sam Crawford wrote: > Hrmmm, which version of Chrome is that? I'm running the latest 5.0 > beta and I'm seeing the following: > > ** Using F5 only ** > GET /Headers/ HTTP/1.1 > Host: xxxxxxxxxxx > Connection: keep-alive > User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) > AppleWebKit/533.5 (KHTML, like Gecko) Chrome/5.0.378.0 Safari/533.5 > Cache-Control: no-cache > Pragma: no-cache > Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 > Accept-Encoding: gzip,deflate,sdch > Accept-Language: en-GB,en-US;q=0.8,en;q=0.6 > Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 > > ** Using CTRL+F5 ** > GET /Headers/ HTTP/1.1 > Host: xxxxxxxxxxxxxxxxx > Connection: keep-alive > User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) > AppleWebKit/533.5 (KHTML, like Gecko) Chrome/5.0.378.0 Safari/533.5 > Cache-Control: max-age=0 > Pragma: no-cache > Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 > Accept-Encoding: gzip,deflate,sdch > Accept-Language: en-GB,en-US;q=0.8,en;q=0.6 > Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 > > If I remember rightly, the above is also how
Firefox handles > Cache-Control headers. > > I've not been able to make Chrome emit two Cache-Control headers on a > single request. > > Thanks, > > Sam > > > > On 16 April 2010 18:13, Poul-Henning Kamp wrote: >> In message <20100416170714.GA4512 at thinkpad>, Felix writes: >> >>>Just a question about handling duplicate headers. I am currently using >>>Chrome and trying to get it to pass to the backend on a client forced >>>refresh. Chrome sends the following headers: >>> >>> ?RxHeader ? ? c Cache-Control: max-age=0 >>> ?RxHeader ? ? c Cache-Control: no-cache >>> >>>and I have been trying to catch 'no-cache' but it never gets it with >>>this rule: >>> >>> ?if (req.http.Cache-Control ~ "no-cache") >>> >>>as it seems to fill the 'Cache-Control' header spot with the first one >>>it finds, 'max-age' >>> >>>Is there a way to cope with this or is this a bug? >> >> I'll have to re-read the RFC, Chrome may be within spec, but they're >> certainly breaking tradition... >> >> -- >> Poul-Henning Kamp ? ? ? | UNIX since Zilog Zeus 3.20 >> phk at FreeBSD.ORG ? ? ? ? | TCP/IP since RFC 956 >> FreeBSD committer ? ? ? | BSD since 4.3-tahoe >> Never attribute to malice what can adequately be explained by incompetence. >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://lists.varnish-cache.org/mailman/listinfo/varnish-misc >> > From felix at seconddrawer.com.au Sat Apr 17 05:51:57 2010 From: felix at seconddrawer.com.au (Felix) Date: Sat, 17 Apr 2010 12:51:57 +0700 Subject: Handling duplicate headers In-Reply-To: References: <20100416170714.GA4512@thinkpad> <85097.1271438005@critter.freebsd.dk> Message-ID: <20100417055157.GA5109@thinkpad> I am using Chromium 5.0.342.9 on Linux and I am only referring to the headers display while running varnishlog -c -o ReqStart ... 
For a forced refresh Ctrl+F5 I get the following: Chromium (5.0.342.9): 13 RxHeader c User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/5.0.342.9 Safari/533.2 13 RxHeader c Cache-Control: max-age=0 13 RxHeader c Pragma: no-cache 13 RxHeader c Cache-Control: no-cache Firefox (3.6.3): 13 RxHeader c User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.3) Gecko/20100402 Namoroka/3.6.3 13 RxHeader c Pragma: no-cache 13 RxHeader c Cache-Control: no-cache (just the relevant headers) When using the browser developer tool in Chromium it only shows one of the headers ('no-cache') but Varnish seems to get two. Firebug in Firefox only shows one header being sent and Varnish agrees. -felix On Fri, Apr 16, 2010 at 08:51:15PM +0100, Sam Crawford wrote: > Gah, what a fool, I got the two muddled - swap F5 and Ctrl+F5 around below. > > > > On 16 April 2010 20:40, Sam Crawford wrote: > > Hrmmm, which version of Chrome is that? I'm running the latest 5.0 > > beta and I'm seeing the following: > > > > ** Using F5 only ** > > GET /Headers/ HTTP/1.1 > > Host: xxxxxxxxxxx > > Connection: keep-alive > > User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) > > AppleWebKit/533.5 (KHTML, like Gecko) Chrome/5.0.378.0 Safari/533.5 > > Cache-Control: no-cache > > Pragma: no-cache > > Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 > > Accept-Encoding: gzip,deflate,sdch > > Accept-Language: en-GB,en-US;q=0.8,en;q=0.6 > > Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 > > > > ** Using CTRL+F5 ** > > GET /Headers/ HTTP/1.1 > > Host: xxxxxxxxxxxxxxxxx > > Connection: keep-alive > > User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) > > AppleWebKit/533.5 (KHTML, like Gecko) Chrome/5.0.378.0 Safari/533.5 > > Cache-Control: max-age=0 > > Pragma: no-cache > > Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 > > Accept-Encoding: 
gzip,deflate,sdch > > Accept-Language: en-GB,en-US;q=0.8,en;q=0.6 > > Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 > > > > If I remember rightly, the above is also how Firefox handles > > Cache-Control headers. > > > > I've not been able to make Chrome emit two Cache-Control headers on a > > single request. > > > > Thanks, > > > > Sam > > > > > > > > On 16 April 2010 18:13, Poul-Henning Kamp wrote: > >> In message <20100416170714.GA4512 at thinkpad>, Felix writes: > >> > >>>Just a question about handling duplicate headers. I am currently using > >>>Chrome and trying to get it to pass to the backend on a client forced > >>>refresh. Chrome sends the following headers: > >>> > >>> ?RxHeader ? ? c Cache-Control: max-age=0 > >>> ?RxHeader ? ? c Cache-Control: no-cache > >>> > >>>and I have been trying to catch 'no-cache' but it never gets it with > >>>this rule: > >>> > >>> ?if (req.http.Cache-Control ~ "no-cache") > >>> > >>>as it seems to fill the 'Cache-Control' header spot with the first one > >>>it finds, 'max-age' > >>> > >>>Is there a way to cope with this or is this a bug? > >> > >> I'll have to re-read the RFC, Chrome may be within spec, but they're > >> certainly breaking tradition... > >> > >> -- > >> Poul-Henning Kamp ? ? ? | UNIX since Zilog Zeus 3.20 > >> phk at FreeBSD.ORG ? ? ? ? | TCP/IP since RFC 956 > >> FreeBSD committer ? ? ? | BSD since 4.3-tahoe > >> Never attribute to malice what can adequately be explained by incompetence. 
> >> > >> _______________________________________________ > >> varnish-misc mailing list > >> varnish-misc at varnish-cache.org > >> http://lists.varnish-cache.org/mailman/listinfo/varnish-misc > >> > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc -- email: felix at seconddrawer.com.au web: http://seconddrawer.com.au/ gpg: E6FC 5BC6 268D B874 E546 8F6F A2BB 220B D5F6 92E3 Please don't send me Word or PowerPoint attachments. See http://www.gnu.org/philosophy/no-word-attachments.html From felix at seconddrawer.com.au Sat Apr 17 06:06:44 2010 From: felix at seconddrawer.com.au (Felix) Date: Sat, 17 Apr 2010 13:06:44 +0700 Subject: Handling duplicate headers In-Reply-To: <85097.1271438005@critter.freebsd.dk> References: <20100416170714.GA4512@thinkpad> <85097.1271438005@critter.freebsd.dk> Message-ID: <20100417060644.GB5109@thinkpad> Looking at the specs at http://www.faqs.org/rfcs/rfc2616.html section 4.2 seems to indicate that multiple duplicate headers can be present if they can be concatenated using commas and kept in the same order. -felix On Fri, Apr 16, 2010 at 05:13:25PM +0000, Poul-Henning Kamp wrote: > In message <20100416170714.GA4512 at thinkpad>, Felix writes: > > >Just a question about handling duplicate headers. I am currently using > >Chrome and trying to get it to pass to the backend on a client forced > >refresh. Chrome sends the following headers: > > > > RxHeader c Cache-Control: max-age=0 > > RxHeader c Cache-Control: no-cache > > > >and I have been trying to catch 'no-cache' but it never gets it with > >this rule: > > > > if (req.http.Cache-Control ~ "no-cache") > > > >as it seems to fill the 'Cache-Control' header spot with the first one > >it finds, 'max-age' > > > >Is there a way to cope with this or is this a bug?
> > I'll have to re-read the RFC, Chrome may be within spec, but they're > certainly breaking tradition... > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. -- email: felix at seconddrawer.com.au web: http://seconddrawer.com.au/ gpg: E6FC 5BC6 268D B874 E546 8F6F A2BB 220B D5F6 92E3 Please don't send me Word or PowerPoint attachments. See http://www.gnu.org/philosophy/no-word-attachments.html -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From dev at soeren-soerries.de Sat Apr 17 20:00:04 2010 From: dev at soeren-soerries.de (Soesoe) Date: Sat, 17 Apr 2010 22:00:04 +0200 Subject: Varnish 2.1.0: ESI and POST Requests? Message-ID: Hi, we want to use ESI, it works Fine for all GET-requests, but we have some Post-Request for forms and Logins etc.... There the result of the post Request isn't parsed for esi. We tried a lot, but didn't get it working... Any hints? Thanks a lot from Hamburg, Germany Soeren From felix at seconddrawer.com.au Sun Apr 18 16:04:18 2010 From: felix at seconddrawer.com.au (Felix) Date: Sun, 18 Apr 2010 23:04:18 +0700 Subject: Handling duplicate headers In-Reply-To: <20100417055157.GA5109@thinkpad> References: <20100416170714.GA4512@thinkpad> <85097.1271438005@critter.freebsd.dk> <20100417055157.GA5109@thinkpad> Message-ID: <20100418160418.GB4760@thinkpad> ...and I forgot to mention that I am using Varnish 2.1.0-2 on a Debian-based server. So if this has already been accounted for in a more recent release then I apologise. -felix -- email: felix at seconddrawer.com.au web: http://seconddrawer.com.au/ gpg: E6FC 5BC6 268D B874 E546 8F6F A2BB 220B D5F6 92E3 Please don't send me Word or PowerPoint attachments. 
See http://www.gnu.org/philosophy/no-word-attachments.html -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From tfheen at varnish-software.com Mon Apr 19 06:10:22 2010 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Mon, 19 Apr 2010 08:10:22 +0200 Subject: server name in config? In-Reply-To: <4BC352B3.5000008@netmatch.nl> ("Angelo =?utf-8?Q?H=C3=B6ngen?= =?utf-8?Q?s=22's?= message of "Mon, 12 Apr 2010 19:04:51 +0200") References: <2903443B3710364B814B820238DDEF2CA761B62C@TIL-EXCH-01.netmatch.local> <4BC35005.8000903@mangahigh.com> <4BC352B3.5000008@netmatch.nl> Message-ID: <87vdbokm5t.fsf@qurzaw.linpro.no> ]] Angelo H?ngens | Thanks, the server.hostname field works like a charm :) There is also server.identity which can be set using -i. This is useful if you're running multiple varnishes on a single box. -- Tollef Fog Heen Varnish Software t: +47 21 54 41 73 From tfheen at varnish-software.com Mon Apr 19 06:11:51 2010 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Mon, 19 Apr 2010 08:11:51 +0200 Subject: Proxying POST body through Varnish In-Reply-To: (Paul Carey's message of "Mon, 12 Apr 2010 19:17:31 +0100") References: Message-ID: <87r5mckm3c.fsf@qurzaw.linpro.no> ]] Paul Carey | I'd like Varnish to cache these requests. My VCL config defines | vcl_recv and doesn't 'pass' on POSTs that match _all_docs. However, | CouchDB doesn't like the POST requests proxied through Varnish, | returning an error message stating 'invalid UTF-8 JSON'. I suspect | Varnish is stripping the POST body. I say this because when I run | varnishlog I see a Rx Content-Length header received by Varnish but | not a corresponding Tx header proxied to CouchDB. | | If it's likely that this is what's happening, is there any many to get | Varnish to pass the POST body along? Short answer: No, there isn't. 
-- Tollef Fog Heen Varnish Software t: +47 21 54 41 73 From tfheen at varnish-software.com Mon Apr 19 06:22:23 2010 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Mon, 19 Apr 2010 08:22:23 +0200 Subject: influencing beresp.cacheable on the backend In-Reply-To: (David Birdsong's message of "Mon, 12 Apr 2010 14:50:42 -0700") References: Message-ID: <87mxx0klls.fsf@qurzaw.linpro.no> ]] David Birdsong | How can I influence bereps.cacheable in a backend such that it will | evaluate to False? | | I set Expires and Cache-Control in the backend, this is what the | backend generates: | HTTP/1.1 200 OK | Server: nginx/0.7.65 | Date: Mon, 12 Apr 2010 21:48:25 GMT | Content-Type: image/gif | Content-Length: 43 | Last-Modified: Mon, 28 Sep 1970 06:00:00 GMT | Connection: close | Expires: Thu, 01 Jan 1970 00:00:01 GMT | Cache-Control: no-cache | | From the man page: | """ | A response is considered cacheable if it is valid (see above), the | HTTP status code is 200, 203, 300, 301, 302, 404 or 410 and it has a | non-zero time-to-live when Expires and Cache-Control headers are taken | into account. | """ The documentation is wrong for 2.1.0, it's correct for 2.0.x. I'm sorry the docs were wrong, I've updated them now. | And yet this object is getting cached in varnish. Here is my vcl_fetch: You can set beresp.cacheable based on the TTL, something like this should work: sub vcl_fetch { if (beresp.ttl < 1s) { set beresp.cacheable = false; } } -- Tollef Fog Heen Varnish Software t: +47 21 54 41 73 From phk at phk.freebsd.dk Mon Apr 19 06:41:57 2010 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 19 Apr 2010 06:41:57 +0000 Subject: Varnish 2.1.0: ESI and POST Requests? In-Reply-To: Your message of "Sat, 17 Apr 2010 22:00:04 +0200." Message-ID: <5300.1271659317@critter.freebsd.dk> In message , So esoe writes: >Hi, > >we want to use ESI, it works Fine for all GET-requests, but we have >some Post-Request for forms and Logins etc.... 
> >There the result of the post Request isn't parsed for esi. Hmm that should work, provided you "pass" the POST requests and mark them for esi processing in vcl_fetch{} -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From A.Hongens at netmatch.nl Mon Apr 19 18:27:55 2010 From: A.Hongens at netmatch.nl (=?iso-8859-1?Q?Angelo_H=F6ngens?=) Date: Mon, 19 Apr 2010 20:27:55 +0200 Subject: backend write error: 35 Message-ID: <2903443B3710364B814B820238DDEF2CA761BD5C@TIL-EXCH-01.netmatch.local> After rolling back from 2.1.0 to 2.0.5 (because of #678), at least my balancers stayed up throughout the weekend. So I'm happy again, and until it's sorted out, I'm staying on 2.0.x. Today I've noticed some other strange stuff though, where images on some domains gave 503's. No idea when that started. I don't see anything strange on the backend (haproxy listening on multiple ports on localhost), and I only saw 200's being retrieved by varnish. Varnishlog showed the error "FetchError c http read error: 0". No idea what it means, but googling it, I saw ticket #624, talking about http 1.1 vs http 1.0 and httpcloses. I don't think it's relevant, since I only see http 1.1 requests and responses. Does anyone know what this "http read error: 0" error means? Inspired though, I saw I set haproxy to force client connections to disconnect (global option httpclose). I turned it off, and now varnish reuses connections to haproxy, and the "http read error: 0" suddenly disappears. (FYI: My setup: I have varnish running on a box, and based on the 8900 different hostnames, it selects one of the 325 backends. These backends are all haproxy listeners, listening on localhost:8001, localhost:8002, etc.. 
The fact that using keepalives to the backends solves that message, gives me the feeling I might be running into problems with the number of tcp sockets available on my localhost interface or something like that. Should I be able to see any indication of this on my bsd boxes? I don't see anything strange in my messages.log or the like. I don't see that many connections using netstat: $ netstat -an | grep 127.0.0 | awk '{print $6}' | sort -n | uniq -c 41 CLOSE_WAIT 172 ESTABLISHED 23 FIN_WAIT_2 347 LISTEN 1127 TIME_WAIT However, that seems to be solved for now as well. Now however, I get yet another error I did not get (that much) before. I now have some post requests (that are being passed) that show: FetchError c backend write error: 35 Any idea what this error means? -- With kind regards, Angelo H?ngens Systems Administrator ------------------------------------------ NetMatch tourism internet software solutions Ringbaan Oost 2b 5013 CA Tilburg T: +31 (0)13 5811088 F: +31 (0)13 5821239 mailto:A.Hongens at netmatch.nl http://www.netmatch.nl ------------------------------------------ From David at firechaser.com Tue Apr 20 11:02:01 2010 From: David at firechaser.com (David Murphy) Date: Tue, 20 Apr 2010 12:02:01 +0100 Subject: Cookies - set on non-cached pages, read on all pages Message-ID: <345BD8B3F8775748A4676A625EADA22417D87AF9@DURAN.firechaser.local> Hello We use a javascript cookie to display the user's first name in the header of our site. Our site uses Varnish to cache all pages except a login page, where the cookie is set. What we need to be able to do is set the cookie once (name='David') on the login page so that we can display 'Hello David' on all other pages. For a given cached page, if the user's browser already has a cookie saved, then we display its value ('Hello David'). If no cookie is found then we display 'Login'. 
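The behaviour David wants can be stated independently of the browser: given the raw Cookie header, render the user's name if the cookie is present, otherwise show "Login". A Python stand-in for that client-side JS logic (the cookie name `firstname` is the one discussed later in the thread; substitute your own):

```python
def greeting_from_cookie(cookie_header, name="firstname"):
    """Return the greeting the cached page's JS would render from a Cookie header."""
    if cookie_header:
        for part in cookie_header.split(";"):
            key, _, value = part.strip().partition("=")
            if key == name and value:
                return "Hello " + value
    return "Login"

print(greeting_from_cookie("firstname=David"))   # Hello David
print(greeting_from_cookie(None))                # Login
```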
We only want a single instance of each cached page in Varnish (rather than a unique cached page per user) so that all users - whether they have the cookie or not - will be given this same page from the cache. So, if we set the cookie on the dynamic login page, how would we go about making sure that for all the other pages cached by Varnish, the cookie value (already stored in browser) is available to be read by JS code that is part of the cached XHTML markup. Thanks for any help. Best, David From info at makeityourway.de Tue Apr 20 12:18:03 2010 From: info at makeityourway.de (Mike Schiessl) Date: Tue, 20 Apr 2010 14:18:03 +0200 Subject: sick backend used by director ( 503 on healthy backend ) Message-ID: <001e01cae083$9deca630$d9c5f290$@de> Hi list, I am having a strange issue regarding the "director" with 2 backend hosts. I've two webservers running lighty and one frontend (running the 2.1 varnishd). The webs are put together as a director backend in varnishd as web01 and web02. When web02 fails, web01 takes over the work and everything works fine. When I do it the other way (web01 fails and web02 is healthy) I get 503 (guru meditation). The varnishlog reports web01 correctly as sick - but it seems to me that it is used anyway! A very similar behavior was reported in ticket #589. Here is the relevant snippet of my .vcl: backend web01 { .host = "web01.cluster.local"; .port = "8800"; .max_connections = 500; .probe = { .url = "/"; .timeout = 0.3s; .window = 8; .threshold = 3; } } backend web02 { .host = "web02.cluster.local"; .port = "8800"; .max_connections = 500; .probe = { .url = "/"; .timeout = 0.3s; .window = 8; .threshold = 3; } } director webcluster random { { .backend = web01; .weight = 1; } { .backend = web02; .weight = 1; } } As "set req.backend" I use webcluster. I am using varnishd (varnish-2.1 SVN) on kernel 2.6.31.11.
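For reference, the intended semantics of a random director — a weighted random choice restricted to backends whose probes report them healthy — can be modelled in a few lines of Python. This is an illustration of the expected behaviour (which the 503s above suggest the reported version may not be honouring), not Varnish's implementation:

```python
import random

def pick_backend(backends):
    """Weighted random choice among healthy backends; None if all are sick.

    Illustrative model of a random director, not Varnish's actual code.
    """
    healthy = [b for b in backends if b["healthy"]]
    if not healthy:
        return None                       # nothing to serve from: 503 territory
    total = sum(b["weight"] for b in healthy)
    r = random.uniform(0, total)
    for b in healthy:
        r -= b["weight"]
        if r <= 0:
            return b["name"]
    return healthy[-1]["name"]

backends = [
    {"name": "web01", "weight": 1, "healthy": False},   # probes marked it sick
    {"name": "web02", "weight": 1, "healthy": True},
]
print(pick_backend(backends))   # web02: a sick backend must never be chosen
```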
For the next few hours I will be able to test the behavior if there are any suggestions; today in the evening the system will go live (knowing that in a failover there might occur a guru meditation). Thanks for your support Mike Schiessl Makeityourway.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard.chiswell at mangahigh.com Tue Apr 20 14:41:16 2010 From: richard.chiswell at mangahigh.com (Richard Chiswell) Date: Tue, 20 Apr 2010 15:41:16 +0100 Subject: Cookies - set on non-cached pages, read on all pages In-Reply-To: <345BD8B3F8775748A4676A625EADA22417D87AF9@DURAN.firechaser.local> References: <345BD8B3F8775748A4676A625EADA22417D87AF9@DURAN.firechaser.local> Message-ID: <4BCDBD0C.5060904@mangahigh.com> Hi David, > We use a javascript cookie to display the user's first name in the header of our site. Our site uses Varnish to cache all pages except a login page, where the cookie is set. > > What we need to be able to do is set the cookie once (name='David') on the login page so that we can display 'Hello David' on all other pages. > > For a given cached page, if the user's browser already has a cookie saved, then we display its value ('Hello David'). If no cookie is found then we display 'Login'. We've done something similar - just get Varnish to ignore the cookie entirely and then just use Javascript to check for the cookie and write out the data. If there is no cookie, it shows "Login". 
sub vcl_recv { //snip unset req.http.cookie; } sub vcl_fetch { //beresp replaces obj in 2.1 unset beresp.http.set-cookie; } We've tried several combinations of unsetting cookies so that the cached pages are not duplicated for each client, but without much luck. What we find is that a cookie set in the browser on the /login page is not available on subsequent cached Varnish pages. That said, I'm glad you've got something similar working. So when you view your Varnish stats you see just a single instance of a cached page e.g. /about-us or whatever, and when that page is viewed by different users with different cookie values, the expected "Hello " appears just fine? Best, David ________________________________________ From: Richard Chiswell [richard.chiswell at mangahigh.com] Sent: 20 April 2010 15:41 To: David Murphy Cc: varnish-misc at varnish-cache.org Subject: Re: Cookies - set on non-cached pages, read on all pages Hi David, > We use a javascript cookie to display the user's first name in the header of our site. Our site uses Varnish to cache all pages except a login page, where the cookie is set. > > What we need to be able to do is set the cookie once (name='David') on the login page so that we can display 'Hello David' on all other pages. > > For a given cached page, if the user's browser already has a cookie saved, then we display its value ('Hello David'). If no cookie is found then we display 'Login'. We've done something similar - just get Varnish to ignore the cookie entirely and then just use Javascript to check for the cookie and write out the data. If there is no cookie, it shows "Login". 
Rich From richard.chiswell at mangahigh.com Tue Apr 20 15:13:00 2010 From: richard.chiswell at mangahigh.com (Richard Chiswell) Date: Tue, 20 Apr 2010 16:13:00 +0100 Subject: Cookies - set on non-cached pages, read on all pages In-Reply-To: <345BD8B3F8775748A4676A625EADA22417D87AFF@DURAN.firechaser.local> References: <345BD8B3F8775748A4676A625EADA22417D87AF9@DURAN.firechaser.local>, <4BCDBD0C.5060904@mangahigh.com> <345BD8B3F8775748A4676A625EADA22417D87AFF@DURAN.firechaser.local> Message-ID: <4BCDC47C.40203@mangahigh.com> Hi David, On 20/04/2010 16:08, David Murphy wrote: > Thanks Rich > > When you say ignore do you mean unset e.g. > > sub vcl_recv { > //snip > unset req.http.cookie; > } > We do something like: sub vcl_recv { ... if (req.http.Cookie) { set req.http.Cookie = ";" req.http.Cookie; set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";"); set req.http.Cookie = regsuball(req.http.Cookie, ";(Cookies|WeWantTo|Keep)=", "; \1="); set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", ""); set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", ""); if (req.http.Cookie == "") { remove req.http.Cookie; } } ... } The Cookies|We... regular expression are for things like admin cookies which we want to be set. Rich From David at firechaser.com Tue Apr 20 15:32:09 2010 From: David at firechaser.com (David Murphy) Date: Tue, 20 Apr 2010 16:32:09 +0100 Subject: Cookies - set on non-cached pages, read on all pages In-Reply-To: <4BCDC47C.40203@mangahigh.com> References: <345BD8B3F8775748A4676A625EADA22417D87AF9@DURAN.firechaser.local>, <4BCDBD0C.5060904@mangahigh.com> <345BD8B3F8775748A4676A625EADA22417D87AFF@DURAN.firechaser.local>, <4BCDC47C.40203@mangahigh.com> Message-ID: <345BD8B3F8775748A4676A625EADA22417D87B01@DURAN.firechaser.local> Very helpful, thanks. So the admin cookies are different from the simple JS cookies that provide the 'Hello ' value? 
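Richard's regsuball chain above can be checked step by step with Python's `re.sub` — same patterns, same replacements, same `Cookies|WeWantTo|Keep` whitelist — to see exactly which cookies survive the rewrite. A sketch, not Varnish code:

```python
import re

def strip_cookies(cookie, keep=r";(Cookies|WeWantTo|Keep)="):
    """Replicate the VCL regsuball whitelist chain from the thread."""
    cookie = ";" + cookie
    cookie = re.sub(r"; +", ";", cookie)           # normalise "; " to ";"
    cookie = re.sub(keep, r"; \1=", cookie)        # mark whitelisted cookies with "; "
    cookie = re.sub(r";[^ ][^;]*", "", cookie)     # drop every unmarked cookie
    cookie = re.sub(r"^[; ]+|[; ]+$", "", cookie)  # trim leftover separators
    return cookie or None                          # empty -> remove the header

print(strip_cookies("firstname=David; Keep=1; tracking=xyz"))   # Keep=1
print(strip_cookies("tracking=xyz"))                            # None
```

Only the whitelisted `Keep` cookie makes it through; everything else is stripped, so all such requests can hash to the same cached object.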
My understanding is that if a page is cached with a unique cookie then there will be an object for every unique cookie value (tom, dick, harry etc.) and as a result we'll get a low hit-rate. However, my guess is that I've misunderstood how this works, and that I'm wrong :) Is it just the cookie name ('firstname') that is important rather than the cookie value ('Tom') when deciding whether to unset the cookie on a varnish cached page? Thanks, David ________________________________________ From: Richard Chiswell [richard.chiswell at mangahigh.com] Sent: 20 April 2010 16:13 To: David Murphy Cc: varnish-misc at varnish-cache.org Subject: Re: Cookies - set on non-cached pages, read on all pages Hi David, On 20/04/2010 16:08, David Murphy wrote: > Thanks Rich > > When you say ignore do you mean unset e.g. > > sub vcl_recv { > //snip > unset req.http.cookie; > } > We do something like: sub vcl_recv { ... if (req.http.Cookie) { set req.http.Cookie = ";" req.http.Cookie; set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";"); set req.http.Cookie = regsuball(req.http.Cookie, ";(Cookies|WeWantTo|Keep)=", "; \1="); set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", ""); set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", ""); if (req.http.Cookie == "") { remove req.http.Cookie; } } ... } The Cookies|We... regular expression are for things like admin cookies which we want to be set. 
Rich From rtshilston at gmail.com Tue Apr 20 15:40:13 2010 From: rtshilston at gmail.com (Rob S) Date: Tue, 20 Apr 2010 16:40:13 +0100 Subject: Cookies - set on non-cached pages, read on all pages In-Reply-To: <345BD8B3F8775748A4676A625EADA22417D87B01@DURAN.firechaser.local> References: <345BD8B3F8775748A4676A625EADA22417D87AF9@DURAN.firechaser.local>, <4BCDBD0C.5060904@mangahigh.com> <345BD8B3F8775748A4676A625EADA22417D87AFF@DURAN.firechaser.local>, <4BCDC47C.40203@mangahigh.com> <345BD8B3F8775748A4676A625EADA22417D87B01@DURAN.firechaser.local> Message-ID: <4BCDCADD.9080304@gmail.com> We too operate a Varnish cache + JS for personalisation. Our approach is as follows: Normal GET request for normal public pages: unset cookie, serve cached page. Requests for login page, admin or pages that are more personal than can be achieved with JS: Make varnish transparent. This is pretty simple, and works well for us. However, if you're not able to identify the admin / login areas from the URL, then you might find this quite hard. Rob David Murphy wrote: > Very helpful, thanks. > > So the admin cookies are different from the simple JS cookies that provide the 'Hello ' value? > > My understanding is that if a page is cached with unique cookie then there will be an object for every unique cookie value (tom, dick, harry etc) an as a result we'll get a low hit-rate. However, my guess is that I've misunderstood how this works, and that I'm wrong :) > > Is it just the cookie name ('firstname') that is important rather than the cookie value ('Tom') when decided whether to unset the cookie on a varnish cached page? 
> > Thanks, David > ________________________________________ > From: Richard Chiswell [richard.chiswell at mangahigh.com] > Sent: 20 April 2010 16:13 > To: David Murphy > Cc: varnish-misc at varnish-cache.org > Subject: Re: Cookies - set on non-cached pages, read on all pages > > Hi David, > > On 20/04/2010 16:08, David Murphy wrote: > >> Thanks Rich >> >> When you say ignore do you mean unset e.g. >> >> sub vcl_recv { >> //snip >> unset req.http.cookie; >> } >> >> > We do something like: > sub vcl_recv { > ... > if (req.http.Cookie) { > set req.http.Cookie = ";" req.http.Cookie; > set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";"); > set req.http.Cookie = regsuball(req.http.Cookie, > ";(Cookies|WeWantTo|Keep)=", "; \1="); > set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", ""); > set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", ""); > if (req.http.Cookie == "") { > remove req.http.Cookie; > } > } > ... > } > > The Cookies|We... regular expression are for things like admin cookies > which we want to be set. 
> > Rich > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc > From David at firechaser.com Tue Apr 20 15:59:41 2010 From: David at firechaser.com (David Murphy) Date: Tue, 20 Apr 2010 16:59:41 +0100 Subject: Cookies - set on non-cached pages, read on all pages In-Reply-To: <4BCDCADD.9080304@gmail.com> References: <345BD8B3F8775748A4676A625EADA22417D87AF9@DURAN.firechaser.local>, <4BCDBD0C.5060904@mangahigh.com> <345BD8B3F8775748A4676A625EADA22417D87AFF@DURAN.firechaser.local>, <4BCDC47C.40203@mangahigh.com> <345BD8B3F8775748A4676A625EADA22417D87B01@DURAN.firechaser.local>, <4BCDCADD.9080304@gmail.com> Message-ID: <345BD8B3F8775748A4676A625EADA22417D87B02@DURAN.firechaser.local> Thanks Rob We use req.url ~ "^/admin/" to identify the admin area of the site and we force Varnish to grab content from back end and not cache anything if this is part of URL. Works fine for us. So,for JS personalisation you're unsetting cookies when saving the pages to cache, and then unsetting when serving from cache? Something like? ... sub vcl_recv { if (!req.url ~ "^/admin") { unset req.http.cookie; } //snip } sub vcl_fetch { if (req.url ~ "^/admin") { unset beresp.http.set-cookie; } //snip } Best, David ________________________________________ From: Rob S [rtshilston at gmail.com] Sent: 20 April 2010 16:40 To: David Murphy Cc: Richard Chiswell; varnish-misc at varnish-cache.org Subject: Re: Cookies - set on non-cached pages, read on all pages We too operate a Varnish cache + JS for personalisation. Our approach is as follows: Normal GET request for normal public pages: unset cookie, serve cached page. Requests for login page, admin or pages that are more personal than can be achieved with JS: Make varnish transparent. This is pretty simple, and works well for us. 
However, if you're not able to identify the admin / login areas from the URL, then you might find this quite hard. Rob David Murphy wrote: > Very helpful, thanks. > > So the admin cookies are different from the simple JS cookies that provide the 'Hello ' value? > > My understanding is that if a page is cached with a unique cookie then there will be an object for every unique cookie value (tom, dick, harry etc) and as a result we'll get a low hit-rate. However, my guess is that I've misunderstood how this works, and that I'm wrong :) > > Is it just the cookie name ('firstname') that is important rather than the cookie value ('Tom') when deciding whether to unset the cookie on a varnish cached page? > > Thanks, David > ________________________________________ > From: Richard Chiswell [richard.chiswell at mangahigh.com] > Sent: 20 April 2010 16:13 > To: David Murphy > Cc: varnish-misc at varnish-cache.org > Subject: Re: Cookies - set on non-cached pages, read on all pages > > Hi David, > > On 20/04/2010 16:08, David Murphy wrote: > >> Thanks Rich >> >> When you say ignore do you mean unset e.g. >> >> sub vcl_recv { >> //snip >> unset req.http.cookie; >> } >> >> > We do something like: > sub vcl_recv { > ... > if (req.http.Cookie) { > set req.http.Cookie = ";" req.http.Cookie; > set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";"); > set req.http.Cookie = regsuball(req.http.Cookie, > ";(Cookies|WeWantTo|Keep)=", "; \1="); > set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", ""); > set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", ""); > if (req.http.Cookie == "") { > remove req.http.Cookie; > } > } > ... > } > > The Cookies|We... regular expression is for things like admin cookies > which we want to be set. 
> > Rich > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc > From rtshilston at gmail.com Tue Apr 20 16:08:52 2010 From: rtshilston at gmail.com (Rob S) Date: Tue, 20 Apr 2010 17:08:52 +0100 Subject: Cookies - set on non-cached pages, read on all pages In-Reply-To: <345BD8B3F8775748A4676A625EADA22417D87B02@DURAN.firechaser.local> References: <345BD8B3F8775748A4676A625EADA22417D87AF9@DURAN.firechaser.local>, <4BCDBD0C.5060904@mangahigh.com> <345BD8B3F8775748A4676A625EADA22417D87AFF@DURAN.firechaser.local>, <4BCDC47C.40203@mangahigh.com> <345BD8B3F8775748A4676A625EADA22417D87B01@DURAN.firechaser.local>, <4BCDCADD.9080304@gmail.com> <345BD8B3F8775748A4676A625EADA22417D87B02@DURAN.firechaser.local> Message-ID: <4BCDD194.1020408@gmail.com> Not quite. We don't unset the cookies in vcl_recv, but we've ensured that that function ends with "lookup", so we never hit the default recipe which would otherwise prevent caching if a cookie was set. Then, we approximately do something like: sub vcl_fetch { if (obj.ttl > 0) { unset obj.http.Set-Cookie; } .... } What this means is that: If the backend thinks the response is cacheable, then make sure we strip cookies. If it's not cacheable, then we don't care if cookies are set or not. Obviously this can't be applied blindly in front of an arbitrary backend. Fortunately, our backends are running apps completely under our control, so this isn't a worry. Rob David Murphy wrote: > Thanks Rob > > We use > > req.url ~ "^/admin/" > > to identify the admin area of the site and we force Varnish to grab content from back end and not cache anything if this is part of URL. Works fine for us. > > So,for JS personalisation you're unsetting cookies when saving the pages to cache, and then unsetting when serving from cache? > > Something like? ... 
> > sub vcl_recv { > if (!req.url ~ "^/admin") { > unset req.http.cookie; > } > //snip > } > > sub vcl_fetch { > if (req.url ~ "^/admin") { > unset beresp.http.set-cookie; > } > //snip > } > > > Best, David > > ________________________________________ > From: Rob S [rtshilston at gmail.com] > Sent: 20 April 2010 16:40 > To: David Murphy > Cc: Richard Chiswell; varnish-misc at varnish-cache.org > Subject: Re: Cookies - set on non-cached pages, read on all pages > > We too operate a Varnish cache + JS for personalisation. Our approach > is as follows: > > Normal GET request for normal public pages: unset cookie, serve cached page. > Requests for login page, admin or pages that are more personal than can > be achieved with JS: Make varnish transparent. > > This is pretty simple, and works well for us. However, if you're not > able to identify the admin / login areas from the URL, then you might > find this quite hard. > > > Rob > > > > David Murphy wrote: > >> Very helpful, thanks. >> >> So the admin cookies are different from the simple JS cookies that provide the 'Hello ' value? >> >> My understanding is that if a page is cached with unique cookie then there will be an object for every unique cookie value (tom, dick, harry etc) an as a result we'll get a low hit-rate. However, my guess is that I've misunderstood how this works, and that I'm wrong :) >> >> Is it just the cookie name ('firstname') that is important rather than the cookie value ('Tom') when decided whether to unset the cookie on a varnish cached page? >> >> Thanks, David >> ________________________________________ >> From: Richard Chiswell [richard.chiswell at mangahigh.com] >> Sent: 20 April 2010 16:13 >> To: David Murphy >> Cc: varnish-misc at varnish-cache.org >> Subject: Re: Cookies - set on non-cached pages, read on all pages >> >> Hi David, >> >> On 20/04/2010 16:08, David Murphy wrote: >> >> >>> Thanks Rich >>> >>> When you say ignore do you mean unset e.g. 
>>> >>> sub vcl_recv { >>> //snip >>> unset req.http.cookie; >>> } >>> >>> >>> >> We do something like: >> sub vcl_recv { >> ... >> if (req.http.Cookie) { >> set req.http.Cookie = ";" req.http.Cookie; >> set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";"); >> set req.http.Cookie = regsuball(req.http.Cookie, >> ";(Cookies|WeWantTo|Keep)=", "; \1="); >> set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", ""); >> set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", ""); >> if (req.http.Cookie == "") { >> remove req.http.Cookie; >> } >> } >> ... >> } >> >> The Cookies|We... regular expression are for things like admin cookies >> which we want to be set. >> >> Rich >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://lists.varnish-cache.org/mailman/listinfo/varnish-misc >> >> > > From David at firechaser.com Tue Apr 20 17:11:22 2010 From: David at firechaser.com (David Murphy) Date: Tue, 20 Apr 2010 18:11:22 +0100 Subject: Cookies - set on non-cached pages, read on all pages In-Reply-To: <4BCDD194.1020408@gmail.com> References: <345BD8B3F8775748A4676A625EADA22417D87AF9@DURAN.firechaser.local>, <4BCDBD0C.5060904@mangahigh.com> <345BD8B3F8775748A4676A625EADA22417D87AFF@DURAN.firechaser.local>, <4BCDC47C.40203@mangahigh.com> <345BD8B3F8775748A4676A625EADA22417D87B01@DURAN.firechaser.local>, <4BCDCADD.9080304@gmail.com> <345BD8B3F8775748A4676A625EADA22417D87B02@DURAN.firechaser.local>, <4BCDD194.1020408@gmail.com> Message-ID: <345BD8B3F8775748A4676A625EADA22417D87B06@DURAN.firechaser.local> Rob, Rich - thank you both very much. I have enough to crack on with this myself now. Best, David ________________________________________ From: Rob S [rtshilston at gmail.com] Sent: 20 April 2010 17:08 To: David Murphy Cc: Richard Chiswell; varnish-misc at varnish-cache.org Subject: Re: Cookies - set on non-cached pages, read on all pages Not quite. 
We don't unset the cookies in vcl_recv, but we've ensured that that function ends with "lookup", so we never hit the default recipe which would otherwise prevent caching if a cookie was set. Then, we approximately do something like: sub vcl_fetch { if (obj.ttl > 0) { unset obj.http.Set-Cookie; } .... } What this means is that: If the backend thinks the response is cacheable, then make sure we strip cookies. If it's not cacheable, then we don't care if cookies are set or not. Obviously this can't be applied blindly in front of an arbitrary backend. Fortunately, our backends are running apps completely under our control, so this isn't a worry. Rob David Murphy wrote: > Thanks Rob > > We use > > req.url ~ "^/admin/" > > to identify the admin area of the site and we force Varnish to grab content from back end and not cache anything if this is part of URL. Works fine for us. > > So,for JS personalisation you're unsetting cookies when saving the pages to cache, and then unsetting when serving from cache? > > Something like? ... > > sub vcl_recv { > if (!req.url ~ "^/admin") { > unset req.http.cookie; > } > //snip > } > > sub vcl_fetch { > if (req.url ~ "^/admin") { > unset beresp.http.set-cookie; > } > //snip > } > > > Best, David > > ________________________________________ > From: Rob S [rtshilston at gmail.com] > Sent: 20 April 2010 16:40 > To: David Murphy > Cc: Richard Chiswell; varnish-misc at varnish-cache.org > Subject: Re: Cookies - set on non-cached pages, read on all pages > > We too operate a Varnish cache + JS for personalisation. Our approach > is as follows: > > Normal GET request for normal public pages: unset cookie, serve cached page. > Requests for login page, admin or pages that are more personal than can > be achieved with JS: Make varnish transparent. > > This is pretty simple, and works well for us. However, if you're not > able to identify the admin / login areas from the URL, then you might > find this quite hard. 
> > > Rob > > > > David Murphy wrote: > >> Very helpful, thanks. >> >> So the admin cookies are different from the simple JS cookies that provide the 'Hello ' value? >> >> My understanding is that if a page is cached with unique cookie then there will be an object for every unique cookie value (tom, dick, harry etc) an as a result we'll get a low hit-rate. However, my guess is that I've misunderstood how this works, and that I'm wrong :) >> >> Is it just the cookie name ('firstname') that is important rather than the cookie value ('Tom') when decided whether to unset the cookie on a varnish cached page? >> >> Thanks, David >> ________________________________________ >> From: Richard Chiswell [richard.chiswell at mangahigh.com] >> Sent: 20 April 2010 16:13 >> To: David Murphy >> Cc: varnish-misc at varnish-cache.org >> Subject: Re: Cookies - set on non-cached pages, read on all pages >> >> Hi David, >> >> On 20/04/2010 16:08, David Murphy wrote: >> >> >>> Thanks Rich >>> >>> When you say ignore do you mean unset e.g. >>> >>> sub vcl_recv { >>> //snip >>> unset req.http.cookie; >>> } >>> >>> >>> >> We do something like: >> sub vcl_recv { >> ... >> if (req.http.Cookie) { >> set req.http.Cookie = ";" req.http.Cookie; >> set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";"); >> set req.http.Cookie = regsuball(req.http.Cookie, >> ";(Cookies|WeWantTo|Keep)=", "; \1="); >> set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", ""); >> set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", ""); >> if (req.http.Cookie == "") { >> remove req.http.Cookie; >> } >> } >> ... >> } >> >> The Cookies|We... regular expression are for things like admin cookies >> which we want to be set. 
>> >> Rich >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://lists.varnish-cache.org/mailman/listinfo/varnish-misc >> >> > > From augusto at jadedpixel.com Thu Apr 22 20:27:35 2010 From: augusto at jadedpixel.com (Augusto Becciu) Date: Thu, 22 Apr 2010 17:27:35 -0300 Subject: Crazy value for sms_nbytes in varnish 2.1 Message-ID: Hey guys, We're running varnish 2.1 in two 64bit linux servers with 17Gb of ram. Here's the config we use: varnishd -P /var/run/varnishd.pid -a 0.0.0.0:2000 -T 127.0.0.1:6082 -w 200,2000 -s malloc,12G -p lru_interval=20 -f /etc/varnish/varnish.vcl Suddenly the value of sms_nbytes in one of those went from 0 to 18446744073709550272. What does that mean? Other than that everything seems to be normal. Here's the output of varnishstat -1: client_conn 198874 7.77 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 1924202 75.17 Client requests received cache_hit 1292505 50.49 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 607985 23.75 Cache misses backend_conn 605971 23.67 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 281 0.01 Backend conn. failures backend_reuse 1867 0.07 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 1867 0.07 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused fetch_head 0 0.00 Fetch head fetch_length 579004 22.62 Fetch with Length fetch_chunked 0 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 28803 1.13 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 3 0.00 Fetch failed n_sess_mem 148 . N struct sess_mem n_sess 25 . N struct sess n_object 577014 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 577074 . 
N struct objectcore n_objecthead 577659 . N struct objecthead n_smf 0 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 0 . N large free smf n_vbe_conn 4 . N struct vbe_conn n_wrk 400 . N worker threads n_wrk_create 400 0.02 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 15 0.00 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 2 . N backends n_expired 28911 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 1139364 . N LRU moved objects n_deathrow 0 . N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 1760199 68.76 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 198871 7.77 Total Sessions s_req 1924202 75.17 Total Requests s_pipe 2 0.00 Total pipe s_pass 12 0.00 Total pass s_fetch 607804 23.74 Total fetch s_hdrbytes 864214404 33759.69 Total header bytes s_bodybytes 32272815948 1260706.12 Total body bytes sess_closed 148553 5.80 Session Closed sess_pipeline 2 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 1826330 71.34 Session Linger sess_herd 421778 16.48 Session herd shm_records 111041082 4337.71 SHM records shm_writes 3637366 142.09 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 3147 0.12 SHM MTX contention shm_cycles 47 0.00 SHM cycles through buffer sm_nreq 0 0.00 allocator requests sm_nobj 0 . outstanding allocations sm_balloc 0 . bytes allocated sm_bfree 0 . bytes free sma_nreq 1213629 47.41 SMA allocator requests sma_nobj 1153922 . SMA outstanding allocations sma_nbytes 11134381624 . SMA outstanding bytes sma_balloc 15050369209 . SMA bytes allocated sma_bfree 3915987585 . SMA bytes free sms_nreq 23891 0.93 SMS allocator requests sms_nobj 0 . 
SMS outstanding allocations sms_nbytes 18446744073709550272 . SMS outstanding bytes sms_balloc 10708772 . SMS bytes allocated sms_bfree 10710116 . SMS bytes freed backend_req 607863 23.75 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 23699 . N total active purges n_purge_add 23699 0.93 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 1209477 47.25 N objects tested n_purge_re_test 708554915 27679.01 N regexps tested against n_purge_dups 16004 0.63 N duplicate purges removed hcb_nolock 1295385 50.60 HCB Lookups without lock hcb_lock 1 0.00 HCB Lookups with lock hcb_insert 605109 23.64 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 25599 1.00 Client uptime Thanks in advance. Augusto From david.birdsong at gmail.com Thu Apr 22 20:30:58 2010 From: david.birdsong at gmail.com (David Birdsong) Date: Thu, 22 Apr 2010 13:30:58 -0700 Subject: Crazy value for sms_nbytes in varnish 2.1 In-Reply-To: References: Message-ID: This happens to me a lot also. I'm not on the same version, but I've seen it happen on many version of trunk. On Thu, Apr 22, 2010 at 1:27 PM, Augusto Becciu wrote: > Hey guys, > > We're running varnish 2.1 in two 64bit linux servers with 17Gb of ram. > Here's the config we use: > > varnishd -P /var/run/varnishd.pid -a 0.0.0.0:2000 -T 127.0.0.1:6082 -w > 200,2000 -s malloc,12G -p lru_interval=20 -f /etc/varnish/varnish.vcl > > Suddenly the value of sms_nbytes in one of those went from 0 to > 18446744073709550272. What does that mean? > Other than that everything seems to be normal. > > Here's the output of varnishstat -1: > > client_conn ? ? ? ? ? ?198874 ? ? ? ? 7.77 Client connections accepted > client_drop ? ? ? ? ? ? ? ? 0 ? ? ? ? 0.00 Connection dropped, no sess/wrk > client_req ? ? ? ? ? ?1924202 ? ? ? 
?75.17 Client requests received > cache_hit ? ? ? ? ? ? 1292505 ? ? ? ?50.49 Cache hits > cache_hitpass ? ? ? ? ? ? ? 0 ? ? ? ? 0.00 Cache hits for pass > cache_miss ? ? ? ? ? ? 607985 ? ? ? ?23.75 Cache misses > backend_conn ? ? ? ? ? 605971 ? ? ? ?23.67 Backend conn. success > backend_unhealthy ? ? ? ? ? ?0 ? ? ? ? 0.00 Backend conn. not attempted > backend_busy ? ? ? ? ? ? ? ?0 ? ? ? ? 0.00 Backend conn. too many > backend_fail ? ? ? ? ? ? ?281 ? ? ? ? 0.01 Backend conn. failures > backend_reuse ? ? ? ? ? ?1867 ? ? ? ? 0.07 Backend conn. reuses > backend_toolate ? ? ? ? ? ? 0 ? ? ? ? 0.00 Backend conn. was closed > backend_recycle ? ? ? ? ?1867 ? ? ? ? 0.07 Backend conn. recycles > backend_unused ? ? ? ? ? ? ?0 ? ? ? ? 0.00 Backend conn. unused > fetch_head ? ? ? ? ? ? ? ? ?0 ? ? ? ? 0.00 Fetch head > fetch_length ? ? ? ? ? 579004 ? ? ? ?22.62 Fetch with Length > fetch_chunked ? ? ? ? ? ? ? 0 ? ? ? ? 0.00 Fetch chunked > fetch_eof ? ? ? ? ? ? ? ? ? 0 ? ? ? ? 0.00 Fetch EOF > fetch_bad ? ? ? ? ? ? ? ? ? 0 ? ? ? ? 0.00 Fetch had bad headers > fetch_close ? ? ? ? ? ? 28803 ? ? ? ? 1.13 Fetch wanted close > fetch_oldhttp ? ? ? ? ? ? ? 0 ? ? ? ? 0.00 Fetch pre HTTP/1.1 closed > fetch_zero ? ? ? ? ? ? ? ? ?0 ? ? ? ? 0.00 Fetch zero len > fetch_failed ? ? ? ? ? ? ? ?3 ? ? ? ? 0.00 Fetch failed > n_sess_mem ? ? ? ? ? ? ? ?148 ? ? ? ? ?. ? N struct sess_mem > n_sess ? ? ? ? ? ? ? ? ? ? 25 ? ? ? ? ?. ? N struct sess > n_object ? ? ? ? ? ? ? 577014 ? ? ? ? ?. ? N struct object > n_vampireobject ? ? ? ? ? ? 0 ? ? ? ? ?. ? N unresurrected objects > n_objectcore ? ? ? ? ? 577074 ? ? ? ? ?. ? N struct objectcore > n_objecthead ? ? ? ? ? 577659 ? ? ? ? ?. ? N struct objecthead > n_smf ? ? ? ? ? ? ? ? ? ? ? 0 ? ? ? ? ?. ? N struct smf > n_smf_frag ? ? ? ? ? ? ? ? ?0 ? ? ? ? ?. ? N small free smf > n_smf_large ? ? ? ? ? ? ? ? 0 ? ? ? ? ?. ? N large free smf > n_vbe_conn ? ? ? ? ? ? ? ? ?4 ? ? ? ? ?. ? N struct vbe_conn > n_wrk ? ? ? ? ? ? ? ? ? ? 400 ? ? ? ? ?. ? 
N worker threads > n_wrk_create ? ? ? ? ? ? ?400 ? ? ? ? 0.02 N worker threads created > n_wrk_failed ? ? ? ? ? ? ? ?0 ? ? ? ? 0.00 N worker threads not created > n_wrk_max ? ? ? ? ? ? ? ? ? 0 ? ? ? ? 0.00 N worker threads limited > n_wrk_queue ? ? ? ? ? ? ? ? 0 ? ? ? ? 0.00 N queued work requests > n_wrk_overflow ? ? ? ? ? ? 15 ? ? ? ? 0.00 N overflowed work requests > n_wrk_drop ? ? ? ? ? ? ? ? ?0 ? ? ? ? 0.00 N dropped work requests > n_backend ? ? ? ? ? ? ? ? ? 2 ? ? ? ? ?. ? N backends > n_expired ? ? ? ? ? ? ? 28911 ? ? ? ? ?. ? N expired objects > n_lru_nuked ? ? ? ? ? ? ? ? 0 ? ? ? ? ?. ? N LRU nuked objects > n_lru_saved ? ? ? ? ? ? ? ? 0 ? ? ? ? ?. ? N LRU saved objects > n_lru_moved ? ? ? ? ? 1139364 ? ? ? ? ?. ? N LRU moved objects > n_deathrow ? ? ? ? ? ? ? ? ?0 ? ? ? ? ?. ? N objects on deathrow > losthdr ? ? ? ? ? ? ? ? ? ? 0 ? ? ? ? 0.00 HTTP header overflows > n_objsendfile ? ? ? ? ? ? ? 0 ? ? ? ? 0.00 Objects sent with sendfile > n_objwrite ? ? ? ? ? ?1760199 ? ? ? ?68.76 Objects sent with write > n_objoverflow ? ? ? ? ? ? ? 0 ? ? ? ? 0.00 Objects overflowing workspace > s_sess ? ? ? ? ? ? ? ? 198871 ? ? ? ? 7.77 Total Sessions > s_req ? ? ? ? ? ? ? ? 1924202 ? ? ? ?75.17 Total Requests > s_pipe ? ? ? ? ? ? ? ? ? ? ?2 ? ? ? ? 0.00 Total pipe > s_pass ? ? ? ? ? ? ? ? ? ? 12 ? ? ? ? 0.00 Total pass > s_fetch ? ? ? ? ? ? ? ?607804 ? ? ? ?23.74 Total fetch > s_hdrbytes ? ? ? ? ?864214404 ? ? 33759.69 Total header bytes > s_bodybytes ? ? ? 32272815948 ? 1260706.12 Total body bytes > sess_closed ? ? ? ? ? ?148553 ? ? ? ? 5.80 Session Closed > sess_pipeline ? ? ? ? ? ? ? 2 ? ? ? ? 0.00 Session Pipeline > sess_readahead ? ? ? ? ? ? ?0 ? ? ? ? 0.00 Session Read Ahead > sess_linger ? ? ? ? ? 1826330 ? ? ? ?71.34 Session Linger > sess_herd ? ? ? ? ? ? ?421778 ? ? ? ?16.48 Session herd > shm_records ? ? ? ? 111041082 ? ? ?4337.71 SHM records > shm_writes ? ? ? ? ? ?3637366 ? ? ? 142.09 SHM writes > shm_flushes ? ? ? ? ? ? ? ? 0 ? ? ? ? 
0.00 SHM flushes due to overflow > shm_cont ? ? ? ? ? ? ? ? 3147 ? ? ? ? 0.12 SHM MTX contention > shm_cycles ? ? ? ? ? ? ? ? 47 ? ? ? ? 0.00 SHM cycles through buffer > sm_nreq ? ? ? ? ? ? ? ? ? ? 0 ? ? ? ? 0.00 allocator requests > sm_nobj ? ? ? ? ? ? ? ? ? ? 0 ? ? ? ? ?. ? outstanding allocations > sm_balloc ? ? ? ? ? ? ? ? ? 0 ? ? ? ? ?. ? bytes allocated > sm_bfree ? ? ? ? ? ? ? ? ? ?0 ? ? ? ? ?. ? bytes free > sma_nreq ? ? ? ? ? ? ?1213629 ? ? ? ?47.41 SMA allocator requests > sma_nobj ? ? ? ? ? ? ?1153922 ? ? ? ? ?. ? SMA outstanding allocations > sma_nbytes ? ? ? ?11134381624 ? ? ? ? ?. ? SMA outstanding bytes > sma_balloc ? ? ? ?15050369209 ? ? ? ? ?. ? SMA bytes allocated > sma_bfree ? ? ? ? ?3915987585 ? ? ? ? ?. ? SMA bytes free > sms_nreq ? ? ? ? ? ? ? ?23891 ? ? ? ? 0.93 SMS allocator requests > sms_nobj ? ? ? ? ? ? ? ? ? ?0 ? ? ? ? ?. ? SMS outstanding allocations > sms_nbytes ? ? ? 18446744073709550272 ? ? ? ? ?. ? SMS outstanding bytes > sms_balloc ? ? ? ? ? 10708772 ? ? ? ? ?. ? SMS bytes allocated > sms_bfree ? ? ? ? ? ?10710116 ? ? ? ? ?. ? SMS bytes freed > backend_req ? ? ? ? ? ?607863 ? ? ? ?23.75 Backend requests made > n_vcl ? ? ? ? ? ? ? ? ? ? ? 1 ? ? ? ? 0.00 N vcl total > n_vcl_avail ? ? ? ? ? ? ? ? 1 ? ? ? ? 0.00 N vcl available > n_vcl_discard ? ? ? ? ? ? ? 0 ? ? ? ? 0.00 N vcl discarded > n_purge ? ? ? ? ? ? ? ? 23699 ? ? ? ? ?. ? N total active purges > n_purge_add ? ? ? ? ? ? 23699 ? ? ? ? 0.93 N new purges added > n_purge_retire ? ? ? ? ? ? ?0 ? ? ? ? 0.00 N old purges deleted > n_purge_obj_test ? ? ?1209477 ? ? ? ?47.25 N objects tested > n_purge_re_test ? ? 708554915 ? ? 27679.01 N regexps tested against > n_purge_dups ? ? ? ? ? ?16004 ? ? ? ? 0.63 N duplicate purges removed > hcb_nolock ? ? ? ? ? ?1295385 ? ? ? ?50.60 HCB Lookups without lock > hcb_lock ? ? ? ? ? ? ? ? ? ?1 ? ? ? ? 0.00 HCB Lookups with lock > hcb_insert ? ? ? ? ? ? 605109 ? ? ? ?23.64 HCB Inserts > esi_parse ? ? ? ? ? ? ? ? ? 0 ? ? ? ? 
0.00 Objects ESI parsed (unlock) > esi_errors ? ? ? ? ? ? ? ? ?0 ? ? ? ? 0.00 ESI parse errors (unlock) > accept_fail ? ? ? ? ? ? ? ? 0 ? ? ? ? 0.00 Accept failures > client_drop_late ? ? ? ? ? ?0 ? ? ? ? 0.00 Connection dropped late > uptime ? ? ? ? ? ? ? ? ?25599 ? ? ? ? 1.00 Client uptime > > Thanks in advance. > > Augusto > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc > From phk at phk.freebsd.dk Thu Apr 22 22:07:59 2010 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 22 Apr 2010 22:07:59 +0000 Subject: Crazy value for sms_nbytes in varnish 2.1 In-Reply-To: Your message of "Thu, 22 Apr 2010 17:27:35 -0300." Message-ID: <6971.1271974079@critter.freebsd.dk> In message , Au gusto Becciu writes: >Hey guys, > >We're running varnish 2.1 in two 64bit linux servers with 17Gb of ram. >Here's the config we use: >sms_nbytes 18446744073709550272 . SMS outstanding bytes It's a bug, but it is harmless, it only affects the statistics. File a ticket please. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From augusto at jadedpixel.com Fri Apr 23 14:52:09 2010 From: augusto at jadedpixel.com (Augusto Becciu) Date: Fri, 23 Apr 2010 11:52:09 -0300 Subject: Crazy value for sms_nbytes in varnish 2.1 In-Reply-To: <6971.1271974079@critter.freebsd.dk> References: <6971.1271974079@critter.freebsd.dk> Message-ID: Thanks. I tried to file a ticket but trac registration is not working for me. On Thu, Apr 22, 2010 at 7:07 PM, Poul-Henning Kamp wrote: > In message , Au > gusto Becciu writes: >>Hey guys, >> >>We're running varnish 2.1 in two 64bit linux servers with 17Gb of ram. >>Here's the config we use: > >>sms_nbytes ? ? ? 18446744073709550272 ? ? ? ? ?. ? 
SMS outstanding bytes > > It's a bug, but it is harmless, it only affects the statistics. > > File a ticket please. > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > From felix at seconddrawer.com.au Fri Apr 23 17:12:55 2010 From: felix at seconddrawer.com.au (Felix) Date: Sat, 24 Apr 2010 00:12:55 +0700 Subject: Crazy value for sms_nbytes in varnish 2.1 In-Reply-To: References: <6971.1271974079@critter.freebsd.dk> Message-ID: <20100423171255.GB5823@harpo> Yeah, same for me. -felix On Fri, Apr 23, 2010 at 11:52:09AM -0300, Augusto Becciu wrote: > Thanks. I tried to file a ticket but trac registration is not working for me. > > On Thu, Apr 22, 2010 at 7:07 PM, Poul-Henning Kamp wrote: > > In message , Augusto Becciu writes: > >>Hey guys, > >> > >>We're running varnish 2.1 in two 64bit linux servers with 17Gb of ram. > >>Here's the config we use: > > > >>sms_nbytes 18446744073709550272 . SMS outstanding bytes > > > > It's a bug, but it is harmless, it only affects the statistics. > > > > File a ticket please. > > > > -- > > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > > phk at FreeBSD.ORG | TCP/IP since RFC 956 > > FreeBSD committer | BSD since 4.3-tahoe > > Never attribute to malice what can adequately be explained by incompetence. > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc -- email: felix at seconddrawer.com.au web: http://seconddrawer.com.au/ gpg: E6FC 5BC6 268D B874 E546 8F6F A2BB 220B D5F6 92E3 Please don't send me Word or PowerPoint attachments. 
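The wrapped counter reported in this thread is consistent with plain unsigned 64-bit underflow: the varnishstat output shows sms_balloc = 10708772 and sms_bfree = 10710116, i.e. 1344 more bytes freed than allocated, and 2^64 - 1344 is exactly 18446744073709550272. A quick arithmetic check follows — note the idea that the counter is effectively balloc minus bfree stored in a uint64_t is an inference from the numbers, not something confirmed in the thread:

```python
balloc = 10_708_772   # sms_balloc from the varnishstat output above
bfree = 10_710_116    # sms_bfree from the varnishstat output above

# A signed result of -1344, held in an unsigned 64-bit counter, wraps to:
wrapped = (balloc - bfree) % 2**64
print(wrapped)  # -> 18446744073709550272
```

The match to the reported value down to the last digit supports phk's point that only the statistics bookkeeping is off, not the actual allocations.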
See http://www.gnu.org/philosophy/no-word-attachments.html From tfheen at varnish-software.com Mon Apr 26 08:01:42 2010 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Mon, 26 Apr 2010 10:01:42 +0200 Subject: Crazy value for sms_nbytes in varnish 2.1 In-Reply-To: (Augusto Becciu's message of "Fri, 23 Apr 2010 11:52:09 -0300") References: <6971.1271974079@critter.freebsd.dk> Message-ID: <87iq7eej6h.fsf@qurzaw.linpro.no> ]] Augusto Becciu | Thanks. I tried to file a ticket but trac registration is not working | for me. Sorry about that; should be fixed now. -- Tollef Fog Heen Varnish Software t: +47 21 54 41 73 From Matt.Robinson at muzu.tv Mon Apr 26 09:23:20 2010 From: Matt.Robinson at muzu.tv (Matt Robinson) Date: Mon, 26 Apr 2010 10:23:20 +0100 Subject: Regex now case-sensitive again? Message-ID: <354614501D0BE0409EE7409E26021EB801756049@atlas.MUZU.TV> Hi all, After upgrading one of our servers from Varnish 2.0.4 to 2.1, it's behaving as though the regex matching is case-sensitive. My understanding was that this had been case-insensitive for a couple of years now. I notice that the regex engine was changed in 2.1, so could this be the reason for this behaviour? Has anyone else seen this? If so, was it an intentional change? Thanks for any help, Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From stewsnooze at gmail.com Mon Apr 26 09:34:22 2010 From: stewsnooze at gmail.com (Stewart Robinson) Date: Mon, 26 Apr 2010 10:34:22 +0100 Subject: Regex now case-sensitive again? In-Reply-To: <354614501D0BE0409EE7409E26021EB801756049@atlas.MUZU.TV> References: <354614501D0BE0409EE7409E26021EB801756049@atlas.MUZU.TV> Message-ID: <5ECD9495-E62B-442B-80B0-17142A70BA4A@gmail.com> In 2.1.0 we switched to PCRE. So you probably just need to add the case insensitive flag to your regex /i ? 
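One note on syntax: PCRE spells the inline case-insensitivity modifier `(?i)` at the start of the pattern, rather than a trailing `/i`. Python's re module accepts the same inline-flag syntax, so the change in behaviour is easy to demonstrate outside Varnish (the `/admin` pattern here is just an illustrative example, not taken from anyone's VCL):

```python
import re

# Without the flag, matching is case-sensitive (the Varnish 2.1 behaviour):
print(bool(re.search(r"^/admin", "/Admin/login")))      # -> False

# (?i) at the start of the pattern restores case-insensitive matching:
print(bool(re.search(r"(?i)^/admin", "/Admin/login")))  # -> True
```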
On 26 Apr 2010, at 10:23, Matt Robinson wrote: > Hi all, > > After upgrading one of our servers from Varnish 2.0.4 to 2.1, it's > behaving as though the regex matching is case-sensitive. My > understanding was that this had been case-insensitive for a couple > of years now. > > I notice that the regex engine was changed in 2.1, so could this be > the reason for this behaviour? Has anyone else seen this? If so, > was it an intentional change? > > Thanks for any help, > > Matt > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Mon Apr 26 12:25:31 2010 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 26 Apr 2010 12:25:31 +0000 Subject: Regex now case-sensitive again? In-Reply-To: Your message of "Mon, 26 Apr 2010 10:23:20 +0100." <354614501D0BE0409EE7409E26021EB801756049@atlas.MUZU.TV> Message-ID: <471.1272284731@critter.freebsd.dk> In message <354614501D0BE0409EE7409E26021EB801756049 at atlas.MUZU.TV>, "Matt Robinson" writes: >After upgrading one of our servers from Varnish 2.0.4 to 2.1, it's >behaving as though the regex matching is case-sensitive. This was discussed as part of the PCRE adoption; you can make them case-insensitive with "?i" (see PCRE docs) -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From darron at nonfiction.ca Tue Apr 27 16:54:09 2010 From: darron at nonfiction.ca (Darron Froese) Date: Tue, 27 Apr 2010 10:54:09 -0600 Subject: A couple varnish 2.1 questions... 
Message-ID: I have a Rackspace Cloud server running Ubuntu 9.10 (1GB of RAM, Quad-Core AMD Opteron Processor 2374 HE, I believe it's 64bit) specifically JUST to run Varnish for a festival website. It's fast enough for most of the year, but for about 3 weeks it gets pretty loaded and I want to make sure that the site survives without issues this year. It's mostly static information so pretty easy to cache. Varnish 2.1 is compiled from this dsc file: http://archive.ubuntu.com/ubuntu/pool/universe/v/varnish/varnish_2.1.0-2.dsc I have 2 questions: 1. I know that I'm doing something wrong with my vcl file - what I want is this: 5 minute TTL for all content (regardless of headers) - as long as the backend is healthy 1 hour TTL for all content if the backend is sick (500 error, not responding, etc) I've tried what's referenced here: http://varnish-cache.org/wiki/VCLExampleGrace http://varnish-cache.org/wiki/BackendPolling And it didn't seem to work - some content gave out a 50x error RIGHT away when I took the test backend down and other content worked for 5 minutes and then gave 50x errors. Here was that vcl: http://gist.github.com/380966 Here's my current vcl (with hostnames obscured sorry): http://gist.github.com/380942 I know it's wrong - can somebody point out what I'm missing? I've messed with it for a while now and just can't seem to get it to do what I want. 2. I stress tested that server overnight hit 479M pageviews which will be much higher than I need - I basically ran these two commands in a loop: ab -n 10000 -c 100 http://varnish.nonfiction.ca/ wget -q -A *.html -r http://varnish.nonfiction.ca/ NOTE: High levels of single page traffic on this site, it exists to serve out lots of homepage traffic and very low levels of anything else. All external images/CSS and JS files will be served via CDN. 
Varnishstat at the end of the run: http://drp.ly/TEvcv Here's the munin details: http://drp.ly/TEzd0 My only concern is the amount of committed memory - should it be rising that high? After I killed varnish - the committed memory went back down: http://drp.ly/TFtdx Do I need to put a manual restart of Varnish into my crontab to free that up? Any responses/links/etc. would be great - thanks. From bedis9 at gmail.com Tue Apr 27 17:17:50 2010 From: bedis9 at gmail.com (Bedis 9) Date: Tue, 27 Apr 2010 19:17:50 +0200 Subject: A couple varnish 2.1 questions... In-Reply-To: References: Message-ID: Hi, You might need beresp.grace in vcl_fetch. Otherwise the objects won't be served if the backend is unavailable. set beresp.grace = 1h; rgs On Tue, Apr 27, 2010 at 6:54 PM, Darron Froese wrote: > I have a Rackspace Cloud server running Ubuntu 9.10 (1GB of RAM, > Quad-Core AMD Opteron Processor 2374 HE, I believe it's 64bit) > specifically JUST to run Varnish for a festival website. It's fast > enough for most of the year, but for about 3 weeks it gets pretty > loaded and I want to make sure that the site survives without issues > this year. It's mostly static information so pretty easy to cache. > > Varnish 2.1 is compiled from this dsc file: > > http://archive.ubuntu.com/ubuntu/pool/universe/v/varnish/varnish_2.1.0-2.dsc > > I have 2 questions: > > 1. I know that I'm doing something wrong with my vcl file - what I want is this: > > 5 minute TTL for all content (regardless of headers) - as long as the > backend is healthy > 1 hour TTL for all content if the backend is sick (500 error, not > responding, etc) > > I've tried what's referenced here: > > http://varnish-cache.org/wiki/VCLExampleGrace > http://varnish-cache.org/wiki/BackendPolling > > And it didn't seem to work - some content gave out a 50x error RIGHT > away when I took the test backend down and other content worked for 5 > minutes and then gave 50x errors.
> > Here was that vcl: > > http://gist.github.com/380966 > > Here's my current vcl (with hostnames obscured sorry): > > http://gist.github.com/380942 > > I know it's wrong - can somebody point out what I'm missing? I've > messed with it for a while now and just can't seem to get it to do > what I want. > > 2. I stress tested that server overnight hit 479M pageviews which will > be much higher than I need - I basically ran these two commands in a > loop: > > ab -n 10000 -c 100 http://varnish.nonfiction.ca/ > wget -q -A *.html -r http://varnish.nonfiction.ca/ > > NOTE: High levels of single page traffic on this site, it exists to > serve out lots of homepage traffic and very low levels of anything > else. All external images/CSS and JS files will be served via CDN. > > Varnishstat at the end of the run: http://drp.ly/TEvcv > Here's the munin details: http://drp.ly/TEzd0 > > My only concern is the amount of committed memory - should it be > rising that high? > > After I killed varnish - the committed memory went back down: > http://drp.ly/TFtdx > > Do I need to put a manual restart of Varnish into my crontab to free that up? > > Any responses/links/etc. would be great - thanks. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc > From augusto at jadedpixel.com Tue Apr 27 18:43:24 2010 From: augusto at jadedpixel.com (Augusto Becciu) Date: Tue, 27 Apr 2010 15:43:24 -0300 Subject: Crazy value for sms_nbytes in varnish 2.1 In-Reply-To: <87iq7eej6h.fsf@qurzaw.linpro.no> References: <6971.1271974079@critter.freebsd.dk> <87iq7eej6h.fsf@qurzaw.linpro.no> Message-ID: Still not working for me. Same error as before: Error: Bad Request Missing or invalid form token. Do you have cookies enabled? I do have cookies enabled. On Mon, Apr 26, 2010 at 5:01 AM, Tollef Fog Heen wrote: > ]] Augusto Becciu > > | Thanks. 
I tried to file a ticket but trac registration is not working > | for me. > > Sorry about that; should be fixed now. > > -- > Tollef Fog Heen > Varnish Software > t: +47 21 54 41 73 > From bendj095124367913213465 at gmail.com Wed Apr 28 01:19:16 2010 From: bendj095124367913213465 at gmail.com (Ben DJ) Date: Tue, 27 Apr 2010 18:19:16 -0700 Subject: compression-at-edge, in front of varnish reverse proxy, gzip-compresses some content, but not other ? Message-ID: nginx compression-at-edge (in front of a reverse proxy) compresses some content, not other ? I've an nginx + varnish + apache2 stack. Nginx serves as redirector, ssl handshake, and gzip compression. Varnish serves as a reverse-proxy, and apache 'just' hosts apps & serves content. I'm trying to get gzip compression behaving properly. I'm getting intermittent results -- some content seems to be gzipped, some not. I'm looking for some help figuring out WHERE the problem lies, and what to do to fix it. In nginx conf, I've, ... http { ... gzip on; gzip_http_version 1.0; gzip_comp_level 9; gzip_proxied any; gzip_buffers 16 8k; gzip_min_length 0; gzip_types text/plain text/css text/xml text/javascript application/x-javascript; gzip_disable "MSIE [1-6].(?!.*SV1)"; gzip_vary on; upstream varnish { server 127.0.0.1:8090 weight=10 max_fails=3 fail_timeout=15s; } server { listen x.y.z.w:443; ... location / { proxy_pass http://varnish; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Client-Verify SUCCESS; proxy_set_header X-SSL-Subject $ssl_client_s_dn; proxy_set_header X-SSL-Issuer $ssl_client_i_dn; } } ... 
In varnish config, per chat @ #irc, I've replaced, if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") { # No point in compressing these remove req.http.Accept-Encoding; } elsif (req.http.Accept-Encoding ~ "gzip") { # if the browser supports it, we'll use gzip set req.http.Accept-Encoding = "gzip"; } elsif (req.http.Accept-Encoding ~ "deflate") { # next, try deflate if it is supported set req.http.Accept-Encoding = "deflate"; } else { # unknown algorithm. Probably junk, remove it remove req.http.Accept-Encoding; } with a, ... if (req.http.Accept-Encoding) { remove req.http.Accept-Encoding; } ... clause, since the compression is to be done only at the nginx 'edge'. Apache's compression (gzip, default, or otherwise), is completely disabled. Atm, a check of my test site with YSlow complains that .js's are NOT being compressed. Checking with LiveHTTPHeaders Firefox plugin, shows, e.g., the .js in question NOT being gzipped, but a .gif *is*, e.g. ... ---------------------------------------------------------- https://my.site.com/main/apostrophePlugin/js/jquery.keycodes-0.2.js GET /main/apostrophePlugin/js/jquery.keycodes-0.2.js HTTP/1.1 Host: my.site.com User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.4) Gecko/20100417 Accept: */* Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: https://my.site.com/main/ Cookie: SESS6fa8cdc2d7064704bbda0c83e2c2588c=94889db68945e19ed6f666b7e00cdd36; symfony=3KOH8Qk0hV%2C%2CvXLi0PK5YmdenP1 If-Modified-Since: Tue, 27 Apr 2010 18:18:18 GMT If-None-Match: "362e4-1008-4853810064180" Authorization: Digest username="admin", realm="AUTH my.site.com", nonce="5GopBDmFBAA=99f7be8796e018dde459a07178393d235366ecd9", uri="/main/apostrophePlugin/js/jquery.keycodes-0.2.js", algorithm=MD5, response="b04cb995cd1f86a67197aab3b5a5dbc9", qop=auth, nc=000001c9, cnonce="9dbeaefee4d57b12" Cache-Control: max-age=0 HTTP/1.1 304 Not 
Modified Server: nginx/0.8.35 Date: Wed, 28 Apr 2010 00:39:48 GMT Connection: keep-alive Etag: "362e4-1008-4853beaefee80" Expires: Sat, 01 May 2010 00:39:48 GMT Cache-Control: max-age=259200 Content-Length: 0 X-Varnish: 940462008 Age: 0 Via: 1.1 varnish ---------------------------------------------------------- https://my.site.com/apostrophePlugin/images/a-special-blank.gif GET /apostrophePlugin/images/a-special-blank.gif HTTP/1.1 Host: my.site.com User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.4) Gecko/20100417 Accept: image/png,image/*;q=0.8,*/*;q=0.5 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: https://my.site.com/main/ Cookie: SESS6fa8cdc2d7064704bbda0c83e2c2588c=94889db68945e19ed6f666b7e00cdd36; symfony=3KOH8Qk0hV%2C%2CvXLi0PK5YmdenP1 HTTP/1.1 404 Not Found Server: nginx/0.8.35 Date: Wed, 28 Apr 2010 00:39:48 GMT Content-Type: text/html; charset=iso-8859-1 Transfer-Encoding: chunked Connection: keep-alive Vary: Accept-Encoding, accept-language,accept-charset Content-Language: en X-Varnish: 940462009 940461993 Age: 87 Via: 1.1 varnish Content-Encoding: gzip ---------------------------------------------------------- ... where, iiuc, the presence/absence of "Content-Encoding: gzip" defines whether or not the item in question was succesfully gzipped. Any ideas why/where the .js is not getting gzipped? Thanks, Ben From l at lrowe.co.uk Wed Apr 28 01:40:39 2010 From: l at lrowe.co.uk (Laurence Rowe) Date: Wed, 28 Apr 2010 02:40:39 +0100 Subject: compression-at-edge, in front of varnish reverse proxy, gzip-compresses some content, but not other ? In-Reply-To: References: Message-ID: 2010/4/28 Ben DJ : > nginx compression-at-edge (in front of a reverse proxy) compresses > some content, not other ? > > I've an nginx + varnish + apache2 stack. ?Nginx serves as redirector, > ssl handshake, and gzip compression. 
?Varnish serves as a > reverse-proxy, and apache 'just' hosts apps & serves content. > > I'm trying to get gzip compression behaving properly. ?I'm getting > intermittent results -- some content seems to be gzipped, some not. > > I'm looking for some help figuring out WHERE the problem lies, and > what to do to fix it. > > In nginx conf, I've, > > ? ? ? ?... > ? ? ? ?http { > ? ? ? ? ? ? ? ?... > ? ? ? ? ? ? ? ?gzip ? ? ? ? ? ? ?on; > ? ? ? ? ? ? ? ?gzip_http_version 1.0; > ? ? ? ? ? ? ? ?gzip_comp_level ? 9; > ? ? ? ? ? ? ? ?gzip_proxied ? ? ?any; > ? ? ? ? ? ? ? ?gzip_buffers ? ? ?16 8k; > ? ? ? ? ? ? ? ?gzip_min_length ? 0; > ? ? ? ? ? ? ? ?gzip_types text/plain text/css text/xml text/javascript > application/x-javascript; > ? ? ? ? ? ? ? ?gzip_disable "MSIE [1-6].(?!.*SV1)"; > ? ? ? ? ? ? ? ?gzip_vary ? ? ? ? on; > > ? ? ? ? ? ? ? ?upstream varnish { > ? ? ? ? ? ? ? ? ? ? ? ?server 127.0.0.1:8090 weight=10 max_fails=3 fail_timeout=15s; > ? ? ? ? ? ? ? ?} > > ? ? ? ? ? ? ? ?server { > ? ? ? ? ? ? ? ? ? ? ? ?listen ? ? ? ? ? ? ? ? ? ?x.y.z.w:443; > ? ? ? ? ? ? ? ? ? ? ? ?... > ? ? ? ? ? ? ? ? ? ? ? ?location / { > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?proxy_pass ? ? ? ? ? ? http://varnish; > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?proxy_redirect ? ? ? ? off; > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?proxy_set_header ? ? ? Host $host; > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?proxy_set_header ? ? ? X-Real-IP $remote_addr; > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?proxy_set_header ? ? ? X-Forwarded-For $proxy_add_x_forwarded_for; > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?proxy_set_header ? ? ? X-Client-Verify SUCCESS; > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?proxy_set_header ? ? ? X-SSL-Subject $ssl_client_s_dn; > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?proxy_set_header ? ? ? X-SSL-Issuer ?$ssl_client_i_dn; > ? ? ? ? ? ? ? ? ? ? ? ?} > ? ? ? ? ? ? ? ?} > ? ? ? ? ? ? ? ?... > > > In varnish config, per chat @ #irc, I've replaced, > > ? ? ? ?if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") { > ? ? ? ? ? ? ? 
?# No point in compressing these > ? ? ? ? ? ? ? ?remove req.http.Accept-Encoding; > ? ? ? ?} elsif (req.http.Accept-Encoding ~ "gzip") { > ? ? ? ? ? ? ? ?# if the browser supports it, we'll use gzip > ? ? ? ? ? ? ? ?set req.http.Accept-Encoding = "gzip"; > ? ? ? ?} elsif (req.http.Accept-Encoding ~ "deflate") { > ? ? ? ? ? ? ? ?# next, try deflate if it is supported > ? ? ? ? ? ? ? ?set req.http.Accept-Encoding = "deflate"; > ? ? ? ?} else { > ? ? ? ? ? ? ? ?# unknown algorithm. Probably junk, remove it > ? ? ? ? ? ? ? ?remove req.http.Accept-Encoding; > ? ? ? ?} > > with a, > > ? ? ? ?... > ? ? ? ?if (req.http.Accept-Encoding) { > ? ? ? ? ? ? ? ?remove req.http.Accept-Encoding; > ? ? ? ?} > ? ? ? ?... > > clause, since the compression is to be done only at the nginx 'edge'. > > Apache's compression (gzip, default, or otherwise), is completely disabled. > > Atm, a check of my test site with YSlow complains that .js's are NOT > being compressed. ?Checking with LiveHTTPHeaders Firefox plugin, > shows, e.g., the .js in question NOT being gzipped, but a .gif *is*, > e.g. > > ? ? ? ?... > ? ? ? ?---------------------------------------------------------- > > ? ? ? ?https://my.site.com/main/apostrophePlugin/js/jquery.keycodes-0.2.js > > > > ? ? ? ?GET /main/apostrophePlugin/js/jquery.keycodes-0.2.js HTTP/1.1 > ? ? ? ?Host: my.site.com > ? ? ? ?User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.4) > Gecko/20100417 > ? ? ? ?Accept: */* > ? ? ? ?Accept-Language: en-us,en;q=0.5 > ? ? ? ?Accept-Encoding: gzip,deflate > ? ? ? ?Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > ? ? ? ?Keep-Alive: 115 > ? ? ? ?Connection: keep-alive > ? ? ? ?Referer: https://my.site.com/main/ > ? ? ? ?Cookie: SESS6fa8cdc2d7064704bbda0c83e2c2588c=94889db68945e19ed6f666b7e00cdd36; > symfony=3KOH8Qk0hV%2C%2CvXLi0PK5YmdenP1 > ? ? ? ?If-Modified-Since: Tue, 27 Apr 2010 18:18:18 GMT > ? ? ? ?If-None-Match: "362e4-1008-4853810064180" > ? ? ? 
?Authorization: Digest username="admin", realm="AUTH my.site.com", > nonce="5GopBDmFBAA=99f7be8796e018dde459a07178393d235366ecd9", > uri="/main/apostrophePlugin/js/jquery.keycodes-0.2.js", algorithm=MD5, > response="b04cb995cd1f86a67197aab3b5a5dbc9", qop=auth, nc=000001c9, > cnonce="9dbeaefee4d57b12" > ? ? ? ?Cache-Control: max-age=0 > > > > ? ? ? ?HTTP/1.1 304 Not Modified > ? ? ? ?Server: nginx/0.8.35 > ? ? ? ?Date: Wed, 28 Apr 2010 00:39:48 GMT > ? ? ? ?Connection: keep-alive > ? ? ? ?Etag: "362e4-1008-4853beaefee80" > ? ? ? ?Expires: Sat, 01 May 2010 00:39:48 GMT > ? ? ? ?Cache-Control: max-age=259200 > ? ? ? ?Content-Length: 0 > ? ? ? ?X-Varnish: 940462008 > ? ? ? ?Age: 0 > ? ? ? ?Via: 1.1 varnish > > ? ? ? ?---------------------------------------------------------- > > ? ? ? ?https://my.site.com/apostrophePlugin/images/a-special-blank.gif > > ? ? ? ?GET /apostrophePlugin/images/a-special-blank.gif HTTP/1.1 > ? ? ? ?Host: my.site.com > ? ? ? ?User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.4) > Gecko/20100417 > ? ? ? ?Accept: image/png,image/*;q=0.8,*/*;q=0.5 > ? ? ? ?Accept-Language: en-us,en;q=0.5 > ? ? ? ?Accept-Encoding: gzip,deflate > ? ? ? ?Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > ? ? ? ?Keep-Alive: 115 > ? ? ? ?Connection: keep-alive > ? ? ? ?Referer: https://my.site.com/main/ > ? ? ? ?Cookie: SESS6fa8cdc2d7064704bbda0c83e2c2588c=94889db68945e19ed6f666b7e00cdd36; > symfony=3KOH8Qk0hV%2C%2CvXLi0PK5YmdenP1 > > ? ? ? ?HTTP/1.1 404 Not Found > ? ? ? ?Server: nginx/0.8.35 > ? ? ? ?Date: Wed, 28 Apr 2010 00:39:48 GMT > ? ? ? ?Content-Type: text/html; charset=iso-8859-1 > ? ? ? ?Transfer-Encoding: chunked > ? ? ? ?Connection: keep-alive > ? ? ? ?Vary: Accept-Encoding, accept-language,accept-charset > ? ? ? ?Content-Language: en > ? ? ? ?X-Varnish: 940462009 940461993 > ? ? ? ?Age: 87 > ? ? ? ?Via: 1.1 varnish > ? ? ? ?Content-Encoding: gzip > ? ? ? ?---------------------------------------------------------- > ? ? ? ?... 
> > > where, iiuc, the presence/absence of "Content-Encoding: gzip" defines > whether or not the item in question was succesfully gzipped. That's a 404 Not Found error message being gzipped because it is text/html (I think Nginx compresses that by default). > Any ideas why/where the .js is not getting gzipped? That is a 304 Not Modified response, which has no body, so nothing to be gzipped. Try doing a shift-reload in firefox to see what you get from the original request (which should return 200 OK) Laurence From bendj095124367913213465 at gmail.com Wed Apr 28 01:55:03 2010 From: bendj095124367913213465 at gmail.com (Ben DJ) Date: Tue, 27 Apr 2010 18:55:03 -0700 Subject: compression-at-edge, in front of varnish reverse proxy, gzip-compresses some content, but not other ? In-Reply-To: References: Message-ID: Laurence, On Tue, Apr 27, 2010 at 6:40 PM, Laurence Rowe wrote: > That's a 404 Not Found error message being gzipped because it is > text/html (I think Nginx compresses that by default). > >> Any ideas why/where the .js is not getting gzipped? > > That is a 304 Not Modified response, which has no body, so nothing to > be gzipped. > > Try doing a shift-reload in firefox to see what you get from the > original request (which should return 200 OK) Missed that :-/ Anyway, clearing all caches, and rebooting the silly server just to be sure ;-), here's some LiveHTTPHeader output again -- I *do* see the "200 OK" this time around. The output's referencing 2 .css, a .js, and a .gif. Iiuc, the .css are both gzipped, but the .js & the .gif are not ... 
---------------------------------------------------------- https://my.site.com/apostrophePlugin/css/a.css GET /apostrophePlugin/css/a.css HTTP/1.1 Host: my.site.com User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.4) Gecko/20100417 Accept: text/css,*/*;q=0.1 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: https://my.site.com/ Cookie: symfony=NMtNSAJp8s7v-SSl6vBGK-jdpr7 Pragma: no-cache Cache-Control: no-cache HTTP/1.1 200 OK Server: nginx/0.8.35 Date: Wed, 28 Apr 2010 01:46:17 GMT Content-Type: text/css Transfer-Encoding: chunked Connection: keep-alive Vary: Accept-Encoding Last-Modified: Tue, 27 Apr 2010 18:18:16 GMT Etag: "30367-14e26-4853bead16a00" Cache-Control: max-age=259200 Expires: Sat, 01 May 2010 01:46:17 GMT X-Varnish: 1825269930 Age: 0 Via: 1.1 varnish Content-Encoding: gzip ---------------------------------------------------------- https://my.site.com/apostrophePlugin/js/jquery.timer-1.2.js GET /apostrophePlugin/js/jquery.timer-1.2.js HTTP/1.1 Host: my.site.com User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.4) Gecko/20100417 Accept: */* Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: https://my.site.com/ Cookie: symfony=NMtNSAJp8s7v-SSl6vBGK-jdpr7 Pragma: no-cache Cache-Control: no-cache HTTP/1.1 200 OK Server: nginx/0.8.35 Date: Wed, 28 Apr 2010 01:46:17 GMT Content-Type: text/x-js Connection: keep-alive Last-Modified: Tue, 27 Apr 2010 18:18:18 GMT Etag: "362df-c7e-4853beaefee80" Cache-Control: max-age=259200 Expires: Sat, 01 May 2010 01:46:17 GMT Content-Length: 3198 X-Varnish: 1825269932 1825269784 Age: 337 Via: 1.1 varnish ---------------------------------------------------------- https://my.site.com/css/main.css GET /css/main.css HTTP/1.1 Host: my.site.com User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; 
en-US; rv:1.9.2.4) Gecko/20100417 Accept: text/css,*/*;q=0.1 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: https://my.site.com/ Cookie: symfony=NMtNSAJp8s7v-SSl6vBGK-jdpr7 Pragma: no-cache Cache-Control: no-cache HTTP/1.1 200 OK Server: nginx/0.8.35 Date: Wed, 28 Apr 2010 01:46:17 GMT Content-Type: text/css Transfer-Encoding: chunked Connection: keep-alive Vary: Accept-Encoding Last-Modified: Tue, 30 Mar 2010 17:05:26 GMT Etag: "1a8d7-f29-48307a2ca0180" Cache-Control: max-age=259200 Expires: Sat, 01 May 2010 01:46:17 GMT X-Varnish: 1825269931 Age: 0 Via: 1.1 varnish Content-Encoding: gzip ---------------------------------------------------------- https://my.site.com/apostrophePlugin/images/a-special-blank.gif GET /apostrophePlugin/images/a-special-blank.gif HTTP/1.1 Host: my.site.com User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.4) Gecko/20100417 Accept: image/png,image/*;q=0.8,*/*;q=0.5 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: https://my.site.com/ Cookie: symfony=NMtNSAJp8s7v-SSl6vBGK-jdpr7 Pragma: no-cache Cache-Control: no-cache HTTP/1.1 200 OK Server: nginx/0.8.35 Date: Wed, 28 Apr 2010 01:46:18 GMT Content-Type: image/gif Connection: keep-alive Last-Modified: Tue, 27 Apr 2010 18:18:16 GMT Etag: "3c20e-31-4853bead16a00" Cache-Control: max-age=259200 Expires: Sat, 01 May 2010 01:46:18 GMT Content-Length: 49 X-Varnish: 1825269933 1825269790 Age: 335 Via: 1.1 varnish ---------------------------------------------------------- I'm happy to provide any additional info -- I'm kinda new to digging around in caches & headers. Interesting, nonetheless ... 
Thanks, Ben From v.bilek at 1art.cz Wed Apr 28 09:34:05 2010 From: v.bilek at 1art.cz (=?UTF-8?B?VsOhY2xhdiBCw61sZWs=?=) Date: Wed, 28 Apr 2010 11:34:05 +0200 Subject: backend write error: 11 on POST Message-ID: <4BD8010D.6060207@1art.cz> Hello does anybody know what does FetchError c backend write error: 11 means? We are hitting this error on some POST requests, on backend we see only part of POST data the rest is cut off. varnish 2.0.5 debian lenny 64bit 386 SessionOpen c xx.xx.xx.xx 4739 0.0.0.0:80 386 ReqStart c xx.xx.xx.xx 4739 2008870321 386 RxRequest c POST 386 RxURL c /index.php?cmd=cms.odds.bookmaker&action=5&itemID=52 386 RxProtocol c HTTP/1.1 386 RxHeader c Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/x-silverlight, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, application/x-ms-application, application/x-ms-xbap, applicatio 386 RxHeader c Referer: http://xxx.xxxxx.com/index.php?cmd=cms.odds.bookmaker&action=4&itemID=52 386 RxHeader c Accept-Language: cs 386 RxHeader c User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) 386 RxHeader c Content-Type: application/x-www-form-urlencoded 386 RxHeader c Accept-Encoding: gzip, deflate 386 RxHeader c Host: wata.flashscore.com 386 RxHeader c Content-Length: 10988 386 RxHeader c Connection: Keep-Alive 386 RxHeader c Cache-Control: no-cache 386 RxHeader c Cookie: cmsSID=b5517812196c0f07c61cf9323cb037c5 386 RxHeader c Authorization: Basic cGFydG5lcjpPZmljZTY5 386 VCL_call c recv pass 386 VCL_call c pass pass 386 Backend c 387 default default 386 FetchError c backend write error: 11 386 VCL_call c error deliver 386 Length c 1519 386 VCL_call c deliver deliver 386 TxProtocol c HTTP/1.1 386 TxStatus c 503 386 TxResponse c Service Unavailable 386 TxHeader c Server: Varnish 386 TxHeader c Retry-After: 0 386 TxHeader c Content-Length: 
1519 386 TxHeader c Date: Wed, 28 Apr 2010 09:10:50 GMT 386 TxHeader c Age: 1 386 TxHeader c Connection: close 386 ReqEnd c 2008870321 1272445849.859823942 1272445850.874999046 0.000030756 1.015151501 0.000023603 Vaclav Bilek From bedis9 at gmail.com Wed Apr 28 09:57:27 2010 From: bedis9 at gmail.com (Bedis 9) Date: Wed, 28 Apr 2010 11:57:27 +0200 Subject: backend write error: 11 on POST In-Reply-To: <4BD8010D.6060207@1art.cz> References: <4BD8010D.6060207@1art.cz> Message-ID: Hi, What is your backend configuration in VCL? Is your backend fast enough? cheers 2010/4/28 Václav Bílek : > Hello > > does anybody know what does > > FetchError c backend write error: 11 > > means? > > We are hitting this error on some POST requests, on backend we see only > part of POST data the rest is cut off. > > > > > > varnish 2.0.5 > debian lenny 64bit > > 386 SessionOpen c xx.xx.xx.xx 4739 0.0.0.0:80 > 386 ReqStart c xx.xx.xx.xx 4739 2008870321 > 386 RxRequest c POST > 386 RxURL c /index.php?cmd=cms.odds.bookmaker&action=5&itemID=52 > 386 RxProtocol c HTTP/1.1 > 386 RxHeader c Accept: image/gif, image/jpeg, image/pjpeg, > image/pjpeg, application/x-shockwave-flash, application/x-silverlight, > application/vnd.ms-excel, application/vnd.ms-powerpoint, > application/msword, application/x-ms-application, application/x-ms-xbap, > applicatio > 386 RxHeader c Referer: > http://xxx.xxxxx.com/index.php?cmd=cms.odds.bookmaker&action=4&itemID=52 > 386 RxHeader c Accept-Language: cs > 386 RxHeader c User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; > Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET > CLR 3.0.4506.2152; .NET CLR 3.5.30729) > 386 RxHeader c Content-Type: application/x-www-form-urlencoded > 386 RxHeader c Accept-Encoding: gzip, deflate > 386 RxHeader c Host: wata.flashscore.com > 386 RxHeader c Content-Length: 10988 > 386 RxHeader c Connection: Keep-Alive > 386 RxHeader 
c Cache-Control: no-cache > 386 RxHeader c Cookie: cmsSID=b5517812196c0f07c61cf9323cb037c5 > 386 RxHeader c Authorization: Basic cGFydG5lcjpPZmljZTY5 > 386 VCL_call c recv pass > 386 VCL_call c pass pass > 386 Backend c 387 default default > 386 FetchError c backend write error: 11 > 386 VCL_call c error deliver > 386 Length c 1519 > 386 VCL_call c deliver deliver > 386 TxProtocol c HTTP/1.1 > 386 TxStatus c 503 > 386 TxResponse c Service Unavailable > 386 TxHeader c Server: Varnish > 386 TxHeader c Retry-After: 0 > 386 TxHeader c Content-Length: 1519 > 386 TxHeader c Date: Wed, 28 Apr 2010 09:10:50 GMT > 386 TxHeader c Age: 1 > 386 TxHeader c Connection: close > 386 ReqEnd c 2008870321 1272445849.859823942 > 1272445850.874999046 0.000030756 1.015151501 0.000023603 > > > > Vaclav Bilek > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc > From l at lrowe.co.uk Wed Apr 28 13:53:18 2010 From: l at lrowe.co.uk (Laurence Rowe) Date: Wed, 28 Apr 2010 14:53:18 +0100 Subject: compression-at-edge, in front of varnish reverse proxy, gzip-compresses some content, but not other ? In-Reply-To: References: Message-ID: OK, it looks like there is something strange set up on one of your backend servers - the js is being served with Content-Type: text/x-js. You should probably fix that, or just add it to gzip_types. Laurence 2010/4/28 Ben DJ : > Laurence, > > On Tue, Apr 27, 2010 at 6:40 PM, Laurence Rowe wrote: >> That's a 404 Not Found error message being gzipped because it is >> text/html (I think Nginx compresses that by default). >> >>> Any ideas why/where the .js is not getting gzipped? >> >> That is a 304 Not Modified response, which has no body, so nothing to >> be gzipped.
>> >> Try doing a shift-reload in firefox to see what you get from the >> original request (which should return 200 OK) > > Missed that :-/ > > Anyway, clearing all caches, and rebooting the silly server just to be > sure ;-), here's some LiveHTTPHeader output again -- I *do* see the > "200 OK" this time around. > > The output's referencing ?2 .css, a .js, and a .gif. ?Iiuc, the .css > are both gzipped, but the .js & the .gif are not ... > > ---------------------------------------------------------- > > https://my.site.com/apostrophePlugin/css/a.css > > > > GET /apostrophePlugin/css/a.css HTTP/1.1 > > Host: my.site.com > > User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.4) Gecko/20100417 > > Accept: text/css,*/*;q=0.1 > > Accept-Language: en-us,en;q=0.5 > > Accept-Encoding: gzip,deflate > > Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > > Keep-Alive: 115 > > Connection: keep-alive > > Referer: https://my.site.com/ > > Cookie: symfony=NMtNSAJp8s7v-SSl6vBGK-jdpr7 > > Pragma: no-cache > > Cache-Control: no-cache > > > > HTTP/1.1 200 OK > > Server: nginx/0.8.35 > > Date: Wed, 28 Apr 2010 01:46:17 GMT > > Content-Type: text/css > > Transfer-Encoding: chunked > > Connection: keep-alive > > Vary: Accept-Encoding > > Last-Modified: Tue, 27 Apr 2010 18:18:16 GMT > > Etag: "30367-14e26-4853bead16a00" > > Cache-Control: max-age=259200 > > Expires: Sat, 01 May 2010 01:46:17 GMT > > X-Varnish: 1825269930 > > Age: 0 > > Via: 1.1 varnish > > Content-Encoding: gzip > > ---------------------------------------------------------- > > https://my.site.com/apostrophePlugin/js/jquery.timer-1.2.js > > > > GET /apostrophePlugin/js/jquery.timer-1.2.js HTTP/1.1 > > Host: my.site.com > > User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.4) Gecko/20100417 > > Accept: */* > > Accept-Language: en-us,en;q=0.5 > > Accept-Encoding: gzip,deflate > > Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > > Keep-Alive: 115 > > Connection: keep-alive > > Referer: 
https://my.site.com/ > > Cookie: symfony=NMtNSAJp8s7v-SSl6vBGK-jdpr7 > > Pragma: no-cache > > Cache-Control: no-cache > > > > HTTP/1.1 200 OK > > Server: nginx/0.8.35 > > Date: Wed, 28 Apr 2010 01:46:17 GMT > > Content-Type: text/x-js > > Connection: keep-alive > > Last-Modified: Tue, 27 Apr 2010 18:18:18 GMT > > Etag: "362df-c7e-4853beaefee80" > > Cache-Control: max-age=259200 > > Expires: Sat, 01 May 2010 01:46:17 GMT > > Content-Length: 3198 > > X-Varnish: 1825269932 1825269784 > > Age: 337 > > Via: 1.1 varnish > > ---------------------------------------------------------- > > https://my.site.com/css/main.css > > > > GET /css/main.css HTTP/1.1 > > Host: my.site.com > > User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.4) Gecko/20100417 > > Accept: text/css,*/*;q=0.1 > > Accept-Language: en-us,en;q=0.5 > > Accept-Encoding: gzip,deflate > > Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > > Keep-Alive: 115 > > Connection: keep-alive > > Referer: https://my.site.com/ > > Cookie: symfony=NMtNSAJp8s7v-SSl6vBGK-jdpr7 > > Pragma: no-cache > > Cache-Control: no-cache > > > > HTTP/1.1 200 OK > > Server: nginx/0.8.35 > > Date: Wed, 28 Apr 2010 01:46:17 GMT > > Content-Type: text/css > > Transfer-Encoding: chunked > > Connection: keep-alive > > Vary: Accept-Encoding > > Last-Modified: Tue, 30 Mar 2010 17:05:26 GMT > > Etag: "1a8d7-f29-48307a2ca0180" > > Cache-Control: max-age=259200 > > Expires: Sat, 01 May 2010 01:46:17 GMT > > X-Varnish: 1825269931 > > Age: 0 > > Via: 1.1 varnish > > Content-Encoding: gzip > > ---------------------------------------------------------- > > https://my.site.com/apostrophePlugin/images/a-special-blank.gif > > > > GET /apostrophePlugin/images/a-special-blank.gif HTTP/1.1 > > Host: my.site.com > > User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.4) Gecko/20100417 > > Accept: image/png,image/*;q=0.8,*/*;q=0.5 > > Accept-Language: en-us,en;q=0.5 > > Accept-Encoding: gzip,deflate > > Accept-Charset: 
ISO-8859-1,utf-8;q=0.7,*;q=0.7 > > Keep-Alive: 115 > > Connection: keep-alive > > Referer: https://my.site.com/ > > Cookie: symfony=NMtNSAJp8s7v-SSl6vBGK-jdpr7 > > Pragma: no-cache > > Cache-Control: no-cache > > > > HTTP/1.1 200 OK > > Server: nginx/0.8.35 > > Date: Wed, 28 Apr 2010 01:46:18 GMT > > Content-Type: image/gif > > Connection: keep-alive > > Last-Modified: Tue, 27 Apr 2010 18:18:16 GMT > > Etag: "3c20e-31-4853bead16a00" > > Cache-Control: max-age=259200 > > Expires: Sat, 01 May 2010 01:46:18 GMT > > Content-Length: 49 > > X-Varnish: 1825269933 1825269790 > > Age: 335 > > Via: 1.1 varnish > > ---------------------------------------------------------- > > > I'm happy to provide any additional info -- I'm kinda new to digging > around in caches & headers. Interesting, nonetheless ... > > Thanks, > > Ben > From erik at cederstrand.dk Wed Apr 28 14:13:21 2010 From: erik at cederstrand.dk (Erik Cederstrand) Date: Wed, 28 Apr 2010 16:13:21 +0200 Subject: Caching based on POST data Message-ID: <76060989-A086-4BCB-AADF-7D447202AC0F@cederstrand.dk> Hi list I'm new to Varnish. I host a web application (which is otherwise not in my control) which basically shows the timetable for school classes on a number of schools. It does this (and don't ask me why) by mixing a POST and GET request to pass the parameters needed to show the correct timetable. So, e.g. the user visits the URL /Timetable.jsp/school=MySchool which contains a form. The user then selects the class in a form, which is then sent to the server as a POST request to Timetable.jsp/school=MySchool. I want to cache these timetable views using Varnish. But is there any way to access the POST body in Varnish, so I can append the school and class ID's to the req.hash and fetch the correct cached object afterwards? Thanks, Erik -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 1928 bytes
Desc: not available
URL: 

From l at lrowe.co.uk  Wed Apr 28 14:21:38 2010
From: l at lrowe.co.uk (Laurence Rowe)
Date: Wed, 28 Apr 2010 15:21:38 +0100
Subject: Caching based on POST data
In-Reply-To: <76060989-A086-4BCB-AADF-7D447202AC0F@cederstrand.dk>
References: <76060989-A086-4BCB-AADF-7D447202AC0F@cederstrand.dk>
Message-ID: 

2010/4/28 Erik Cederstrand :
> Hi list
>
> I'm new to Varnish. I host a web application (which is otherwise not in my control) which basically shows the timetable for school classes at a number of schools. It does this (and don't ask me why) by mixing POST and GET requests to pass the parameters needed to show the correct timetable. So, e.g., the user visits the URL /Timetable.jsp/school=MySchool, which contains a form. The user then selects the class in the form, which is sent to the server as a POST request to Timetable.jsp/school=MySchool.
>
> I want to cache these timetable views using Varnish. But is there any way to access the POST body in Varnish, so I can append the school and class IDs to req.hash and fetch the correct cached object afterwards?

There's no support for this.

Laurence

From bendj095124367913213465 at gmail.com  Wed Apr 28 15:22:33 2010
From: bendj095124367913213465 at gmail.com (Ben DJ)
Date: Wed, 28 Apr 2010 08:22:33 -0700
Subject: compression-at-edge, in front of varnish reverse proxy, gzip-compresses some content, but not other?
In-Reply-To: 
References: 
Message-ID: 

Laurence,

On Wed, Apr 28, 2010 at 6:53 AM, Laurence Rowe wrote:
> OK, it looks like there is something strange set up on one of your
> backend servers - the js is being served with Content-Type: text/x-js.
> You should probably fix that, or just add it to gzip_types.

This looks like the cause,
  grep js /etc/apache2/mime.types
  application/javascript    js
  text/x-js                 js

It's defined in the standard distro release of Apache2, not mentioned at all in the RFC (http://www.ietf.org/rfc/rfc4329.txt), but widely used, apparently, nonetheless.

So I added it to gzip_types and tried again. I also added to the nginx config,

  proxy_set_header Accept-Encoding "";

per a recommendation on the nginx list. Now the headers show BOTH js & css being compressed, as intended, e.g.,

----------------------------------------------------------
https://my.site.com/apostrophePlugin/js/jquery.autogrow.js

GET /apostrophePlugin/js/jquery.autogrow.js HTTP/1.1
Host: my.site.com
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.4) Gecko/20100417
Accept: */*
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Referer: https://my.site.com/
Cookie: symfony=R%2Cq-BqTdX-ckfGCqHa2MnDsbJpd

HTTP/1.1 200 OK
Server: nginx/0.8.35
Date: Wed, 28 Apr 2010 15:09:34 GMT
Content-Type: text/x-js
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Last-Modified: Wed, 28 Apr 2010 04:42:23 GMT
Etag: "267f6-db6-48544a2d549c0"
Cache-Control: max-age=259200
Expires: Sat, 01 May 2010 15:09:34 GMT
X-Varnish: 1289869751 1289869733
Age: 283
Via: 1.1 varnish
Content-Encoding: gzip
----------------------------------------------------------
https://my.site.com/apostrophePlugin/css/a.css

GET /apostrophePlugin/css/a.css HTTP/1.1
Host: my.site.com
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.4) Gecko/20100417
Accept: text/css,*/*;q=0.1
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Referer: https://my.site.com/
Cookie: symfony=R%2Cq-BqTdX-ckfGCqHa2MnDsbJpd

HTTP/1.1 200 OK
Server: nginx/0.8.35
Date: Wed, 28 Apr 2010 15:09:34 GMT
Content-Type: text/css
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Last-Modified: Wed, 28 Apr 2010 04:42:21 GMT
Etag: "26233-14e26-48544a2b6c540"
Cache-Control: max-age=259200
Expires: Sat, 01 May 2010 15:09:34 GMT
X-Varnish: 1289869748
Age: 0
Via: 1.1 varnish
Content-Encoding: gzip
----------------------------------------------------------

AND, the YSlow plugin now 'grades' the page an "A".

Thanks for the help!

Ben

From erik at cederstrand.dk  Wed Apr 28 19:56:34 2010
From: erik at cederstrand.dk (Erik Cederstrand)
Date: Wed, 28 Apr 2010 21:56:34 +0200
Subject: Caching based on POST data
In-Reply-To: 
References: <76060989-A086-4BCB-AADF-7D447202AC0F@cederstrand.dk>
Message-ID: <600892A6-4394-4826-AD1E-18EFE7F4DA7A@cederstrand.dk>

On 28/04/2010 at 16.21, Laurence Rowe wrote:
> 2010/4/28 Erik Cederstrand :
>> Hi list
>>
>> I'm new to Varnish. I host a web application (which is otherwise not in my control) which basically shows the timetable for school classes at a number of schools. It does this (and don't ask me why) by mixing POST and GET requests to pass the parameters needed to show the correct timetable. So, e.g., the user visits the URL /Timetable.jsp/school=MySchool, which contains a form. The user then selects the class in the form, which is sent to the server as a POST request to Timetable.jsp/school=MySchool.
>>
>> I want to cache these timetable views using Varnish. But is there any way to access the POST body in Varnish, so I can append the school and class IDs to req.hash and fetch the correct cached object afterwards?
>
> There's no support for this.

Thanks for your answer. Google found this for me: http://sourceforge.net/projects/libdynamic/, but unfortunately it doesn't compile on my FreeBSD box because of missing headers.

Thanks,
Erik
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 1928 bytes
Desc: not available
URL: 

From ron.van.der.vegt at buyways.nl  Thu Apr 8 10:46:25 2010
From: ron.van.der.vegt at buyways.nl (Ron van der Vegt)
Date: Thu, 08 Apr 2010 10:46:25 -0000
Subject: vcl_hash authentication questions
Message-ID: <4BED5A7F-12EF-4FD9-B023-D4B7D487C376@buyways.nl>

Greetings,

I hope someone can help me with building two distinct caches using Varnish: one for regular visitors and another for authenticated premium members. The documentation on this subject [1] suggests sending a cookie such as premium=1. This, however, is not as secure as I would like it to be: a visitor must not be able to simply set the premium=1 cookie themselves and gain access to the premium cache. I see two solutions:

1. validate the cookie from within Varnish using a hash plus a salt, to make the value harder to guess;
2. have the PHP session IDs do the job for us, but then we need to check in some backend whether a session ID belongs to an authenticated premium member.

The first solution would be quite quick to implement, but it has significant drawbacks, such as security through obscurity and the difficulty of signing a user off server-side. The second solution would be rather elegant: we could fill a memcached pool with the PHP session IDs that belong to authenticated premium users, and then we would only need to check that condition. The problem is that we don't see a way in Varnish to query a backend for this.

What do you suggest? Are there other approaches that fit this use case? How did or would you solve this problem with Varnish?
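[A minimal VCL sketch of the first option, in Varnish 2.0-era syntax. This is only an illustration under stated assumptions: the cookie name premium_sig is hypothetical, and it presumes the application sets it to a hard-to-guess signed value, since Varnish itself cannot verify a signature:]

```
sub vcl_hash {
    # Normal hash inputs: URL plus Host header.
    set req.hash += req.url;
    set req.hash += req.http.host;
    # "premium_sig" is a hypothetical cookie the backend would set to a
    # signed, hard-to-guess value; Varnish only keys the cache on its
    # value and cannot itself check that the signature is genuine.
    if (req.http.Cookie ~ "premium_sig=") {
        set req.hash += regsub(req.http.Cookie, ".*premium_sig=([^;]*).*", "\1");
    }
    hash;
}
```

[With this, forged cookies merely create separate cache entries keyed on an invalid signature; the backend still decides what content goes into each entry on the first miss.]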
Thanks in advance,
Ron van der Vegt

From se at plista.com  Tue Apr 13 10:27:39 2010
From: se at plista.com (Simon Effenberg)
Date: Tue, 13 Apr 2010 10:27:39 -0000
Subject: varnish restart and cached objects
Message-ID: <1271154447.11766.20.camel@niles.5mm.de>

hello,

i tried the following workflow:

- GET /bla.js
  vcl_recv => lookup
  vcl_fetch => obj comes back but with a content-length of "0", so:

    if (obj.http.Content-Length == "0") {
      set obj.grace = 60s;
      restart;
    }

- wanted:
  vcl_recv => grace is used, so the already cached (but expired over its ttl) object will be sent out

- got 4x:
  vcl_recv => lookup
  vcl_fetch => ... like above ... => client gets a Guru Meditation

I don't know how to force varnish to use the cached object even if its ttl is 0. I'm using varnish 2.0.6, and for completeness: in vcl_recv req.grace is 12h and in vcl_fetch obj.grace is 12h.

Can anyone help me?

Thx a lot..

--
B.Sc. Simon Effenberg
Software-Developer

plista GmbH
Almstadtstraße 7
10119 Berlin
Germany

tel +49 (0) 30 27577670
fax +49 (0) 32 12 10 38 193
email: se at plista.com

plista GmbH, District Court of Berlin-Charlottenburg HRB 114726 B
Managing Director: Dominik Matyka, M.Sc.

This e-mail may contain confidential information. If you are not the intended recipient please notify the sender immediately and destroy this e-mail. Any unauthorised copying, disclosure or distribution of the material in this e-mail is strictly forbidden.

From fh at fholzhauer.de  Tue Apr 13 15:10:46 2010
From: fh at fholzhauer.de (Florian Holzhauer)
Date: Tue, 13 Apr 2010 15:10:46 -0000
Subject: how storing objects for a given time, regardless of expire info from back-end?
In-Reply-To: <4BC487C0.6030907@proximic.com>
References: <4BC347DA.3000505@proximic.com> <4BC348C4.9000008@gmail.com> <4BC487C0.6030907@proximic.com>
Message-ID: <4BC4896E.5000308@fholzhauer.de>

Hey Dirk,

On 13.04.10 17:03, Dirk Taggesell wrote:
> Message from VCC-compiler:
> Variable 'obj.ttl' not accessible in method 'vcl_fetch'.
> At: (input Line 13 Pos 7)
>       set obj.ttl = 7200s;
> ------#######---------

IIRC you stated in your initial mail that you are using Varnish 2.1.0 - obj.* was replaced by beresp.* in vcl_fetch in Varnish 2.1; maybe that is the reason for the behaviour.

Regards,
Florian.
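[For reference, a minimal sketch of the 2.1-style equivalent of the failing line, assuming the goal is simply to force a fixed two-hour TTL regardless of the backend's expiry headers:]

```
sub vcl_fetch {
    # In Varnish 2.1 the backend response is "beresp" inside vcl_fetch,
    # so the 2.0.x "obj.ttl" becomes "beresp.ttl" here.
    set beresp.ttl = 7200s;
}
```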