From bashift at gmail.com Mon Nov 3 07:12:29 2008 From: bashift at gmail.com (sigma) Date: Mon, 3 Nov 2008 15:12:29 +0800 Subject: varnishncsa's bug ? Message-ID: <8e53edd40811022312r17849ca3xd211d460530b751a@mail.gmail.com> the request: $ wget -S -O /dev/null --referer="http://www.example.com" --header="Host: icon0 01.example.com" http://192.168.0.105/16/40/1640386/ --2008-11-03 15:04:14-- http://192.168.0.105/16/40/1640386/ Connecting to 192.168.0.105:80... connected. HTTP request sent, awaiting response... HTTP/1.1 200 OK Server: Apache/1.3.34 (Unix) Vary: Accept-Encoding Cache-Control: max-age=10025600 Expires: Fri, 27 Feb 2009 07:55:53 GMT Last-Modified: Sat, 02 Aug 2008 17:18:50 GMT ETag: "1190c03-572-489496fa" Content-Type: image/jpeg Content-Length: 1394 Date: Mon, 03 Nov 2008 07:04:12 GMT X-Varnish: 1737392634 1737367909 Age: 99 Via: 1.1 varnish Connection: keep-alive Length: 1394 (1.4K) [image/jpeg] Saving to: `/dev/null' but the varnishncsa say: $ varnishncsa -n /data/vcache|awk ' $9 = 404 '|grep 192.168.0.1 192.168.0.1 - - [03/Nov/2008:15:04:12 +0800] "GET http://icon001.example.com/16/40/1640386/ HTTP/1.0" 404 1394 " http://www.example.com" "Wget/1.11.3" is it a bug? or my config's problem. -- Best Regards, sigma --EOF-- -------------- next part -------------- An HTML attachment was scrubbed... URL: From romics22 at yahoo.de Tue Nov 4 07:45:57 2008 From: romics22 at yahoo.de (Robert Ming) Date: Tue, 4 Nov 2008 07:45:57 +0000 (GMT) Subject: Varnish 2.01 - GETs with Grinder end up in PASS Message-ID: <845086.63549.qm@web23704.mail.ird.yahoo.com> Hi! We do load-testing with 'The Grinder' vers. 3.1 on Varnish in front of several Plone3 instances. The tests worked out fine with Varnish 2.0 beta. Now with version 2.01 we have the following issue: Executing any GET with the Testing-Framework results always in a PASS in Varnish. As a consequence all subsequent requests with the same url end up in cache hits for pass, that's not what we like to test. 
Requesting the same urls "manually", say with firefox or ie are first LOOKUPed and afterwards cached, the behaviour we would like to test. Trying different ways to get around this "PASSing"-issue we came to the conclusion that it is not a grinder problem, because a simple GET done with the python httplib.HTTPConnection had the same effect. Any comments, solutions, enlightments on this issue are appreciated. Robert From jt at endpoint.com Tue Nov 4 15:46:19 2008 From: jt at endpoint.com (JT Justman) Date: Tue, 04 Nov 2008 07:46:19 -0800 Subject: Varnish 2.01 - GETs with Grinder end up in PASS In-Reply-To: <845086.63549.qm@web23704.mail.ird.yahoo.com> References: <845086.63549.qm@web23704.mail.ird.yahoo.com> Message-ID: <49106E4B.70207@endpoint.com> Robert Ming wrote: > Hi! > > We do load-testing with 'The Grinder' vers. 3.1 on Varnish in front > of several Plone3 instances. The tests worked out fine with Varnish > 2.0 beta. Now with version 2.01 we have the following issue: > Executing any GET with the Testing-Framework results always in a PASS > in Varnish. As a consequence all subsequent requests with the same > url end up in cache hits for pass, that's not what we like to test. > Requesting the same urls "manually", say with firefox or ie are first > LOOKUPed and afterwards cached, the behaviour we would like to test. > > Trying different ways to get around this "PASSing"-issue we came to > the conclusion that it is not a grinder problem, because a simple GET > done with the python httplib.HTTPConnection had the same effect. > > Any comments, solutions, enlightments on this issue are appreciated. > Post your vcl? Have you looked at the logs? -- jt at endpoint.com http://www.endpoint.com From miles at jamkit.com Tue Nov 4 20:51:49 2008 From: miles at jamkit.com (Miles) Date: Tue, 04 Nov 2008 20:51:49 +0000 Subject: caching using ETags to vary the content Message-ID: Hi I am using varnish 2.0-beta-2. 
I am using varnish to cache a website where there is a small amount of personalised content in a particular directory. When the user is outside of that directory, the only difference for logged-in and non-logged-in users is a few links (e.g. login/register or view profile/logout - the targets are the same irrespective of which user). I am trying to come up with a cache setup to deal with this. How I had planned to deal with this was as follows: - set an ETag (e.g. "logged-in" or "anon") depending on whether the user is logged in or not; - add a "Vary: ETag" header, so varnish stores several representations - in varnish, set an "ETag" header on the request when it is received, depending on if the user is authenticated or not (can be determined by the presence of a cookie). the request should then match the correct page in the cache. I know varnish doesn't do If-None-Match, but I don't think that is a problem in this scheme. I haven't attempted this yet - can anyone see any holes in it as a method? Or does anyone else have a way of dealing with this sort of personalisation-lite?! Thanks in advance for your help! Miles From miles at jamkit.com Tue Nov 4 21:52:53 2008 From: miles at jamkit.com (Miles) Date: Tue, 04 Nov 2008 21:52:53 +0000 Subject: varnish in front of load balancer Message-ID: Hi, Second question - we are running varnish in front of a load balander. The load balancer has "stickiness" - requests go to the same backend server each time, unless that server goes down. The stickiness is provided by a cookie which is set on the first request, and then read to direct the request to the right backend. What I want to do is ignore these cookies when trying to decide whether the request can be served from the cache or not - but if it can't be served from the cache, then pass the cookies on. I've seen examples on completely ignoring cookies, but not that cover this case. 
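One way to get the behaviour Miles asks for — ignore the load balancer's stickiness cookie when deciding whether to serve from cache, but still forward it to the backend on a miss — is to stash the cookie in a temporary header. This is an untested sketch against Varnish 2.0 VCL; the X-Saved-Cookie header name is invented, and it assumes the stickiness cookie is the only cookie in play:

```vcl
sub vcl_recv {
    if (req.http.Cookie) {
        # Hide the cookie from the default logic so the request is
        # looked up in the cache instead of being passed.
        set req.http.X-Saved-Cookie = req.http.Cookie;
        remove req.http.Cookie;
    }
}

sub vcl_miss {
    # On a miss, restore the cookie on the backend request so the
    # load balancer can still do its sticky routing.
    if (req.http.X-Saved-Cookie) {
        set bereq.http.Cookie = req.http.X-Saved-Cookie;
    }
}

sub vcl_pass {
    if (req.http.X-Saved-Cookie) {
        set bereq.http.Cookie = req.http.X-Saved-Cookie;
    }
}
```

If the application sets other cookies besides the stickiness one, you would instead need a regsub() in vcl_recv that strips only the load balancer's cookie from the Cookie header rather than removing it wholesale.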
Thanks in advance, Miles From r at tomayko.com Tue Nov 4 21:57:49 2008 From: r at tomayko.com (Ryan Tomayko) Date: Tue, 04 Nov 2008 13:57:49 -0800 Subject: Conditional GET (was Re: caching using ETags to vary the content) In-Reply-To: References: Message-ID: <4910C55D.9070804@tomayko.com> On 11/4/08 12:51 PM, Miles wrote: > I know varnish doesn't do If-None-Match, but I don't think that is a > problem in this scheme. I'm curious to understand why Varnish doesn't do validation / conditional GET. Has If-Modified-Since/If-None-Match support been considered and rejected on merit or is it something that could theoretically be accepted into the project? Has it just not received any real interest? Personally, I'd love to see support for conditional GET as this can significantly reduce backend resource use when the backend generates cache validators upfront and 304's without generating the full response. Ryan From alecshenry at gmail.com Tue Nov 4 22:04:24 2008 From: alecshenry at gmail.com (Alecs Henry) Date: Tue, 4 Nov 2008 20:04:24 -0200 Subject: Frontend caching to multiple sites Message-ID: <3c54843f0811041404u3e30dce2w97512955a00fd418@mail.gmail.com> Hi all, First let me congratulate the people involved in Varnish, great software!!! I've looked around the the mailing list but was unable to find an answer to my question. I want to set up varnish as a reverse proxy/cache to multiple customer sites. As in, I have 10 different customers, each with its own web site (domains) with their own necessities, compression, cookie, authentication, etc; each customer is a different setup from the other, so I thought "OK! Let's use a different VCL for each customer and all will be fine". Bear with me here, I've just started playing with varnish, but it seems that I can't create a different VCL file for each customer and load it in varnish (vcl.use ...) as varnish will stop responding for the previous site and start responding only to the new one (active configuration). 
Meaning, the content that is served is only the content from the new site, even if using the correct domain. How can I go about setting this up? I'm using Varnish 2.0.1, just downloaded and compiled it today. I really aprecciate all the help you can give! Alecs -------------- next part -------------- An HTML attachment was scrubbed... URL: From miles at jamkit.com Tue Nov 4 22:19:34 2008 From: miles at jamkit.com (Miles) Date: Tue, 04 Nov 2008 22:19:34 +0000 Subject: Conditional GET (was Re: caching using ETags to vary the content) In-Reply-To: <4910C55D.9070804@tomayko.com> References: <4910C55D.9070804@tomayko.com> Message-ID: <4910CA76.5010803@jamkit.com> Ryan Tomayko wrote: > On 11/4/08 12:51 PM, Miles wrote: >> I know varnish doesn't do If-None-Match, but I don't think that is a >> problem in this scheme. > > I'm curious to understand why Varnish doesn't do validation / conditional GET. > Has If-Modified-Since/If-None-Match support been considered and rejected on > merit or is it something that could theoretically be accepted into the > project? Has it just not received any real interest? > > Personally, I'd love to see support for conditional GET as this can > significantly reduce backend resource use when the backend generates cache > validators upfront and 304's without generating the full response. > > Ryan AFAIK varnish does do if-modified-since, just not if-none-match Miles From espen at linpro.no Wed Nov 5 07:58:52 2008 From: espen at linpro.no (Espen Braastad) Date: Wed, 05 Nov 2008 08:58:52 +0100 Subject: Frontend caching to multiple sites In-Reply-To: <3c54843f0811041404u3e30dce2w97512955a00fd418@mail.gmail.com> References: <3c54843f0811041404u3e30dce2w97512955a00fd418@mail.gmail.com> Message-ID: <4911523C.3060004@linpro.no> Alecs Henry wrote: > I want to set up varnish as a reverse proxy/cache to multiple customer > sites. 
> As in, I have 10 different customers, each with its own web site (domains) > with their own necessities, compression, cookie, authentication, etc; each > customer is a different setup from the other, so I thought "OK! Let's use a > different VCL for each customer and all will be fine". > > Bear with me here, I've just started playing with varnish, but it seems that > I can't create a different VCL file for each customer and load it in varnish > (vcl.use ...) as varnish will stop responding for the previous site and > start responding only to the new one (active configuration). Meaning, the > content that is served is only the content from the new site, even if using > the correct domain. > > How can I go about setting this up? > I'm using Varnish 2.0.1, just downloaded and compiled it today. > Hi, You can try something like this in one VCL: sub vcl_recv { if (req.http.host ~ "^(www\.)site1\.com$"){ # foo } if (req.http.host ~ "^(www\.)site2\.com$"){ # bar } if (req.http.host ~ "^(www\.)site3\.com$"){ # baz } # Unknown host error 403; } -- mvh Espen Braastad, +47 21 54 41 37 espen at linpro.no Linpro AS - Ledende p? Linux From alecshenry at gmail.com Wed Nov 5 13:13:15 2008 From: alecshenry at gmail.com (Alecs Henry) Date: Wed, 5 Nov 2008 11:13:15 -0200 Subject: Frontend caching to multiple sites In-Reply-To: <4911523C.3060004@linpro.no> References: <3c54843f0811041404u3e30dce2w97512955a00fd418@mail.gmail.com> <4911523C.3060004@linpro.no> Message-ID: <3c54843f0811050513v5ad532c2nba2ed20705e5c24f@mail.gmail.com> Hi Espen, Thanks for the answer! Is there a way to accomplish this using different VCLs? I ask it because I'm trying to figure out a way to make it automatic. As in if I have a new customer, I'd just fill out a form (with the customer details like domain name, backend server, other configuration) and a little system working under the hood would generate the VCL file, send it to the varnish server and load it. 
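Building on Espen's example above, each host test can also select a different backend, so all the customer sites can live in a single VCL. A rough, untested sketch for Varnish 2.0 — the site names and backend addresses are made up, and note the `?` after `(www\.)` so the bare domain matches as well as the www form:

```vcl
backend site1 { .host = "10.0.0.11"; .port = "80"; }
backend site2 { .host = "10.0.0.12"; .port = "80"; }

sub vcl_recv {
    if (req.http.host ~ "^(www\.)?site1\.com$") {
        set req.backend = site1;
        # per-customer cookie/compression policy here
    } elsif (req.http.host ~ "^(www\.)?site2\.com$") {
        set req.backend = site2;
    } else {
        # Unknown host
        error 403 "Forbidden";
    }
}
```

A form-driven system like the one Alecs describes could then regenerate this one file from per-customer templates and load it with vcl.load / vcl.use.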
If I use one VCL file for everybody, how do I reload this VCL when I need to change it? Is it necessary to reload varnish? Would it present downtime? Thanks!! Alecs On Wed, Nov 5, 2008 at 5:58 AM, Espen Braastad wrote: > Alecs Henry wrote: > >> I want to set up varnish as a reverse proxy/cache to multiple customer >> sites. >> As in, I have 10 different customers, each with its own web site (domains) >> with their own necessities, compression, cookie, authentication, etc; each >> customer is a different setup from the other, so I thought "OK! Let's use >> a >> different VCL for each customer and all will be fine". >> >> Bear with me here, I've just started playing with varnish, but it seems >> that >> I can't create a different VCL file for each customer and load it in >> varnish >> (vcl.use ...) as varnish will stop responding for the previous site and >> start responding only to the new one (active configuration). Meaning, the >> content that is served is only the content from the new site, even if >> using >> the correct domain. >> >> How can I go about setting this up? >> I'm using Varnish 2.0.1, just downloaded and compiled it today. >> >> > Hi, > > You can try something like this in one VCL: > > sub vcl_recv { > if (req.http.host ~ "^(www\.)site1\.com$"){ > # foo > } > > if (req.http.host ~ "^(www\.)site2\.com$"){ > # bar > } > > if (req.http.host ~ "^(www\.)site3\.com$"){ > # baz > } > > # Unknown host > error 403; > } > > -- > mvh > Espen Braastad, > +47 21 54 41 37 > espen at linpro.no > Linpro AS - Ledende p? Linux > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alecshenry at gmail.com Wed Nov 5 16:03:57 2008 From: alecshenry at gmail.com (Alecs Henry) Date: Wed, 5 Nov 2008 14:03:57 -0200 Subject: TCP_HIT header Message-ID: <3c54843f0811050803u5bbfabf8sf2c7c67ee6589469@mail.gmail.com> Hi guys, Is there a variable that I can print on the response header that will give me the cache lookup result such as TCP_HIT or TCP_MISS? Thanks!! Alecs -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at linpro.no Wed Nov 5 17:10:58 2008 From: perbu at linpro.no (Per Buer) Date: Wed, 05 Nov 2008 18:10:58 +0100 Subject: TCP_HIT header In-Reply-To: <3c54843f0811050803u5bbfabf8sf2c7c67ee6589469@mail.gmail.com> References: <3c54843f0811050803u5bbfabf8sf2c7c67ee6589469@mail.gmail.com> Message-ID: <4911D3A2.1050303@linpro.no> Alecs Henry skrev: > Hi guys, > > Is there a variable that I can print on the response header that will > give me the cache lookup result such as TCP_HIT or TCP_MISS? I guess you can add the relevant header in vcl_hit and vcl_miss See the FAQ: http://varnish.projects.linpro.no/wiki/FAQ#HowdoIaddaHTTPheader Just add a sub vcl_hit { # add code from faq here } -- http://linpro.no/ | http://redpill.se/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 252 bytes Desc: OpenPGP digital signature URL: From alecshenry at gmail.com Wed Nov 5 17:46:23 2008 From: alecshenry at gmail.com (Alecs Henry) Date: Wed, 5 Nov 2008 15:46:23 -0200 Subject: TCP_HIT header In-Reply-To: <4911D3A2.1050303@linpro.no> References: <3c54843f0811050803u5bbfabf8sf2c7c67ee6589469@mail.gmail.com> <4911D3A2.1050303@linpro.no> Message-ID: <3c54843f0811050946h5ab5e6fdjcbd32cbe53486d97@mail.gmail.com> Hi Per, Thanks for the reply! The issue here is not how to add the header, that is OK, I can do it just fine (even added the Foo: bar header just for the fun of it!). 
The problem is WHAT variable I can use that contains that information (MISS or HIT). Is there more documentation on the variables available than what is in vcl(7)? Or anywhere else for that matter. Thanks, Alecs On Wed, Nov 5, 2008 at 3:10 PM, Per Buer wrote: > Alecs Henry skrev: > > Hi guys, > > > > Is there a variable that I can print on the response header that will > > give me the cache lookup result such as TCP_HIT or TCP_MISS? > > I guess you can add the relevant header in vcl_hit and vcl_miss > > See the FAQ: > http://varnish.projects.linpro.no/wiki/FAQ#HowdoIaddaHTTPheader > > Just add a > sub vcl_hit { > # add code from faq here > } > > > > -- > http://linpro.no/ | http://redpill.se/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at linpro.no Wed Nov 5 17:59:25 2008 From: perbu at linpro.no (Per Buer) Date: Wed, 05 Nov 2008 18:59:25 +0100 Subject: TCP_HIT header In-Reply-To: <3c54843f0811050946h5ab5e6fdjcbd32cbe53486d97@mail.gmail.com> References: <3c54843f0811050803u5bbfabf8sf2c7c67ee6589469@mail.gmail.com> <4911D3A2.1050303@linpro.no> <3c54843f0811050946h5ab5e6fdjcbd32cbe53486d97@mail.gmail.com> Message-ID: <4911DEFD.4010506@linpro.no> Hi. As I said. Add two different headers. One you add in vcl_hit and one (preferably a different one) in vcl_miss. You need no variables. The code in vcl_hit will be run for a hit and vcl_miss will be run for a miss. Check out the getting started guide, the FAQ and the VCL-page on Wiki if you seek documentation. Thats all there is, at the moment. Per. Alecs Henry skrev: > Hi Per, > Thanks for the reply! > > The issue here is not how to add the header, that is OK, I can do it > just fine (even added the Foo: bar header just for the fun of it!). The > problem is WHAT variable I can use that contains that information (MISS > or HIT). > Is there more documentation on the variables available than what is in > vcl(7)? > Or anywhere else for that matter. 
> > Thanks, > > Alecs > > On Wed, Nov 5, 2008 at 3:10 PM, Per Buer > wrote: > > Alecs Henry skrev: > > Hi guys, > > > > Is there a variable that I can print on the response header that will > > give me the cache lookup result such as TCP_HIT or TCP_MISS? > > I guess you can add the relevant header in vcl_hit and vcl_miss > > See the FAQ: > http://varnish.projects.linpro.no/wiki/FAQ#HowdoIaddaHTTPheader > > Just add a > sub vcl_hit { > # add code from faq here > } > > > > -- > http://linpro.no/ | http://redpill.se/ > > -- Per Buer - Leder Infrastruktur og Drift - Redpill Linpro Telefon: 21 54 41 21 - Mobil: 958 39 117 http://linpro.no/ | http://redpill.se/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 252 bytes Desc: OpenPGP digital signature URL: From alecshenry at gmail.com Wed Nov 5 18:21:58 2008 From: alecshenry at gmail.com (Alecs Henry) Date: Wed, 5 Nov 2008 16:21:58 -0200 Subject: TCP_HIT header In-Reply-To: <4911DEFD.4010506@linpro.no> References: <3c54843f0811050803u5bbfabf8sf2c7c67ee6589469@mail.gmail.com> <4911D3A2.1050303@linpro.no> <3c54843f0811050946h5ab5e6fdjcbd32cbe53486d97@mail.gmail.com> <4911DEFD.4010506@linpro.no> Message-ID: <3c54843f0811051021w4031d513k550b364f712ef64f@mail.gmail.com> OH! OH! DUH! Dumb old me!!! Of course! Thanks a lot Per!! About the documentation, I may have read all of it already. Think I'll start over.. Alecs On Wed, Nov 5, 2008 at 3:59 PM, Per Buer wrote: > Hi. > > As I said. Add two different headers. One you add in vcl_hit and one > (preferably a different one) in vcl_miss. You need no variables. The > code in vcl_hit will be run for a hit and vcl_miss will be run for a miss. > > Check out the getting started guide, the FAQ and the VCL-page on Wiki if > you seek documentation. Thats all there is, at the moment. > > Per. > > > > > Alecs Henry skrev: > > Hi Per, > > Thanks for the reply! 
> > > > The issue here is not how to add the header, that is OK, I can do it > > just fine (even added the Foo: bar header just for the fun of it!). The > > problem is WHAT variable I can use that contains that information (MISS > > or HIT). > > Is there more documentation on the variables available than what is in > > vcl(7)? > > Or anywhere else for that matter. > > > > Thanks, > > > > Alecs > > > > On Wed, Nov 5, 2008 at 3:10 PM, Per Buer > > wrote: > > > > Alecs Henry skrev: > > > Hi guys, > > > > > > Is there a variable that I can print on the response header that > will > > > give me the cache lookup result such as TCP_HIT or TCP_MISS? > > > > I guess you can add the relevant header in vcl_hit and vcl_miss > > > > See the FAQ: > > http://varnish.projects.linpro.no/wiki/FAQ#HowdoIaddaHTTPheader > > > > Just add a > > sub vcl_hit { > > # add code from faq here > > } > > > > > > > > -- > > http://linpro.no/ | http://redpill.se/ > > > > > > > -- > Per Buer - Leder Infrastruktur og Drift - Redpill Linpro > Telefon: 21 54 41 21 - Mobil: 958 39 117 > http://linpro.no/ | http://redpill.se/ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alecshenry at gmail.com Wed Nov 5 19:19:18 2008 From: alecshenry at gmail.com (Alecs Henry) Date: Wed, 5 Nov 2008 17:19:18 -0200 Subject: TCP_HIT header In-Reply-To: <4911DEFD.4010506@linpro.no> References: <3c54843f0811050803u5bbfabf8sf2c7c67ee6589469@mail.gmail.com> <4911D3A2.1050303@linpro.no> <3c54843f0811050946h5ab5e6fdjcbd32cbe53486d97@mail.gmail.com> <4911DEFD.4010506@linpro.no> Message-ID: <3c54843f0811051119j77d1a5el5aed9513ad351bfe@mail.gmail.com> Hi Per, Here's what I got: ------------- vcl.load test /usr/local/etc/varnish/configs/test.vcl 106 267 *Variable 'obj.http.X-Cache' not accessible in method 'vcl_miss'.* At: (/usr/local/etc/varnish/configs/test.vcl Line 62 Pos 13) set obj.http.X-Cache = "TCP_MISS from " server.ip; ------------################------------------------------ VCL compilation failed ------------- It works just fine for vcl_hit though. Any ideas? Alecs On Wed, Nov 5, 2008 at 3:59 PM, Per Buer wrote: > Hi. > > As I said. Add two different headers. One you add in vcl_hit and one > (preferably a different one) in vcl_miss. You need no variables. The > code in vcl_hit will be run for a hit and vcl_miss will be run for a miss. > > Check out the getting started guide, the FAQ and the VCL-page on Wiki if > you seek documentation. Thats all there is, at the moment. > > Per. > > > > > Alecs Henry skrev: > > Hi Per, > > Thanks for the reply! > > > > The issue here is not how to add the header, that is OK, I can do it > > just fine (even added the Foo: bar header just for the fun of it!). The > > problem is WHAT variable I can use that contains that information (MISS > > or HIT). > > Is there more documentation on the variables available than what is in > > vcl(7)? > > Or anywhere else for that matter. 
> > > > Thanks, > > > > Alecs > > > > On Wed, Nov 5, 2008 at 3:10 PM, Per Buer > > wrote: > > > > Alecs Henry skrev: > > > Hi guys, > > > > > > Is there a variable that I can print on the response header that > will > > > give me the cache lookup result such as TCP_HIT or TCP_MISS? > > > > I guess you can add the relevant header in vcl_hit and vcl_miss > > > > See the FAQ: > > http://varnish.projects.linpro.no/wiki/FAQ#HowdoIaddaHTTPheader > > > > Just add a > > sub vcl_hit { > > # add code from faq here > > } > > > > > > > > -- > > http://linpro.no/ | http://redpill.se/ > > > > > > > -- > Per Buer - Leder Infrastruktur og Drift - Redpill Linpro > Telefon: 21 54 41 21 - Mobil: 958 39 117 > http://linpro.no/ | http://redpill.se/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim at metaweb.com Wed Nov 5 19:29:11 2008 From: tim at metaweb.com (Tim Kientzle) Date: Wed, 5 Nov 2008 11:29:11 -0800 Subject: Inspect Request bodies? Message-ID: <55911D51-8964-4D13-9667-63CACCD1A9A4@metaweb.com> Under certain circumstances, I want to inspect the body of a POST request at the proxy cache. It don't see any hooks for this in the current Varnish 2.0.1, but I've skimmed the source and it looks feasible: * I'll need code to actually read and store the POST body in memory (including updates to the PASS handler and other places to use the in-memory data when it's available) * I'll need to add VCL functions to actually analyze the POST body. The second part looks pretty straightforward. The VCL engine seems quite modular and extensible. Because VCL routines run in per-request threads, it should be feasible to do more time-consuming operations using straightforward sequential code. (I've also looked at extending Squid or Nginx, but breaking down some of these operations into the necessary state machines would be rather tedious.) The first part looks trickier. Has anyone here tried anything similar? 
Any pointers (particular source files I should pay attention to or memory-management issues I should keep in mind)? Finally, has anyone else encountered similar requirements that might benefit from this? (I.e., if I do get this to work, is it worth cleaning up the code to contribute back?) Of course, if Varnish already provides some of this and I've simply missed it, then that's even better. ;-) Cheers, Tim P.S. For the curious, there are two specific issues I'm exploring: First, I have an API which prefers GET but supports POST if the arguments are too long; I'd like to accurately cache responses to these larger requests. Second, I've been exploring request-signing techniques borrowed from OAuth. Both of these boil down to computing a hash over all query arguments, including those in the POST body. So far, I've been handling these issues at the app server, but I've got a growing suite of applications running in that layer and I'd like to move the redundant code into a common proxy layer, so I've been surveying existing proxy implementations to see which ones are most amenable to this kind of extension. From rafailowski at neoleen.com Wed Nov 5 19:34:46 2008 From: rafailowski at neoleen.com (rafailowski) Date: Wed, 05 Nov 2008 20:34:46 +0100 Subject: Frontend caching to multiple sites In-Reply-To: <3c54843f0811050513v5ad532c2nba2ed20705e5c24f@mail.gmail.com> References: <3c54843f0811041404u3e30dce2w97512955a00fd418@mail.gmail.com> <4911523C.3060004@linpro.no> <3c54843f0811050513v5ad532c2nba2ed20705e5c24f@mail.gmail.com> Message-ID: <4911F556.9030309@neoleen.com> Hi, For change vcl on the fly, just do: # varnishadm -T 127.0.0.1:33222 vcl.load vcl_name /path/to/your/vcl/varnish.vcl also look at : vcl.load vcl.inline vcl.use vcl.discard vcl.list vcl.show or if you want : telnet 127.0.0.1 33222 For the downtime, i don't know exactly but it's very quick. Alecs Henry wrote: > Hi Espen, > > Thanks for the answer! > > Is there a way to accomplish this using different VCLs? 
> I ask it because I'm trying to figure out a way to make it automatic. As > in if I have a new customer, I'd just fill out a form (with the customer > details like domain name, backend server, other configuration) and a > little system working under the hood would generate the VCL file, send > it to the varnish server and load it. > > If I use one VCL file for everybody, how do I reload this VCL when I > need to change it? Is it necessary to reload varnish? Would it present > downtime? > > Thanks!! > > Alecs > > > > On Wed, Nov 5, 2008 at 5:58 AM, Espen Braastad > wrote: > > Alecs Henry wrote: > > I want to set up varnish as a reverse proxy/cache to multiple > customer > sites. > As in, I have 10 different customers, each with its own web site > (domains) > with their own necessities, compression, cookie, authentication, > etc; each > customer is a different setup from the other, so I thought "OK! > Let's use a > different VCL for each customer and all will be fine". > > Bear with me here, I've just started playing with varnish, but > it seems that > I can't create a different VCL file for each customer and load > it in varnish > (vcl.use ...) as varnish will stop responding for the previous > site and > start responding only to the new one (active configuration). > Meaning, the > content that is served is only the content from the new site, > even if using > the correct domain. > > How can I go about setting this up? > I'm using Varnish 2.0.1, just downloaded and compiled it today. > > > Hi, > > You can try something like this in one VCL: > > sub vcl_recv { > if (req.http.host ~ "^(www\.)site1\.com$"){ > # foo > } > > if (req.http.host ~ "^(www\.)site2\.com$"){ > # bar > } > > if (req.http.host ~ "^(www\.)site3\.com$"){ > # baz > } > > # Unknown host > error 403; > } > > -- > mvh > Espen Braastad, > +47 21 54 41 37 > espen at linpro.no > Linpro AS - Ledende p? 
Linux > > > > > ------------------------------------------------------------------------ > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc From phk at phk.freebsd.dk Wed Nov 5 19:37:30 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 05 Nov 2008 19:37:30 +0000 Subject: Inspect Request bodies? In-Reply-To: Your message of "Wed, 05 Nov 2008 11:29:11 PST." <55911D51-8964-4D13-9667-63CACCD1A9A4@metaweb.com> Message-ID: <9909.1225913850@critter.freebsd.dk> In message <55911D51-8964-4D13-9667-63CACCD1A9A4 at metaweb.com>, Tim Kientzle wri tes: > * I'll need code to actually read and store the POST body in memory > (including updates to the PASS handler and other places to > use the in-memory data when it's available) We sort of have this as point 15 on our shoppinglist: (http://varnish.projects.linpro.no/wiki/PostTwoShoppingList) The crucial point here, is that we want it to be controllable in VCL, so that people can disable it for GB sized uploads and enable it for short stuff (or vice versa) if they want. > * I'll need to add VCL functions to actually analyze the POST body. To be honest, I would would probably just use the inline C facility and do it there, than trying to generalize it into a VCL extension. >The first part looks trickier. Has anyone here tried anything >similar? Any pointers (particular source files I should pay attention >to or memory-management issues I should keep in mind)? It's pretty straightforward really: allocate an (non-hashed) object, add storage to it and store the contents there. You can see pretty much all the code you need in cache_fetch.c and for it to go into the tree as a patch, I would insist that the code gets generalized so we use the same code in both directions, rather than have two copies. 
-- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From jt at endpoint.com Wed Nov 5 21:42:43 2008 From: jt at endpoint.com (JT Justman) Date: Wed, 05 Nov 2008 13:42:43 -0800 Subject: TCP_HIT header In-Reply-To: <3c54843f0811051119j77d1a5el5aed9513ad351bfe@mail.gmail.com> References: <3c54843f0811050803u5bbfabf8sf2c7c67ee6589469@mail.gmail.com> <4911D3A2.1050303@linpro.no> <3c54843f0811050946h5ab5e6fdjcbd32cbe53486d97@mail.gmail.com> <4911DEFD.4010506@linpro.no> <3c54843f0811051119j77d1a5el5aed9513ad351bfe@mail.gmail.com> Message-ID: <49121353.9090500@endpoint.com> Alecs Henry wrote: > Hi Per, > > Here's what I got: > ------------- > vcl.load test /usr/local/etc/varnish/configs/test.vcl > 106 267 > *Variable 'obj.http.X-Cache' not accessible in method 'vcl_miss'.* > At: (/usr/local/etc/varnish/configs/test.vcl Line 62 Pos 13) > set obj.http.X-Cache = "TCP_MISS from " server.ip; > ------------################------------------------------ > VCL compilation failed > ------------- > > It works just fine for vcl_hit though. > Any ideas? > 'obj' is only available in 'hit' and 'fetch'. So set it in vcl_fetch. The only side effect being then your header will be set in the case of a 'pass' as well, which may or may not be what you want. -- jt at endpoint.com http://www.endpoint.com From tim at metaweb.com Thu Nov 6 01:47:06 2008 From: tim at metaweb.com (Tim Kientzle) Date: Wed, 5 Nov 2008 17:47:06 -0800 Subject: Inspect Request bodies? In-Reply-To: <9909.1225913850@critter.freebsd.dk> References: <9909.1225913850@critter.freebsd.dk> Message-ID: <12E13163-6306-4476-BE46-E20706980FFC@metaweb.com> Thanks, Poul-Henning! These are exactly the hints I needed. 
Agree completely about it being controllable in VCL; my own environment has a mix of requests of widely-varying sizes and I certainly don't want this for large uploads. Tim On Nov 5, 2008, at 11:37 AM, Poul-Henning Kamp wrote: > In message <55911D51-8964-4D13-9667-63CACCD1A9A4 at metaweb.com>, Tim > Kientzle wri > tes: > >> * I'll need code to actually read and store the POST body in memory >> (including updates to the PASS handler and other places to >> use the in-memory data when it's available) > > We sort of have this as point 15 on our shoppinglist: > > (http://varnish.projects.linpro.no/wiki/PostTwoShoppingList) > > The crucial point here, is that we want it to be controllable in > VCL, so that people can disable it for GB sized uploads and enable > it for short stuff (or vice versa) if they want. > >> * I'll need to add VCL functions to actually analyze the POST body. > > To be honest, I would would probably just use the inline C facility > and do it there, than trying to generalize it into a VCL extension. > >> The first part looks trickier. Has anyone here tried anything >> similar? Any pointers (particular source files I should pay >> attention >> to or memory-management issues I should keep in mind)? > > It's pretty straightforward really: allocate an (non-hashed) > object, add storage to it and store the contents there. > > You can see pretty much all the code you need in cache_fetch.c and > for it to go into the tree as a patch, I would insist that the > code gets generalized so we use the same code in both directions, > rather than have two copies. > > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by > incompetence. 
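Going back to the X-Cache thread: JT's fix — set the header in vcl_fetch, where obj is writable, and override it in vcl_hit — might look like the following. This is an untested sketch against 2.0; the header name and wording mirror Alecs's example, and note JT's caveat that the vcl_fetch version also fires on pass:

```vcl
sub vcl_fetch {
    # Every fetched object enters the cache already marked as a miss.
    set obj.http.X-Cache = "TCP_MISS from " server.ip;
}

sub vcl_hit {
    # On a cache hit, overwrite the marker before delivery.
    set obj.http.X-Cache = "TCP_HIT from " server.ip;
}
```

Since obj in vcl_hit refers to the cached object itself in 2.0, it is worth verifying against your version whether this rewrites the stored copy or only the delivered response.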
From torstein at escenic.com Thu Nov 6 09:20:46 2008 From: torstein at escenic.com (Torstein Krause Johansen) Date: Thu, 06 Nov 2008 10:20:46 +0100 Subject: ESI works in IE6 & curl, but not in FF, Opera, Konqueror Message-ID: <4912B6EE.2040001@escenic.com> Hi all, I cannot get ESI to work for any "real" browser, it might be something really obvious I've missed, but it seems really odd to me. My VCL: backend default { .host = "127.0.0.1"; .port = "81"; } sub vcl_fetch { if (req.url ~ "esiTest.html") { esi; set obj.ttl = 24 h; } elseif (req.url == "/cgi-bin/date.cgi") { set obj.ttl = 1m; } } esiTest.html (copied from varnish web site): The time is: <esi:include src="/cgi-bin/date.cgi"/> at this very moment. date.cgi (copied from varnish web site): #!/bin/sh echo 'Content-type: text/html' echo '' date "+%Y-%m-%d %H:%M" I've included everything here so you can see there are no copy/paste errors :-) Backend is Apache HTTPd 2.2. Now to the odd bit, this works in IE6, curl and wget, but not in browsers like Firefox (Iceweasel), Konqueror or Opera. It seems that the "Accept-encoding" header the client sends (or doesn't send in curl's case) to Varnish is the crucial bit, but of course I'm not sure. It _does_ influence the curl result though, setting it to the same as what Iceweasel sends to Varnish messes up the results (only garble comes back). Here's the output from when Iceweasel tries to access the esiTest.html page: http://pastebin.com/m5f659a20 And here's the output from curl, which works: http://pastebin.com/m504fd0b0 So, please tell me what I need to do to make this work, and no, using IE is not an option ;-) Best regards, -Torstein From fehwalker at gmail.com Thu Nov 6 18:00:15 2008 From: fehwalker at gmail.com (Bryan Fullerton) Date: Thu, 6 Nov 2008 13:00:15 -0500 Subject: Release policy Message-ID: <35de0c300811061000i329810c4ic88abc44cabe318@mail.gmail.com> Hello, Just wondering, what is the release policy for point releases?
My bug #361 was just fixed, and I'm weighing whether I need to patch the source or if I can afford to wait for a release. Thanks, Bryan From tfheen at linpro.no Thu Nov 6 22:21:17 2008 From: tfheen at linpro.no (Tollef Fog Heen) Date: Thu, 06 Nov 2008 23:21:17 +0100 Subject: ESI works in IE6 & curl, but not in FF, Opera, Konqueror In-Reply-To: <4912B6EE.2040001@escenic.com> (Torstein Krause Johansen's message of "Thu, 06 Nov 2008 10:20:46 +0100") References: <4912B6EE.2040001@escenic.com> Message-ID: <871vxobin6.fsf@qurzaw.linpro.no> ]] Torstein Krause Johansen Hi Torstein, it's been a while, we should meet up one of those days. :-) | It seems that the "Accept-encoding" header the client sends (or doesn't | send in curl's case) to Varnish is the crucial bit, but of course I'm | not sure. It _does_ influence the curl result though, setting it to the | same as what Iceweasel sends to Varnish messes up the results (only | garble comes back). I'd start by turning off Content-Encoding: gzip and see if that helps. -- Tollef Fog Heen Redpill Linpro -- Changing the game! t: +47 21 54 41 73 From tfheen at linpro.no Thu Nov 6 22:24:14 2008 From: tfheen at linpro.no (Tollef Fog Heen) Date: Thu, 06 Nov 2008 23:24:14 +0100 Subject: Release policy In-Reply-To: <35de0c300811061000i329810c4ic88abc44cabe318@mail.gmail.com> (Bryan Fullerton's message of "Thu, 6 Nov 2008 13:00:15 -0500") References: <35de0c300811061000i329810c4ic88abc44cabe318@mail.gmail.com> Message-ID: <87wsfga3xt.fsf@qurzaw.linpro.no> ]] "Bryan Fullerton" | Just wondering, what is the release policy for point releases? Basically "when there's a need", or we have collected a small set of fixes that we think should go into a point release. Unless we stumble across security or other critical bugs, I'm thinking no more often than once a month and not less than every two or three months is about right. Feedback on this is welcome, whether you feel this is about the right pace, too often or too seldom.
I'm looking at cutting a 2.0.2 early next week, so unless you need the fix urgently, I'd advise you to wait for that. -- Tollef Fog Heen Redpill Linpro -- Changing the game! t: +47 21 54 41 73 From jsmullyan at gmail.com Fri Nov 7 06:02:08 2008 From: jsmullyan at gmail.com (Jacob Smullyan) Date: Fri, 7 Nov 2008 01:02:08 -0500 Subject: "random" load balancing always choosing the same backend Message-ID: <71ee16860811062202q2969d6bawb771b29b234081cf@mail.gmail.com> I'm delighted with varnish. However, I haven't had any luck so far with weighted random load-balancing. I am seeing it always use one backend, the one with the highest weight, and never use the other backends at all. That is not what I expected -- and the other backends are healthy. If I switch to a round-robin, all the backends spring to life. Is this the expected behavior? Any clarification would be appreciated. My configuration is like so: director www_director random { { .backend=florestan; .weight=1; } { .backend=scelsi; .weight=2; } { .backend=scarbo; .weight=1; } { .backend=thoreau; .weight=1; } { .backend=landowska; .weight=3;} } sub vcl_recv{ set req.backend=www_director; } I'm currently using varnish 2.0.1 on gentoo. Thanks, Jacob S. -- Jacob Smullyan office: 646/829-4498 mobile: 917/576-5274 From torstein at escenic.com Fri Nov 7 10:45:57 2008 From: torstein at escenic.com (Torstein Krause Johansen) Date: Fri, 07 Nov 2008 11:45:57 +0100 Subject: ESI works in IE6 & curl, but not in FF, Opera, Konqueror In-Reply-To: <871vxobin6.fsf@qurzaw.linpro.no> References: <4912B6EE.2040001@escenic.com> <871vxobin6.fsf@qurzaw.linpro.no> Message-ID: <49141C65.5060300@escenic.com> Heya, Tollef Fog Heen wrote: > it's been a while, we should meet up one of those days. :-) definitely! > | It seems that the "Accept-encoding" header the client sends (or doesn't > | send in curl's case) to Varnish is the crucial bit, but of course I'm > | not sure. 
It _does_ influence the curl result though, setting it to the > | same as what Iceweasel sends to Varnish messes up the results (only > | garble comes back). > > I'd start by turning off Content-Encoding: gzip and see if that helps. Exactly where do you mean I should turn this off? Turning off Apache mod_deflate solves the problem. However, it's probably not the "ultimate" solution as I wager the customer wants to still use the deflate module. Setting # Make sure proxies don't deliver the wrong content Header append Vary User-Agent env=!dont-vary in Apache's site configuration doesn't seem to help when mod_deflate is active. I assume the remedy is playing around with more Apache options, I'm grateful for any input here. Cheers, -Torstein -- Torstein Krause Johansen System Architect mobile: +47 97 01 76 04 web: http://www.escenic.com/ Escenic - platform for innovation From jeff at funnyordie.com Fri Nov 7 22:05:53 2008 From: jeff at funnyordie.com (Jeff Anderson) Date: Fri, 7 Nov 2008 14:05:53 -0800 Subject: Using req/obj.grace to serve stale objects when backend fails References: Message-ID: > Sorry for the second post. > I've experimented with req/obj.grace set to a few hours so a site > can be served even if the backend fails. For example, with the > backend down and when req/obj.grace is set to several hours I can > open a new browser and get a 503 if I try to open a known cached > (and graced) page. However if I refresh the same browser several > times very rapidly I finally receive the graced page. It seems to > be working as expected from what I read in the documentation > regarding the graced object being served while the same object is > being fetched by another thread. The rapid refreshing is generating > a second thread request which then satisfies the requirements to > have the graced object served. Is there a way to configure varnish > to serve the cached graced object if the backend fails without the > browser ever seeing the 503?
Can this also be tied into the backend > probing/polling to serve the graced page if all the backends fail? > > Thanks, > --J From fehwalker at gmail.com Sat Nov 8 05:13:38 2008 From: fehwalker at gmail.com (Bryan Fullerton) Date: Sat, 8 Nov 2008 00:13:38 -0500 Subject: Release policy In-Reply-To: <87wsfga3xt.fsf@qurzaw.linpro.no> References: <35de0c300811061000i329810c4ic88abc44cabe318@mail.gmail.com> <87wsfga3xt.fsf@qurzaw.linpro.no> Message-ID: <35de0c300811072113y564a01bew59333404b0eb35a3@mail.gmail.com> On Thu, Nov 6, 2008 at 5:24 PM, Tollef Fog Heen wrote: > I'm looking at cutting a 2.0.2 early next week, so unless you need the > fix urgently, I'd advise you to wait for that. That works for me! Thanks, Bryan From fehwalker at gmail.com Sat Nov 8 05:18:56 2008 From: fehwalker at gmail.com (Bryan Fullerton) Date: Sat, 8 Nov 2008 00:18:56 -0500 Subject: "random" load balancing always choosing the same backend In-Reply-To: <71ee16860811062202q2969d6bawb771b29b234081cf@mail.gmail.com> References: <71ee16860811062202q2969d6bawb771b29b234081cf@mail.gmail.com> Message-ID: <35de0c300811072118l4764570dp511ce247022561ab@mail.gmail.com> On Fri, Nov 7, 2008 at 1:02 AM, Jacob Smullyan wrote: > I'm delighted with varnish. However, I haven't had any luck so far > with weighted random load-balancing. I am seeing it always use one > backend, the one with the highest weight, and never use the other > backends at all. That is not what I expected -- and the other > backends are healthy. If I switch to a round-robin, all the backends > spring to life. Is this the expected behavior? Any clarification > would be appreciated. Yep, I reported this bug a couple of weeks ago (http://varnish.projects.linpro.no/ticket/361). It's been fixed, and the fix should be in 2.0.2 when it arrives in the next week or so. 
Bryan From perbu at linpro.no Sat Nov 8 11:16:39 2008 From: perbu at linpro.no (Per Buer) Date: Sat, 08 Nov 2008 12:16:39 +0100 Subject: Using req/obj.grace to serve stale objects when backend fails In-Reply-To: References: Message-ID: <49157517.5060908@linpro.no> Hi Jeff. Jeff Anderson wrote: > I've experimented with req/obj.grace set to a few hours so a site > can be served even if the backend fails. For example, with the > backend down and when req/obj.grace is set to several hours I can > open a new browser and get a 503 if I try to open a known cached > (and graced) page. You're right. It should be fixed. There is now a ticket (#369) on the matter. -- http://linpro.no/ | http://redpill.se/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 252 bytes Desc: OpenPGP digital signature URL: From tfheen at linpro.no Mon Nov 10 08:21:30 2008 From: tfheen at linpro.no (Tollef Fog Heen) Date: Mon, 10 Nov 2008 09:21:30 +0100 Subject: ESI works in IE6 & curl, but not in FF, Opera, Konqueror In-Reply-To: <49141C65.5060300@escenic.com> (Torstein Krause Johansen's message of "Fri, 07 Nov 2008 11:45:57 +0100") References: <4912B6EE.2040001@escenic.com> <871vxobin6.fsf@qurzaw.linpro.no> <49141C65.5060300@escenic.com> Message-ID: <874p2ggfed.fsf@qurzaw.linpro.no> ]] Torstein Krause Johansen | Heya, | | Tollef Fog Heen wrote: | > it's been a while, we should meet up one of those days. :-) | | definitely! | | > | It seems that the "Accept-encoding" header the client sends (or doesn't | > | send in curl's case) to Varnish is the crucial bit, but of course I'm | > | not sure. It _does_ influence the curl result though, setting it to the | > | same as what Iceweasel sends to Varnish messes up the results (only | > | garble comes back). | > | > I'd start by turning off Content-Encoding: gzip and see if that helps. | | Exactly where do you mean I should turn this off?
| | Turning off Apache mod_deflate solves the problem. However, it's | probably not the "ultimate" solution as I wager the customer wants to | still use the deflate module. Setting There's unfortunately no way to use ESI and gzipped content at the moment. -- Tollef Fog Heen Redpill Linpro -- Changing the game! t: +47 21 54 41 73 From torstein at escenic.com Mon Nov 10 08:47:43 2008 From: torstein at escenic.com (Torstein Krause Johansen) Date: Mon, 10 Nov 2008 09:47:43 +0100 Subject: ESI works in IE6 & curl, but not in FF, Opera, Konqueror In-Reply-To: <874p2ggfed.fsf@qurzaw.linpro.no> References: <4912B6EE.2040001@escenic.com> <871vxobin6.fsf@qurzaw.linpro.no><49141C65.5060300@escenic.com> <874p2ggfed.fsf@qurzaw.linpro.no> Message-ID: <4917F52F.7050008@escenic.com> Good morning, Tollef Fog Heen wrote: > | Turning off Apache mod_deflate solves the problem. However, it's > | probably not the "ultimate" solution as I wager the customer wants to > | still use the deflate module. Setting > > There's unfortunately no way to use ESI and gzipped content at the > moment. Ok, thanks for the confirmation :-) Sorry for asking, but I know many customers will be asking me this: do you have any idea when we could expect support for this? Cheers, -Torstein -- Torstein Krause Johansen System architect mobile: +47 97 01 76 04 web: http://www.escenic.com/ Escenic - platform for innovation From admin at opensubtitles.org Tue Nov 11 07:42:52 2008 From: admin at opensubtitles.org (Brano) Date: Tue, 11 Nov 2008 14:42:52 +0700 Subject: Varnish Error 503 Service Unavailable Message-ID: <956208665.20081111144252@2ge.us> Hi all, recently we installed Varnish on our server.
Everything works fine, but on download we get this error: http://www.opensubtitles.org/en/download/file/1951965961 Varnish Error 503 Service Unavailable Service Unavailable Guru Meditation: XID: 2138997704 My backend works ok: http://web1.opensubtitles.org/en/download/file/1951965961 http://web2.opensubtitles.org/en/download/file/1951965961 Server info: varnishd (varnish-2.0.1) OS: FreeBSD 7.1-PRERELEASE PHP Version 5.2.6 httpd: Lighttpd 1.4.20 Here is log: 276 ReqStart c 124.157.249.3 50073 2138997704 276 RxRequest c GET 276 RxURL c /en/download/file/1951965961 276 RxProtocol c HTTP/1.1 276 RxHeader c Host: www.opensubtitles.org 276 RxHeader c User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3 (.NET CLR 3.5.30729) 276 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 276 RxHeader c Accept-Language: en 276 RxHeader c Accept-Encoding: gzip,deflate 276 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 276 RxHeader c Keep-Alive: 300 276 RxHeader c Connection: keep-alive 276 RxHeader c Referer: http://www.opensubtitles.org/en/subtitles/3363915/prison-break-en 276 RxHeader c Cookie: __utma=188827125.1293269160.1192786702.1226384704.1226387542.495; __utmz=188827125.1226166768.475.44.utmccn=(referral)|utmcsr=forum.opensubtitles.org|utmcct=/index.php|utmcmd=refe 276 RxHeader c Cache-Control: max-age=0 276 VCL_call c recv 276 VCL_return c lookup 276 VCL_call c hash 276 VCL_return c hash 276 VCL_call c miss 276 VCL_return c fetch 139 BackendOpen b web2 92.240.234.126 61634 92.240.234.119 80 276 Backend c 139 www_director web2 139 TxRequest b GET 139 TxURL b /en/download/file/1951965961 139 TxProtocol b HTTP/1.1 139 TxHeader b Host: www.opensubtitles.org 139 TxHeader b User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3 (.NET CLR 3.5.30729) 139 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 139 
TxHeader b Accept-Language: en 139 TxHeader b Accept-Encoding: gzip,deflate 139 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 139 TxHeader b Referer: http://www.opensubtitles.org/en/subtitles/3363915/prison-break-en 139 TxHeader b Cookie: __utma=188827125.1293269160.1192786702.1226384704.1226387542.495; __utmz=188827125.1226166768.475.44.utmccn=(referral)|utmcsr=forum.opensubtitles.org|utmcct=/index.php|utmcmd=refe 139 TxHeader b X-Varnish: 2138997704 139 TxHeader b X-Forwarded-For: 124.157.249.3 If you need more info, please let me know. Any help appreciated. Thank you. /Brano From alecshenry at gmail.com Tue Nov 11 12:01:46 2008 From: alecshenry at gmail.com (Alecs Henry) Date: Tue, 11 Nov 2008 10:01:46 -0200 Subject: Varnish Error 503 Service Unavailable In-Reply-To: <956208665.20081111144252@2ge.us> References: <956208665.20081111144252@2ge.us> Message-ID: <3c54843f0811110401v23611c72n9ead158e31ef55b2@mail.gmail.com> Hi there! I have the exact same problem, and it comes and goes as it pleases.. I'm testing varnish with different backends (different customers sites in the same instance) and every once in a while it locks up at the same place (the X-Forwarded-For header on varnishlog) for any site. Not the others though, as they are accessible through varnish just fine. And after a while of working the cache the site that locks up comes back to life. 
This is what varnishlog shows just before the 503 error: Request: 9 SessionOpen c MY_IP_ADDRESS 56186 VARNISH_SERVER_IP:80 9 ReqStart c MY_IP_ADDRESS 56186 444163595 9 RxRequest c GET 9 RxURL c / 9 RxProtocol c HTTP/1.1 9 RxHeader c Host: BACKEND_HOSTNAME 9 RxHeader c User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en; rv: 1.9.0.3) Gecko/2008101315 Ubuntu/8.10 (intrepid) Firefox/3.0.3 9 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 9 RxHeader c Accept-Language: q=0.8,en-us;q=0.5,en;q=0.3 9 RxHeader c Accept-Encoding: gzip,deflate 9 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 9 RxHeader c Keep-Alive: 300 9 RxHeader c Connection: keep-alive 9 VCL_call c recv 9 VCL_return c lookup 9 VCL_call c hash 9 VCL_return c hash 9 VCL_call c miss 9 VCL_return c fetch 9 Backend c 17 BACKEND_NAME BACKEND_NAME 17 TxRequest - GET 17 TxURL - / 17 TxProtocol - HTTP/1.1 17 TxHeader - Host: BACKEND_HOSTNAME 17 TxHeader - User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en; rv: 1.9.0.3) Gecko/2008101315 Ubuntu/8.10 (intrepid) Firefox/3.0.3 17 TxHeader - Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 17 TxHeader - Accept-Language: q=0.8,en-us;q=0.5,en;q=0.3 17 TxHeader - Accept-Encoding: gzip,deflate 17 TxHeader - Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 17 TxHeader - X-Varnish: 444163595 17 TxHeader - X-Forwarded-For: MY_IP_ADDRESS ===> Long waiting time (sorry, didn't really time it, but it's like 5 minutes) Response: 17 BackendClose - BACKEND_NAME 9 VCL_call c error 9 VCL_return c deliver 9 Length c 452 9 VCL_call c deliver 9 VCL_return c deliver 9 TxProtocol c HTTP/1.1 9 TxStatus c 503 9 TxResponse c Service Unavailable 9 TxHeader c Server: Varnish 9 TxHeader c Retry-After: 0 9 TxHeader c Content-Type: text/html; charset=utf-8 9 TxHeader c Content-Length: 452 9 TxHeader c Date: Tue, 11 Nov 2008 11:49:00 GMT 9 TxHeader c X-Varnish: 444163595 9 TxHeader c Age: 945 9 TxHeader c Via: 1.1 varnish 9 TxHeader c 
Connection: close 9 ReqEnd c 444163595 1226403195.496297359 1226404140.571825743 0.000217438 945.075489759 0.000038624 9 SessionClose c error 9 StatSess c MY_IP_ADDRESS 56186 945 1 1 0 0 0 236 452 0 StatAddr - MY_IP_ADDRESS 0 427594 323 1326 0 0 548 435744 9005255 9 SessionOpen c MY_IP_ADDRESS 44754 VARNISH_SERVER_IP:80 9 ReqStart c MY_IP_ADDRESS 44754 444163596 9 RxRequest c GET 9 RxURL c /favicon.ico 9 RxProtocol c HTTP/1.1 9 RxHeader c Host: BACKEND_HOSTNAME 9 RxHeader c User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en; rv: 1.9.0.3) Gecko/2008101315 Ubuntu/8.10 (intrepid) Firefox/3.0.3 9 RxHeader c Accept: image/png,image/*;q=0.8,*/*;q=0.5 9 RxHeader c Accept-Language: q=0.8,en-us;q=0.5,en;q=0.3 9 RxHeader c Accept-Encoding: gzip,deflate 9 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 9 RxHeader c Keep-Alive: 300 9 RxHeader c Connection: keep-alive 9 VCL_call c recv 9 VCL_return c lookup 9 VCL_call c hash 9 VCL_return c hash 9 VCL_call c miss 9 VCL_return c fetch 9 Backend c 14 BACKEND_NAME BACKEND_NAME 14 TxRequest - GET 14 TxURL - /favicon.ico 14 TxProtocol - HTTP/1.1 14 TxHeader - Host: BACKEND_HOSTNAME 14 TxHeader - User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en; rv: 1.9.0.3) Gecko/2008101315 Ubuntu/8.10 (intrepid) Firefox/3.0.3 14 TxHeader - Accept: image/png,image/*;q=0.8,*/*;q=0.5 14 TxHeader - Accept-Language: q=0.8,en-us;q=0.5,en;q=0.3 14 TxHeader - Accept-Encoding: gzip,deflate 14 TxHeader - Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 14 TxHeader - X-Varnish: 444163596 14 TxHeader - X-Forwarded-For: MY_IP_ADDRESS Any one else seeing this? Any one have any idea why this happens? Cheers Alecs On Tue, Nov 11, 2008 at 5:42 AM, Brano wrote: > Hi all, > > recently we installed Varnish on our server. 
Everything works fine, > but on download we get this error: > > http://www.opensubtitles.org/en/download/file/1951965961 > > Varnish Error 503 Service Unavailable > > Service Unavailable > Guru Meditation: > > XID: 2138997704 > > My backend works ok: > http://web1.opensubtitles.org/en/download/file/1951965961 > http://web2.opensubtitles.org/en/download/file/1951965961 > > Server info: > varnishd (varnish-2.0.1) > OS: FreeBSD 7.1-PRERELEASE > PHP Version 5.2.6 > httpd: Lighttpd 1.4.20 > > Here is log: > 276 ReqStart c 124.157.249.3 50073 2138997704 > 276 RxRequest c GET > 276 RxURL c /en/download/file/1951965961 > 276 RxProtocol c HTTP/1.1 > 276 RxHeader c Host: www.opensubtitles.org > 276 RxHeader c User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; > en-GB; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3 (.NET CLR 3.5.30729) > 276 RxHeader c Accept: > text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > 276 RxHeader c Accept-Language: en > 276 RxHeader c Accept-Encoding: gzip,deflate > 276 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > 276 RxHeader c Keep-Alive: 300 > 276 RxHeader c Connection: keep-alive > 276 RxHeader c Referer: > http://www.opensubtitles.org/en/subtitles/3363915/prison-break-en > 276 RxHeader c Cookie: > __utma=188827125.1293269160.1192786702.1226384704.1226387542.495; > __utmz=188827125.1226166768.475.44.utmccn=(referral)|utmcsr= > forum.opensubtitles.org|utmcct=/index.php|utmcmd=refe > 276 RxHeader c Cache-Control: max-age=0 > 276 VCL_call c recv > 276 VCL_return c lookup > 276 VCL_call c hash > 276 VCL_return c hash > 276 VCL_call c miss > 276 VCL_return c fetch > 139 BackendOpen b web2 92.240.234.126 61634 92.240.234.119 80 > 276 Backend c 139 www_director web2 > 139 TxRequest b GET > 139 TxURL b /en/download/file/1951965961 > 139 TxProtocol b HTTP/1.1 > 139 TxHeader b Host: www.opensubtitles.org > 139 TxHeader b User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; > en-GB; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3 
(.NET CLR 3.5.30729) > 139 TxHeader b Accept: > text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > 139 TxHeader b Accept-Language: en > 139 TxHeader b Accept-Encoding: gzip,deflate > 139 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > 139 TxHeader b Referer: > http://www.opensubtitles.org/en/subtitles/3363915/prison-break-en > 139 TxHeader b Cookie: > __utma=188827125.1293269160.1192786702.1226384704.1226387542.495; > __utmz=188827125.1226166768.475.44.utmccn=(referral)|utmcsr= > forum.opensubtitles.org|utmcct=/index.php|utmcmd=refe > 139 TxHeader b X-Varnish: 2138997704 > 139 TxHeader b X-Forwarded-For: 124.157.249.3 > > If you need more info, please let me know. > > Any help appreciated. > > Thank you. > > /Brano > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jt at endpoint.com Tue Nov 11 16:04:40 2008 From: jt at endpoint.com (JT Justman) Date: Tue, 11 Nov 2008 08:04:40 -0800 Subject: ESI works in IE6 & curl, but not in FF, Opera, Konqueror In-Reply-To: <4917F52F.7050008@escenic.com> References: <4912B6EE.2040001@escenic.com> <871vxobin6.fsf@qurzaw.linpro.no><49141C65.5060300@escenic.com> <874p2ggfed.fsf@qurzaw.linpro.no> <4917F52F.7050008@escenic.com> Message-ID: <4919AD18.2040206@endpoint.com> Torstein Krause Johansen wrote: > Good morning, > > Tollef Fog Heen wrote: > > | Turning off Apache mod_deflate solves the problem. However, it's > > | probably not the "ultimate" solution as I wager the customer wants to > > | still use the deflate module. Setting > > > > There's unfortunately no way to use ESI and gzipped content at the > > moment. > > Ok, thanks for the confirmation :-) > > Sorry for asking, but I know many customers will be asking me this: do > you have any idea when we could expect support for this? 
Torstein - We have a client who is interested in ESI and also requires gzip (as I think most would), and we've been working on it on the back burner for a while. Faster work from us depends on the client's priorities. It's not a trivial undertaking, but I have at least got to the point of understanding the ESI request flow enough to guess where the encoding should probably be performed. See here for links to two bugs discussing the issue: http://varnish.projects.linpro.no/wiki/PostTwoShoppingList JT -- jt at endpoint.com http://www.endpoint.com From jeff at funnyordie.com Tue Nov 11 20:21:24 2008 From: jeff at funnyordie.com (Jeff Anderson) Date: Tue, 11 Nov 2008 12:21:24 -0800 Subject: Version of Varnish reporting in the response header Message-ID: Using firebug i get: Via 1.1 varnish In the response headers. Should that be 2.0.1 instead or is it referring to something other than the version of Varnish? Thanks, --Jeff From phk at phk.freebsd.dk Tue Nov 11 20:43:01 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 11 Nov 2008 20:43:01 +0000 Subject: Version of Varnish reporting in the response header In-Reply-To: Your message of "Tue, 11 Nov 2008 12:21:24 PST." Message-ID: <83783.1226436181@critter.freebsd.dk> In message , Jeff Anderson writes: >Using firebug i get: > >Via 1.1 varnish > >In the response headers. Should that be 2.0.1 instead or is it >referring to something other than the version of Varnish? It refers to the HTTP version, see RFC2616 -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
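[Editorial note: for anyone who does want the release number visible to clients, a small sketch — assuming 2.0's resp.* is writable in vcl_deliver; the header name and version string are illustrative, not built in:]

```vcl
# "Via: 1.1 varnish" advertises the HTTP version (RFC 2616), not the
# daemon release. To expose the release, stamp a header of your own:
sub vcl_deliver {
    set resp.http.X-Varnish-Version = "varnish-2.0.1";
}
```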
From miles at jamkit.com Tue Nov 4 22:19:34 2008 From: miles at jamkit.com (Miles) Date: Tue, 04 Nov 2008 22:19:34 +0000 Subject: Conditional GET (was Re: caching using ETags to vary the content) In-Reply-To: <4910C55D.9070804@tomayko.com> References: <4910C55D.9070804@tomayko.com> Message-ID: <4910CA76.5010803@jamkit.com> Ryan Tomayko wrote: > On 11/4/08 12:51 PM, Miles wrote: >> I know varnish doesn't do If-None-Match, but I don't think that is a >> problem in this scheme. > > I'm curious to understand why Varnish doesn't do validation / conditional GET. > Has If-Modified-Since/If-None-Match support been considered and rejected on > merit or is it something that could theoretically be accepted into the > project? Has it just not received any real interest? > > Personally, I'd love to see support for conditional GET as this can > significantly reduce backend resource use when the backend generates cache > validators upfront and 304's without generating the full response. > > Ryan AFAIK varnish does do if-modified-since, just not if-none-match Miles From r at tomayko.com Tue Nov 4 23:22:38 2008 From: r at tomayko.com (Ryan Tomayko) Date: Tue, 04 Nov 2008 15:22:38 -0800 Subject: Conditional GET (was Re: caching using ETags to vary the content) In-Reply-To: <4910CA76.5010803@jamkit.com> References: <4910C55D.9070804@tomayko.com> <4910CA76.5010803@jamkit.com> Message-ID: <4910D93E.1080903@tomayko.com> On 11/4/08 2:19 PM, Miles wrote: > AFAIK varnish does do if-modified-since, just not if-none-match Oh, nice. Is this new in 2.0? Ryan From jeff at funnyordie.com Fri Nov 7 02:05:46 2008 From: jeff at funnyordie.com (Jeff Anderson) Date: Thu, 6 Nov 2008 18:05:46 -0800 Subject: Using req/obj.grace to serve stale objects when backend fails Message-ID: I've experimented with req/obj.grace set to a few hours so a site can be served even if the backend fails. 
For example, with the backend down and when req/obj.grace is set to several hours I can open a new browser and get a 503 if I try to open a known cached (and graced) page. However if I refresh the same browser several times very rapidly I finally receive the graced page. It seems to be working as expected from what I read in the documentation regarding the graced object being served while the same object is being fetched by another thread. The rapid refreshing is generating a second thread request which then satisfies the requirements to have the graced object served. Is there a way to configure varnish to serve the cached graced object if the backend fails without the browser ever seeing the 503? Can this also be tied into the backend probing/polling? Thanks, --J From ric at digitalmarbles.com Wed Nov 12 11:19:10 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Wed, 12 Nov 2008 03:19:10 -0800 Subject: Conditional GET (was Re: caching using ETags to vary the content) In-Reply-To: <4910CA76.5010803@jamkit.com> References: <4910C55D.9070804@tomayko.com> <4910CA76.5010803@jamkit.com> Message-ID: <9F8BCFF6-6D87-48C6-9B6F-5DAEF7F18106@digitalmarbles.com> On Nov 4, 2008, at 2:19 PM, Miles wrote: > Ryan Tomayko wrote: >> On 11/4/08 12:51 PM, Miles wrote: >>> I know varnish doesn't do If-None-Match, but I don't think that is a >>> problem in this scheme. >> >> I'm curious to understand why Varnish doesn't do validation / >> conditional GET. >> Has If-Modified-Since/If-None-Match support been considered and >> rejected on >> merit or is it something that could theoretically be accepted into >> the >> project? Has it just not received any real interest? >> >> Personally, I'd love to see support for conditional GET as this can >> significantly reduce backend resource use when the backend >> generates cache >> validators upfront and 304's without generating the full response.
>> >> Ryan > > AFAIK varnish does do if-modified-since, just not if-none-match > > Miles Unless this has changed with 2.0, varnish will *respond* to if-modified-since (IMS) with a 304 response if there is a cached entry that fails this condition, but varnish will neither *pass* the IMS header to the backend (unless you customize the vcl) nor *generate* an IMS to the backend. Ric From tim at metaweb.com Wed Nov 12 19:11:54 2008 From: tim at metaweb.com (Tim Kientzle) Date: Wed, 12 Nov 2008 11:11:54 -0800 Subject: Getting started... Message-ID: <30A416D7-3317-4777-99BC-7DD70EF4DE9E@metaweb.com> I'm trying to just run a plain-vanilla varnish so I can see it running before I start mucking with configuration. But I'm not having much luck: $ uname -a Darwin tbkk.local 9.5.0 Darwin Kernel Version 9.5.0: Wed Sep 3 11:29:43 PDT 2008; root:xnu-1228.7.58~1/RELEASE_I386 i386 $ sbin/varnishd -a 127.0.0.1:3128 -b 127.0.0.1:80 -d storage_file: filename: ./varnish.2wA0fp (unlinked) size 669 MB. Using old SHMFILE Debugging mode, enter "start" to start child $ echo $? 2 $ So, varnishd simply exits with no explanation at all. After the above, bin/varnishlog just hangs with no output. I finally resorted to running varnishd under GDB, which shows that vev_schedule_one() is getting NULL from binheap_root(), which leads it to return zero, which causes vev_schedule() to return, which causes mgt_schedule() to log "manager dies" and exit(2). What have I missed? Are there any good examples just showing how to run varnish? Tim From tim at metaweb.com Wed Nov 12 19:20:46 2008 From: tim at metaweb.com (Tim Kientzle) Date: Wed, 12 Nov 2008 11:20:46 -0800 Subject: Getting started... In-Reply-To: <30A416D7-3317-4777-99BC-7DD70EF4DE9E@metaweb.com> References: <30A416D7-3317-4777-99BC-7DD70EF4DE9E@metaweb.com> Message-ID: Ah... It seems to work if I omit the -d option.
Tim On Nov 12, 2008, at 11:11 AM, Tim Kientzle wrote: > I'm trying to just run a plain-vanilla varnish so I can see it running > before I start mucking with configuration. > > But I'm not having much luck: > > $ uname -a > Darwin tbkk.local 9.5.0 Darwin Kernel Version 9.5.0: Wed Sep 3 > 11:29:43 PDT 2008; root:xnu-1228.7.58~1/RELEASE_I386 i386 > $ sbin/varnishd -a 127.0.0.1:3128 -b 127.0.0.1:80 -d > storage_file: filename: ./varnish.2wA0fp (unlinked) size 669 MB. > Using old SHMFILE > Debugging mode, enter "start" to start child > $ echo $? > 2 > $ > > So, varnishd simply exits with no explanation at all. > > After the above, bin/varnishlog just hangs with no output. > > I finally resorted to running varnishd under GDB, which shows that > vev_schedule_one() is getting NULL from binheap_root(), which leads it > to return zero, which causes vev_schedule() to return, which causes > mgt_schedule() to log "manager dies" and exit(2). > > What have I missed? > > Are there any good examples just showing how to run varnish? > > Tim > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc From anders at fupp.net Wed Nov 12 20:19:48 2008 From: anders at fupp.net (Anders Nordby) Date: Wed, 12 Nov 2008 21:19:48 +0100 Subject: Using req/obj.grace to serve stale objects when backend fails In-Reply-To: References: Message-ID: <20081112201948.GA21409@fupp.net> Hi, On Thu, Nov 06, 2008 at 06:05:46PM -0800, Jeff Anderson wrote: > I've experimented with req/obj.grace set to a few hours so a site can > be served even if the backend fails. For example, with the backend > down and when req/obj.grace is set to several hours I can open a new > browser and get a 503 if I try to open a known cached (and graced) > page. However if I refresh the same browser several times very > rapidly I finally receive the graced page. 
It seems to be working as > expected from what I read in the documentation regarding the graced > object being served while the same object is being fetched by another > thread. The rapid refreshing is generating a second thread request > which then satisifies the requirements to have the graced object > served. Is there a way to configure varnish to serve the cached > graced object if the backend fails without the browser ever seeing the > 503? Can this also be tied into the backend probing/polling? It's not possible yet, but wanted. See ticket http://varnish.projects.linpro.no/ticket/369. You may also have an interest for http://varnish.projects.linpro.no/wiki/VCLExampleRestarts. Cheers, -- Anders. From r at tomayko.com Wed Nov 12 21:57:17 2008 From: r at tomayko.com (Ryan Tomayko) Date: Wed, 12 Nov 2008 13:57:17 -0800 Subject: Conditional GET (was Re: caching using ETags to vary the content) In-Reply-To: <9F8BCFF6-6D87-48C6-9B6F-5DAEF7F18106@digitalmarbles.com> References: <4910C55D.9070804@tomayko.com> <4910CA76.5010803@jamkit.com> <9F8BCFF6-6D87-48C6-9B6F-5DAEF7F18106@digitalmarbles.com> Message-ID: <491B513D.7010607@tomayko.com> On 11/12/08 3:19 AM, Ricardo Newbery wrote: > Unless this has changed with 2.0, varnish will *respond* to if- > modified-since (IMS) with a 304 response if there is cached entry that > fails this condition, but varnish will neither *pass* the IMS header > to the backend (unless you customize the vcl) nor *generate* an IMS to > the backend. Right. Sorry about the ambiguity in my original message. I'm asking specifically about Varnish using If-Modified-Since/If-None-Match to revalidate a stale cache entry with the backend. Can anyone say whether the lack of validation is due to a conscious design decision as opposed to something that just hasn't been implemented due to priority/time/resources? I'd be willing to take a crack at some of this if it's something that would be considered for inclusion in the project. 
Thanks, Ryan From phk at phk.freebsd.dk Thu Nov 13 10:05:54 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 13 Nov 2008 10:05:54 +0000 Subject: Conditional GET (was Re: caching using ETags to vary the content) In-Reply-To: Your message of "Wed, 12 Nov 2008 13:57:17 PST." <491B513D.7010607@tomayko.com> Message-ID: <9571.1226570754@critter.freebsd.dk> In message <491B513D.7010607 at tomayko.com>, Ryan Tomayko writes: >On 11/12/08 3:19 AM, Ricardo Newbery wrote: >Right. Sorry about the ambiguity in my original message. I'm asking >specifically about Varnish using If-Modified-Since/If-None-Match to >revalidate a stale cache entry with the backend. > >Can anyone say whether the lack of validation is due to a conscious >design decision as opposed to something that just hasn't been >implemented due to priority/time/resources? Both, it hasn't been deemed important enough yet for it to happen. >I'd be willing to take a crack at some of this if it's something that >would be considered for inclusion in the project. It's slightly involved, because presently we don't hold on to a reference to the object that we might revalidate, so the change is semi-nasty locking wise. That said, we're happy to receive patches. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Thu Nov 13 10:24:50 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 13 Nov 2008 10:24:50 +0000 Subject: Wishlist: filtering for varnishhist In-Reply-To: Your message of "Fri, 31 Oct 2008 18:02:14 +0100." Message-ID: <9699.1226571890@critter.freebsd.dk> In message , "Ole Laursen" writes: >Hi, > >Varnishhist is pretty cool. Unfortunately, most of my data comes from >image files which are served by a well-functioning lighttpd instance.
>So I'm really only interested in the data from the Apache web server >running Django. See point 8: http://varnish.projects.linpro.no/wiki/PostTwoShoppingList -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From torstein at escenic.com Thu Nov 13 12:12:47 2008 From: torstein at escenic.com (Torstein Krause Johansen) Date: Thu, 13 Nov 2008 13:12:47 +0100 Subject: ESI works in IE6 & curl, but not in FF, Opera, Konqueror In-Reply-To: <4919AD18.2040206@endpoint.com> References: <4912B6EE.2040001@escenic.com> <871vxobin6.fsf@qurzaw.linpro.no><49141C65.5060300@escenic.com> <874p2ggfed.fsf@qurzaw.linpro.no><4917F52F.7050008@escenic.com> <4919AD18.2040206@endpoint.com> Message-ID: <491C19BF.8090401@escenic.com> Hi JT, JT Justman wrote: > We have a client who is interested in ESI and also requires gzip (as I > think most would), and we've been working on it on the back burner for a > while. Faster work from us depends on the client's priorities. It's not > a trivial undertaking, but I have at least got to the point of > understanding the ESI request flow enough to guess where the encoding > should probably be performed. > > See here for links to two bugs discussing the issue: thanks for the additional info regarding the ESI/gzip issue. Although it would be grand if Varnish supported gzip natively, currently I don't see a big problem with putting Apache in front of Varnish instead of behind it. Hopefully we'll get the desired effect then.
Cheers, -Torstein -- Torstein Krause Johansen System architect mobile: +47 97 01 76 04 web: http://www.escenic.com/ Escenic - platform for innovation From torstein at escenic.com Thu Nov 13 15:59:37 2008 From: torstein at escenic.com (Torstein Krause Johansen) Date: Thu, 13 Nov 2008 16:59:37 +0100 Subject: Varnish and sticky sessions Message-ID: <491C4EE9.1020504@escenic.com> Heya, is there a way to get Varnish load balancing (the director) to support sticky sessions? Or do I need to put a load balancer behind the Varnish that ensures that a client with a given session always goes to the same backend server? Right now, it looks like I need: LB -> apache1/mod_inflate -> varnish/esi -> apache2/mod_proxy_balancer -> app servers to get ESI support, gzip-ed content delivered to the client and sticky sessions Cheers, -Torstein -- Torstein Krause Johansen System architect mobile: +47 97 01 76 04 web: http://www.escenic.com/ Escenic - platform for innovation From jt at endpoint.com Thu Nov 13 16:03:39 2008 From: jt at endpoint.com (JT Justman) Date: Thu, 13 Nov 2008 08:03:39 -0800 Subject: Varnish and sticky sessions In-Reply-To: <491C4EE9.1020504@escenic.com> References: <491C4EE9.1020504@escenic.com> Message-ID: <491C4FDB.9000707@endpoint.com> Torstein Krause Johansen wrote: > Heya, > > is there a way to get Varnish load balancing (the director) to support > sticky sessions? > > Or do I need to put a load balancer behind the Varnish that ensures that > a client with a given session always goes to the same backend server? Never tried it, but it seems to me you could read a cookie in VCL to determine the backend to use. This is how most load balancers handle sticky sessions, right? 
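A minimal VCL 2.0 sketch of that cookie approach (the backend names and the "backend" cookie here are hypothetical, not taken from any posted config):

```vcl
# Sketch only: pin clients to the backend named in a hypothetical
# "backend" cookie set by the application; requests without the
# cookie fall back to a director.
sub vcl_recv {
    if (req.http.Cookie ~ "backend=www1") {
        set req.backend = www1;
    } elsif (req.http.Cookie ~ "backend=www2") {
        set req.backend = www2;
    } else {
        set req.backend = wwwdirector;
    }
}
```

The application sets the cookie on the client's first response; Varnish then routes on it without keeping any server-side session table.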
-- jt at endpoint.com http://www.endpoint.com From lukas.loesche at bertelsmann.de Thu Nov 13 17:15:45 2008 From: lukas.loesche at bertelsmann.de (Loesche, Lukas, scoyo) Date: Thu, 13 Nov 2008 18:15:45 +0100 Subject: Varnish and sticky sessions In-Reply-To: <491C4FDB.9000707@endpoint.com> Message-ID: JT Justman wrote: > Never tried it, but it seems to me you could read a cookie in VCL to > determine the backend to use. This is how most load balancers handle > sticky sessions, right? That's one way. Another way that doesn't involve cookies or server side session tables and which is generally more performant (if performance is of any relevance here) is to hash some client endpoint identifier and calculate the modulus with the number of available backend servers. As client endpoint identifier you could use the decimal representation of its IP address. So:

decimal Client IP = (first octet * 256**3) + (second octet * 256**2) + (third octet * 256) + (fourth octet)
Sticky Client Backend = decimal Client IP % Number of available Backend Servers

Example:

Client IP = 10.129.40.22
Number of available Backend Servers = e.g. 4

therefore:

Decimal Client IP = (10 * 16777216) + (129 * 65536) + (40 * 256) + (22)
Decimal Client IP = 176236566
Sticky Client Backend = 176236566 % 4 = 2

Or if VCL doesn't support the modulo operator, calculate it using basic arithmetic operations:

176236566 / 4 = 44059141.5
4 * 44059141 = 176236564
176236566 - 176236564 = 2

So in this case the client always gets balanced to the third backend server (range is 0 - 3). It would only get rebalanced if the number of available backend servers changes. I.e. if a backend server fails or one is added. Instead of the client's dec IP you could also create some hash using a combination of client IP and browser name, or something like that.. depends on what your requirements are and who your site's target audience is. If your site gets lots of traffic from companies who generally NAT their employees using one or two gateway IPs, then hashing based on the client IP alone wouldn't do much good. However if the site's target audience are end-users sitting at home each with their own IP it's a very efficient and well balanced way of doing sticky sessions. I really don't know enough about available VCL operators and syntax, but from taking a quick look at it I saw that it's pretty flexible, even supporting regexp (personally wouldn't use them for request balancing though) and inline C Code. So if the required hashing and char matching functions aren't present in VCL itself it seems you could easily do them in C. As far as the number of available backend servers goes, I don't know if they get exposed by varnish inside VCL. You could always hard code them of course but it would be better if varnish itself had some way to let you know which backend servers it considers alive and which not. This might require some modifications to varnish itself if they aren't already present. Anyway, the method described is a valid way to do sticky sessions and supported by most commercial load balancers. It's very resource friendly as it doesn't involve any cookie parsing/setting or server side session tables. Cheers, -- Lukas From admin at opensubtitles.org Fri Nov 14 09:35:09 2008 From: admin at opensubtitles.org (Brano) Date: Fri, 14 Nov 2008 16:35:09 +0700 Subject: Varnish Error 503 Service Unavailable In-Reply-To: <956208665.20081111144252@2ge.us> References: <956208665.20081111144252@2ge.us> Message-ID: <477744019.20081114163509@2ge.us> Brano [B], on Tuesday, November 11, 2008 at 14:42 (+0700) typed: B> recently we installed Varnish on our server. Everything works fine, B> but on download we get this error: B> http://www.opensubtitles.org/en/download/file/1951965961 it works now. I did not change anything in varnish, I changed PHP code.
This was the original code: if(isset($_SERVER["HTTP_USER_AGENT"]) and strpos($_SERVER["HTTP_USER_AGENT"], 'MSIE')) { ini_set('zlib.output_compression', 'Off'); } I replaced it with: ini_set('zlib.output_compression', 'Off'); Now it works OK. Just to let you know, I am not sure if it is a bug, or... VCL: http://www.pastebin.sk/en/9031/ -- ...m8s, cu l8r, Brano. [My name? said the old man sadly, is Slartibartfast.] From michael at orinoco.jp Fri Nov 14 12:45:22 2008 From: michael at orinoco.jp (Michael Moyle) Date: Fri, 14 Nov 2008 21:45:22 +0900 Subject: round-robin director Message-ID: Hi, I am new to the list, and just getting started with varnish. It appears that the round-robin director is not hitting every node in the list. Is this a bug or does rr have a method for determining which node is best to hit out of the box? I set up 4 virtual hosts running on different ports: www1, www2, www3, www4. In no case would the server access www1. It seems to restrict itself to www3 and www4, is there a problem where it chooses only two nodes? If I moved the servers to different IP addresses would it help? Of course that is how it would be configured in production. If anyone has any insight I would like to know. I can do more tests and research further, but it would be good to know if I am doing something wrong or this is a known issue. Please find my config below:

backend www1 { .host = "www1"; .port = "81"; }
backend www2 { .host = "www2"; .port = "82"; }
backend www3 { .host = "www3"; .port = "83"; }
backend www4 { .host = "www4"; .port = "84"; }

director wwwdirector round-robin {
    { .backend = www2; }
    { .backend = www3; }
    { .backend = www1; }
    { .backend = www4; }
}

sub vcl_recv {
    set req.backend = wwwdirector;
    if ( req.request ) {
        # don't cache anything, just load balance
        pass;
    }
}

Thanks! Michael -------------- next part -------------- An HTML attachment was scrubbed...
URL: From perbu at linpro.no Fri Nov 14 14:58:13 2008 From: perbu at linpro.no (Per Buer) Date: Fri, 14 Nov 2008 15:58:13 +0100 Subject: round-robin director In-Reply-To: References: Message-ID: <491D9205.6080701@linpro.no> Michael Moyle skrev: > If anyone has any insight I would like to know. I can do more tests and > research further, but it would be good to know if I am doing something > wrong or this is a known issue. Have a look at varnishlog. There might be a hint there about what is going on. Maybe there are connectivity issues with the first backend? If this doesn't solve it be sure to post the output from varnishstat as well as an excerpt from varnishlog. -- Per Buer - Leder Infrastruktur og Drift - Redpill Linpro Telefon: 21 54 41 21 - Mobil: 958 39 117 http://linpro.no/ | http://redpill.se/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 252 bytes Desc: OpenPGP digital signature URL: From michael at orinoco.jp Mon Nov 17 10:07:16 2008 From: michael at orinoco.jp (Michael Moyle) Date: Mon, 17 Nov 2008 19:07:16 +0900 Subject: round-robin director In-Reply-To: <491D9205.6080701@linpro.no> References: <491D9205.6080701@linpro.no> Message-ID: Per, I appreciate your interest in this issue. > > Have a look at varnishlog. There might be a hint there about what is going on. > Maybe there are connectivity issues with the first backend? > Connectivity is fine for both hosts. I have gone through several tests, this time with two hosts, and I can't be confident that the load balancer is working. Varnish seems to establish a preference for one host and stick with it. I also tried using probes, and random. To refresh, each host is an Apache virtual host on the same IP address with a different host name and port number. round robin with probes: stays with one host even if the weight of the other host is the same or greater.
round robin (no probes): will cycle between hosts if one has index.html and the other has index.cgi or no index file. I tried clearing the browser cache, and using multiple browsers. However all requests are from the same host. I am still wondering if it is something to do with my environment. I will paste some varnishlog and varnishstat data for a simple case with two RR hosts, and no probes. Thanks again. I'll continue to look into it. varnishlog (2 requests): please find attached vlog.txt varnishstatus: 0+00:14:32 theloin Hitrate ratio: 0 0 0 Hitrate avg: 0.0000 0.0000 0.0000 11 0.00 0.01 Client connections accepted 35 0.00 0.04 Client requests received 0 0.00 0.00 Cache hits 0 0.00 0.00 Cache hits for pass 0 0.00 0.00 Cache misses 35 0.00 0.04 Backend connections success 0 0.00 0.00 Backend connections not attempted 0 0.00 0.00 Backend connections too many 0 0.00 0.00 Backend connections failures 17 0.00 0.02 Backend connections reuses 35 0.00 0.04 Backend connections recycles 0 0.00 0.00 Backend connections unused 1 . . N struct srcaddr 0 . . N active struct srcaddr 2 . . N struct sess_mem 1 . . N struct sess 0 . . N struct object 2 . . N struct objecthead 3 . . N struct smf 0 . . N small free smf 3 . . N large free smf 2 . . N struct vbe_conn 1 . . N struct bereq 10 . . N worker threads 10 0.00 0.01 N worker threads created 0 0.00 0.00 N worker threads not created 0 0.00 0.00 N worker threads limited 0 0.00 0.00 N queued work requests 0 0.00 0.00 N overflowed work requests 0 0.00 0.00 N dropped work requests 2 . . N backends 0 . . N expired objects 0 . . N LRU nuked objects 0 . . N LRU saved objects 0 . . N LRU moved objects 0 . . N objects on deathrow 0 0.00 0.00 HTTP header overflows 0 0.00 0.00 Objects sent with sendfile 20 0.00 0.02 Objects sent with write regards, Michael -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: vlog.txt URL: From miles at jamkit.com Mon Nov 17 16:48:05 2008 From: miles at jamkit.com (Miles) Date: Mon, 17 Nov 2008 16:48:05 +0000 Subject: varnish cache keys Message-ID: Hi, Can someone confirm the default behaviour of varnish in terms of its cache keys? By default - and without a vary header - is the cache keyed on hostname and url only? If a vary header is added, then the cache also uses those headers to key the request. Specifically, are cookies ignored by the cache unless specified in the vary header or special behaviour in VCL? I'm trying to understand why we get a low level of cache hits, and want to be certain of the facts. Thanks, Miles From phk at phk.freebsd.dk Mon Nov 17 18:33:00 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 17 Nov 2008 18:33:00 +0000 Subject: varnish cache keys In-Reply-To: Your message of "Mon, 17 Nov 2008 16:48:05 GMT." Message-ID: <14202.1226946780@critter.freebsd.dk> In message , Miles writes: >Hi, > >Can someone confirm the default behaviour of varnish in terms of its >cache keys? > >By default - and without a vary header - is the cache keyed on hostname >and url only? If a vary header is added, then the cache also uses those >headers to key the request. Vary headers are handled correctly, but not as part of the hash since that is impossible. When you get a request from a client, you don't know what headers the backend would want you to vary on, so you cannot add those headers to the hash string before the lookup. Therefore you have to, and Varnish does, hash on Host+URL and then having found an "object head", examine all the objects hung from that head, as to Vary compatibility. >Specifically, are cookies ignored by the cache unless specified in >the vary header or special behaviour in VCL? Cookies by default disable caching and do not get added to hash strings unless you do so in VCL. Cookies are not assumed to be in Vary either.
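A minimal sketch of the usual VCL workaround for cookie-disabled caching (the URL pattern is an assumption; adjust it to whatever really is cookie-insensitive on your site):

```vcl
# Sketch: static assets rarely depend on cookies, so drop the
# Cookie header for them and let the default logic cache them.
sub vcl_recv {
    if (req.url ~ "\.(gif|jpg|jpeg|png|css|js)$") {
        remove req.http.Cookie;
    }
}
```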
>I'm trying to understand why we get a low level of cache hits, and want >to be certain of the facts. Most likely cookies disabling caching entirely. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From plfgoa at gmail.com Tue Nov 18 08:22:34 2008 From: plfgoa at gmail.com (Paras Fadte) Date: Tue, 18 Nov 2008 13:52:34 +0530 Subject: sudden Increase in load on server running varnish Message-ID: <75cf5800811180022r5e1a2353gf4f397b862477eaa@mail.gmail.com> Hi, I have been trying my hand at varnish, which mostly tends to run without much of an issue, but after running it for about 6-7 hours there seems to be a sudden increase in load on the server, with CPU usage going up to 100% and the number of worker threads increasing to about 500. Is this an issue associated with cleanup when the cache becomes full in varnish? The cache size used was 8GB (RAM disk). Are there any parameters which could be applied to handle this kind of sudden surge in load on the server? Any help in this regard will be appreciated, and thanks in advance. Server Config: Intel Xeon Quad core, OS: SUSE LINUX 10.1 (X86-64) with 20GB RAM. Varnish version used is 2.0.2. Thank you. -Paras From plfgoa at gmail.com Tue Nov 18 08:24:58 2008 From: plfgoa at gmail.com (Paras Fadte) Date: Tue, 18 Nov 2008 13:54:58 +0530 Subject: Removing Headers Message-ID: <75cf5800811180024r360cd51ci6da6fe3215882a00@mail.gmail.com> Hi, Can response headers like "X-Varnish" and "Via" be removed? Thank you.
-Paras From ssm at linpro.no Tue Nov 18 08:44:53 2008 From: ssm at linpro.no (Stig Sandbeck Mathisen) Date: Tue, 18 Nov 2008 09:44:53 +0100 Subject: Removing Headers In-Reply-To: <75cf5800811180024r360cd51ci6da6fe3215882a00@mail.gmail.com> References: <75cf5800811180024r360cd51ci6da6fe3215882a00@mail.gmail.com> Message-ID: <20081118084452.GC5074@linpro.no> On Tue, Nov 18, 2008 at 01:54:58PM +0530, Paras Fadte wrote: > Hi, > > Can response headers like "X-Varnish" and "Via" be removed ? Yes. The subroutine you are looking for is "vcl_deliver", the headers are available as resp.http.<header>
, they can be removed with the "remove" keyword. See the vcl(7) man page. -- Stig Sandbeck Mathisen Redpill Linpro AS - Changing the Game -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 197 bytes Desc: Digital signature URL: From plfgoa at gmail.com Tue Nov 18 09:33:36 2008 From: plfgoa at gmail.com (Paras Fadte) Date: Tue, 18 Nov 2008 15:03:36 +0530 Subject: Removing Headers In-Reply-To: <20081118084452.GC5074@linpro.no> References: <75cf5800811180024r360cd51ci6da6fe3215882a00@mail.gmail.com> <20081118084452.GC5074@linpro.no> Message-ID: <75cf5800811180133g27135fc4oeb84f3406a2b0ec8@mail.gmail.com> Thanks Stig. On Tue, Nov 18, 2008 at 2:14 PM, Stig Sandbeck Mathisen wrote: > On Tue, Nov 18, 2008 at 01:54:58PM +0530, Paras Fadte wrote: >> Hi, >> >> Can response headers like "X-Varnish" and "Via" be removed ? > > Yes > > The subroutine you are looking for is "vcl_deliver", the headers are available > as resp.http.
, they can be removed with the "remove" keyword. > > See the vcl(7) man page. > > -- > Stig Sandbeck Mathisen > Redpill Linpro AS - Changing the Game > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.9 (GNU/Linux) > > iEYEARECAAYFAkkigIQACgkQQONU2fom4u7nYQCdGMcyNtHQTWgexGBnU2blj9rK > wLEAnR6urJep2kISKK1o7N+rSMAu6C+h > =1Tp+ > -----END PGP SIGNATURE----- > > From plfgoa at gmail.com Tue Nov 18 10:06:16 2008 From: plfgoa at gmail.com (Paras Fadte) Date: Tue, 18 Nov 2008 15:36:16 +0530 Subject: Overflowed work requests Message-ID: <75cf5800811180206y2f00c7b8q1303640b92ab2f65@mail.gmail.com> Hi, What does "overflowed work requests" in varnishstat signify ? If this number is large is it a bad sign ? Thank you. -Paras From miles at jamkit.com Tue Nov 18 18:42:11 2008 From: miles at jamkit.com (Miles) Date: Tue, 18 Nov 2008 18:42:11 +0000 Subject: Varnish Error 503 Service Unavailable In-Reply-To: <477744019.20081114163509@2ge.us> References: <956208665.20081111144252@2ge.us> <477744019.20081114163509@2ge.us> Message-ID: <49230C83.9080908@jamkit.com> Brano wrote: > Brano [B], on Tuesday, November 11, 2008 at 14:42 (+0700) typed: > > B> recently we installed Varnish on our server. Everything works fine, > B> but on download we get this error: > B> http://www.opensubtitles.org/en/download/file/1951965961 > > it works now. I did not change nothing in varnish, I changed PHP code. > > This was original code: > if(isset($_SERVER["HTTP_USER_AGENT"]) and strpos($_SERVER["HTTP_USER_AGENT"], 'MSIE')) { > ini_set('zlib.output_compression', 'Off'); > } > > I replaced it with: > ini_set('zlib.output_compression', 'Off'); > > Now it works OK. Just to let you know, I am not sure, if it is bug, > or... > > VCL: http://www.pastebin.sk/en/9031/ > Do you also set vary headers? If not, this might be a cause of the problem - varnish could be serving up compressed content to clients that are not expecting it. 
You need to vary on "accept-encoding" in order to get varnish to store both representations, and be able to serve up the right one. Miles From perbu at linpro.no Tue Nov 18 19:03:49 2008 From: perbu at linpro.no (Per Buer) Date: Tue, 18 Nov 2008 20:03:49 +0100 Subject: round-robin director In-Reply-To: References: <491D9205.6080701@linpro.no> Message-ID: <49231195.1060607@linpro.no> Michael Moyle skrev: > Per, > > I appreciate your interest in this issue. > >> Have a look at varnishlog. There might be a hint there about what is going on. >> Maybe there are connectivity issues with the first backend? >> > > Connectivity is fine for both hosts. > > I have gone through several tests, this time with two hosts, and I > can't be confident that the load balancer is working. varnish seems to > establish a preference for one host and stick with it. I can't see any health checks in your logs - are you sure the probes are set up alright? Could you show us your config? Also, which version of Varnish are you using? You can see the health checking (without all the other stuff) with: $ varnishlog -i Backend_health 0 Backend_health - default Still sick 4--X-S-RH 2 3 8 0.000523 0.000567 HTTP/1.1 200 OK 0 Backend_health - default Back healthy 4--X-S-RH 3 3 8 0.017893 0.006342 HTTP/1.1 200 OK 0 Backend_health - default Still healthy 4--X-S-RH 4 3 8 0.001375 0.005100 HTTP/1.1 200 OK (this is my personal server booting varnish and finding the backend healthy). -- Per Buer - Leder Infrastruktur og Drift - Redpill Linpro Telefon: 21 54 41 21 - Mobil: 958 39 117 http://linpro.no/ | http://redpill.se/ -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 252 bytes Desc: OpenPGP digital signature URL: From michael at orinoco.jp Wed Nov 19 11:29:13 2008 From: michael at orinoco.jp (Michael Moyle) Date: Wed, 19 Nov 2008 20:29:13 +0900 Subject: round-robin director In-Reply-To: <49231195.1060607@linpro.no> References: <491D9205.6080701@linpro.no> <49231195.1060607@linpro.no> Message-ID: Per, Thanks again. > I can't see any health checks in your logs - are you sure the probes are > set up alright? Could you show us your config? Also, which version of > Varnish are you using? I just realized today that 2.0.2 has been released. I installed 2.0.2 and it fixed the problem ( I was testing with 2.0.1 and 2.0 beta before). With 2.0.2 round robin now chooses different hosts as expected. I must have checked for the latest version just before the page was updated. I should have included the version info in my original post. Is it possible that the fix to the random director in this release fixed round-robin as well? In addition I will note that round-robin load balance selects different hosts on our top page as desired. After a session cookie is issued it stuck with one host (sticky session) which is what we want. I could not find any varnish docs on this and was concerned sticky session was not implemented. I even found some posts on this list suggesting that it was not there and would need to be implemented in the vcl. However it appears to work fine out of the box. Can you (or anyone) confirm that sticky session is implemented? > You'll can see the health checking (without all the other stuff) with: > $varnishlog -i Backend_health Great tip! Thanks. cheers, Michael From romics22 at yahoo.de Wed Nov 19 12:16:52 2008 From: romics22 at yahoo.de (Robert Ming) Date: Wed, 19 Nov 2008 12:16:52 +0000 (GMT) Subject: Varnish 2.01 - GETs with Grinder end up in PASS In-Reply-To: Message-ID: <904005.19025.qm@web23706.mail.ird.yahoo.com> > Robert Ming wrote: > > Hi! 
> > > > We do load-testing with 'The Grinder' vers. > 3.1 on Varnish in front > > of several Plone3 instances. The tests worked out fine > with Varnish > > 2.0 beta. Now with version 2.01 we have the following > issue: > > Executing any GET with the Testing-Framework results > always in a PASS > > in Varnish. As a consequence all subsequent requests > with the same > > url end up in cache hits for pass, that's not what > we like to test. > > Requesting the same urls "manually", say > with firefox or ie are first > > LOOKUPed and afterwards cached, the behaviour we would > like to test. > > > > Trying different ways to get around this > "PASSing"-issue we came to > > the conclusion that it is not a grinder problem, > because a simple GET > > done with the python httplib.HTTPConnection had the > same effect. > > > > Any comments, solutions, enlightenments on this issue > are appreciated. > > > > Post your vcl? Have you looked at the logs? > > -- > jt at endpoint.com > http://www.endpoint.com > Finally, the vcl and the log of a grinder test that executes the same request two times -Robert -------------- next part -------------- A non-text attachment was scrubbed... Name: varnish.log Type: application/octet-stream Size: 47931 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: varnish_2008_11_19.vcl Type: application/octet-stream Size: 2259 bytes Desc: not available URL: From postmaster at softsearch.ru Wed Nov 19 17:51:52 2008 From: postmaster at softsearch.ru (Michael) Date: Wed, 19 Nov 2008 20:51:52 +0300 Subject: 302 Moved Temporarily problem Message-ID: <1035982254.20081119205152@softsearch.ru> Hi, I don't want to cache 302 backend responses and corrected vcl_fetch():

sub vcl_fetch {
    if (!obj.cacheable) {
        pass;
    }
    # don't cache redirects
    if (obj.status == 302) {
        pass;
    }
    if (obj.http.Set-Cookie) {
        pass;
    }
    set obj.prefetch = -30s;
    deliver;
}

But it doesn't work.
-- Michael From plfgoa at gmail.com Thu Nov 20 09:34:36 2008 From: plfgoa at gmail.com (Paras Fadte) Date: Thu, 20 Nov 2008 15:04:36 +0530 Subject: varnish2.0.2 on Suse 10.3 Message-ID: <75cf5800811200134y7cbcc0bew2dd2f113ce710309@mail.gmail.com> Hi, I have installed varnish 2.0.2 on openSUSE 10.3 (X86-64) , but it doesn't seem to start and I get "VCL compilation failed" message. What could be the issue ? Thanks in advance. -Paras From miles at jamkit.com Tue Nov 18 18:42:11 2008 From: miles at jamkit.com (Miles) Date: Tue, 18 Nov 2008 18:42:11 +0000 Subject: Varnish Error 503 Service Unavailable In-Reply-To: <477744019.20081114163509@2ge.us> References: <956208665.20081111144252@2ge.us> <477744019.20081114163509@2ge.us> Message-ID: <49230C83.9080908@jamkit.com> Brano wrote: > Brano [B], on Tuesday, November 11, 2008 at 14:42 (+0700) typed: > > B> recently we installed Varnish on our server. Everything works fine, > B> but on download we get this error: > B> http://www.opensubtitles.org/en/download/file/1951965961 > > it works now. I did not change nothing in varnish, I changed PHP code. > > This was original code: > if(isset($_SERVER["HTTP_USER_AGENT"]) and strpos($_SERVER["HTTP_USER_AGENT"], 'MSIE')) { > ini_set('zlib.output_compression', 'Off'); > } > > I replaced it with: > ini_set('zlib.output_compression', 'Off'); > > Now it works OK. Just to let you know, I am not sure, if it is bug, > or... > > VCL: http://www.pastebin.sk/en/9031/ > Do you also set vary headers? If not, this might be a cause of the problem - varnish could be serving up compressed content to clients that are not expecting it. You need to vary on "accept-encoding" in order to get varnish to store both representations, and be able to serve up the right one. 
Miles

From tfheen at linpro.no Thu Nov 20 10:15:56 2008
From: tfheen at linpro.no (Tollef Fog Heen)
Date: Thu, 20 Nov 2008 11:15:56 +0100
Subject: round-robin director
In-Reply-To: (Michael Moyle's message of "Wed, 19 Nov 2008 20:29:13 +0900")
References: <491D9205.6080701@linpro.no> <49231195.1060607@linpro.no>
Message-ID: <87ej16vh2b.fsf@qurzaw.linpro.no>

]] "Michael Moyle"

| Is it possible that the fix to the random director in this release
| fixed round-robin as well?

No, it only affected the random director.

| In addition I will note that round-robin load balance selects
| different hosts on our top page as desired. After a session cookie is
| issued it stuck with one host (sticky session) which is what we want.
| I could not find any varnish docs on this and was concerned sticky
| session was not implemented. I even found some posts on this list
| suggesting that it was not there and would need to be implemented in
| the vcl. However it appears to work fine out of the box. Can you (or
| anyone) confirm that sticky session is implemented?

I am fairly sure we have not implemented sticky sessions, so I am not sure
why you are seeing this behaviour.

--
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73

From michael at dynamine.net Thu Nov 20 10:42:55 2008
From: michael at dynamine.net (Michael S. Fischer)
Date: Thu, 20 Nov 2008 02:42:55 -0800
Subject: varnish2.0.2 on Suse 10.3
In-Reply-To: <75cf5800811200134y7cbcc0bew2dd2f113ce710309@mail.gmail.com>
References: <75cf5800811200134y7cbcc0bew2dd2f113ce710309@mail.gmail.com>
Message-ID: <86db848d0811200242j5543663yc9fc09857aae2a81@mail.gmail.com>

Smells like an architecture mismatch. Any chance you're running a 32-bit
Varnish build?

--Michael

On Thu, Nov 20, 2008 at 1:34 AM, Paras Fadte wrote:
> Hi,
>
> I have installed varnish 2.0.2 on openSUSE 10.3 (X86-64), but it
> doesn't seem to start and I get a "VCL compilation failed" message. What
> could be the issue?
>
> Thanks in advance.
>
> -Paras
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc

From plfgoa at gmail.com Thu Nov 20 15:46:42 2008
From: plfgoa at gmail.com (Paras Fadte)
Date: Thu, 20 Nov 2008 21:16:42 +0530
Subject: varnish2.0.2 on Suse 10.3
In-Reply-To: <86db848d0811200242j5543663yc9fc09857aae2a81@mail.gmail.com>
References: <75cf5800811200134y7cbcc0bew2dd2f113ce710309@mail.gmail.com>
	<86db848d0811200242j5543663yc9fc09857aae2a81@mail.gmail.com>
Message-ID: <75cf5800811200746x74e6ba8fyf9711e1680e28d1f@mail.gmail.com>

I installed the same version on openSUSE 10.1 (X86-64) and it runs fine.
What could be the issue?

On Thu, Nov 20, 2008 at 4:12 PM, Michael S. Fischer wrote:
> Smells like an architecture mismatch. Any chance you're running a
> 32-bit Varnish build?
>
> --Michael
>
> On Thu, Nov 20, 2008 at 1:34 AM, Paras Fadte wrote:
>> Hi,
>>
>> I have installed varnish 2.0.2 on openSUSE 10.3 (X86-64), but it
>> doesn't seem to start and I get a "VCL compilation failed" message. What
>> could be the issue?
>>
>> Thanks in advance.
>>
>> -Paras

From michael at dynamine.net Thu Nov 20 17:06:23 2008
From: michael at dynamine.net (Michael S. Fischer)
Date: Thu, 20 Nov 2008 09:06:23 -0800
Subject: varnish2.0.2 on Suse 10.3
In-Reply-To: <75cf5800811200746x74e6ba8fyf9711e1680e28d1f@mail.gmail.com>
References: <75cf5800811200134y7cbcc0bew2dd2f113ce710309@mail.gmail.com>
	<86db848d0811200242j5543663yc9fc09857aae2a81@mail.gmail.com>
	<75cf5800811200746x74e6ba8fyf9711e1680e28d1f@mail.gmail.com>
Message-ID: <86db848d0811200906w40ac446fk34b4b4fcd1f159e0@mail.gmail.com>

Where did you get your Varnish package? Or did you build it from source?
Is there a working C compiler environment on both systems?

--Michael

On Thu, Nov 20, 2008 at 7:46 AM, Paras Fadte wrote:
> I installed the same version on openSUSE 10.1 (X86-64) and it runs
> fine. What could be the issue?
>
> On Thu, Nov 20, 2008 at 4:12 PM, Michael S. Fischer wrote:
>> Smells like an architecture mismatch. Any chance you're running a
>> 32-bit Varnish build?
>>
>> --Michael
>>
>> On Thu, Nov 20, 2008 at 1:34 AM, Paras Fadte wrote:
>>> Hi,
>>>
>>> I have installed varnish 2.0.2 on openSUSE 10.3 (X86-64), but it
>>> doesn't seem to start and I get a "VCL compilation failed" message. What
>>> could be the issue?
>>>
>>> Thanks in advance.
>>>
>>> -Paras

From tfheen at linpro.no Thu Nov 20 17:30:42 2008
From: tfheen at linpro.no (Tollef Fog Heen)
Date: Thu, 20 Nov 2008 18:30:42 +0100
Subject: varnish2.0.2 on Suse 10.3
In-Reply-To: <75cf5800811200746x74e6ba8fyf9711e1680e28d1f@mail.gmail.com>
	(Paras Fadte's message of "Thu, 20 Nov 2008 21:16:42 +0530")
References: <75cf5800811200134y7cbcc0bew2dd2f113ce710309@mail.gmail.com>
	<86db848d0811200242j5543663yc9fc09857aae2a81@mail.gmail.com>
	<75cf5800811200746x74e6ba8fyf9711e1680e28d1f@mail.gmail.com>
Message-ID: <878wre9uf1.fsf@qurzaw.linpro.no>

]] "Paras Fadte"

| I installed the same version on openSUSE 10.1 (X86-64) and it runs
| fine. What could be the issue?

Do you have a compiler installed? What happens if you run varnishd with
the -C flag?

--
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73

From postmaster at softsearch.ru Thu Nov 20 20:06:46 2008
From: postmaster at softsearch.ru (Michael)
Date: Thu, 20 Nov 2008 23:06:46 +0300
Subject: Overflowed work requests
In-Reply-To: <75cf5800811180206y2f00c7b8q1303640b92ab2f65@mail.gmail.com>
References: <75cf5800811180206y2f00c7b8q1303640b92ab2f65@mail.gmail.com>
Message-ID: <1774354428.20081120230646@softsearch.ru>

Hi,

PF> What does "overflowed work requests" in varnishstat signify ? If this
PF> number is large is it a bad sign ?

I have a similar problem: "overflowed work requests" and "dropped work
requests" are too large.

FreeBSD 7.1-PRERELEASE
varnish-2.0.2 from ports

> varnishstat -1
uptime                        385          .  Child uptime
client_conn                115120     299.01  Client connections accepted
client_req                 113731     295.41  Client requests received
cache_hit                   39565     102.77  Cache hits
cache_hitpass                8338      21.66  Cache hits for pass
cache_miss                  65744     170.76  Cache misses
backend_conn                74104     192.48  Backend connections success
backend_unhealthy               0       0.00  Backend connections not attempted
backend_busy                    0       0.00  Backend connections too many
backend_fail                    0       0.00  Backend connections failures
backend_reuse               73414     190.69  Backend connections reuses
backend_recycle             73469     190.83  Backend connections recycles
backend_unused                  0       0.00  Backend connections unused
n_srcaddr                    3207          .  N struct srcaddr
n_srcaddr_act                 456          .  N active struct srcaddr
n_sess_mem                   1910          .  N struct sess_mem
n_sess                       1780          .  N struct sess
n_object                    63603          .  N struct object
n_objecthead                63603          .  N struct objecthead
n_smf                      126931          .  N struct smf
n_smf_frag                      1          .  N small free smf
n_smf_large  18446744073709551614          .  N large free smf
n_vbe_conn                    239          .  N struct vbe_conn
n_bereq                       391          .  N struct bereq
n_wrk                         496          .  N worker threads
n_wrk_create                  496       1.29  N worker threads created
n_wrk_failed                    0       0.00  N worker threads not created
n_wrk_max                   47907     124.43  N worker threads limited
n_wrk_queue                   455       1.18  N queued work requests
n_wrk_overflow             111098     288.57  N overflowed work requests
n_wrk_drop                  47232     122.68  N dropped work requests
n_backend                       1          .  N backends
n_expired                    1960          .  N expired objects
n_lru_nuked                     0          .  N LRU nuked objects
n_lru_saved                     0          .  N LRU saved objects
n_lru_moved                 32435          .  N LRU moved objects
n_deathrow                      0          .  N objects on deathrow
losthdr                        22       0.06  HTTP header overflows
n_objsendfile                   0       0.00  Objects sent with sendfile
n_objwrite                  85336     221.65  Objects sent with write
n_objoverflow                   0       0.00  Objects overflowing workspace
s_sess                      77004     200.01  Total Sessions
s_req                      113233     294.11  Total Requests
s_pipe                          0       0.00  Total pipe
s_pass                       8638      22.44  Total pass
s_fetch                     73696     191.42  Total fetch
s_hdrbytes               33793720   87775.90  Total header bytes
s_bodybytes            3821523829 9926035.92  Total body bytes
sess_closed                  6915      17.96  Session Closed
sess_pipeline                3056       7.94  Session Pipeline
sess_readahead                330       0.86  Session Read Ahead
sess_linger                     0       0.00  Session Linger
sess_herd                  104807     272.23  Session herd
shm_records               7238597   18801.55  SHM records
shm_writes                 606387    1575.03  SHM writes
shm_flushes                    44       0.11  SHM flushes due to overflow
shm_cont                     2188       5.68  SHM MTX contention
shm_cycles                      3       0.01  SHM cycles through buffer
sm_nreq                    148189     384.91  allocator requests
sm_nobj                    126908          .  outstanding allocations
sm_balloc              4091076608          .  bytes allocated
sm_bfree               5572595712          .  bytes free
sma_nreq                        0       0.00  SMA allocator requests
sma_nobj                        0          .  SMA outstanding allocations
sma_nbytes                      0          .  SMA outstanding bytes
sma_balloc                      0          .  SMA bytes allocated
sma_bfree                       0          .  SMA bytes free
sms_nreq                        1       0.00  SMS allocator requests
sms_nobj                        0          .  SMS outstanding allocations
sms_nbytes                      0          .  SMS outstanding bytes
sms_balloc                    453          .  SMS bytes allocated
sms_bfree                     453          .  SMS bytes freed
backend_req                 74104     192.48  Backend requests made
n_vcl                           1       0.00  N vcl total
n_vcl_avail                     1       0.00  N vcl available
n_vcl_discard                   0       0.00  N vcl discarded
n_purge                         1          .  N total active purges
n_purge_add                     1       0.00  N new purges added
n_purge_retire                  0       0.00  N old purges deleted
n_purge_obj_test                0       0.00  N objects tested
n_purge_re_test                 0       0.00  N regexps tested against
n_purge_dups                    0       0.00  N duplicate purges removed


backend default {
    .host = "xx.xx.xx.xx";
    .port = "80";
}

acl ournet {
    "xx.xx.xx.xx";
}

#Below is a commented-out copy of the default VCL logic. If you
#redefine any of these subroutines, the built-in logic will be
#appended to your code.

sub vcl_recv {
    if (req.request != "GET" &&
        req.request != "HEAD" &&
        req.request != "PUT" &&
        req.request != "POST" &&
#        req.request != "TRACE" &&
#        req.request != "OPTIONS" &&
        req.request != "DELETE") {
        /* Non-RFC2616 or CONNECT which is weird. */
        pipe;
#        error 405 "Not allowed";
    }

    # remove useless headers
    unset req.http.Cookie;
    unset req.http.Authenticate;
    unset req.http.Accept;
    unset req.http.Accept-Language;
    unset req.http.Accept-Encoding;
    unset req.http.Accept-Charset;
    unset req.http.Referer;

    # remove args from url
    set req.url = regsub(req.url, "\?.*", "");

    # if content changing
    if (req.request == "DELETE" || req.request == "PUT") {
        if (client.ip ~ ournet) {
            lookup;
        } else {
            error 405 "Not allowed";
        }
    }

    if (req.request != "GET" && req.request != "HEAD") {
        /* We only deal with GET and HEAD by default */
        pass;
    }
    if (req.http.Authorization || req.http.Cookie) {
        /* Not cacheable by default */
        pass;
    }
    lookup;
}

sub vcl_pipe {
    pipe;
}

sub vcl_pass {
    pass;
}

sub vcl_hash {
    set req.hash += req.url;
    if (req.http.host) {
        set req.hash += req.http.host;
    } else {
        set req.hash += server.ip;
    }
    hash;
}

sub vcl_hit {
    # if content changing, remove it from cache
    if (req.request == "DELETE" || req.request == "PUT") {
        set obj.ttl = 0s;
        pass;
    }

    if (!obj.cacheable) {
        pass;
    }

    deliver;
}

sub vcl_miss {
    # if content changing, remove it from cache
    if (req.request == "DELETE" || req.request == "PUT") {
        pass;
    }

    fetch;
}

sub vcl_fetch {
    if (!obj.cacheable) {
        pass;
    }

    # don't cache redirects
    if (obj.status == 302) {
        pass;
    }

    if (obj.http.Set-Cookie) {
        pass;
    }
    set obj.prefetch = -30s;
    deliver;
}

sub vcl_deliver {
    deliver;
}

#sub vcl_discard {
#    /* XXX: Do not redefine vcl_discard{}, it is not yet supported */
#    discard;
#}
#
#sub vcl_prefetch {
#    /* XXX: Do not redefine vcl_prefetch{}, it is not yet supported */
#    fetch;
#}
#
#sub vcl_timeout {
#    /* XXX: Do not redefine vcl_timeout{}, it is not yet supported */
#    discard;
#}
#
sub vcl_error {
    set obj.http.Content-Type = "text/html; charset=utf-8";
    synthetic {" "} obj.status " " obj.response {"
Error "} obj.status " " obj.response {"
"} obj.response {"
Guru Meditation:
XID: "} req.xid {"
Varnish
"}; deliver; } -- Michael From tim at metaweb.com Sat Nov 22 00:13:42 2008 From: tim at metaweb.com (Tim Kientzle) Date: Fri, 21 Nov 2008 16:13:42 -0800 Subject: Inspect Request bodies? In-Reply-To: <9909.1225913850@critter.freebsd.dk> References: <9909.1225913850@critter.freebsd.dk> Message-ID: On Nov 5, 2008, at 11:37 AM, Poul-Henning Kamp wrote: > In message <55911D51-8964-4D13-9667-63CACCD1A9A4 at metaweb.com>, Tim > Kientzle wri > tes: > >> * I'll need code to actually read and store the POST body in memory >> (including updates to the PASS handler and other places to >> use the in-memory data when it's available) > > We sort of have this as point 15 on our shoppinglist: > > (http://varnish.projects.linpro.no/wiki/PostTwoShoppingList) > >> The first part looks trickier. Has anyone here tried anything >> similar? Any pointers (particular source files I should pay >> attention >> to or memory-management issues I should keep in mind)? > > It's pretty straightforward really: allocate an (non-hashed) > object, add storage to it and store the contents there. > > You can see pretty much all the code you need in cache_fetch.c and > for it to go into the tree as a patch, I would insist that the > code gets generalized so we use the same code in both directions, > rather than have two copies. The attached patch is a first step in that direction. It generalizes the existing fetch_straight and fetch_chunked so that the same code is used in both directions. This is mostly just code refactoring, although it does add support for chunked upload. "make check" still succeeds after this change; I haven't tried to add any new tests for this yet. My biggest concern is that I might have changed some of the return values here. I'm not yet clear on what return conventions varnish is using internally. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: varnish2.0.1-generalize-fetch.patch
Type: application/octet-stream
Size: 9628 bytes
Desc: not available
URL: 
-------------- next part --------------

From michael at dynamine.net Sun Nov 23 18:09:11 2008
From: michael at dynamine.net (Michael S. Fischer)
Date: Sun, 23 Nov 2008 10:09:11 -0800
Subject: Overflowed work requests
In-Reply-To: <1774354428.20081120230646@softsearch.ru>
References: <75cf5800811180206y2f00c7b8q1303640b92ab2f65@mail.gmail.com>
	<1774354428.20081120230646@softsearch.ru>
Message-ID: <7CDEF81C-A2B0-48E7-8FDE-A3DD02EF2D15@dynamine.net>

How many CPUs (including all cores) are in your systems?

--Michael

On Nov 20, 2008, at 12:06 PM, Michael wrote:
> Hi,
>
> PF> What does "overflowed work requests" in varnishstat signify ? If this
> PF> number is large is it a bad sign ?
>
> I have similar problem. "overflowed work requests" and "dropped work
> requests" is too large.
>
> FreeBSD 7.1-PRERELEASE
> varnish-2.0.2 from ports
>
> [full varnishstat output and VCL configuration quoted; trimmed]

From plfgoa at gmail.com Mon Nov 24 04:42:46 2008
From: plfgoa at gmail.com (Paras Fadte)
Date: Mon, 24 Nov 2008 10:12:46 +0530
Subject: varnish2.0.2 on Suse 10.3
In-Reply-To: <878wre9uf1.fsf@qurzaw.linpro.no>
References: <75cf5800811200134y7cbcc0bew2dd2f113ce710309@mail.gmail.com>
	<86db848d0811200242j5543663yc9fc09857aae2a81@mail.gmail.com>
	<75cf5800811200746x74e6ba8fyf9711e1680e28d1f@mail.gmail.com>
	<878wre9uf1.fsf@qurzaw.linpro.no>
Message-ID: <75cf5800811232042j77251155s452df10e499f8dc8@mail.gmail.com>

Built it from source; I got it from the varnish site, and the compiler
version is gcc 4.1.0 (SUSE Linux).

On Thu, Nov 20, 2008 at 11:00 PM, Tollef Fog Heen wrote:
> ]] "Paras Fadte"
>
> | I installed the same version on openSUSE 10.1 (X86-64) and it runs
> | fine. What could be the issue?
>
> Do you have a compiler installed? What happens if you run varnishd with
> the -C flag?
>
> --
> Tollef Fog Heen
> Redpill Linpro -- Changing the game!
> t: +47 21 54 41 73

From plfgoa at gmail.com Mon Nov 24 05:46:22 2008
From: plfgoa at gmail.com (Paras Fadte)
Date: Mon, 24 Nov 2008 11:16:22 +0530
Subject: Round robin mode
Message-ID: <75cf5800811232146o5a5493ceh641e0ced5bb7721a@mail.gmail.com>

Hi,

In varnish, when a director is specified of type round-robin, it will skip
an unhealthy backend until it is back healthy - is that correct?
varnish version is 2.0.2

-Paras

From plfgoa at gmail.com Mon Nov 24 05:48:37 2008
From: plfgoa at gmail.com (Paras Fadte)
Date: Mon, 24 Nov 2008 11:18:37 +0530
Subject: Overflowed work requests
In-Reply-To: <7CDEF81C-A2B0-48E7-8FDE-A3DD02EF2D15@dynamine.net>
References: <75cf5800811180206y2f00c7b8q1303640b92ab2f65@mail.gmail.com>
	<1774354428.20081120230646@softsearch.ru>
	<7CDEF81C-A2B0-48E7-8FDE-A3DD02EF2D15@dynamine.net>
Message-ID: <75cf5800811232148r61e26267je69a70e0bcbd0b69@mail.gmail.com>

The CPU is a quad-core Intel(R) Xeon(R) E5430 @ 2.66GHz.

On Sun, Nov 23, 2008 at 11:39 PM, Michael S. Fischer wrote:
> How many CPUs (including all cores) are in your systems?
>
> --Michael
>
> On Nov 20, 2008, at 12:06 PM, Michael wrote:
>> Hi,
>>
>> PF> What does "overflowed work requests" in varnishstat signify ? If this
>> PF> number is large is it a bad sign ?
>>
>> I have similar problem. "overflowed work requests" and "dropped work
>> requests" is too large.
>>
>> [full varnishstat output and VCL configuration quoted again; trimmed]

From tfheen at linpro.no Mon Nov 24 18:08:23 2008
From: tfheen at linpro.no (Tollef Fog Heen)
Date: Mon, 24 Nov 2008 19:08:23 +0100
Subject: Overflowed work requests
In-Reply-To: <1774354428.20081120230646@softsearch.ru> (Michael's message of
	"Thu, 20 Nov 2008 23:06:46 +0300")
References: <75cf5800811180206y2f00c7b8q1303640b92ab2f65@mail.gmail.com>
	<1774354428.20081120230646@softsearch.ru>
Message-ID: <87iqqdj8tk.fsf@qurzaw.linpro.no>

]] Michael

| Hi,
|
| PF> What does "overflowed work requests" in varnishstat signify ? If this
| PF> number is large is it a bad sign ?
|
| I have similar problem. "overflowed work requests" and "dropped work
| requests" is too large.

You might want to increase the maximum number of threads; look at the
thread_pool_max parameter.

--
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73

From tfheen at linpro.no Mon Nov 24 18:09:20 2008
From: tfheen at linpro.no (Tollef Fog Heen)
Date: Mon, 24 Nov 2008 19:09:20 +0100
Subject: Round robin mode
In-Reply-To: <75cf5800811232146o5a5493ceh641e0ced5bb7721a@mail.gmail.com>
	(Paras Fadte's message of "Mon, 24 Nov 2008 11:16:22 +0530")
References: <75cf5800811232146o5a5493ceh641e0ced5bb7721a@mail.gmail.com>
Message-ID: <87ej11j8rz.fsf@qurzaw.linpro.no>

]] "Paras Fadte"

| In varnish , when a director is specified of type round-robin , it
| will skip an unhealthy backend , till it is not back healthy, is that
| correct ?  varnish version is 2.0.2

Yes, both the random and round-robin directors skip unhealthy backends.
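For reference, the directors only skip backends they know are sick, so each backend needs a health probe. A round-robin setup looks roughly like this in Varnish 2.0 VCL (hostnames and probe URL are placeholders, not taken from this thread):

```vcl
backend web1 {
    .host = "web1.example.com";
    .port = "80";
    .probe = {
        .url = "/";
        .interval = 5s;
        .window = 5;
        .threshold = 3;
    }
}

backend web2 {
    .host = "web2.example.com";
    .port = "80";
    .probe = {
        .url = "/";
        .interval = 5s;
        .window = 5;
        .threshold = 3;
    }
}

director cluster round-robin {
    { .backend = web1; }
    { .backend = web2; }
}

sub vcl_recv {
    set req.backend = cluster;
}
```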
-- Tollef Fog Heen Redpill Linpro -- Changing the game! t: +47 21 54 41 73 From skye at F4.ca Tue Nov 25 19:18:37 2008 From: skye at F4.ca (Skye Poier Nott) Date: Tue, 25 Nov 2008 11:18:37 -0800 Subject: Malformed varnishncsa output Message-ID: <96F99BC3-86B2-4E4B-A4E2-B20150329DB7@F4.ca> I'm getting a lot of lines line this from varnishncsa: 10.151.1.1 - - [25/Nov/2008:19:11:14 +0000] "GET http:// vectordevhttp://vectordev/devsite/diagrams/tn-rev1.png HTTP/1.1" 200 60834 "-" "curl/7.16.3 (amd64-portbld-freebsd6.3) libcurl/7.16.3 OpenSSL/0.9.7e zlib/1.2.3" Notice the duplicated http://vectordevhttp://vectordev part after GET. Should I just change it to not print out lp->df_Host in varnishncsa.c or something? Thanks, Skye From apokalyptik at apokalyptik.com Tue Nov 25 22:37:14 2008 From: apokalyptik at apokalyptik.com (Demitrious Kelly) Date: Tue, 25 Nov 2008 14:37:14 -0800 Subject: is 2.0.2 not as efficient as 1.1.2 was? Message-ID: <492C7E1A.1060304@apokalyptik.com> Hello, We run Gravatar.com and use varnish to cache avatar responses. There are a ton of very small objects and lots of requests per second. Last week we were using 1.1.2 compiled against tcmalloc (-t 600 -w 1,4000,5 -h classic,500009 -p thread_pools 10 -p listen_depth 4096 -s malloc,16G). This used an nginx load balancer on a separate host as its back end which distributed varnish's requests to our pool of webs. All was well. This week we upgraded to 2.0.2 and are using varnish's back end & director configuration for the same work. What we are seeing is that 2.0.2 holds about 60% of the objects in the same amount of cache space as 1.1.2 did (we tried tcmalloc, jemalloc, and mmap.) This caused us quite a few problems after the upgrade as varnish would start spiking the load on the boxes into the hundreds. We attempted tuning the lru_interval (up) and obj_workspace (down) but we couldn't get varnish to hold the same data that it used to on the same machines. 
Right now we've reduced the time that we keep cached objects drastically, bringing our cache hit rate down to 92% from 96% which roughly doubled the requests (and load) on the web servers. It is, however, stable at this point. Obviously the idea of not keeping up with the latest versions of varnish is not what we want to do, however effectively doubling requirements for scaling the service is just as unappealing. So, what we're asking is... how do we get varnish 2 to be as efficient as varnish 1 was? We're glad to try things... It takes a while to fill up the cache to the point that it can cause problems so testing and reporting back will take some time, but we'd like this fixed and will put in some work. We're currently running the following cli options: -a 0.0.0.0:80 -f ... -P ... -T 10.1.94.43:6969 -t 600 -w 1,4000,5 -h classic,500009 -p thread_pools 10 -p listen_depth 4096 -s malloc,16G And our VCL looks like this (with most of the webs taken out for brevity since they're repeated verbatim with only numbers changed) backend web11 { .host = "xxx"; .port = "8088"; .probe = { .url = "xxx"; .timeout = 50 ms; .interval = 5s; .window = 2; .threshold = 1; } } backend web12 { .host = "xxx"; .port = "8088"; .probe = { .url = "xxx"; .timeout = 50 ms; .interval = 5s; .window = 2; .threshold = 1; } } director default random { .retries = 3; { .backend = web11; .weight = 1; } { .backend = web12; .weight = 1; } } sub vcl_recv { set req.backend = default; set req.grace = 30s; if ( req.url ~ "^/(avatar|userimage)" && req.http.cookie ) { lookup; } } sub vcl_fetch { if (obj.ttl < 600s) { set obj.ttl = 600s; } if (obj.status == 404) { set obj.ttl = 30s; } if (obj.status == 500 || obj.status == 503 ) { pass; } set obj.grace = 30s; deliver; } sub vcl_deliver { remove resp.http.Expires; remove resp.http.Cache-Control; set resp.http.Cache-Control = "public, max-age=600, proxy-revalidate"; deliver; } From michael at orinoco.jp Wed Nov 26 04:53:25 2008 From: michael at orinoco.jp 
(Michael Moyle) Date: Wed, 26 Nov 2008 13:53:25 +0900 Subject: round-robin director In-Reply-To: <87ej16vh2b.fsf@qurzaw.linpro.no> References: <491D9205.6080701@linpro.no> <49231195.1060607@linpro.no> <87ej16vh2b.fsf@qurzaw.linpro.no> Message-ID: Tollef, > I am fairly sure we have not implemented sticky sessions, so I am not > sure why you are seeing this behaviour. Thanks for confirming that. Sticky session are not there. The application I was testing was setting the url and driving it to just one host after login. I fixed that and confirmed sticky sessions are not implemented. cheers, Michael From des at des.no Wed Nov 26 09:05:12 2008 From: des at des.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Wed, 26 Nov 2008 10:05:12 +0100 Subject: Malformed varnishncsa output In-Reply-To: <96F99BC3-86B2-4E4B-A4E2-B20150329DB7@F4.ca> (Skye Poier Nott's message of "Tue, 25 Nov 2008 11:18:37 -0800") References: <96F99BC3-86B2-4E4B-A4E2-B20150329DB7@F4.ca> Message-ID: <86k5aq97sn.fsf@ds4.des.no> Skye Poier Nott writes: > I'm getting a lot of lines line this from varnishncsa: > > 10.151.1.1 - - [25/Nov/2008:19:11:14 +0000] "GET http:// > vectordevhttp://vectordev/devsite/diagrams/tn-rev1.png HTTP/1.1" 200 > 60834 "-" "curl/7.16.3 (amd64-portbld-freebsd6.3) libcurl/7.16.3 > OpenSSL/0.9.7e zlib/1.2.3" > > Notice the duplicated http://vectordevhttp://vectordev part after GET. Varnish (and varnishncsa) expect the request URI to be an absolute path, not an absolute URI as in this case. I don't know of any other user agent that behaves like this, and RFC2616 indicates that HTTP/1.1 user agents should not use an absolute URI as the request URI unless talking to a proxy. However, it also indicates that this might change in future protocol versions, and that servers should support absolute URIs in the interest of forward compatibility. The simplest solution is to strip off everything but the path and query string. 
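[Editor's note: in VCL, that simple stripping might look roughly like the following sketch. It is illustrative only and not confirmed against any particular Varnish release; regsub() takes the string, a regular expression, and the replacement.]

```vcl
sub vcl_recv {
    # If the client sent an absolute URI ("GET http://host/path"),
    # reduce req.url to path + query string before lookup.
    if (req.url ~ "^[a-z]+://") {
        set req.url = regsub(req.url, "^[a-z]+://[^/]+", "");
    }
}
```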
A more advanced solution would be to validate it against the Host header, and reject the request if they don't match; however, validation could get tricky if the host part includes a user and / or password, or a port number. (Note that I've argued for URI parsing and validation from the start...) DES -- Dag-Erling Sm?rgrav - des at des.no From marcussmith at britarch.ac.uk Wed Nov 26 10:54:30 2008 From: marcussmith at britarch.ac.uk (Marcus Smith) Date: Wed, 26 Nov 2008 10:54:30 +0000 Subject: Logging for multiple sites Message-ID: <492D2AE6.8030500@britarch.ac.uk> Dear list, I currently have a machine running Varnish (2.0.2) set up in front of several different websites. I would like to be able to collect access logs for each of the different sites separately. Obviously the Apache logs for these sites will be incomplete, as varnishd will serve cache hits without passing them on. With a single instance of varnishd handling all the sites, varnishncsa outputs to a single log file with logs for all the sites together. I would like each site to be logged to a separate file, in the same way that Apache's virtual hosts can have their own log files. What is the recommended way to achieve this with Varnish? At first I considered running a separate named instance of varnishd and varnishncsa for each site, thus: varnishd -n example1 -a "www.example1.com example1.com" -f /path/to/vcl/example1.vcl varnishd -n example2 -a "www.example2.com example2.com" -f /path/to/vcl/example2.vcl ... varnishncsa -n example1 -D -a -w /path/to/logs/example1.log varnishncsa -n example2 -D -a -w /path/to/logs/example2.log ... ...but of course this will not work because once the first instance of varnish for example1.com has bound itself to port 80, the second instance will not start unless it's on a different port. So I'm stuck. How should I be doing this? 
Many thanks in advance, Marcus -- Marcus Smith Information Officer The Council for British Archaeology From marcussmith at britarch.ac.uk Wed Nov 26 12:58:17 2008 From: marcussmith at britarch.ac.uk (Marcus Smith) Date: Wed, 26 Nov 2008 12:58:17 +0000 Subject: Logging for multiple sites In-Reply-To: <48d8853d0811260406g3427e1f0s3e98b1c110f497be@mail.gmail.com> References: <492D2AE6.8030500@britarch.ac.uk> <48d8853d0811260406g3427e1f0s3e98b1c110f497be@mail.gmail.com> Message-ID: <492D47E9.6050809@britarch.ac.uk> David (Kitai) Cruz wrote: > Maybe create a simple script to separate logs???? > Every URL is logged using http://domain/uri structure. > > Really simple script to program. So you mean set varnishncsa to output everything to one log file (or pipe), and then use something like sed or perl to split the log by domain? If that's really the best way, then that's fine by me. I was just hoping there might be a way of writing to separate log files from the start, rather than chopping them up after the fact. Many thanks, Marcus -- Marcus Smith Information Officer The Council for British Archaeology From cidcampeador at gmail.com Wed Nov 26 12:06:30 2008 From: cidcampeador at gmail.com (David (Kitai) Cruz) Date: Wed, 26 Nov 2008 13:06:30 +0100 Subject: Logging for multiple sites In-Reply-To: <492D2AE6.8030500@britarch.ac.uk> References: <492D2AE6.8030500@britarch.ac.uk> Message-ID: <48d8853d0811260406g3427e1f0s3e98b1c110f497be@mail.gmail.com> Maybe create a simple script to separate logs???? Every URL is logged using http://domain/uri structure. Really simple script to program. Kitai 2008/11/26 Marcus Smith : > Dear list, > > I currently have a machine running Varnish (2.0.2) set up in front of > several different websites. I would like to be able to collect access > logs for each of the different sites separately. > > Obviously the Apache logs for these sites will be incomplete, as > varnishd will serve cache hits without passing them on. 
With a single > instance of varnishd handling all the sites, varnishncsa outputs to a > single log file with logs for all the sites together. I would like each > site to be logged to a separate file, in the same way that Apache's > virtual hosts can have their own log files. What is the recommended way > to achieve this with Varnish? > > At first I considered running a separate named instance of varnishd and > varnishncsa for each site, thus: > > varnishd -n example1 -a "www.example1.com example1.com" -f > /path/to/vcl/example1.vcl > varnishd -n example2 -a "www.example2.com example2.com" -f > /path/to/vcl/example2.vcl > ... > varnishncsa -n example1 -D -a -w /path/to/logs/example1.log > varnishncsa -n example2 -D -a -w /path/to/logs/example2.log > ... > > ...but of course this will not work because once the first instance of > varnish for example1.com has bound itself to port 80, the second > instance will not start unless it's on a different port. > > So I'm stuck. How should I be doing this? > > Many thanks in advance, > Marcus > > -- > Marcus Smith > Information Officer > The Council for British Archaeology > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > From des at des.no Wed Nov 26 14:12:44 2008 From: des at des.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Wed, 26 Nov 2008 15:12:44 +0100 Subject: Logging for multiple sites In-Reply-To: <492D47E9.6050809@britarch.ac.uk> (Marcus Smith's message of "Wed, 26 Nov 2008 12:58:17 +0000") References: <492D2AE6.8030500@britarch.ac.uk> <48d8853d0811260406g3427e1f0s3e98b1c110f497be@mail.gmail.com> <492D47E9.6050809@britarch.ac.uk> Message-ID: <86abbm7ezn.fsf@ds4.des.no> Marcus Smith writes: > So you mean set varnishncsa to output everything to one log file (or > pipe), and then use something like sed or perl to split the log by > domain? If that's really the best way, then that's fine by me. 
I was > just hoping there might be a way of writing to separate log files from > the start, rather than chopping them up after the fact. Actually varnishncsa uses the same log filtering / selection code as varnishlog, which *can* select requests based on URL. Extending varnishncsa to do the same should not be too hard. Other issues with varnishncsa / varnishlog: varnishlog doesn't allow -o and -w to be used at the same time. There is no reason why it shouldn't. If it did, you could play tricks like this: varnishlog -w /dev/stdout -c -o RxURL foo | varnishncsa -r /dev/stdin DES -- Dag-Erling Sm?rgrav - des at des.no From miles at jamkit.com Wed Nov 26 14:10:02 2008 From: miles at jamkit.com (Miles) Date: Wed, 26 Nov 2008 14:10:02 +0000 Subject: Logged-in users Message-ID: Hi, I have a site where users can log in. This sets a cookie with their encrypted login details, so they can be authenticated. There are a small number of pages which are user-specific ("change your details" forms, etc), and these are set not to cache. When a user is logged in, a message is shown at the top of the page "You are now logged in". However, nothing on the page depends on the individual user. My question is, how can I organise the cache to have the most cache hits, given that there are effectively two versions of each page - one for logged in users, and one for anonymous users. Thanks, Miles From miles at jamkit.com Wed Nov 26 14:11:24 2008 From: miles at jamkit.com (Miles) Date: Wed, 26 Nov 2008 14:11:24 +0000 Subject: Logged-in users Message-ID: Hi, I have a site where users can log in. This sets a cookie with their encrypted login details, so they can be authenticated. There are a small number of pages which are user-specific ("change your details" forms, etc), and these are set not to cache. When a user is logged in, a message is shown at the top of the page "You are now logged in". However, nothing on the page depends on the individual user. 
My question is, how can I organise the cache to have the most cache hits, given that there are effectively two versions of each page - one for logged in users, and one for anonymous users. I want to specifically avoid each user having their own version of the page stored in the cache. Thanks in advance for any wisdom anyone can share! Miles From marcussmith at britarch.ac.uk Wed Nov 26 15:25:07 2008 From: marcussmith at britarch.ac.uk (Marcus Smith) Date: Wed, 26 Nov 2008 15:25:07 +0000 Subject: Logging for multiple sites In-Reply-To: <86abbm7ezn.fsf@ds4.des.no> References: <492D2AE6.8030500@britarch.ac.uk> <48d8853d0811260406g3427e1f0s3e98b1c110f497be@mail.gmail.com> <492D47E9.6050809@britarch.ac.uk> <86abbm7ezn.fsf@ds4.des.no> Message-ID: <492D6A53.9000507@britarch.ac.uk> Dag-Erling Sm?rgrav wrote: > Actually varnishncsa uses the same log filtering / selection code as > varnishlog, which *can* select requests based on URL. Extending > varnishncsa to do the same should not be too hard. > > Other issues with varnishncsa / varnishlog: varnishlog doesn't allow -o > and -w to be used at the same time. There is no reason why it > shouldn't. If it did, you could play tricks like this: > > varnishlog -w /dev/stdout -c -o RxURL foo | varnishncsa -r /dev/stdin Ah, I see! Hmmm. Well in that case, is there any reason why I shouldn't simply do something like: varnishlog -c -o RxHeader "Host: (www\.)?example1\.com" > /path/to/logs/example1.log & varnishlog -c -o RxHeader "Host: (www\.)?example2\.com" > /path/to/logs/example2.log & ...etc for each site, logging each to a separate varnish log file? I could then use varnishncsa's '-r' option to convert them into NCSA format once the logs are rotated out. It seems like that would do pretty much what I want. 
Many thanks, Marcus -- Marcus Smith Information Officer The Council for British Archaeology From cidcampeador at gmail.com Wed Nov 26 16:09:04 2008 From: cidcampeador at gmail.com (David (Kitai) Cruz) Date: Wed, 26 Nov 2008 17:09:04 +0100 Subject: Logging for multiple sites In-Reply-To: <492D6A53.9000507@britarch.ac.uk> References: <492D2AE6.8030500@britarch.ac.uk> <48d8853d0811260406g3427e1f0s3e98b1c110f497be@mail.gmail.com> <492D47E9.6050809@britarch.ac.uk> <86abbm7ezn.fsf@ds4.des.no> <492D6A53.9000507@britarch.ac.uk> Message-ID: <48d8853d0811260809p558af99bgee8f1c07a0b50f47@mail.gmail.com> So, if i've got 900 domains, do i have to start 900 varnishlog processes? Interesting....:;-) Kitai 2008/11/26 Marcus Smith : > Dag-Erling Sm?rgrav wrote: >> Actually varnishncsa uses the same log filtering / selection code as >> varnishlog, which *can* select requests based on URL. Extending >> varnishncsa to do the same should not be too hard. >> >> Other issues with varnishncsa / varnishlog: varnishlog doesn't allow -o >> and -w to be used at the same time. There is no reason why it >> shouldn't. If it did, you could play tricks like this: >> >> varnishlog -w /dev/stdout -c -o RxURL foo | varnishncsa -r /dev/stdin > > Ah, I see! Hmmm. > > Well in that case, is there any reason why I shouldn't simply do > something like: > > varnishlog -c -o RxHeader "Host: (www\.)?example1\.com" > > /path/to/logs/example1.log & > > varnishlog -c -o RxHeader "Host: (www\.)?example2\.com" > > /path/to/logs/example2.log & > > ...etc for each site, logging each to a separate varnish log file? > > I could then use varnishncsa's '-r' option to convert them into NCSA > format once the logs are rotated out. It seems like that would do > pretty much what I want. 
> > Many thanks, > Marcus > > -- > Marcus Smith > Information Officer > The Council for British Archaeology > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > From sfoutrel at bcstechno.com Wed Nov 26 16:32:27 2008 From: sfoutrel at bcstechno.com (=?iso-8859-1?Q?S=E9bastien_FOUTREL?=) Date: Wed, 26 Nov 2008 17:32:27 +0100 Subject: Logging for multiple sites In-Reply-To: <48d8853d0811260809p558af99bgee8f1c07a0b50f47@mail.gmail.com> References: <492D2AE6.8030500@britarch.ac.uk><48d8853d0811260406g3427e1f0s3e98b1c110f497be@mail.gmail.com><492D47E9.6050809@britarch.ac.uk> <86abbm7ezn.fsf@ds4.des.no><492D6A53.9000507@britarch.ac.uk> <48d8853d0811260809p558af99bgee8f1c07a0b50f47@mail.gmail.com> Message-ID: Hello, What about doing a massive ncsa log, then parse it for each domain with your stats software ? Or maybe splitting it in different logs in post production ? -- S?bastien FOUTREL Responsable Production BCS Technologies 45 Rue Delizy 93692 PANTIN Cedex. Bur : 01.41.83.17.20 Fax : 01.41.83.17.29 -----Message d'origine----- De?: varnish-misc-bounces at projects.linpro.no [mailto:varnish-misc-bounces at projects.linpro.no] De la part de David (Kitai) Cruz Envoy??: mercredi 26 novembre 2008 17:09 ??: Marcus Smith Cc?: varnish-misc at projects.linpro.no Objet?: Re: Logging for multiple sites So, if i've got 900 domains, do i have to start 900 varnishlog processes? Interesting....:;-) Kitai 2008/11/26 Marcus Smith : > Dag-Erling Sm?rgrav wrote: >> Actually varnishncsa uses the same log filtering / selection code as >> varnishlog, which *can* select requests based on URL. Extending >> varnishncsa to do the same should not be too hard. >> >> Other issues with varnishncsa / varnishlog: varnishlog doesn't allow -o >> and -w to be used at the same time. There is no reason why it >> shouldn't. 
If it did, you could play tricks like this: >> >> varnishlog -w /dev/stdout -c -o RxURL foo | varnishncsa -r /dev/stdin > > Ah, I see! Hmmm. > > Well in that case, is there any reason why I shouldn't simply do > something like: > > varnishlog -c -o RxHeader "Host: (www\.)?example1\.com" > > /path/to/logs/example1.log & > > varnishlog -c -o RxHeader "Host: (www\.)?example2\.com" > > /path/to/logs/example2.log & > > ...etc for each site, logging each to a separate varnish log file? > > I could then use varnishncsa's '-r' option to convert them into NCSA > format once the logs are rotated out. It seems like that would do > pretty much what I want. > > Many thanks, > Marcus > > -- > Marcus Smith > Information Officer > The Council for British Archaeology > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > _______________________________________________ varnish-misc mailing list varnish-misc at projects.linpro.no http://projects.linpro.no/mailman/listinfo/varnish-misc From marcussmith at britarch.ac.uk Wed Nov 26 16:40:51 2008 From: marcussmith at britarch.ac.uk (Marcus Smith) Date: Wed, 26 Nov 2008 16:40:51 +0000 Subject: Logging for multiple sites In-Reply-To: References: <492D2AE6.8030500@britarch.ac.uk><48d8853d0811260406g3427e1f0s3e98b1c110f497be@mail.gmail.com><492D47E9.6050809@britarch.ac.uk> <86abbm7ezn.fsf@ds4.des.no><492D6A53.9000507@britarch.ac.uk> <48d8853d0811260809p558af99bgee8f1c07a0b50f47@mail.gmail.com> Message-ID: <492D7C13.4010808@britarch.ac.uk> S?bastien FOUTREL wrote: > Hello, > What about doing a massive ncsa log, then parse it for each domain with your stats software ? > Or maybe splitting it in different logs in post production ? This was what Kitai suggested, and I agree that this would probably be the best way to do it in the absence of any suitable inbuilt varnish functionality. 
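[Editor's note: since varnishncsa logs the full URL including the host (as noted earlier in this thread), a post-production split can be sketched with a short awk script. The sample log lines, the /tmp paths, and the field position are illustrative assumptions; adjust the field number to your own log format.]

```shell
# Hypothetical combined NCSA log; the request URL (field 7) carries the vhost.
cat > /tmp/combined.log <<'EOF'
10.0.0.1 - - [26/Nov/2008:10:00:00 +0000] "GET http://www.example1.com/index.html HTTP/1.1" 200 512 "-" "curl"
10.0.0.2 - - [26/Nov/2008:10:00:01 +0000] "GET http://www.example2.com/a.png HTTP/1.1" 200 128 "-" "curl"
EOF

# One output file per domain: strip the scheme and the path from the
# request URL, then write each line to /tmp/<host>.log.
awk '{
    host = $7
    sub(/^https?:\/\//, "", host)
    sub(/\/.*/, "", host)
    print > ("/tmp/" host ".log")
}' /tmp/combined.log
```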
David (Kitai) Cruz wrote: > So, if i've got 900 domains, do i have to start 900 varnishlog processes? Okay, fair enough, it doesn't really scale. :) (And I put '>' where I meant '>>' - don't want to overwrite existing logs!) But I've probably only got about a dozen or so domains, it *could* have worked for me... I think a single large NCSA log file that I can then hack apart with Perl or some webstats/log-parser program is probably going to be easiest, as above. Many thanks, Marcus -- Marcus Smith Information Officer The Council for British Archaeology From des at des.no Wed Nov 26 17:41:56 2008 From: des at des.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Wed, 26 Nov 2008 18:41:56 +0100 Subject: Logging for multiple sites In-Reply-To: <492D6A53.9000507@britarch.ac.uk> (Marcus Smith's message of "Wed, 26 Nov 2008 15:25:07 +0000") References: <492D2AE6.8030500@britarch.ac.uk> <48d8853d0811260406g3427e1f0s3e98b1c110f497be@mail.gmail.com> <492D47E9.6050809@britarch.ac.uk> <86abbm7ezn.fsf@ds4.des.no> <492D6A53.9000507@britarch.ac.uk> Message-ID: <86skpe5qqj.fsf@ds4.des.no> Marcus Smith writes: > Well in that case, is there any reason why I shouldn't simply do > something like: > > varnishlog -c -o RxHeader "Host: (www\.)?example1\.com" > > /path/to/logs/example1.log & > > varnishlog -c -o RxHeader "Host: (www\.)?example2\.com" > > /path/to/logs/example2.log & > > ...etc for each site, logging each to a separate varnish log file? > > I could then use varnishncsa's '-r' option to convert them into NCSA > format once the logs are rotated out. It seems like that would do > pretty much what I want. No, -r can only read binary log files that were previously written with -w, and you can't use -w with -o. Both varnishlog and varnishncsa could use some attention: the former to support combining -w with -o, and the latter to support request filtering. 
DES -- Dag-Erling Sm?rgrav - des at des.no From miles at jamkit.com Wed Nov 26 20:31:19 2008 From: miles at jamkit.com (Miles) Date: Wed, 26 Nov 2008 20:31:19 +0000 Subject: Logged-in users In-Reply-To: References: Message-ID: <492DB217.6010302@jamkit.com> Miles wrote: > Hi, > > I have a site where users can log in. This sets a cookie with their > encrypted login details, so they can be authenticated. There are a > small number of pages which are user-specific ("change your details" > forms, etc), and these are set not to cache. > > When a user is logged in, a message is shown at the top of the page "You > are now logged in". However, nothing on the page depends on the > individual user. > > My question is, how can I organise the cache to have the most cache > hits, given that there are effectively two versions of each page - one > for logged in users, and one for anonymous users. I want to > specifically avoid each user having their own version of the page stored > in the cache. > > Thanks in advance for any wisdom anyone can share! > > Miles Thanks to everyone who suggested using ESI - I may have to use this, but would quite like to avoid it, as it's useful to be able to run the app without varnish in front for development/testing. I wondered whether it was possible to use vcl_hash for my purposes, as follows: sub vcl_hash { //hash the object with url+host set req.hash += req.url; set req.hash += req.http.host; # see if the user has a cookie to indicate they are logged in if req.http.cookie ~ '__ac=': set req.hash += 'authenticated'; else: set req.hash += 'anonymous' hash; } Would this give me the two representations that I require for each page - or am I going down a route that will turn out bad?! I couldn't find much information about vcl_hash, so I'm not sure if I'm barking up the wrong tree or not... 
Regards, Miles From tim at metaweb.com Wed Nov 26 21:14:36 2008 From: tim at metaweb.com (Tim Kientzle) Date: Wed, 26 Nov 2008 13:14:36 -0800 Subject: Logged-in users In-Reply-To: <492DB217.6010302@jamkit.com> References: <492DB217.6010302@jamkit.com> Message-ID: Another approach is to simply use a small bit of Javascript. It's easy to test for the existence of the cookie in Javascript and set that text conditionally. Then you have only one copy of the page to be cached. The problem with the approach you've outlined here is that other downstream caches won't understand the difference (although most will simply refuse to cache any responses if the request had a cookie header). Whereas the Javascript approach also allows downstream caches to cache everything efficiently. Tim On Nov 26, 2008, at 12:31 PM, Miles wrote: > Miles wrote: >> Hi, >> >> I have a site where users can log in. This sets a cookie with their >> encrypted login details, so they can be authenticated. There are a >> small number of pages which are user-specific ("change your details" >> forms, etc), and these are set not to cache. >> >> When a user is logged in, a message is shown at the top of the page >> "You >> are now logged in". However, nothing on the page depends on the >> individual user. >> >> My question is, how can I organise the cache to have the most cache >> hits, given that there are effectively two versions of each page - >> one >> for logged in users, and one for anonymous users. I want to >> specifically avoid each user having their own version of the page >> stored >> in the cache. >> >> Thanks in advance for any wisdom anyone can share! >> >> Miles > > Thanks to everyone who suggested using ESI - I may have to use this, > but > would quite like to avoid it, as it's useful to be able to run the app > without varnish in front for development/testing. 
> > I wondered whether it was possible to use vcl_hash for my purposes, as > follows: > > sub vcl_hash { > > //hash the object with url+host > set req.hash += req.url; > set req.hash += req.http.host; > > # see if the user has a cookie to indicate they are logged in > if req.http.cookie ~ '__ac=': > set req.hash += 'authenticated'; > else: > set req.hash += 'anonymous' > hash; > > } > > Would this give me the two representations that I require for each > page > - or am I going down a route that will turn out bad?! I couldn't find > much information about vcl_hash, so I'm not sure if I'm barking up the > wrong tree or not... > > Regards, > > Miles > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc From darryl.dixon at winterhouseconsulting.com Wed Nov 26 21:45:37 2008 From: darryl.dixon at winterhouseconsulting.com (Darryl Dixon - Winterhouse Consulting) Date: Thu, 27 Nov 2008 10:45:37 +1300 (NZDT) Subject: Logged-in users In-Reply-To: References: <492DB217.6010302@jamkit.com> Message-ID: <65106.58.28.153.120.1227735937.squirrel@services.directender.co.nz> We do both depending on scenario: we use ajax to update parts of a page after-delivery (poor mans ESI ;) as suggested by Tim, and we also have a custom vcl_hash that caches different copies of pages depending on various cookies and other conditions (much as Miles suggests). Both fit depending on the use-case. Miles: FWIW, the CacheFu product for Plone may assist you to maximise the caching potential of your site without too many custom tweaks to your Varnish rules. regards, Darryl Dixon Winterhouse Consulting Ltd http://www.winterhouseconsulting.com > Another approach is to simply use a small bit of Javascript. It's > easy to test for the existence of the cookie in Javascript and > set that text conditionally. > > Then you have only one copy of the page to be cached. 
> > The problem with the approach you've outlined here is > that other downstream caches won't understand the difference > (although most will simply refuse to cache any responses > if the request had a cookie header). Whereas the Javascript > approach also allows downstream caches to cache everything > efficiently. > > Tim > > > > On Nov 26, 2008, at 12:31 PM, Miles wrote: > >> Miles wrote: >>> Hi, >>> >>> I have a site where users can log in. This sets a cookie with their >>> encrypted login details, so they can be authenticated. There are a >>> small number of pages which are user-specific ("change your details" >>> forms, etc), and these are set not to cache. >>> >>> When a user is logged in, a message is shown at the top of the page >>> "You >>> are now logged in". However, nothing on the page depends on the >>> individual user. >>> >>> My question is, how can I organise the cache to have the most cache >>> hits, given that there are effectively two versions of each page - >>> one >>> for logged in users, and one for anonymous users. I want to >>> specifically avoid each user having their own version of the page >>> stored >>> in the cache. >>> >>> Thanks in advance for any wisdom anyone can share! >>> >>> Miles >> >> Thanks to everyone who suggested using ESI - I may have to use this, >> but >> would quite like to avoid it, as it's useful to be able to run the app >> without varnish in front for development/testing. >> >> I wondered whether it was possible to use vcl_hash for my purposes, as >> follows: >> >> sub vcl_hash { >> >> //hash the object with url+host >> set req.hash += req.url; >> set req.hash += req.http.host; >> >> # see if the user has a cookie to indicate they are logged in >> if req.http.cookie ~ '__ac=': >> set req.hash += 'authenticated'; >> else: >> set req.hash += 'anonymous' >> hash; >> >> } >> >> Would this give me the two representations that I require for each >> page >> - or am I going down a route that will turn out bad?! 
I couldn't find >> much information about vcl_hash, so I'm not sure if I'm barking up the >> wrong tree or not... >> >> Regards, >> >> Miles >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at projects.linpro.no >> http://projects.linpro.no/mailman/listinfo/varnish-misc > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > From plfgoa at gmail.com Thu Nov 27 05:58:32 2008 From: plfgoa at gmail.com (Paras Fadte) Date: Thu, 27 Nov 2008 11:28:32 +0530 Subject: varnish2.0.2 on Suse 10.3 In-Reply-To: <75cf5800811232042j77251155s452df10e499f8dc8@mail.gmail.com> References: <75cf5800811200134y7cbcc0bew2dd2f113ce710309@mail.gmail.com> <86db848d0811200242j5543663yc9fc09857aae2a81@mail.gmail.com> <75cf5800811200746x74e6ba8fyf9711e1680e28d1f@mail.gmail.com> <878wre9uf1.fsf@qurzaw.linpro.no> <75cf5800811232042j77251155s452df10e499f8dc8@mail.gmail.com> Message-ID: <75cf5800811262158n7819ac40m83f4ddea2d71093e@mail.gmail.com> Hi, When I try to start varnish 2.0.2 in debug mode , I get the following message ./varnishd -d -a :9999 -f /home/accel/varnish/etc/varnish.vcl -s file,/home/accel/varnish/var/cache,1G storage_file: filename: /home/accel/varnish/var/cache size 1024 MB. Using old SHMFILE Debugging mode, enter "start" to start child start child (28673) Started Pushing vcls failed: Internal error: No VCL_conf symbol Child (28673) said Closed fds: 4 9 10 12 13 Child (28673) said Child starts Child (28673) said managed to mmap 1073741824 bytes of 1073741824 Child (28673) said Ready unlink ./vcl.1P9zoqAU.so OS: openSUSE 10.3 (X86-64) Varnish : 2.0.2 (built from source) gcc version 4.2.1 (SUSE Linux) What could be the issue here ? Thank you. -Paras On Mon, Nov 24, 2008 at 10:12 AM, Paras Fadte wrote: > Built it from source , got it from varnish site and compiler version > is gcc version 4.1.0 (SUSE Linux). 
>
> On Thu, Nov 20, 2008 at 11:00 PM, Tollef Fog Heen wrote:
>> ]] "Paras Fadte"
>>
>> | I installed the same version on openSUSE 10.1 (X86-64) and it runs
>> | fine. What could be the issue?
>>
>> Do you have a compiler installed? What happens if you run varnishd with
>> the -C flag?
>>
>> --
>> Tollef Fog Heen
>> Redpill Linpro -- Changing the game!
>> t: +47 21 54 41 73
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc at projects.linpro.no
>> http://projects.linpro.no/mailman/listinfo/varnish-misc
>>
>

From bm at turtle-entertainment.de  Wed Nov 26 14:38:44 2008
From: bm at turtle-entertainment.de (Bjoern Metzdorf)
Date: Wed, 26 Nov 2008 15:38:44 +0100
Subject: Logged-in users
In-Reply-To: 
References: 
Message-ID: <492D5F74.2050407@turtle-entertainment.de>

Have a look at ESI:

http://varnish.projects.linpro.no/wiki/ESIfeatures

Regards,
Bjoern

Miles wrote:
> Hi,
>
> I have a site where users can log in. This sets a cookie with their
> encrypted login details, so they can be authenticated. There are a
> small number of pages which are user-specific ("change your details"
> forms, etc), and these are set not to cache.
>
> When a user is logged in, a message is shown at the top of the page "You
> are now logged in". However, nothing on the page depends on the
> individual user.
>
> My question is, how can I organise the cache to have the most cache
> hits, given that there are effectively two versions of each page - one
> for logged in users, and one for anonymous users. I want to
> specifically avoid each user having their own version of the page stored
> in the cache.
>
> Thanks in advance for any wisdom anyone can share!
>
> Miles
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc

From perbu at linpro.no  Thu Nov 27 10:22:16 2008
From: perbu at linpro.no (Per Buer)
Date: Thu, 27 Nov 2008 11:22:16 +0100
Subject: Logged-in users
In-Reply-To: 
References: 
Message-ID: <492E74D8.8020606@linpro.no>

Miles wrote:
> My question is, how can I organise the cache to have the most cache
> hits, given that there are effectively two versions of each page - one
> for logged in users, and one for anonymous users.

ESI has already been pointed out as the ideal way of handling this. You
might also consider issuing a "Vary: $USER" header so Varnish will keep
a copy for each user (this might bloat your cache, so be careful), or
set a custom header for logged-in users (X-foo: logged in as bar) and
then pick this header up in vcl_recv and do a "pass" on the relevant
request.

--
Per Buer - Head of Infrastructure and Operations - Redpill Linpro
Phone: 21 54 41 21 - Mobile: 958 39 117
http://linpro.no/ | http://redpill.se/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 252 bytes
Desc: OpenPGP digital signature
URL: 

From miles at jamkit.com  Wed Nov 26 20:31:19 2008
From: miles at jamkit.com (Miles)
Date: Wed, 26 Nov 2008 20:31:19 +0000
Subject: Logged-in users
In-Reply-To: 
References: 
Message-ID: <492DB217.6010302@jamkit.com>

Miles wrote:
> Hi,
>
> I have a site where users can log in. This sets a cookie with their
> encrypted login details, so they can be authenticated. There are a
> small number of pages which are user-specific ("change your details"
> forms, etc), and these are set not to cache.
>
> When a user is logged in, a message is shown at the top of the page "You
> are now logged in". However, nothing on the page depends on the
> individual user.
>
> My question is, how can I organise the cache to have the most cache
> hits, given that there are effectively two versions of each page - one
> for logged in users, and one for anonymous users. I want to
> specifically avoid each user having their own version of the page
> stored in the cache.
>
> Thanks in advance for any wisdom anyone can share!
>
> Miles

Thanks to everyone who suggested using ESI - I may have to use this, but
would quite like to avoid it, as it's useful to be able to run the app
without varnish in front for development/testing.

I wondered whether it was possible to use vcl_hash for my purposes, as
follows:

sub vcl_hash {

    //hash the object with url+host
    set req.hash += req.url;
    set req.hash += req.http.host;

    # see if the user has a cookie to indicate they are logged in
    if req.http.cookie ~ '__ac=':
        set req.hash += 'authenticated';
    else:
        set req.hash += 'anonymous'
    hash;

}

Would this give me the two representations that I require for each page
- or am I going down a route that will turn out bad?! I couldn't find
much information about vcl_hash, so I'm not sure if I'm barking up the
wrong tree or not...

Regards,

Miles
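The sketch above captures the right idea, but its Python-style if/else is not valid VCL: conditions take parentheses and a braced block, and string literals use double quotes. A rough equivalent in the syntax Varnish 2.0 accepts (untested; `__ac` is the Plone login cookie named in the message, and the URL+Host part mirrors the default vcl_hash) would be:

```
sub vcl_hash {
    # hash the object on URL + Host, as the default vcl_hash does
    set req.hash += req.url;
    set req.hash += req.http.host;

    # add a marker so each page gets exactly two cached variants:
    # one shared by all logged-in users, one for anonymous users
    if (req.http.Cookie ~ "__ac=") {
        set req.hash += "authenticated";
    } else {
        set req.hash += "anonymous";
    }

    hash;
}
```

With this, the first request carrying an `__ac` cookie populates the "authenticated" variant that all other logged-in users then share, which matches the stated goal that nothing on the page depends on the individual user. Note that vcl_recv must still allow a lookup on requests with cookies (the default VCL does a pass on any request with a Cookie header), or vcl_hash is never reached.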