From ssm at linpro.no Tue Apr 1 05:34:01 2008 From: ssm at linpro.no (Stig Sandbeck Mathisen) Date: Tue, 01 Apr 2008 07:34:01 +0200 Subject: production ready devel snapshot? In-Reply-To: <200803312010.06804.ottolski@web.de> (Sascha Ottolski's message of "Mon, 31 Mar 2008 20:10:06 +0200") References: <200803312010.06804.ottolski@web.de> Message-ID: <7xabkennp2.fsf@iostat.linpro.no> On Mon, 31 Mar 2008 20:10:06 +0200, Sascha Ottolski said: > is there anything like a snapshot release that is worth giving it a > try, especially if my configuration will hopefully stay simple for a > while? You could try using trunk. It seems fairly stable. -- Stig Sandbeck Mathisen, Linpro From michael at dynamine.net Tue Apr 1 05:36:05 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Mon, 31 Mar 2008 22:36:05 -0700 Subject: production ready devel snapshot? In-Reply-To: <7xabkennp2.fsf@iostat.linpro.no> References: <200803312010.06804.ottolski@web.de> <7xabkennp2.fsf@iostat.linpro.no> Message-ID: <86db848d0803312236r64fa4f56x4183100a6c9b735d@mail.gmail.com> On Mon, Mar 31, 2008 at 10:34 PM, Stig Sandbeck Mathisen wrote: > On Mon, 31 Mar 2008 20:10:06 +0200, Sascha Ottolski > said: > > > is there anything like a snapshot release that is worth giving it a > > try, especially if my configuration will hopefully stay simple for a > > while? > > You could try using trunk. It seems fairly stable. If it's so stable, why not cut a release? The nice thing about releases is that they're easy to revert to when analyzing bug reports. --Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From h.stener at betradar.com Tue Apr 1 09:50:07 2008 From: h.stener at betradar.com (Henning Stener) Date: Tue, 01 Apr 2008 11:50:07 +0200 Subject: production ready devel snapshot? 
In-Reply-To: <7xabkennp2.fsf@iostat.linpro.no> References: <200803312010.06804.ottolski@web.de> <7xabkennp2.fsf@iostat.linpro.no> Message-ID: <1207043407.10906.138.camel@henning-desktop> On Tue, 2008-04-01 at 07:34 +0200, Stig Sandbeck Mathisen wrote: > On Mon, 31 Mar 2008 20:10:06 +0200, Sascha Ottolski said: > > > is there anything like a snapshot release that is worth giving it a > > try, especially if my configuration will hopefully stay simple for a > > while? > > You could try using trunk. It seems fairly stable. > rev 2437: First part of major backend overhaul. *** Please do not use -trunk in production until I say so again *** -phk I have not seen any later message in the svn log that reverts this. So "stable" is probably only true for a given value of "true". :) :Henning From ssm at linpro.no Tue Apr 1 12:37:55 2008 From: ssm at linpro.no (Stig Sandbeck Mathisen) Date: Tue, 01 Apr 2008 14:37:55 +0200 Subject: production ready devel snapshot? In-Reply-To: <86db848d0803312236r64fa4f56x4183100a6c9b735d@mail.gmail.com> (Michael S. Fischer's message of "Mon, 31 Mar 2008 22:36:05 -0700") References: <200803312010.06804.ottolski@web.de> <7xabkennp2.fsf@iostat.linpro.no> <86db848d0803312236r64fa4f56x4183100a6c9b735d@mail.gmail.com> Message-ID: <7xhcelkaxo.fsf@iostat.linpro.no> On Mon, 31 Mar 2008 22:36:05 -0700, "Michael S. Fischer" said: > If it's so stable, why not cut a release? The nice thing about > releases is that they're easy to revert to when analyzing bug > reports. I think I'd like to see a 1.2 release done before a trunk snapshot release. The amount of work done to actually do a proper release is often underestimated... As for trunk, you should be able to checkout a specific revision. You only need to know which revisions are known not to work if they contain undocumented and undesirable random features. Then you could refer to "revision^Wrelease 2617 of varnish trunk". 
:) -- Stig Sandbeck Mathisen, Linpro From ottolski at web.de Tue Apr 1 15:19:38 2008 From: ottolski at web.de (Sascha Ottolski) Date: Tue, 1 Apr 2008 17:19:38 +0200 Subject: the most basic config Message-ID: <200804011719.38180.ottolski@web.de> Hi, I'm a bit puzzled by the examples and the explanation of the "default" vcl config presented in the man page. Now I'm wondering, if I want to make my first steps for creating a reverse proxy for static images only, that basically should cache everything indefinitely (as long as cache space is available), what would be the minimum config I need to have? Of course I need to define a backend, maybe increase the TTL for objects. A pointer to some kind of "beginners guide" would be nice, if such a thing exists. Thanks a lot, Sascha From michael at dynamine.net Tue Apr 1 15:38:55 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Tue, 1 Apr 2008 08:38:55 -0700 Subject: Miscellaneous questions In-Reply-To: <200803312008.57372.ottolski@web.de> References: <1167.1202770705@critter.freebsd.dk> <47DEF163.9000604@itiva.com> <86db848d0803171607j56449124ubbe2ef8bd53896@mail.gmail.com> <200803312008.57372.ottolski@web.de> Message-ID: <86db848d0804010838m6de75eb2mc8948abebedcb4ad@mail.gmail.com> On Mon, Mar 31, 2008 at 11:08 AM, Sascha Ottolski wrote: > probably not exactly the same, but maybe someone finds it useful: I've > just started to dive a bit into HAProxy (http://haproxy.1wt.eu/): the > development version has the ability to calculate the load balancing > based on the hash of the URI to decide which backend should receive a > request. I guess this could be a nice companion to put in front of > several reverse proxies to increase the hit rate of each one. One major shortcoming of HAProxy is that it does not support HTTP Keep-Alive connections. This can be an issue if your origin servers are far away from your proxies. --Michael -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From varnish-list at itiva.com Tue Apr 1 15:42:17 2008 From: varnish-list at itiva.com (DHF) Date: Tue, 01 Apr 2008 08:42:17 -0700 Subject: the most basic config In-Reply-To: <200804011719.38180.ottolski@web.de> References: <200804011719.38180.ottolski@web.de> Message-ID: <47F257D9.3020504@itiva.com> Sascha Ottolski wrote: > Hi, > > I'm a bit puzzled by the examples and the explanation of the "default" > vcl config presented in the man page. Now I'm wondering, if I want to > make my first steps for creating a reverse proxy for static images > only, that basically should cache everything indefinitely (as long as > cache space is available), what would be the minimum config I need to > have? Of course I need to define a backend, maybe increase the TTL for > objects. > > A pointer to some kind of "beginners guide" would be nice, if such a > thing exists. > I don't think there really is a step-by-step beginners guide to varnish, though one nice thing is that out of the box it works with very few changes. If you used the rpm that is available on sourceforge, all you need is the following in /etc/sysconfig/varnish:

DAEMON_OPTS="-a :80 \
             -T localhost:82 \
             -b localhost:81 \
             -u varnish -g varnish \
             -s file,/var/lib/varnish/varnish_storage.bin,1G"

I have this on a test machine in my lab currently and it is happily wailing out cached bits at an unbelievable rate. Apache is running on localhost:81, and it is setting the desired age for objects; the storage file size above is the default. This should get you up and running, and you can start tuning from there. One thing that seems to be daunting at first is that varnish is extremely flexible, and because of that there are many examples of neat configuration tricks and snippets of wizardry sprinkled in the sparse documentation, which makes it seem that these things are necessary to make varnish work. This is not the case; the default settings work quite well. 
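[Editor's note: putting the pieces of this thread together, a first VCL file for an image-only proxy might look like the following. This is only a sketch: it mirrors the Varnish 1.x VCL syntax quoted elsewhere in this thread, and the backend host/port are placeholders.]

```
# Hypothetical minimal VCL for caching static images
# (Varnish 1.x syntax; backend host/port are placeholders).
backend default {
    set backend.host = "192.168.1.11";
    set backend.port = "http";
}

sub vcl_recv {
    # Look up only GET/HEAD requests for image files; everything
    # else falls through to the built-in default handling.
    if (req.request == "GET" && req.url ~ "\.(jpg|jpeg|gif|png)$") {
        lookup;
    }
    if (req.request == "HEAD" && req.url ~ "\.(jpg|jpeg|gif|png)$") {
        lookup;
    }
}
```

A long default TTL can then be set on the varnishd command line, as is done later in the thread, with -p default_ttl=31104000.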
--Dave From ottolski at web.de Tue Apr 1 19:15:11 2008 From: ottolski at web.de (Sascha Ottolski) Date: Tue, 1 Apr 2008 21:15:11 +0200 Subject: the most basic config In-Reply-To: <47F257D9.3020504@itiva.com> References: <200804011719.38180.ottolski@web.de> <47F257D9.3020504@itiva.com> Message-ID: <200804012115.11264.ottolski@web.de> On Tuesday, 1 April 2008 17:42:17, DHF wrote: > Sascha Ottolski wrote: > > Hi, > > > > I'm a bit puzzled by the examples and the explanation of the > > "default" vcl config presented in the man page. Now I'm wondering, > > if I want to make my first steps for creating a reverse proxy for > > static images only, that basically should cache everything > > indefinitely (as long as cache space is available), what would be > > the minimum config I need to have? Of course I need to define a > > backend, maybe increase the TTL for > > objects. > > > > A pointer to some kind of "beginners guide" would be nice, if such > > a thing exists. > > I don't think there really is a step-by-step beginners guide to > varnish, though one nice thing is that out of the box it works with very > few changes. If you used the rpm that is available on sourceforge, > all you need is the following in /etc/sysconfig/varnish: > > DAEMON_OPTS="-a :80 \ > -T localhost:82 \ > -b localhost:81 \ > -u varnish -g varnish \ > -s file,/var/lib/varnish/varnish_storage.bin,1G" > > I have this on a test machine in my lab currently and it is happily > wailing out cached bits at an unbelievable rate. Apache is running > on localhost:81, and it is setting the desired age for objects, the > storage file size above is the default. This should get you up and > running, and you can start tuning from there. 
One thing that seems > to be daunting at first is that varnish is extremely flexible, and > because of that there are many examples of neat configuration tricks > and snippets of wizardry sprinkled in the sparse documentation, which > makes it seem that these things are necessary to make varnish work, > but this is not the case, the default settings work quite well. > > --Dave thanks very much, this was very helpful. now, could anyone give me a hint how to interpret the output of varnishstat? I'm seeing

client_req      15054   123.39  Client requests received
cache_hit        1632    13.38  Cache hits
cache_hitpass       0     0.00  Cache hits for pass
cache_miss       1024     8.39  Cache misses
backend_req     12347   101.20  Backend requests made

I'm wondering, why is there such a big difference between cache_miss and client_req. I would expect both to be similar, shouldn't they? as I said, my setup is very simple, only static images to be proxied. I started varnish (1.1.2) with

# cat run_varnish.sh
ulimit -n 131072
varnishd \
  -u sfapp \
  -g sfapp \
  -a :80 \
  -T :81 \
  -b 192.168.1.11 \
  -p thread_pool_max=2000 \
  -p default_ttl=31104000 \
  -h classic,2500009 \
  -s file,/var/cache/varnish/store.bin

BTW, in the wiki, under "Performance", is a hint to increase "lru_interval", but apparently this parameter isn't known to varnishd any more. Thanks, Sascha From ottolski at web.de Wed Apr 2 07:35:05 2008 From: ottolski at web.de (Sascha Ottolski) Date: Wed, 2 Apr 2008 09:35:05 +0200 Subject: the most basic config In-Reply-To: <200804012115.11264.ottolski@web.de> References: <200804011719.38180.ottolski@web.de> <47F257D9.3020504@itiva.com> <200804012115.11264.ottolski@web.de> Message-ID: <200804020935.05823.ottolski@web.de> On Tuesday, 1 April 2008 21:15:11, Sascha Ottolski wrote: > thanks very much, this was very helpful. now, could anyone give me a > hint how to interpret the output of varnishstat? > > I'm seeing > > client_req 15054 123.39 Client requests received > cache_hit 1632 13.38 Cache hits > cache_hitpass 0 0.00 Cache hits for pass > cache_miss 1024 8.39 Cache misses > backend_req 12347 101.20 Backend requests made

ok, I'm getting a bit further, without claiming that I understand it fully. I just took the example of the wiki

backend default {
    set backend.host = "192.168.1.2";
    set backend.port = "http";
}

sub vcl_recv {
    if (req.request == "GET" && req.url ~ "\.(jpg|jpeg|gif|png)$") {
        lookup;
    }
    if (req.request == "HEAD" && req.url ~ "\.(jpg|jpeg|gif|png)$") {
        lookup;
    }
}

and now I'm seeing as many misses as backend requests. I guess now it's really caching most of what I want. now, could someone help me interpreting the hitrate ratio and avg?

Hitrate ratio:     10    100    360
Hitrate avg:   0.3366 0.3837 0.4636

Thanks, Sascha (very impressed so far :-)) From ottolski at web.de Thu Apr 3 06:33:20 2008 From: ottolski at web.de (Sascha Ottolski) Date: Thu, 3 Apr 2008 08:33:20 +0200 Subject: cache empties itself? Message-ID: <200804030833.21070.ottolski@web.de> Hi, how can this be? My varnish runs for about 36 hours now. yesterday evening, the resident memory size was like 10 GB, which is still way below the available 32. later that evening, I stopped letting requests to the proxy over night. now I came back, let the requests back in, and am wondering that I see a low cache hit rate. looking a bit closer it appears as if the cache got smaller over night, now the process only consumes less than 1 GB of resident memory, which fits the reported "bytes allocated" in the stats. can I somehow find out why my cached objects were expired? I have a varnishlog -w running all the time, the information might be there. but, what to look for, and even more important, how can I prevent that expiration? I started the daemon with -p default_ttl=31104000 to make it cache very aggressively... 
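[Editor's note: two of the numbers in this exchange can be sanity-checked directly. This is a back-of-the-envelope check, not varnishstat's exact rolling-window formula, so it will not reproduce the 0.3366/0.3837/0.4636 figures.]

```shell
# default_ttl=31104000 seconds is exactly 360 days:
echo $(( 31104000 / 86400 ))    # -> 360

# A simple hit-rate estimate from the counters quoted above
# (cache_hit=1632, cache_miss=1024). client_req is much larger,
# presumably because it also counts requests that were passed to the
# backend rather than looked up in the cache.
awk 'BEGIN { printf "%.3f\n", 1632 / (1632 + 1024) }'    # -> 0.614
```

The gap between this ~0.61 estimate and the reported averages is not necessarily alarming: the "Hitrate avg" columns appear to be computed over the rolling windows listed in the "Hitrate ratio" line, not over the whole run.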
Thanks, Sascha From varnish-list at itiva.com Thu Apr 3 16:14:04 2008 From: varnish-list at itiva.com (DHF) Date: Thu, 03 Apr 2008 09:14:04 -0700 Subject: the most basic config In-Reply-To: <200804020935.05823.ottolski@web.de> References: <200804011719.38180.ottolski@web.de> <47F257D9.3020504@itiva.com> <200804012115.11264.ottolski@web.de> <200804020935.05823.ottolski@web.de> Message-ID: <47F5024C.4070405@itiva.com> Sascha Ottolski wrote: > now, could someone help me interpreting the hitrate ratio and avg? > > Hitrate ratio: 10 100 360 > Hitrate avg: 0.3366 0.3837 0.4636 > Hit rate is the number of hits/number of requests. Hits are requests for objects that are in the cache, Misses are requests that go to the backend, the more misses the lower your Hitrate average. Hitrate ratio is the ratio of hits to misses, I believe. The lower your hitrate average the lower your performance. --Dave From varnish-list at itiva.com Thu Apr 3 16:07:53 2008 From: varnish-list at itiva.com (DHF) Date: Thu, 03 Apr 2008 09:07:53 -0700 Subject: cache empties itself? In-Reply-To: <200804030833.21070.ottolski@web.de> References: <200804030833.21070.ottolski@web.de> Message-ID: <47F500D9.9080508@itiva.com> Sascha Ottolski wrote: > how can this be? My varnish runs for about 36 hours now. yesterday > evening, the resident memory size was like 10 GB, which is still way > below the available 32. later that evening, I stopped letting request > to the proxy over night. now I came back, let the request back in, and > am wondering that I see a low cacht hit rate. looking a bit closer it > appears as if the cache got smaller over night, now the process only > consumes less than 1 GB of resident memory, which fits the > reported "bytes allocated" in the stats. > > can I somehow find out why my cached objects were expired? I have a > varnishlog -w running all the time, the the information might there. > but, what to look for, and even more important, how can I prevent that > expiration? 
I started the daemon with > > -p default_ttl=31104000 > > to make it cache very aggressively... > There could be a lot of factors; is apache setting a max-age on the items? As it says in the man page:

default_ttl
    The default time-to-live assigned to objects if neither the backend
    nor the configuration assign one. Note that changes to this
    parameter are not applied retroactively.

Is this running on a test machine in a lab where you can control the requests this box gets? If so you should run some tests to make sure that you really are caching objects. Run wireshark on the apache server listening on port 80, and using curl send two requests for the same object, and make sure that only one request hits the apache box. If that's working like you expect, and the Age header is incrementing, then you need to run some tests using a typical workload that your apache server expects to see. Are you setting cookies on this site? I think what is happening is that you are setting a max-age on objects from apache (which you can verify using curl, netcat, telnet, whatever you like), and varnish is honoring that setting and expiring items as instructed. I'm not awesome with varnishtop and varnishlog yet, so I'm probably not the one to ask about getting those to show you an object's attributes; anyone care to assist on that front? --Dave From ottolski at web.de Thu Apr 3 17:17:53 2008 From: ottolski at web.de (Sascha Ottolski) Date: Thu, 3 Apr 2008 19:17:53 +0200 Subject: cache empties itself? In-Reply-To: <47F500D9.9080508@itiva.com> References: <200804030833.21070.ottolski@web.de> <47F500D9.9080508@itiva.com> Message-ID: <200804031917.53474.ottolski@web.de> On Thursday, 3 April 2008 18:07:53, DHF wrote: > Sascha Ottolski wrote: > > how can this be? My varnish runs for about 36 hours now. yesterday > > evening, the resident memory size was like 10 GB, which is still > > way below the available 32. later that evening, I stopped letting > > requests to the proxy over night. 
now I came back, let the request > > back in, and am wondering that I see a low cacht hit rate. looking > > a bit closer it appears as if the cache got smaller over night, now > > the process only consumes less than 1 GB of resident memory, which > > fits the reported "bytes allocated" in the stats. > > > > can I somehow find out why my cached objects were expired? I have a > > varnishlog -w running all the time, the the information might > > there. but, what to look for, and even more important, how can I > > prevent that expiration? I started the daemon with > > > > -p default_ttl=31104000 > > > > to make it cache very aggresively... > > There could be a lot of factors, is apache setting a max-age on the > items? As it says in the man page: > > default_ttl > The default time-to-live assigned to objects if neither > the backend > nor the configuration assign one. Note that changes to > this param- > eter are not applied retroactively. > > Is this running on a test machine in a lab where you can control the > requests this box gets? If so you should run some tests to make sure > that you really are caching objects. Run wireshark on the apache > server listening on port 80, and using curl send two requests for the > same object, and make sure that only one request hits the apache box. > If thats working like you expect, and the Age header is > incrementing, then you need to run some tests using a typical > workload that your apache server expects to see. Are you setting > cookies on this site? > > I think what is happening is that you are setting a max-age on > objects from apache ( which you can verify using curl, netcat, > telnet, whatever you like ), and varnish is honoring that setting and > expiring items as instructed. I'm not awesome with varnishtop and > varnishlog yet, so I'm probably not the one to ask about getting > those to show you an objects attributes, anyone care to assist on > that front? 
> > --Dave Dave, thanks a lot, I may have confused it with varnishd -t, which doesn't seem to be the same as -p default_ttl? hmm, but then again, the manual says it's a shortcut, but the semantics sound different than the above:

-t ttl
    Specifies a hard minimum time to live for cached documents. This is
    a shortcut for specifying the default_ttl run-time parameter.

Cheers, Sascha From ottolski at web.de Thu Apr 3 17:26:19 2008 From: ottolski at web.de (Sascha Ottolski) Date: Thu, 3 Apr 2008 19:26:19 +0200 Subject: cache empties itself? In-Reply-To: <47F500D9.9080508@itiva.com> References: <200804030833.21070.ottolski@web.de> <47F500D9.9080508@itiva.com> Message-ID: <200804031926.19302.ottolski@web.de> On Thursday, 3 April 2008 18:07:53, DHF wrote: > > how can this be? My varnish runs for about 36 hours now. yesterday > > evening, the resident memory size was like 10 GB, which is still > > way below the available 32. later that evening, I stopped letting > > requests to the proxy over night. now I came back, let the requests > > back in, and am wondering that I see a low cache hit rate. looking > > a bit closer it appears as if the cache got smaller over night, now > > the process only consumes less than 1 GB of resident memory, which > > fits the reported "bytes allocated" in the stats.

now I just had to learn the hard way...

1. if I stop and start the child via the management port, the cache is empty afterwards. I did this, because I couldn't manage to change the config, at least to me it looked that way:

vcl.list
200 52
0 boot
44085 default
* 7735 default2

vcl.discard default
106 37
No configuration named default known.

200 0

vcl.list
200 52
0 boot
44085 default
* 7750 default2

vcl.show default
106 37
No configuration named default known.

2. but even worse, I've just seen that a new child was born somehow, which also emptied the cache. is this some regular behaviour, or was there a crash?

All this with 1.1.2. 
It's vital to my setup to cache as many objects as possible, for a long time, and that they really stay in the cache. Is there anything I could do to prevent the cache being emptied? Maybe I've been bitten by a bug and should give the trunk a shot? BTW, after starting to play around with varnish, I'm really impressed; it's a bit frustrating sometimes to understand everything, but the outcome is very impressive. Thanks for such a nice piece of software! Thanks a lot, Sascha From michael at dynamine.net Thu Apr 3 17:30:25 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Thu, 3 Apr 2008 10:30:25 -0700 Subject: cache empties itself? In-Reply-To: <200804031926.19302.ottolski@web.de> References: <200804030833.21070.ottolski@web.de> <47F500D9.9080508@itiva.com> <200804031926.19302.ottolski@web.de> Message-ID: <86db848d0804031030y5b326640xe1aab5759ff73580@mail.gmail.com> On Thu, Apr 3, 2008 at 10:26 AM, Sascha Ottolski wrote: > All this with 1.1.2. It's vital to my setup to cache as many objects as > possible, for a long time, and that they really stay in the cache. Is > there anything I could do to prevent the cache being emptied? Maybe > I've been bitten by a bug and should give the trunk a shot? Just set the Expires: headers on the origin (backend) server responses to now + 10 years or something. --Michael From varnish-list at itiva.com Thu Apr 3 17:47:49 2008 From: varnish-list at itiva.com (DHF) Date: Thu, 03 Apr 2008 10:47:49 -0700 Subject: cache empties itself? In-Reply-To: <86db848d0804031030y5b326640xe1aab5759ff73580@mail.gmail.com> References: <200804030833.21070.ottolski@web.de> <47F500D9.9080508@itiva.com> <200804031926.19302.ottolski@web.de> <86db848d0804031030y5b326640xe1aab5759ff73580@mail.gmail.com> Message-ID: <47F51845.5050207@itiva.com> Michael S. Fischer wrote: > On Thu, Apr 3, 2008 at 10:26 AM, Sascha Ottolski wrote: > >> All this with 1.1.2. 
It's vital to my setup to cache as many objects as >> possible, for a long time, and that they really stay in the cache. Is >> there anything I could do to prevent the cache being emptied? Maybe >> I've been bitten by a bug and should give the trunk a shot? >> > > Just set the Expires: headers on the origin (backend) server responses > to now + 10 years or something. > If you're not using php or some other cgi app, you can set headers using mod_headers in apache; if you are running a web app, just set the headers within the app itself. You can also explicitly set the ttl on objects in the cache using vcl code, but moving the load off the cache to the backend makes more sense since you'll be cutting traffic down to apache and it would free up cycles to modify headers. If you have your heart set on making varnish do the work you could add something like this:

sub vcl_fetch {
    if (!obj.valid) {
        error;
    }
    if (!obj.cacheable) {
        pass;
    }
    if (obj.http.Set-Cookie) {
        pass;
    }
    if (req.url ~ "\.(jpg|jpeg|gif|png)$") {
        set obj.ttl = 31449600;
    }
    insert;
}

But I would first look at getting apache to set the age correctly and leave varnish to do what it's good at. --Dave From ottolski at web.de Thu Apr 3 17:58:49 2008 From: ottolski at web.de (Sascha Ottolski) Date: Thu, 3 Apr 2008 19:58:49 +0200 Subject: cache empties itself? In-Reply-To: <86db848d0804031030y5b326640xe1aab5759ff73580@mail.gmail.com> References: <200804030833.21070.ottolski@web.de> <200804031926.19302.ottolski@web.de> <86db848d0804031030y5b326640xe1aab5759ff73580@mail.gmail.com> Message-ID: <200804031958.49556.ottolski@web.de> On Thursday, 3 April 2008 19:30:25, Michael S. Fischer wrote: > On Thu, Apr 3, 2008 at 10:26 AM, Sascha Ottolski wrote: > > All this with 1.1.2. It's vital to my setup to cache as many > > objects as possible, for a long time, and that they really stay in > > the cache. Is there anything I could do to prevent the cache being > > emptied? 
Maybe I've been bitten by a bug and should give the trunk > > a shot? > > Just set the Expires: headers on the origin (backend) server > > responses to now + 10 years or something. > > --Michael thanks for the hint. unfortunately, not quite what I want. I want varnish to keep the objects very long, so that it does not have to ask the backend too often. therefore, it's important that the cache keeps growing, instead of vanishing once in a while. and I don't want upstream caches or browsers to cache that long, only varnish, so setting headers doesn't seem to fit. Cheers, Sascha From michael at dynamine.net Thu Apr 3 18:04:57 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Thu, 3 Apr 2008 11:04:57 -0700 Subject: cache empties itself? In-Reply-To: <200804031958.49556.ottolski@web.de> References: <200804030833.21070.ottolski@web.de> <200804031926.19302.ottolski@web.de> <86db848d0804031030y5b326640xe1aab5759ff73580@mail.gmail.com> <200804031958.49556.ottolski@web.de> Message-ID: <86db848d0804031104i75eb5517nc1d99bfca3b6769a@mail.gmail.com> On Thu, Apr 3, 2008 at 10:58 AM, Sascha Ottolski wrote: > and I don't want upstream caches or browsers to cache that long, only > varnish, so setting headers doesn't seem to fit. Why not? Just curious. If it's truly cachable content, it seems as though it would make sense (both for your performance and your bandwidth outlays) to let browsers cache. --Michael From ric at digitalmarbles.com Thu Apr 3 18:53:40 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Thu, 3 Apr 2008 11:53:40 -0700 Subject: cache empties itself? In-Reply-To: <86db848d0804031104i75eb5517nc1d99bfca3b6769a@mail.gmail.com> References: <200804030833.21070.ottolski@web.de> <200804031926.19302.ottolski@web.de> <86db848d0804031030y5b326640xe1aab5759ff73580@mail.gmail.com> <200804031958.49556.ottolski@web.de> <86db848d0804031104i75eb5517nc1d99bfca3b6769a@mail.gmail.com> Message-ID: On Apr 3, 2008, at 11:04 AM, Michael S. 
Fischer wrote: > On Thu, Apr 3, 2008 at 10:58 AM, Sascha Ottolski > wrote: >> and I don't wan't upstream caches or browsers to cache that long, >> only >> varnish, so setting headers doesn't seem to fit. > > Why not? Just curious. If it's truly cachable content, it seems as > though it would make sense (both for your performance and your > bandwidth outlays) to let browsers cache. > > --Michael Can't speak for the OP but a common use case is where you want an aggressive cache but still need to retain the ability to purge the cache when content changes. As far as I know, there are only two ways to do this without contaminating downstream caches with potentially stale content... via special treatment in the varnish config (which is what the OP is trying to do) or using a special header that only your varnish instance will recognize (like Surrogate-Control, which as far as I know Varnish does not support out-of-the-box but Squid3 does). Ric From michael at dynamine.net Thu Apr 3 19:45:25 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Thu, 3 Apr 2008 12:45:25 -0700 Subject: cache empties itself? In-Reply-To: References: <200804030833.21070.ottolski@web.de> <200804031926.19302.ottolski@web.de> <86db848d0804031030y5b326640xe1aab5759ff73580@mail.gmail.com> <200804031958.49556.ottolski@web.de> <86db848d0804031104i75eb5517nc1d99bfca3b6769a@mail.gmail.com> Message-ID: <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> On Thu, Apr 3, 2008 at 11:53 AM, Ricardo Newbery wrote: > On Apr 3, 2008, at 11:04 AM, Michael S. Fischer wrote: > > On Thu, Apr 3, 2008 at 10:58 AM, Sascha Ottolski wrote: > > > > > and I don't wan't upstream caches or browsers to cache that long, only > > > varnish, so setting headers doesn't seem to fit. > > > > > > > Why not? Just curious. If it's truly cachable content, it seems as > > though it would make sense (both for your performance and your > > bandwidth outlays) to let browsers cache. 
> > Can't speak for the OP but a common use case is where you want an > aggressive cache but still need to retain the ability to purge the cache > when content changes. As far as I know, there are only two ways to do this > without contaminating downstream caches with potentially stale content... > via special treatment in the varnish config (which is what the OP is trying > to do) or using a special header that only your varnish instance will > recognize (like Surrogate-Control, which as far as I know Varnish does not > support out-of-the-box but Squid3 does). Seems to me that this is rather brittle and error-prone.

- If a particular resource is truly dynamic, then it should not be cachable at all.
- If a particular resource can be considered static (i.e. cachable), yet updateable, then it is *far* safer to version your URLs, as you have zero control over intermediate proxies.

--Michael From ottolski at web.de Thu Apr 3 20:26:51 2008 From: ottolski at web.de (Sascha Ottolski) Date: Thu, 3 Apr 2008 22:26:51 +0200 Subject: cache empties itself? In-Reply-To: <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> References: <200804030833.21070.ottolski@web.de> <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> Message-ID: <200804032226.52056.ottolski@web.de> On Thursday, 3 April 2008 21:45:25, Michael S. Fischer wrote: > On Thu, Apr 3, 2008 at 11:53 AM, Ricardo Newbery wrote: > > On Apr 3, 2008, at 11:04 AM, Michael S. Fischer wrote: > > > On Thu, Apr 3, 2008 at 10:58 AM, Sascha Ottolski wrote: > > > > and I don't want upstream caches or browsers to cache that > > > > long, only varnish, so setting headers doesn't seem to fit. > > > > > > Why not? Just curious. If it's truly cachable content, it > > > seems as though it would make sense (both for your performance > > > and your bandwidth outlays) to let browsers cache. 
> > Can't speak for the OP but a common use case is where you want an > > aggressive cache but still need to retain the ability to purge the > > cache when content changes. As far as I know, there are only two > > ways to do this without contaminating downstream caches with > > potentially stale content... via special treatment in the varnish > > config (which is what the OP is trying to do) or using a special > > header that only your varnish instance will recognize (like > > Surrogate-Control, which as far as I know Varnish does not support > > out-of-the-box but Squid3 does). > > Seems to me that this is rather brittle and error-prone. > > - If a particular resource is truly dynamic, then it should not be > cachable at all. > - If a particular resource can be considered static (i.e. cachable), > yet updateable, then it is *far* safer to version your URLs, as you > have zero control over intermediate proxies. > > --Michael the reason is simple: the cache is for static images only. for some reasons it works better for us to put reverse proxies in front to lighten the requests to the real webservers (and the storage backend they in turn need to access to deliver the images). so, the cache shall be caching very aggressively, but since the access to the images is restricted, the TTL for upstream caches and browsers has to be kept small. obviously I still do not understand all the details of varnish, but I thought it would be "natural" that the cache can maintain a different TTL for its objects than what is reported in the headers to upstream clients. however, my main problem is currently that the varnish children keep restarting, and that this empties the cache, which effectively renders the whole setup useless for me :-( if the cache has filled up, it works great; if it restarts empty, obviously it doesn't. is there anything I can do to prevent such restarts? 
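[Editor's note: the split asked for here, a long TTL inside varnish but short client-facing headers, can at least be sketched in VCL. This is a hypothetical sketch, not tested against Varnish 1.1.2: the vcl_fetch part mirrors the obj.ttl technique shown earlier in the thread, while the header rewrite in vcl_deliver assumes a resp.http variable that may be named differently, or be unavailable, in this Varnish version.]

```
# Sketch: keep images in the cache for ~360 days, but advertise only
# a short lifetime to downstream caches and browsers.
sub vcl_fetch {
    if (req.url ~ "\.(jpg|jpeg|gif|png)$") {
        set obj.ttl = 31104000;   # internal TTL only
    }
    insert;
}

sub vcl_deliver {
    # Hypothetical: rewrite the client-facing header so upstream
    # caches and browsers revalidate quickly.
    set resp.http.Cache-Control = "max-age=60";
    deliver;
}
```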
Thanks, Sascha From varnish-list at itiva.com Thu Apr 3 23:32:28 2008 From: varnish-list at itiva.com (DHF) Date: Thu, 03 Apr 2008 16:32:28 -0700 Subject: cache empties itself? In-Reply-To: <200804032226.52056.ottolski@web.de> References: <200804030833.21070.ottolski@web.de> <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> <200804032226.52056.ottolski@web.de> Message-ID: <47F5690C.7020205@itiva.com> Sascha Ottolski wrote: > however, my main problem is currently that the varnish childs keep > restarting, and that this empties the cache, which effectively renders > the whole setup useless for me :-( if the cache has filled up, it works > great, if it restarts empty, obviously it doesn't. > > is there anything I can do to prevent such restarts? > Varnish doesn't just restart on its own. Check to make sure you aren't sending a kill signal if you are running logrotate through a cronjob. I'm not sure if a HUP will empty the cache or not. --Dave From simon at darkmere.gen.nz Fri Apr 4 00:20:17 2008 From: simon at darkmere.gen.nz (Simon Lyall) Date: Fri, 4 Apr 2008 13:20:17 +1300 (NZDT) Subject: Child dies every 90 seconds Message-ID: I guess this is either a bug or I'm trying to do something dumb, but I have fairly low TTLs for various html pages so I wanted to automatically refetch them:

#
# If something in the cache is about to expire and has been requested
# in the last minute then refetch it.
#
sub vcl_timeout {
    if (obj.lastuse < 60s) {
        fetch;
    } else {
        discard;
    }
}

However, when I run this the varnish child process dies every 90 seconds (give or take a couple of seconds). Adding -d, I get:

>> Child said (2, 7602): <handling == VCL_RET_DISCARD) not true. errno = 0 (Success)
>> Cache child died pid=7602 status=0x6
Clean child
Child cleaned

Removing the vcl lines makes everything work nicely. Any ideas? I'm running varnishd 1.1.2 built from these rpms: varnish-1.1.2-5el5.i386.rpm varnish-libs-1.1.2-5el5.i386.rpm on CentOS 5.
but I think I was seeing the same problem with a compiled build under ubuntu. Load is 10-50 q/s in a test environment. -- Simon J. Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT. From ric at digitalmarbles.com Fri Apr 4 02:37:44 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Thu, 3 Apr 2008 19:37:44 -0700 Subject: cache empties itself? In-Reply-To: <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> References: <200804030833.21070.ottolski@web.de> <200804031926.19302.ottolski@web.de> <86db848d0804031030y5b326640xe1aab5759ff73580@mail.gmail.com> <200804031958.49556.ottolski@web.de> <86db848d0804031104i75eb5517nc1d99bfca3b6769a@mail.gmail.com> <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> Message-ID: <6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com> On Apr 3, 2008, at 12:45 PM, Michael S. Fischer wrote: > On Thu, Apr 3, 2008 at 11:53 AM, Ricardo Newbery > wrote: >> On Apr 3, 2008, at 11:04 AM, Michael S. Fischer wrote: >>> On Thu, Apr 3, 2008 at 10:58 AM, Sascha Ottolski >>> wrote: >>> >>>> and I don't want upstream caches or browsers to cache that long, >>>> only >>>> varnish, so setting headers doesn't seem to fit. >>>> >>> >>> Why not? Just curious. If it's truly cachable content, it seems >>> as >>> though it would make sense (both for your performance and your >>> bandwidth outlays) to let browsers cache. >> >> Can't speak for the OP but a common use case is where you want an >> aggressive cache but still need to retain the ability to purge the >> cache >> when content changes. As far as I know, there are only two ways to >> do this >> without contaminating downstream caches with potentially stale >> content...
>> via special treatment in the varnish config (which is what the OP >> is trying >> to do) or using a special header that only your varnish instance will >> recognize (like Surrogate-Control, which as far as I know Varnish >> does not >> support out-of-the-box but Squid3 does). > > Seems to me that this is rather brittle and error-prone. > > - If a particular resource is truly dynamic, then it should not be > cachable at all. > - If a particular resource can be considered static (i.e. cachable), > yet updateable, then it is *far* safer to version your URLs, as you > have zero control over intermediate proxies. > > --Michael If done correctly, this is neither brittle nor error-prone. This is, after all, the point of the Surrogate-Control header -- a way for your backend to instruct your proxy (or "surrogate" if you insist) how to handle your content in a way that is invisible to intermediate proxies not under your control. While not as flexible as the Surrogate-Control header, you can do the same just with special stanzas in your varnish.vcl. In fact, the vcl man page contains one example of how to do this for all objects to enforce a minimum ttl:

sub vcl_fetch {
    if (obj.ttl < 120s) {
        set obj.ttl = 120s;
    }
}

Or you can invent your own header... let's call it X-Varnish-1day:

sub vcl_fetch {
    if (obj.http.X-Varnish-1day) {
        set obj.ttl = 86400s;
    }
}

Neither of these two examples is "unsafe" and both are invisible to intermediate proxies. With regards to URL versioning, this is indeed a powerful strategy -- assuming your backend is capable of doing this. But it's a strategy generally only appropriate for supporting resources like inline graphics, css, and javascript. URL versioning is usually not appropriate for html pages or other primary resources that are intended to be reached directly by the end user and whose URLs must not change. Ric From michael at dynamine.net Fri Apr 4 02:46:46 2008 From: michael at dynamine.net (Michael S.
Fischer) Date: Thu, 3 Apr 2008 19:46:46 -0700 Subject: cache empties itself? In-Reply-To: <6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com> References: <200804030833.21070.ottolski@web.de> <200804031926.19302.ottolski@web.de> <86db848d0804031030y5b326640xe1aab5759ff73580@mail.gmail.com> <200804031958.49556.ottolski@web.de> <86db848d0804031104i75eb5517nc1d99bfca3b6769a@mail.gmail.com> <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> <6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com> Message-ID: <86db848d0804031946g72cf5500vb398dc01456661ad@mail.gmail.com> On Thu, Apr 3, 2008 at 7:37 PM, Ricardo Newbery wrote: > URL versioning is usually not appropriate for html > pages or other primary resources that are intended to be reached directly by > the end user and whose URLs must not change. Back to square one. Are these latter resources dynamic, or are they static? - If they are dynamic, neither your own proxies nor upstream proxies should be caching the content. - If they are static, then they should be cacheable for the same amount of time all the way upstream (modulo protected URLs). I haven't yet seen a defensible need for varying cache lifetimes, depending on the proximity of the proxy to the origin server, as this request seems to call for. Of course, I'm open to being convinced otherwise :-) --Michael From ric at digitalmarbles.com Fri Apr 4 03:59:19 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Thu, 3 Apr 2008 20:59:19 -0700 Subject: cache empties itself?
In-Reply-To: <86db848d0804031946g72cf5500vb398dc01456661ad@mail.gmail.com> References: <200804030833.21070.ottolski@web.de> <200804031926.19302.ottolski@web.de> <86db848d0804031030y5b326640xe1aab5759ff73580@mail.gmail.com> <200804031958.49556.ottolski@web.de> <86db848d0804031104i75eb5517nc1d99bfca3b6769a@mail.gmail.com> <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> <6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com> <86db848d0804031946g72cf5500vb398dc01456661ad@mail.gmail.com> Message-ID: On Apr 3, 2008, at 7:46 PM, Michael S. Fischer wrote: > On Thu, Apr 3, 2008 at 7:37 PM, Ricardo Newbery > wrote: > >> URL versioning is usually not appropriate for html >> pages or other primary resources that are intended to be reached >> directly by >> the end user and whose URLs must not change. > > Back to square one. Are these latter resources dynamic, or are they > static? > > - If they are dynamic, neither your own proxies nor upstream proxies > should be caching the content. > - If they are static, then they should be cacheable for the same > amount of time all the way upstream (modulo protected URLs). > > I haven't yet seen a defensible need for varying cache lifetimes, > depending on the proximity of the proxy to the origin server, as this > request seems to call for. Of course, I'm open to being convinced otherwise > :-) Well, first of all you're setting up a false dichotomy. Not everything fits neatly into your apparent definitions of dynamic versus static. Your definitions appear to exclude the use case where you have cacheable content that is subject to change at unpredictable intervals but which is otherwise fairly "static" for some length of time. Sometimes, in such a case, serving stale content for some time after an edit is an acceptable compromise between performance and freshness, but often it is not. And sometimes, impacting overall performance by hitting the backend for every such request is also undesirable.
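The compromise being debated here (cache aggressively inside the proxy, advertise a short lifetime downstream, purge on change) can be sketched in VCL. The sketch below is illustrative only, written in the VCL dialect of that era; the one-day internal TTL, the 120-second downstream max-age, and the PURGE-via-TTL handling are assumptions rather than anything taken from any poster's actual configuration:

```vcl
# Sketch only: keep objects in Varnish for a long time, tell
# downstream caches and browsers to keep them only briefly, and
# allow an explicit PURGE request to expire a cached copy.

sub vcl_recv {
    if (req.request == "PURGE") {
        lookup;
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        set obj.ttl = 0s;        # expire the cached object now
        error 200 "Purged";
    }
}

sub vcl_fetch {
    set obj.ttl = 86400s;        # Varnish keeps it for a day
    # Illustrative downstream policy: two minutes for clients
    set obj.http.Cache-Control = "max-age=120";
}
```

The exact purge idiom varied between Varnish 1.x and trunk at the time, so treat this as the shape of the approach rather than a drop-in config.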
Thankfully, those are not the only choices. With a combination of PURGE requests and something like Surrogate-Control (or hardcoded behavior in your reverse-proxy config), you can still ensure immediate freshness (or whatever level of freshness you require) without forcing your backend to do all the work. Ric From ottolski at web.de Fri Apr 4 07:01:57 2008 From: ottolski at web.de (Sascha Ottolski) Date: Fri, 4 Apr 2008 09:01:57 +0200 Subject: cache empties itself? In-Reply-To: <47F5690C.7020205@itiva.com> References: <200804030833.21070.ottolski@web.de> <200804032226.52056.ottolski@web.de> <47F5690C.7020205@itiva.com> Message-ID: <200804040901.57430.ottolski@web.de> Am Freitag 04 April 2008 01:32:28 schrieb DHF: > Sascha Ottolski wrote: > > however, my main problem is currently that the varnish childs keep > > restarting, and that this empties the cache, which effectively > > renders the whole setup useless for me :-( if the cache has filled > > up, it works great, if it restarts empty, obviously it doesn't. > > > > is there anything I can do to prevent such restarts? > > Varnish doesn't just restart on its own. Check to make sure you > aren't sending a kill signal if you are running logrotate through a > cronjob. I'm not sure if a HUP will empty the cache or not. > > --Dave I definitely did nothing like this, I've observed restarts "out of the blue". I'm now giving the trunk a try, hopefully there's an improvement in that regard. what I did once in a while is to vcl.load, vcl.use. will this force a restart of the child, thus flushing the cache? Thanks again, Sascha From ottolski at web.de Fri Apr 4 07:13:42 2008 From: ottolski at web.de (Sascha Ottolski) Date: Fri, 4 Apr 2008 09:13:42 +0200 Subject: cache empties itself?
In-Reply-To: <6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com> References: <200804030833.21070.ottolski@web.de> <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> <6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com> Message-ID: <200804040913.42291.ottolski@web.de> Am Freitag 04 April 2008 04:37:44 schrieb Ricardo Newbery:

>           sub vcl_fetch {
>               if (obj.ttl < 120s) {
>                   set obj.ttl = 120s;
>               }
>           }
>
> Or you can invent your own header... let's call it X-Varnish-1day
>
>           sub vcl_fetch {
>               if (obj.http.X-Varnish-1day) {
>                   set obj.ttl = 86400s;
>               }
>           }

so it seems like I'm on the right track, thanks for clarifying. now, is the ttl information local to varnish, or will it set headers as well (if I look into the headers of my varnish's responses, it doesn't appear so)? what really confuses me: the man pages state slightly different semantics for default_ttl. in man varnishd:

-t ttl Specifies a hard minimum time to live for cached documents. This is a shortcut for specifying the default_ttl run-time parameter.

default_ttl The default time-to-live assigned to objects if neither the backend nor the configuration assign one. Note that changes to this parameter are not applied retroactively. The default is 120 seconds.

"hard minimum" sounds to me as if it would overwrite any setting the backend has given. however, in man vcl it's explained that default_ttl only affects documents without a backend-given TTL: The following snippet demonstrates how to force a minimum TTL for all documents. Note that this is not the same as setting the default_ttl run-time parameter, as that only affects documents for which the backend did not specify a TTL.
sub vcl_fetch {
    if (obj.ttl < 120s) {
        set obj.ttl = 120s;
    }
}

the examples have a unit (s) appended, as in the example from the man page; that suggests that I could also append things like m, h, d (for minutes, hours, days)? BTW, in the trunk version, the examples for a backend definition still have the old syntax.

backend www {
    set backend.host = "www.example.com";
    set backend.port = "80";
}

instead of

backend www {
    .host = "www.example.com";
    .port = "80";
}

Thanks a lot, Sascha From ottolski at web.de Fri Apr 4 07:51:29 2008 From: ottolski at web.de (Sascha Ottolski) Date: Fri, 4 Apr 2008 09:51:29 +0200 Subject: make varnish still respond if backend dead Message-ID: <200804040951.29493.ottolski@web.de> Hi, sorry if this is a FAQ: what can I do to make varnish respond to requests if its backend is dead? it should return cache hits, of course, and a "proxy error" or something for a miss. and how can I prevent varnish from caching "404" for objects it couldn't fetch due to a dead backend? at least I think that is what happened, as varnish reported 404 for URLs that definitely exist; the dead backend seems to be the only logical explanation why varnish could think they don't. oh, and is there a way to put the local hostname in a header? I have two proxies, load balanced by LVS, so using server.ip reports the same IP on both nodes. Thanks, Sascha From ssm at linpro.no Fri Apr 4 08:11:52 2008 From: ssm at linpro.no (Stig Sandbeck Mathisen) Date: Fri, 04 Apr 2008 10:11:52 +0200 Subject: cache empties itself? In-Reply-To: <200804040901.57430.ottolski@web.de> (Sascha Ottolski's message of "Fri, 4 Apr 2008 09:01:57 +0200") References: <200804030833.21070.ottolski@web.de> <200804032226.52056.ottolski@web.de> <47F5690C.7020205@itiva.com> <200804040901.57430.ottolski@web.de> Message-ID: <7x7ifem43b.fsf@iostat.linpro.no> On Fri, 4 Apr 2008 09:01:57 +0200, Sascha Ottolski said: > I definitely did nothing like this, I've observed restarts "out of > the blue".
I'm now giving the trunk a try, hopefully there's an > improvement in that regard. If the varnish caching process dies for some reason, the parent varnish process will start a new one to keep the service running. This new one will not re-use the cache of the previous. With all that said, the varnish caching process should not die in this way; that is undesirable behaviour. If you'd like to help debugging this issue, take a look at http://varnish.projects.linpro.no/wiki/DebuggingVarnish Note that if you run a released version, your issue may have been fixed already in a later release, the related branch, or in trunk. > what I did once in a while is to vcl.load, vcl.use. will this force > a restart of the child, thus flushing the cache? No. The reason Varnish has vcl.load and vcl.use is to make sure you don't have to restart anything, thus losing your cached data. -- Stig Sandbeck Mathisen, Linpro From ottolski at web.de Fri Apr 4 09:45:03 2008 From: ottolski at web.de (Sascha Ottolski) Date: Fri, 4 Apr 2008 11:45:03 +0200 Subject: unable to compile nagios module from trunk Message-ID: <200804041145.03248.ottolski@web.de> after checking out and running autogen.sh, configure stops with this error:

./configure: line 19308: syntax error near unexpected token `VARNISHAPI,'
./configure: line 19308: `PKG_CHECK_MODULES(VARNISHAPI, varnishapi)'

Cheers, Sascha From ottolski at web.de Fri Apr 4 09:48:14 2008 From: ottolski at web.de (Sascha Ottolski) Date: Fri, 4 Apr 2008 11:48:14 +0200 Subject: cache empties itself? In-Reply-To: <7x7ifem43b.fsf@iostat.linpro.no> References: <200804030833.21070.ottolski@web.de> <200804040901.57430.ottolski@web.de> <7x7ifem43b.fsf@iostat.linpro.no> Message-ID: <200804041148.14924.ottolski@web.de> Am Freitag 04 April 2008 10:11:52 schrieb Stig Sandbeck Mathisen: > On Fri, 4 Apr 2008 09:01:57 +0200, Sascha Ottolski said: > > I definitely did nothing like this, I've observed restarts "out of > > the blue".
I'm now giving the trunk a try, hopefully there's an > > improvement in that regard. > > If the varnish caching process dies for some reason, the parent > varnish process will start a new one to keep the service running. > This new one will not re-use the cache of the previous. > > With all that said, the varnish caching process should not die in > this way, that is undesirable behaviour. > > If you'd like to help debugging this issue, take a look at > http://varnish.projects.linpro.no/wiki/DebuggingVarnish I already started my proxies with the latest trunk and core dumps enabled, and am crossing my fingers. so far it's running for about 11 hours... BTW, if I have 32 GB of RAM, and 517 GB of cache file, how large will the core dump be? > > Note that if you run a released version, your issue may have been > fixed already in a later release, the related branch, or in trunk. > > > what I did once in a while is to vcl.load, vcl.use. will this force > > a restart of the child, thus flushing the cache? > > No. The reason Varnish has vcl.load and vcl.use is to make sure you > don't have to restart anything, thus losing your cached data. excellent. the numbers that are shown next to the configs in vcl.list, are they the number of connections that (still) use each one? Thanks again, Sascha From michael at dynamine.net Fri Apr 4 09:50:51 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Fri, 4 Apr 2008 02:50:51 -0700 Subject: cache empties itself?
In-Reply-To: References: <200804030833.21070.ottolski@web.de> <200804031926.19302.ottolski@web.de> <86db848d0804031030y5b326640xe1aab5759ff73580@mail.gmail.com> <200804031958.49556.ottolski@web.de> <86db848d0804031104i75eb5517nc1d99bfca3b6769a@mail.gmail.com> <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> <6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com> <86db848d0804031946g72cf5500vb398dc01456661ad@mail.gmail.com> Message-ID: <86db848d0804040250s1652bb00i1e4c8d86b8760bc@mail.gmail.com> On Thu, Apr 3, 2008 at 8:59 PM, Ricardo Newbery wrote: > Well, first of all you're setting up a false dichotomy. Not everything > fits neatly into your apparent definitions of dynamic versus static. Your > definitions appear to exclude the use case where you have cacheable content > that is subject to change at unpredictable intervals but which is otherwise > fairly "static" for some length of time. In my experience, you almost never need a caching proxy for this purpose. Most modern web servers are perfectly capable of serving static content at wire speed. Moreover, if your origin servers have a reasonable amount of RAM and the working set size is relatively small, the static objects are already likely to be in the buffer cache. In a scenario such as this, having caching proxies upstream for these sorts of objects can actually be *worse* in terms of performance -- consider the wasted time processing a cache miss for content that's already cached downstream. Best regards, --Michael From ottolski at web.de Fri Apr 4 10:20:59 2008 From: ottolski at web.de (Sascha Ottolski) Date: Fri, 4 Apr 2008 12:20:59 +0200 Subject: cache empties itself? In-Reply-To: <86db848d0804040250s1652bb00i1e4c8d86b8760bc@mail.gmail.com> References: <200804030833.21070.ottolski@web.de> <86db848d0804040250s1652bb00i1e4c8d86b8760bc@mail.gmail.com> Message-ID: <200804041220.59803.ottolski@web.de> Am Freitag 04 April 2008 11:50:51 schrieb Michael S. 
Fischer: > On Thu, Apr 3, 2008 at 8:59 PM, Ricardo Newbery wrote: > > Well, first of all you're setting up a false dichotomy. Not > > everything fits neatly into your apparent definitions of dynamic > > versus static. Your definitions appear to exclude the use case > > where you have cacheable content that is subject to change at > > unpredictable intervals but which is otherwise fairly "static" for > > some length of time. > > In my experience, you almost never need a caching proxy for this > purpose. Most modern web servers are perfectly capable of serving > static content at wire speed. Moreover, if your origin servers have > a reasonable amount of RAM and the working set size is relatively > small, the static objects are already likely to be in the buffer > cache. In a scenario such as this, having caching proxies upstream > for these sorts of objects can actually be *worse* in terms of > performance -- consider the wasted time processing a cache miss for > content that's already cached downstream. > > Best regards, > > --Michael you are right, _if_ the working set is small. in my case, we're talking 20+ million small images (5-50 KB each), 400+ GB in total size, and it's growing every day. access is very random, but there still is a good amount of "hot" objects. and to be ready for a larger set it cannot reside on the webserver, but lives on central storage. access performance to the (network) storage is relatively slow, and our experiences with mod_cache from apache were bad, that's why I started testing varnish. and so far, it works amazingly! my main problem remains that it randomly crashes, and that the cache file isn't persistent across restarts. Cheers, Sascha From michael at dynamine.net Fri Apr 4 16:11:23 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Fri, 4 Apr 2008 09:11:23 -0700 Subject: cache empties itself?
In-Reply-To: <200804041220.59803.ottolski@web.de> References: <200804030833.21070.ottolski@web.de> <86db848d0804040250s1652bb00i1e4c8d86b8760bc@mail.gmail.com> <200804041220.59803.ottolski@web.de> Message-ID: <86db848d0804040911w3a5962bcof9801b37f3969a7c@mail.gmail.com> On Fri, Apr 4, 2008 at 3:20 AM, Sascha Ottolski wrote: > you are right, _if_ the working set is small. in my case, we're talking > 20+ mio. small images (5-50 KB each), 400+ GB in total size, and it's > growing every day. access is very random, but there still is a good > amount of "hot" objects. and to be ready for a larger set it cannot > reside on the webserver, but lives on a central storage. access > performance to the (network) storage is relatively slow, and our > experiences with mod_cache from apache were bad, that's why I started > testing varnish. Ah, I see. The problem is that you're basically trying to compensate for a congenital defect in your design: the network storage (I assume NFS) backend. NFS read requests are not cacheable by the kernel because another client may have altered the file since the last read took place. If your working set is as large as you say it is, eventually you will end up with a low cache hit ratio on your Varnish server(s) and you'll be back to square one again. The way to fix this problem in the long term is to split your file library into shards and put them on local storage. Didn't we discuss this a couple of weeks ago? Best regards, --Michael From ottolski at web.de Fri Apr 4 16:38:12 2008 From: ottolski at web.de (Sascha Ottolski) Date: Fri, 4 Apr 2008 18:38:12 +0200 Subject: cache empties itself? In-Reply-To: <86db848d0804040911w3a5962bcof9801b37f3969a7c@mail.gmail.com> References: <200804030833.21070.ottolski@web.de> <200804041220.59803.ottolski@web.de> <86db848d0804040911w3a5962bcof9801b37f3969a7c@mail.gmail.com> Message-ID: <200804041838.12639.ottolski@web.de> Am Freitag 04 April 2008 18:11:23 schrieb Michael S. 
Fischer: > On Fri, Apr 4, 2008 at 3:20 AM, Sascha Ottolski wrote: > > you are right, _if_ the working set is small. in my case, we're > > talking 20+ million small images (5-50 KB each), 400+ GB in total > > size, and it's growing every day. access is very random, but there > > still is a good amount of "hot" objects. and to be ready for a > > larger set it cannot reside on the webserver, but lives on > > central storage. access performance to the (network) storage is > > relatively slow, and our experiences with mod_cache from apache > > were bad, that's why I started testing varnish. > > Ah, I see. > > The problem is that you're basically trying to compensate for a > congenital defect in your design: the network storage (I assume NFS) > backend. NFS read requests are not cacheable by the kernel because > another client may have altered the file since the last read took > place. > > If your working set is as large as you say it is, eventually you will > end up with a low cache hit ratio on your Varnish server(s) and > you'll be back to square one again. > > The way to fix this problem in the long term is to split your file > library into shards and put them on local storage. > > Didn't we discuss this a couple of weeks ago? exactly :-) what can I say, I did analyze the logfiles, and learned that despite the fact that a lot of the accesses are truly random, there is still a good amount of the requests concentrated on a smaller set of the images. of course, the set is changing over time, but that's what a cache can handle perfectly. and my experiences seem to prove my theory: if varnish keeps running like it is now for about 18 hours *knock on wood*, the cache hit rate is close to 80 %! and that takes so much pressure from the backend that the overall performance is just awesome. putting the files on local storage just doesn't scale well.
I'm more thinking about splitting the proxies as discussed on the list before: a loadbalancer could distribute the URLs in a way that each cache holds its own share of the objects. Cheers, Sascha From ric at digitalmarbles.com Fri Apr 4 18:05:14 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Fri, 4 Apr 2008 11:05:14 -0700 Subject: cache empties itself? In-Reply-To: <86db848d0804040250s1652bb00i1e4c8d86b8760bc@mail.gmail.com> References: <200804030833.21070.ottolski@web.de> <200804031926.19302.ottolski@web.de> <86db848d0804031030y5b326640xe1aab5759ff73580@mail.gmail.com> <200804031958.49556.ottolski@web.de> <86db848d0804031104i75eb5517nc1d99bfca3b6769a@mail.gmail.com> <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> <6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com> <86db848d0804031946g72cf5500vb398dc01456661ad@mail.gmail.com> <86db848d0804040250s1652bb00i1e4c8d86b8760bc@mail.gmail.com> Message-ID: On Apr 4, 2008, at 2:50 AM, Michael S. Fischer wrote: > On Thu, Apr 3, 2008 at 8:59 PM, Ricardo Newbery > wrote: > >> Well, first of all you're setting up a false dichotomy. Not >> everything >> fits neatly into your apparent definitions of dynamic versus >> static. Your >> definitions appear to exclude the use case where you have cacheable >> content >> that is subject to change at unpredictable intervals but which is >> otherwise >> fairly "static" for some length of time. > > In my experience, you almost never need a caching proxy for this > purpose. Most modern web servers are perfectly capable of serving > static content at wire speed. Moreover, if your origin servers have a > reasonable amount of RAM and the working set size is relatively small, > the static objects are already likely to be in the buffer cache.
In a > scenario such as this, having caching proxies upstream for these sorts > of objects can actually be *worse* in terms of performance -- consider > the wasted time processing a cache miss for content that's already > cached downstream. Again, "static" content isn't only the stuff that is served from filesystems in the classic static web server scenario. There are plenty of "dynamic" applications that process content from a database -- applying skins and compositing multiple elements into a single page while filtering every element or otherwise applying special processing based on a user's access privileges. An example of this is a dynamic content management system like Plone or Drupal. In many cases, these "dynamic" responses are fairly "static" for some period of time, but there is still a definite performance hit, especially under load. Ric From varnish-list at itiva.com Fri Apr 4 20:27:34 2008 From: varnish-list at itiva.com (DHF) Date: Fri, 04 Apr 2008 13:27:34 -0700 Subject: cache empties itself? In-Reply-To: <200804041838.12639.ottolski@web.de> References: <200804030833.21070.ottolski@web.de> <200804041220.59803.ottolski@web.de> <86db848d0804040911w3a5962bcof9801b37f3969a7c@mail.gmail.com> <200804041838.12639.ottolski@web.de> Message-ID: <47F68F36.2000306@itiva.com> Sascha Ottolski wrote: > Am Freitag 04 April 2008 18:11:23 schrieb Michael S. Fischer: > >> Ah, I see. >> >> The problem is that you're basically trying to compensate for a >> congenital defect in your design: the network storage (I assume NFS) >> backend. NFS read requests are not cacheable by the kernel because >> another client may have altered the file since the last read took >> place. >> >> If your working set is as large as you say it is, eventually you will >> end up with a low cache hit ratio on your Varnish server(s) and >> you'll be back to square one again. >> >> The way to fix this problem in the long term is to split your file >> library into shards and put them on local storage.
>> >> Didn't we discuss this a couple of weeks ago? >> > > exactly :-) what can I say, I did analyze the logfiles, and learned that > despite the fact that a lot of the accesses are truly random, there is > still a good amount of the requests concentrated on a smaller set of the > images. of course, the set is changing over time, but that's what a > cache can handle perfectly. > > and my experiences seem to prove my theory: if varnish keeps running > like it is now for about 18 hours *knock on wood*, the cache hit rate > is close to 80 %! and that takes so much pressure from the backend that > the overall performance is just awesome. > > putting the files on local storage just doesn't scale well. I'm more > thinking about splitting the proxies as discussed on the list before: > a loadbalancer could distribute the URLs in a way that each cache holds > its own share of the objects. > By putting intermediate caches between the file storage and the client, you are essentially just spreading the storage locally between cache boxes, so if this method doesn't scale then you are still in need of a design change, and frankly so am I :) What you need to model is the popularity curve for your content. If your images do not fit with an 80/20 rule of popularity, i.e. 20% of your images soak up less than 80% of requests, then you will spend more time thrashing the caches than serving the content, and Michael is right: you would be better served to dedicate web servers with local storage and shard your images across them. If 80% of your content is rarely viewed, then using the same amount of hardware defined as caching accelerators, you will see an increase in throughput due to more hardware serving a smaller number of images. It all depends on your content and your users' viewing habits. --Dave From michael at dynamine.net Fri Apr 4 21:04:41 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Fri, 4 Apr 2008 14:04:41 -0700 Subject: cache empties itself?
In-Reply-To: References: <200804030833.21070.ottolski@web.de> <200804031958.49556.ottolski@web.de> <86db848d0804031104i75eb5517nc1d99bfca3b6769a@mail.gmail.com> <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> <6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com> <86db848d0804031946g72cf5500vb398dc01456661ad@mail.gmail.com> <86db848d0804040250s1652bb00i1e4c8d86b8760bc@mail.gmail.com> Message-ID: <86db848d0804041404h40cde4fbu58b1be47f784424e@mail.gmail.com> On Fri, Apr 4, 2008 at 11:05 AM, Ricardo Newbery wrote: > Again, "static" content isn't only the stuff that is served from > filesystems in the classic static web server scenario. There are plenty of > "dynamic" applications that process content from database -- applying skins > and compositing multiple elements into a single page while filtering every > element or otherwise applying special processing based on a user's access > privileges. An example of this is a dynamic content management system like > Plone or Drupal. In many cases, these "dynamic" responses are fairly > "static" for some period of time but there is still a definite performance > hit, especially under load. If that's truly the case, then your CMS should be caching the output locally. --Michael From ric at digitalmarbles.com Fri Apr 4 22:31:02 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Fri, 4 Apr 2008 15:31:02 -0700 Subject: cache empties itself? 
In-Reply-To: <86db848d0804041404h40cde4fbu58b1be47f784424e@mail.gmail.com> References: <200804030833.21070.ottolski@web.de> <200804031958.49556.ottolski@web.de> <86db848d0804031104i75eb5517nc1d99bfca3b6769a@mail.gmail.com> <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> <6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com> <86db848d0804031946g72cf5500vb398dc01456661ad@mail.gmail.com> <86db848d0804040250s1652bb00i1e4c8d86b8760bc@mail.gmail.com> <86db848d0804041404h40cde4fbu58b1be47f784424e@mail.gmail.com> Message-ID: On Apr 4, 2008, at 2:04 PM, Michael S. Fischer wrote: > On Fri, Apr 4, 2008 at 11:05 AM, Ricardo Newbery > wrote: > >> Again, "static" content isn't only the stuff that is served from >> filesystems in the classic static web server scenario. There are >> plenty of >> "dynamic" applications that process content from database -- >> applying skins >> and compositing multiple elements into a single page while >> filtering every >> element or otherwise applying special processing based on a user's >> access >> privileges. An example of this is a dynamic content management >> system like >> Plone or Drupal. In many cases, these "dynamic" responses are fairly >> "static" for some period of time but there is still a definite >> performance >> hit, especially under load. > > If that's truly the case, then your CMS should be caching the output > locally. Should be? Why? If you can provide this capability via a separate process like Varnish, then why "should" your CMS do this instead? Am I missing some moral dimension to this issue? ;-) In any case, both of these examples, Plone and Drupal, can indeed cache the output "locally" but that is still not as fast as placing a dedicated cache server in front. It's almost always faster to have a dedicated single-purpose process do something instead of cranking up the hefty machinery for requests that can be adequately served by the lighter process. 
Ric From des at linpro.no Sat Apr 5 12:26:27 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Sat, 05 Apr 2008 14:26:27 +0200 Subject: Child dies every 90 seconds In-Reply-To: (Simon Lyall's message of "Fri\, 4 Apr 2008 13\:20\:17 +1300 \(NZDT\)") References: Message-ID: <87fxu0cwss.fsf@des.linpro.no> Simon Lyall writes:

> sub vcl_timeout {
>     if ( obj.lastuse < 60s) {
>         fetch;
>     } else {
>         discard;
>     }
> }

This is not expected to work. Prefetching is only available in trunk. DES -- Dag-Erling Smørgrav Senior Software Developer Linpro AS - www.linpro.no From ottolski at web.de Sun Apr 6 13:09:46 2008 From: ottolski at web.de (Sascha Ottolski) Date: Sun, 6 Apr 2008 15:09:46 +0200 Subject: recommendation for swap space? Message-ID: <200804061509.46611.ottolski@web.de> Hi, now that my varnish processes start to reach the RAM size, I'm wondering what a dimension of swap would be wise? I currently have about 30 GB swap space for 32 GB RAM, but am wondering if it could even make sense to have no swap at all? My cache file is 517 GB in size. BTW, the trunk seems to run stable, for 2.5 days now! Cheers, Sascha From duja at torlen.net Sun Apr 6 20:55:49 2008 From: duja at torlen.net (Erik Torlen) Date: Sun, 06 Apr 2008 22:55:49 +0200 Subject: Directors user sessions In-Reply-To: <47ECDA29.1060709@dotimes.com> References: <47ECDA29.1060709@dotimes.com> Message-ID: <47F938D5.3060706@torlen.net> But the loadbalancing is already implemented. I don't see why it shouldn't be used as a loadbalancer if the functionality exists? I don't want to use one place for all sessions, like a file share or something in that direction. I'm thinking about creating a header called X-Backend: (a|b|c|d). This could be used to check which backend the user should use. It's really not a nice way to do it, but it's a way of doing it. Would it make sense to create a ticket for a feature like this?
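[Editor's note: Erik's X-Backend idea could be sketched in VCL along the following lines. This is only a sketch, not a tested configuration: it assumes a varnish build with support for multiple named backends (trunk-only at this point in the thread), and the backend addresses and cookie convention are made up for illustration.]

```vcl
/* Hypothetical sketch: multiple backends require a trunk build.
 * Addresses and the X-Backend cookie name are illustrative only. */
backend a { set backend.host = "10.0.0.1"; set backend.port = "80"; }
backend b { set backend.host = "10.0.0.2"; set backend.port = "80"; }

sub vcl_recv {
    /* Pin a session to the backend recorded in a cookie set at login. */
    if (req.http.Cookie ~ "X-Backend=b") {
        set req.backend = b;
    } else {
        set req.backend = a;
    }
}
```

A real sticky-session setup would also have to handle the case where the pinned backend is down, which is why dedicated load balancers (HAProxy, pound, IPVS) are usually suggested for this job.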
/ Erik Cherife Li wrote: > On 2008-3-28 19:05, duja at torlen.net wrote: >> Hi, >> >> I got a question regarding the Directors in varnish vcl. If user A is >> logging in to http://mywebsite.com and the website is using varnish >> (with directors) in front of 4 backend servers. The 4 backend servers >> is identical. >> >> User A is logging in and hits server 1. He then goes to his profile >> and hits server 2. The server 2 doesn't know that user A is logged >> and redirect him to some "Not logged in"-page. >> >> Is there any way for varnish to lookup which server that user A >> should be directed to? Some kind of Sticky Session function? >> > IMHO, Varnish is for caching, rather than for redirecting. > Maybe you could consider HAProxy, or pound, or IPVS, or > similar implementation. > Besides, I know that sessions can be shared. > >> / Erik >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at projects.linpro.no >> http://projects.linpro.no/mailman/listinfo/varnish-misc > > From duja at torlen.net Mon Apr 7 14:26:17 2008 From: duja at torlen.net (duja at torlen.net) Date: Mon, 7 Apr 2008 16:26:17 +0200 Subject: Management console Message-ID: Is it possible to use a password when connecting to the management console? Why are there weird line breaks when connecting from Windows telnet/PuTTY telnet to the varnish management console? Looks like this: discard vcl.list vcl.show param.show [-l] [] param. set url.purge hash.purge Thanks Erik From des at linpro.no Mon Apr 7 16:00:14 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Mon, 07 Apr 2008 18:00:14 +0200 Subject: recommendation for swap space?
In-Reply-To: <200804061509.46611.ottolski@web.de> (Sascha Ottolski's message of "Sun\, 6 Apr 2008 15\:09\:46 +0200") References: <200804061509.46611.ottolski@web.de> Message-ID: <877if9lkoh.fsf@des.linpro.no> Sascha Ottolski writes: > now that my varnish processes start to reach the RAM size, I'm wondering > what a dimension of swap would be wise? I currently have about 30 GB > swap space for 32 GB RAM, but am wondering if it could even make sense > to have no swap at all? My cache file is 517 GB in size. Varnish does not use swap. DES -- Dag-Erling Smørgrav Senior Software Developer Linpro AS - www.linpro.no From michael at dynamine.net Mon Apr 7 16:18:32 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Mon, 7 Apr 2008 09:18:32 -0700 Subject: recommendation for swap space? In-Reply-To: <877if9lkoh.fsf@des.linpro.no> References: <200804061509.46611.ottolski@web.de> <877if9lkoh.fsf@des.linpro.no> Message-ID: <86db848d0804070918l2fb85398ma0c4870a03d57128@mail.gmail.com> On Mon, Apr 7, 2008 at 9:00 AM, Dag-Erling Smørgrav wrote: > Sascha Ottolski writes: > > now that my varnish processes start to reach the RAM size, I'm wondering > > what a dimension of swap would be wise? I currently have about 30 GB > > swap space for 32 GB RAM, but am wondering if it could even make sense > > to have no swap at all? My cache file is 517 GB in size. > > Varnish does not use swap. That said, it wouldn't make sense to entirely deallocate your swap space, since the kernel may decide to page or swap out processes other than Varnish. --Michael From des at linpro.no Mon Apr 7 16:47:56 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Mon, 07 Apr 2008 18:47:56 +0200 Subject: Management console In-Reply-To: (duja@torlen.net's message of "Mon\, 7 Apr 2008 16\:26\:17 +0200") References: Message-ID: <873apxligz.fsf@des.linpro.no> writes: > Is it possible to use a password when connection the the management console? Not currently.
It wouldn't make much difference anyway, since the connection is unencrypted. I have plans to add support for binding the management interface to a Unix socket instead of a TCP socket, which will prevent sniffing and allow you to restrict access using regular file system permissions. > Why is it wierd line-breaks when connecting from windows telnet/putty > telnet to varnish management console? Because you didn't set it up to perform the required LF -> CR LF translation. DES -- Dag-Erling Smørgrav Senior Software Developer Linpro AS - www.linpro.no From alex at davz.net Mon Apr 7 19:58:27 2008 From: alex at davz.net (Alex Davies) Date: Mon, 7 Apr 2008 21:58:27 +0200 Subject: Multiple backends - Restarts - 1.1.2 Message-ID: <5fb622120804071258t4fc9efccgb4815243b09cedfc@mail.gmail.com> Hi, I've just successfully configured Varnish 1.1.2. I have two webservers with identical content. If both servers are working, I do not care if the traffic is sent to only one or to both. However, as and when one dies, I would like varnish to send traffic only to the working one! I noticed the "Using restarts to try multiple backends" page [1] in the Wiki, but it does not work for me. Even configuring more than one backend prevents varnish from starting. I've searched the mailing lists and there are references to this feature only being available in "trunk" versions. Does anyone have any information on when a stable-ish branch is likely to appear from this trunk? Many thanks, Alex [1] http://varnish.projects.linpro.no/wiki/VCLExampleRestarts From simon at darkmere.gen.nz Mon Apr 7 21:14:14 2008 From: simon at darkmere.gen.nz (Simon Lyall) Date: Tue, 8 Apr 2008 09:14:14 +1200 (NZST) Subject: recommendation for swap space? In-Reply-To: <86db848d0804070918l2fb85398ma0c4870a03d57128@mail.gmail.com> References: <200804061509.46611.ottolski@web.de> <877if9lkoh.fsf@des.linpro.no> <86db848d0804070918l2fb85398ma0c4870a03d57128@mail.gmail.com> Message-ID: On Mon, 7 Apr 2008, Michael S.
Fischer wrote: > That said, it wouldn't make sense to entirely deallocate your swap > space, since the kernel may decide to page or swap out processes other > than Varnish. and what is wrong with that? Surely your RAM is better being used by the main applications on the server ( Varnish ) rather than "sitting around and waiting" copies of sshd, cron and getty? -- Simon J. Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT. From michael at dynamine.net Mon Apr 7 21:44:20 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Mon, 7 Apr 2008 14:44:20 -0700 Subject: recommendation for swap space? In-Reply-To: References: <200804061509.46611.ottolski@web.de> <877if9lkoh.fsf@des.linpro.no> <86db848d0804070918l2fb85398ma0c4870a03d57128@mail.gmail.com> Message-ID: <86db848d0804071444l2f261edeod978f1d48b7f10b0@mail.gmail.com> On Mon, Apr 7, 2008 at 2:14 PM, Simon Lyall wrote: > On Mon, 7 Apr 2008, Michael S. Fischer wrote: > > That said, it wouldn't make sense to entirely deallocate your swap > > space, since the kernel may decide to page or swap out processes other > > than Varnish. > > and what is wrong with that? Surely your RAM is better being used by the > main applications on the server ( Varnish ) rather than "sitting around > and waiting" copies of sshd, cron and getty? Huh? Nothing I said contradicts that. --Michael From simon at darkmere.gen.nz Mon Apr 7 23:12:30 2008 From: simon at darkmere.gen.nz (Simon Lyall) Date: Tue, 8 Apr 2008 11:12:30 +1200 (NZST) Subject: recommendation for swap space? In-Reply-To: <86db848d0804071444l2f261edeod978f1d48b7f10b0@mail.gmail.com> References: <200804061509.46611.ottolski@web.de> <877if9lkoh.fsf@des.linpro.no> <86db848d0804070918l2fb85398ma0c4870a03d57128@mail.gmail.com> <86db848d0804071444l2f261edeod978f1d48b7f10b0@mail.gmail.com> Message-ID: On Mon, 7 Apr 2008, Michael S. 
Fischer wrote: > On Mon, Apr 7, 2008 at 2:14 PM, Simon Lyall wrote: > > On Mon, 7 Apr 2008, Michael S. Fischer wrote: > > > That said, it wouldn't make sense to entirely deallocate your swap > > > space, since the kernel may decide to page or swap out processes other > > > than Varnish. > > > > and what is wrong with that? Surely your RAM is better being used by the > > main applications on the server ( Varnish ) rather than "sitting around > > and waiting" copies of sshd, cron and getty? > > Huh? Nothing I said contradicts that. Sorry, I read "wouldn't" as "would" [1]. [1] Approx 7 times. -- Simon J. Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ "To stay awake all night adds a day to your life" - Stilgar | eMT. From mlp1 at ig.com.br Tue Apr 8 00:02:51 2008 From: mlp1 at ig.com.br (MARCELO LICASTRO PAGNI) Date: Mon, 7 Apr 2008 21:02:51 -0300 Subject: Varnish config/performance with Domino Webmail Message-ID: <86d7e60d0804071702n12557b96taf9ce8634923c531@mail.gmail.com> Hi everyone, I am setting up a new server that will sit in a DMZ to serve as a reverse proxy for our company's Lotus Domino webmail. Having heard about Varnish, I couldn't choose anything else. I've set it up with the default configuration, but its performance turned out to be very, very poor. I've tried some tweaks to the VCL config file, but it made no difference. Performance is two to three times worse than directly accessing the original server. The machine is an HP DL320 G5p server, dual-core Xeon 2.66 GHz, 2 GB RAM. I would appreciate some directions on what to do. Thank you, Marcelo L. ps.
below is my changed VCL config file:

backend default {
    set backend.host = "172.16.251.2";
    set backend.port = "80";
}

sub vcl_recv {
    if (req.request == "GET" && req.http.cookie) {
        lookup;
    }
}

sub vcl_fetch {
    if (obj.http.Set-Cookie) {
        insert;
    }
}

sub vcl_fetch {
    if (obj.ttl < 120s) {
        set obj.ttl = 120s;
    }
}

sub vcl_fetch {
    remove obj.http.Set-Cookie;
}

sub vcl_recv {
    if (req.request == "GET" && req.url ~ "\.(gif|jpg|swf|css|js).*") {
        lookup;
    }
}

-------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at dynamine.net Tue Apr 8 00:22:42 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Mon, 7 Apr 2008 17:22:42 -0700 Subject: cache empties itself? In-Reply-To: References: <200804030833.21070.ottolski@web.de> <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> <6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com> <86db848d0804031946g72cf5500vb398dc01456661ad@mail.gmail.com> <86db848d0804040250s1652bb00i1e4c8d86b8760bc@mail.gmail.com> <86db848d0804041404h40cde4fbu58b1be47f784424e@mail.gmail.com> Message-ID: <86db848d0804071722u5883884cy40ce9bfd02a95644@mail.gmail.com> On Fri, Apr 4, 2008 at 3:31 PM, Ricardo Newbery wrote: > > > Again, "static" content isn't only the stuff that is served from > > > filesystems in the classic static web server scenario. There are plenty > of > > > "dynamic" applications that process content from database -- applying > skins > > > and compositing multiple elements into a single page while filtering > every > > > element or otherwise applying special processing based on a user's > access > > > privileges. An example of this is a dynamic content management system > like > > > Plone or Drupal.
In many cases, these "dynamic" responses are fairly > > > "static" for some period of time but there is still a definite > performance > > > hit, especially under load > In any case, both of these examples, Plone and Drupal, can indeed cache the > output "locally" but that is still not as fast as placing a dedicated cache > server in front. It's almost always faster to have a dedicated > single-purpose process do something instead of cranking up the hefty > machinery for requests that can be adequately served by the lighter process. Sure, but this is also the sort of content that can be cached back upstream using ordinary HTTP headers. Still waiting for that compelling case that requires independent cache configuration, --Michael From ric at digitalmarbles.com Tue Apr 8 01:18:24 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Mon, 7 Apr 2008 18:18:24 -0700 Subject: cache empties itself? In-Reply-To: <86db848d0804071722u5883884cy40ce9bfd02a95644@mail.gmail.com> References: <200804030833.21070.ottolski@web.de> <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> <6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com> <86db848d0804031946g72cf5500vb398dc01456661ad@mail.gmail.com> <86db848d0804040250s1652bb00i1e4c8d86b8760bc@mail.gmail.com> <86db848d0804041404h40cde4fbu58b1be47f784424e@mail.gmail.com> <86db848d0804071722u5883884cy40ce9bfd02a95644@mail.gmail.com> Message-ID: <3D04D430-3771-42AB-A04E-0DEEE89129D9@digitalmarbles.com> On Apr 7, 2008, at 5:22 PM, Michael S. Fischer wrote: > On Fri, Apr 4, 2008 at 3:31 PM, Ricardo Newbery > wrote: > >>>> Again, "static" content isn't only the stuff that is served from >>>> filesystems in the classic static web server scenario. 
There are >>>> plenty >> of >>>> "dynamic" applications that process content from database -- >>>> applying >> skins >>>> and compositing multiple elements into a single page while >>>> filtering >> every >>>> element or otherwise applying special processing based on a user's >> access >>>> privileges. An example of this is a dynamic content management >>>> system >> like >>>> Plone or Drupal. In many cases, these "dynamic" responses are >>>> fairly >>>> "static" for some period of time but there is still a definite >> performance >>>> hit, especially under load > >> In any case, both of these examples, Plone and Drupal, can indeed >> cache the >> output "locally" but that is still not as fast as placing a >> dedicated cache >> server in front. It's almost always faster to have a dedicated >> single-purpose process do something instead of cranking up the hefty >> machinery for requests that can be adequately served by the lighter >> process. > > Sure, but this is also the sort of content that can be cached back > upstream using ordinary HTTP headers. No, it cannot. Again, the use case is dynamically-generated content that is subject to change at unpredictable intervals but which is otherwise fairly "static" for some length of time, and where serving stale content after a change is unacceptable. "Ordinary" HTTP headers just don't solve that use case without unnecessary loading of the backend. > Still waiting for that compelling case that requires independent cache > configuration, This is an odd response. I've already pointed out at least one common use case which can benefit from "independent" cache configuration. Is that not compelling enough? It might help if you can explain your criteria for what qualifies as "compelling". Ric From varnish-list at itiva.com Tue Apr 8 05:30:32 2008 From: varnish-list at itiva.com (DHF) Date: Mon, 07 Apr 2008 22:30:32 -0700 Subject: cache empties itself? 
In-Reply-To: <3D04D430-3771-42AB-A04E-0DEEE89129D9@digitalmarbles.com> References: <200804030833.21070.ottolski@web.de> <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> <6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com> <86db848d0804031946g72cf5500vb398dc01456661ad@mail.gmail.com> <86db848d0804040250s1652bb00i1e4c8d86b8760bc@mail.gmail.com> <86db848d0804041404h40cde4fbu58b1be47f784424e@mail.gmail.com> <86db848d0804071722u5883884cy40ce9bfd02a95644@mail.gmail.com> <3D04D430-3771-42AB-A04E-0DEEE89129D9@digitalmarbles.com> Message-ID: <47FB02F8.9070608@itiva.com> Ricardo Newbery wrote: > On Apr 7, 2008, at 5:22 PM, Michael S. Fischer wrote: > > >> Sure, but this is also the sort of content that can be cached back >> upstream using ordinary HTTP headers. >> > > > No, it cannot. Again, the use case is dynamically-generated content > that is subject to change at unpredictable intervals but which is > otherwise fairly "static" for some length of time, and where serving > stale content after a change is unacceptable. "Ordinary" HTTP headers > just don't solve that use case without unnecessary loading of the > backend. > Isn't this what If-Modified-Since requests are for? A 304 Not Modified is a pretty small request/response, though I can understand the tendency to want to push it out to the frontend caches. I would think the management overhead of maintaining two separate expirations wouldn't be worth the extra hassle just to save yourself some IMS requests to a backend. Unless of course varnish doesn't support IMS requests in a usable way; I haven't actually tested it myself. --Dave From ric at digitalmarbles.com Tue Apr 8 07:09:18 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Tue, 8 Apr 2008 00:09:18 -0700 Subject: cache empties itself?
In-Reply-To: <47FB02F8.9070608@itiva.com> References: <200804030833.21070.ottolski@web.de> <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> <6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com> <86db848d0804031946g72cf5500vb398dc01456661ad@mail.gmail.com> <86db848d0804040250s1652bb00i1e4c8d86b8760bc@mail.gmail.com> <86db848d0804041404h40cde4fbu58b1be47f784424e@mail.gmail.com> <86db848d0804071722u5883884cy40ce9bfd02a95644@mail.gmail.com> <3D04D430-3771-42AB-A04E-0DEEE89129D9@digitalmarbles.com> <47FB02F8.9070608@itiva.com> Message-ID: <5B21257F-52EA-42E9-B365-530499D9447B@digitalmarbles.com> On Apr 7, 2008, at 10:30 PM, DHF wrote: > Ricardo Newbery wrote: >> On Apr 7, 2008, at 5:22 PM, Michael S. Fischer wrote: >> >> >>> Sure, but this is also the sort of content that can be cached back >>> upstream using ordinary HTTP headers. >>> >> >> >> No, it cannot. Again, the use case is dynamically-generated >> content that is subject to change at unpredictable intervals but >> which is otherwise fairly "static" for some length of time, and >> where serving stale content after a change is unacceptable. >> "Ordinary" HTTP headers just don't solve that use case without >> unnecessary loading of the backend. >> > Isn't this what if-modified-since requests are for? 304 not > modified is a pretty small request/response, though I can understand > the tendency to want to push it out to the frontend caches. I would > think the management overhead of maintaining two seperate > expirations wouldn't be worth the extra hassle just to save yourself > some ims requests to a backend. Unless of course varnish doesn't > support ims requests in a usable way, I haven't actually tested it > myself. Unless things have changed recently, Varnish support for IMS is mixed. Varnish supports IMS for cache hits but not for cache misses unless you tweak the VCL to pass them in vcl_miss. Varnish will not generate an IMS to revalidate its own cache.
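[Editor's note: the vcl_miss tweak Ricardo refers to can be sketched roughly as follows. This is an illustrative sketch rather than a tested configuration; exact behavior depends on the varnish version in use.]

```vcl
/* Sketch: let conditional requests through to the backend on a miss,
 * so the client can receive a 304 Not Modified instead of varnish
 * fetching and caching a full copy first. */
sub vcl_miss {
    if (req.http.If-Modified-Since) {
        pass;
    }
}
```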
Also it is not necessarily true that generating a 304 response is always light impact. I'm not sure about the Drupal case, but at least for Plone there can be a significant performance hit even when just calculating the Last-Modified date. The hit is usually lighter than that required for generating the full response but for high-traffic sites, it's still a significant consideration. But the most significant issue is that IMS doesn't help in the slightest to lighten the load of *new* requests to your backend. IMS requests are only helpful if you already have the content in your own browser cache -- or in an intermediate proxy cache server (for proxies that support IMS to revalidate their own cache). Regarding the potential management overhead... this is not relevant to the question of whether this strategy would increase your site's performance. Management overhead is a separate question, and not an easy one to answer in the general case. The overhead might be a problem for some. But I know in my own case, the overhead required to manage this sort of thing is actually pretty trivial. Ric From fragfutter at gmail.com Tue Apr 8 08:18:06 2008 From: fragfutter at gmail.com (C. Handel) Date: Tue, 8 Apr 2008 10:18:06 +0200 Subject: recommendation for swap space? In-Reply-To: <86db848d0804070918l2fb85398ma0c4870a03d57128@mail.gmail.com> References: <200804061509.46611.ottolski@web.de> <877if9lkoh.fsf@des.linpro.no> <86db848d0804070918l2fb85398ma0c4870a03d57128@mail.gmail.com> Message-ID: <3d62bd5f0804080118r148d3366k553f4ddaff80f69d@mail.gmail.com> On Mon, Apr 7, 2008 at 6:18 PM, Michael S. Fischer wrote: > > > now that my varnish processes start to reach the RAM size, I'm wondering > > > what a dimension of swap would be wise? I currently have about 30 GB > > > swap space for 32 GB RAM, but am wondering if it could even make sense > > > to have no swap at all? My cache file is 517 GB in size. > > > > Varnish does not use swap. 
> > That said, it wouldn't make sense to entirely deallocate your swap > space, since the kernel may decide to page or swap out processes other > than Varnish. You also need swap if a huge process tries to fork. When a huge process forks a child, the child process is initially a copy of the parent. It is copy-on-write memory (so it doesn't really use memory), and in most cases the child will release all memory and do something else, but the virtual memory needs to be big enough during the fork. Greetings Christoph From duja at torlen.net Tue Apr 8 09:17:07 2008 From: duja at torlen.net (duja at torlen.net) Date: Tue, 8 Apr 2008 11:17:07 +0200 Subject: Management console Message-ID: Nice, the CR LF did the thing, spank you ;) From duja at torlen.net Tue Apr 8 09:22:47 2008 From: duja at torlen.net (duja at torlen.net) Date: Tue, 8 Apr 2008 11:22:47 +0200 Subject: Cookies in VCL Message-ID: <2oikjahtt5wixgp.080420081122@torlen.net> A question about cookies in VCL. Is there a way of handling cookies in VCL? Like: if(req.http.Cookies[userid] == "1234") or set req.http.Cookies[language] = "sv" Thanks Erik From phk at phk.freebsd.dk Tue Apr 8 09:38:39 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Apr 2008 09:38:39 +0000 Subject: Cookies in VCL In-Reply-To: Your message of "Tue, 08 Apr 2008 11:22:47 +0200." <2oikjahtt5wixgp.080420081122@torlen.net> Message-ID: <15706.1207647519@critter.freebsd.dk> In message <2oikjahtt5wixgp.080420081122 at torlen.net>, duja at torlen.net writes: >A question about cookies in VCL. > >Is there a way of handling cookies in VCL? Not yet, but it's on our list.
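[Editor's note: until such support lands, an individual cookie can be approximated by matching a regular expression against the raw Cookie header. A sketch only; the cookie name and value are illustrative, and the pattern may need tuning for edge cases in cookie formatting.]

```vcl
sub vcl_recv {
    /* Approximate req.http.Cookies[userid] == "1234" with a regex
     * on the raw header; matches at the start, middle or end of it. */
    if (req.http.Cookie ~ "(^|; )userid=1234(;|$)") {
        pass;
    }
}
```

Rewriting a cookie (the set case) has no clean equivalent yet; it would likely require regsub() on req.http.Cookie.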
> >Like: >if(req.http.Cookies[userid] == "1234") > >or > >set req.http.Cookies[language] = "sv" > >Thanks >Erik > >_______________________________________________ >varnish-misc mailing list >varnish-misc at projects.linpro.no >http://projects.linpro.no/mailman/listinfo/varnish-misc > -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ottolski at web.de Tue Apr 8 08:51:13 2008 From: ottolski at web.de (Sascha Ottolski) Date: Tue, 8 Apr 2008 10:51:13 +0200 Subject: recommendation for swap space? In-Reply-To: <877if9lkoh.fsf@des.linpro.no> References: <200804061509.46611.ottolski@web.de> <877if9lkoh.fsf@des.linpro.no> Message-ID: <200804081051.13994.ottolski@web.de> On Monday 07 April 2008 18:00:14, Dag-Erling Smørgrav wrote: > Sascha Ottolski writes: > > now that my varnish processes start to reach the RAM size, I'm > > wondering what a dimension of swap would be wise? I currently have > > about 30 GB swap space for 32 GB RAM, but am wondering if it could > > even make sense to have no swap at all? My cache file is 517 GB in > > size. > > Varnish does not use swap. > > DES hmm, then I'm wondering why my machines do swap quite a bit. It's an almost naked Linux, the only processes really doing any work are varnishd and varnishlog. I have 32 GB of RAM, 30 GB of swap, and 517 GB of cache file. According to "top", varnishd has a resident size of 25 GB, and almost 1.5 GB of swap is in use. kswapd often shows up in "top".
# free
             total       used       free     shared    buffers     cached
Mem:      32969244   32874908      94336          0     108648   29129752
-/+ buffers/cache:    3636508   29332736
Swap:     29045480    1473200   27572280

it's not worrying me, performance is brilliant, I'm just curious :-) Thanks, Sascha From malevo at gmail.com Tue Apr 8 10:18:47 2008 From: malevo at gmail.com (=?ISO-8859-1?Q?Pablo_Garc=EDa?=) Date: Tue, 8 Apr 2008 07:18:47 -0300 Subject: recommendation for swap space? In-Reply-To: <200804081051.13994.ottolski@web.de> References: <200804061509.46611.ottolski@web.de> <877if9lkoh.fsf@des.linpro.no> <200804081051.13994.ottolski@web.de> Message-ID: <4ec3c3f70804080318j4e7c1b55s378a54038409da4b@mail.gmail.com> Sascha, try modifying /proc/sys/vm/swappiness; it's at 60 by default. I reduced it to 20 or even 0 on my Oracle cluster, to prevent important processes from being swapped. Regards, Pablo On Tue, Apr 8, 2008 at 5:51 AM, Sascha Ottolski wrote: > On Monday 07 April 2008 18:00:14, Dag-Erling Smørgrav wrote: > > > Sascha Ottolski writes: > > > now that my varnish processes start to reach the RAM size, I'm > > > wondering what a dimension of swap would be wise? I currently have > > > about 30 GB swap space for 32 GB RAM, but am wondering if it could > > > even make sense to have no swap at all? My cache file is 517 GB in > > > size. > > > > Varnish does not use swap. > > > > DES > > hmm, then I'm wondering why my machines do swap quite a bit. It's a > almost naked linux, the only processes really doing some work are > varnishd and varnishlog. > > I have 32 GB of RAM, 30 GB of swap, and 517 GB of cache file. according > to "top", varnishd has a resident size of 25 GB, and almost 1,5 GB of > swap is in use. kswapd often shows up in "top".
> > > # free > total used free shared buffers cached > Mem: 32969244 32874908 94336 0 108648 29129752 > -/+ buffers/cache: 3636508 29332736 > Swap: 29045480 1473200 27572280 > > > it's not worrying me, performance is brilliant, I'm just curious :-) > > > Thanks, Sascha > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > From duja at torlen.net Tue Apr 8 13:50:36 2008 From: duja at torlen.net (duja at torlen.net) Date: Tue, 8 Apr 2008 15:50:36 +0200 Subject: Varnishtop hangs (again) Message-ID: When I run varnishtop with "varnishtop -i BackendReuse" it hangs and I cant do anything to return from the program. CTRL-C gives me nothing :( It hangs exactly when the first request hits the server. Here is the strace on varnishtop when the request hits: rt_sigaction(SIGTSTP, {SIG_IGN}, {0xb7ed61a0, [], SA_RESTART}, 8) = 0 poll([{fd=0, events=POLLIN}], 1, 0) = 0 poll([{fd=0, events=POLLIN}], 1, 0) = 0 rt_sigaction(SIGTSTP, {0xb7ed61a0, [], SA_RESTART}, NULL, 8) = 0 gettimeofday({1207662136, 378806}, NULL) = 0 poll([{fd=0, events=POLLIN}], 1, 1000) = 0 gettimeofday({1207662137, 379658}, NULL) = 0 time(NULL) = 1207662137 rt_sigaction(SIGTSTP, {SIG_IGN}, {0xb7ed61a0, [], SA_RESTART}, 8) = 0 poll([{fd=0, events=POLLIN}], 1, 0) = 0 poll([{fd=0, events=POLLIN}], 1, 0) = 0 rt_sigaction(SIGTSTP, {0xb7ed61a0, [], SA_RESTART}, NULL, 8) = 0 gettimeofday({1207662137, 379837}, NULL) = 0 poll([{fd=0, events=POLLIN}], 1, 1000) = 0 gettimeofday({1207662138, 380725}, NULL) = 0 time(NULL) = 1207662138 rt_sigaction(SIGTSTP, {SIG_IGN}, {0xb7ed61a0, [], SA_RESTART}, 8) = 0 poll([{fd=0, events=POLLIN}], 1, 0) = 0 poll([{fd=0, events=POLLIN}], 1, 0) = 0 rt_sigaction(SIGTSTP, {0xb7ed61a0, [], SA_RESTART}, NULL, 8) = 0 gettimeofday({1207662138, 380904}, NULL) = 0 poll([{fd=0, events=POLLIN}], 1, 1000) = 0 gettimeofday({1207662139, 384322}, NULL) = 0 futex(0x804af0c, 
FUTEX_WAIT, 2, NULL Varnishstat: client_conn 19 0.02 Client connections accepted client_req 123 0.14 Client requests received cache_hit 0 0.00 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 100 0.12 Cache misses backend_conn 123 0.14 Backend connections success backend_fail 0 0.00 Backend connections failures backend_reuse 115 0.13 Backend connections reuses backend_recycle 123 0.14 Backend connections recycles backend_unused 0 0.00 Backend connections unused # uname -a Linux varnish06 2.6.18-6-686 #1 SMP Sun Feb 10 22:11:31 UTC 2008 i686 GNU/Linux Debian Etch I have reported this before and I know there is a ticket for the problem. http://varnish.projects.linpro.no/ticket/217 Hope you'll get some where with the info. // Erik From duja at torlen.net Tue Apr 8 14:15:33 2008 From: duja at torlen.net (duja at torlen.net) Date: Tue, 8 Apr 2008 16:15:33 +0200 Subject: No subject Message-ID: <0xj0i7k8gg801e8.080420081615@torlen.net> Im trying to figure out some ways to extend the response headers with some info of the request. What I want for now is if it was a hit or miss and which backend it used. I cant figure out how to know which backend it used. The only way i know of is if the backend would deliver a header with host name or something similar. Is there any way to do this in VCL? I thought I could do like this to see if it was a miss or not but it didnt work. Im not even sure if the Age-header is always 0 on misses or if it could be 0 on hits too? sub vcl_deliver { if(resp.http.Age > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } / Erik From phk at phk.freebsd.dk Tue Apr 8 14:29:59 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 08 Apr 2008 14:29:59 +0000 Subject: No subject In-Reply-To: Your message of "Tue, 08 Apr 2008 16:15:33 +0200." 
<0xj0i7k8gg801e8.080420081615@torlen.net>
Message-ID: <37642.1207664999@critter.freebsd.dk>

In message <0xj0i7k8gg801e8.080420081615 at torlen.net>, duja at torlen.net writes:

>Im trying to figure out some ways to extend the response headers
>with some info of the request. What I want for now is if it was a
>hit or miss and which backend it used.

Hit/Miss status is already in the X-Varnish header, if it has two
numbers it is a hit.

>I cant figure out how to know which backend it used. The only way
>i know of is if the backend would deliver a header with host name
>or something similar. Is there any way to do this in VCL?

You can set your own header in vcl_recv along with the backend.
Then in vcl_fetch, copy that header from req.foobar to obj.foobar
and you should be all set.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From martin.abt at vertical.ch Tue Apr 8 14:51:15 2008
From: martin.abt at vertical.ch (Martin Abt)
Date: Tue, 8 Apr 2008 16:51:15 +0200
Subject: caching directories
Message-ID: 

Hi,

i am new to varnish and i am wondering, if it is possible to exclude
everything in a directory (including subdirectories) from caching.

It works with single files, like:

if (req.url ~ "/test/README.txt") {
    pass;
}

How can I do this with directories?

Best wishes,
martin

From phk at phk.freebsd.dk Tue Apr 8 14:57:46 2008
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Tue, 08 Apr 2008 14:57:46 +0000
Subject: caching directories
In-Reply-To: Your message of "Tue, 08 Apr 2008 16:51:15 +0200."
Message-ID: <37760.1207666666@critter.freebsd.dk>

In message , "Martin Abt" writes:

>Hi,
>
>i am new to varnish and i am wondering, if it is possible to exclude
>everything in a directory (including subdirectories) from caching.
>
>It works with single files, like:
>
>if (req.url ~ "/test/README.txt") {
>    pass;
>}
>

if (req.url ~ "^/test/") {
    pass;
}

?

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From martin.abt at vertical.ch Tue Apr 8 15:15:02 2008
From: martin.abt at vertical.ch (Martin Abt)
Date: Tue, 8 Apr 2008 17:15:02 +0200
Subject: AW: caching directories
References: <37760.1207666666@critter.freebsd.dk>
Message-ID: 

>>Hi,
>>
>>i am new to varnish and i am wondering, if it is possible to exclude
>>everything in a directory (including subdirectories) from caching.
>>
>>It works with single files, like:
>>
>>if (req.url ~ "/test/README.txt") {
>>    pass;
>>}
>>
>
> if (req.url ~ "^/test/") {
>     pass;
> }
>
>?

Thanks, it works. So i probably should get in to learning regular
expressions.

Best wishes,
martin

From varnish-list at itiva.com Tue Apr 8 15:26:26 2008
From: varnish-list at itiva.com (DHF)
Date: Tue, 08 Apr 2008 08:26:26 -0700
Subject: cache empties itself?
In-Reply-To: <5B21257F-52EA-42E9-B365-530499D9447B@digitalmarbles.com>
References: <200804030833.21070.ottolski@web.de>
	<86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com>
	<6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com>
	<86db848d0804031946g72cf5500vb398dc01456661ad@mail.gmail.com>
	<86db848d0804040250s1652bb00i1e4c8d86b8760bc@mail.gmail.com>
	<86db848d0804041404h40cde4fbu58b1be47f784424e@mail.gmail.com>
	<86db848d0804071722u5883884cy40ce9bfd02a95644@mail.gmail.com>
	<3D04D430-3771-42AB-A04E-0DEEE89129D9@digitalmarbles.com>
	<47FB02F8.9070608@itiva.com>
	<5B21257F-52EA-42E9-B365-530499D9447B@digitalmarbles.com>
Message-ID: <47FB8EA2.1070407@itiva.com>

Ricardo Newbery wrote:
>
> On Apr 7, 2008, at 10:30 PM, DHF wrote:
>
>> Ricardo Newbery wrote:
>>> On Apr 7, 2008, at 5:22 PM, Michael S.
Fischer wrote: >>> >>> >>>> Sure, but this is also the sort of content that can be cached back >>>> upstream using ordinary HTTP headers. >>>> >>> >>> >>> No, it cannot. Again, the use case is dynamically-generated >>> content that is subject to change at unpredictable intervals but >>> which is otherwise fairly "static" for some length of time, and >>> where serving stale content after a change is unacceptable. >>> "Ordinary" HTTP headers just don't solve that use case without >>> unnecessary loading of the backend. >>> >> Isn't this what if-modified-since requests are for? 304 not modified >> is a pretty small request/response, though I can understand the >> tendency to want to push it out to the frontend caches. I would >> think the management overhead of maintaining two seperate expirations >> wouldn't be worth the extra hassle just to save yourself some ims >> requests to a backend. Unless of course varnish doesn't support ims >> requests in a usable way, I haven't actually tested it myself. > > > Unless things have changed recently, Varnish support for IMS is > mixed. Varnish supports IMS for cache hits but not for cache misses > unless you tweak the vcl to pass them in vcl_miss. Varnish will not > generate an IMS to revalidate it's own cache. Good to know. > > Also it is not necessarily true that generating a 304 response is > always light impact. I'm not sure about the Drupal case, but at least > for Plone there can be a significant performance hit even when just > calculating the Last-Modified date. The hit is usually lighter than > that required for generating the full response but for high-traffic > sites, it's still a significant consideration. > > But the most significant issue is that IMS doesn't help in the > slightest to lighten the load of *new* requests to your backend. 
IMS > requests are only helpful if you already have the content in your own > browser cache -- or in an intermediate proxy cache server (for proxies > that support IMS to revalidate their own cache). The intermediate proxy was the case I was thinking about, but you are correct, if there is no intermediate proxy and varnish frontends don't revalidate with ims requests then the whole plan is screwed. > Regarding the potential management overhead... this is not relevant to > the question of whether this strategy would increase your site's > performance. Management overhead is a separate question, and not an > easy one to answer in the general case. The overhead might be a > problem for some. But I know in my own case, the overhead required to > manage this sort of thing is actually pretty trivial. How do you manage the split ttl's? Do you send a purge after a page has changed or have you crafted another way to force a revalidation of cached objects? --Dave > > Ric > > > > From ric at digitalmarbles.com Tue Apr 8 18:15:49 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Tue, 8 Apr 2008 11:15:49 -0700 Subject: cache empties itself? In-Reply-To: <47FB8EA2.1070407@itiva.com> References: <200804030833.21070.ottolski@web.de> <86db848d0804031245p463778d2na01c2cfc164fc81b@mail.gmail.com> <6945ECE3-14F5-45B0-868A-DEB4816D4A5D@digitalmarbles.com> <86db848d0804031946g72cf5500vb398dc01456661ad@mail.gmail.com> <86db848d0804040250s1652bb00i1e4c8d86b8760bc@mail.gmail.com> <86db848d0804041404h40cde4fbu58b1be47f784424e@mail.gmail.com> <86db848d0804071722u5883884cy40ce9bfd02a95644@mail.gmail.com> <3D04D430-3771-42AB-A04E-0DEEE89129D9@digitalmarbles.com> <47FB02F8.9070608@itiva.com> <5B21257F-52EA-42E9-B365-530499D9447B@digitalmarbles.com> <47FB8EA2.1070407@itiva.com> Message-ID: <04A6DD37-07BB-4D38-9B31-30E052AAA805@digitalmarbles.com> On Apr 8, 2008, at 8:26 AM, DHF wrote: > Ricardo Newbery wrote: >> Regarding the potential management overhead... 
this is not relevant >> to the question of whether this strategy would increase your site's >> performance. Management overhead is a separate question, and not >> an easy one to answer in the general case. The overhead might be a >> problem for some. But I know in my own case, the overhead required >> to manage this sort of thing is actually pretty trivial. > How do you manage the split ttl's? Do you send a purge after a page > has changed or have you crafted another way to force a revalidation > of cached objects? Yes, a purge is sent after the page has changed. For Plone, all of this is easy to automate with the CacheFu add-on. Although support for adding a Surrogate-Control header (or whatever you use to communicate the local ttl) requires some minor customization (about 5 lines of code). Ric From jsd at cluttered.com Mon Apr 7 22:18:22 2008 From: jsd at cluttered.com (Jon Drukman) Date: Mon, 07 Apr 2008 15:18:22 -0700 Subject: Two New HTTP Caching Extensions In-Reply-To: <3503.1200041404@critter.freebsd.dk> References: <4232A186-6F90-4713-8723-0925E4BD286A@emerose.com> <3503.1200041404@critter.freebsd.dk> Message-ID: Poul-Henning Kamp wrote: > In message <4232A186-6F90-4713-8723-0925E4BD286A at emerose.com>, Sam Quigley writ > es: >> ...just thought I'd point out another seemingly-nifty thing the Squid >> folks are working on: >> >> http://www.mnot.net/cache_channels/ >> and >> http://www.mnot.net/blog/2008/01/04/cache_channels > > Interesting to see what hoops they try to jump through these days... > I just got through working at Yahoo and they have valid reasons to want all these behaviors. The thing I didn't like about the cache channel implementation is it involves squid polling an RSS feed every few seconds to determine which bits of the cache to invalidate. I'm looking at launching a small site for a client and the stale-while-revalidate/stale-on-error functionality is fairly critical. I want to go with varnish, though. 
Front end cache server in India, pulling content from the USA... lots of potential for slow/dead connections back to the origin, so it would be great if Varnish would serve stale content in this eventuality. -jsd- From ric at digitalmarbles.com Tue Apr 8 23:18:46 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Tue, 8 Apr 2008 16:18:46 -0700 Subject: Two New HTTP Caching Extensions In-Reply-To: References: <4232A186-6F90-4713-8723-0925E4BD286A@emerose.com> <3503.1200041404@critter.freebsd.dk> Message-ID: On Apr 7, 2008, at 3:18 PM, Jon Drukman wrote: > Poul-Henning Kamp wrote: >> In message <4232A186-6F90-4713-8723-0925E4BD286A at emerose.com>, Sam >> Quigley writ >> es: >>> ...just thought I'd point out another seemingly-nifty thing the >>> Squid >>> folks are working on: >>> >>> http://www.mnot.net/cache_channels/ >>> and >>> http://www.mnot.net/blog/2008/01/04/cache_channels >> >> Interesting to see what hoops they try to jump through these days... >> > > I just got through working at Yahoo and they have valid reasons to > want > all these behaviors. The thing I didn't like about the cache channel > implementation is it involves squid polling an RSS feed every few > seconds to determine which bits of the cache to invalidate. > > I'm looking at launching a small site for a client and the > stale-while-revalidate/stale-on-error functionality is fairly > critical. > I want to go with varnish, though. Front end cache server in India, > pulling content from the USA... lots of potential for slow/dead > connections back to the origin, so it would be great if Varnish would > serve stale content in this eventuality. > > -jsd- +1 on stale-while-revalidate. I found this one to be real handy. Ric From michael at dynamine.net Tue Apr 8 23:25:02 2008 From: michael at dynamine.net (Michael S. 
Fischer) Date: Tue, 8 Apr 2008 16:25:02 -0700 Subject: Two New HTTP Caching Extensions In-Reply-To: References: <4232A186-6F90-4713-8723-0925E4BD286A@emerose.com> <3503.1200041404@critter.freebsd.dk> Message-ID: <86db848d0804081625q79e5ce2bpd5e1411fa3b3c835@mail.gmail.com> On Tue, Apr 8, 2008 at 4:18 PM, Ricardo Newbery wrote: > +1 on stale-while-revalidate. I found this one to be real handy. Another +1 --Michael From michael at dynamine.net Tue Apr 8 23:26:53 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Tue, 8 Apr 2008 16:26:53 -0700 Subject: Two New HTTP Caching Extensions In-Reply-To: <86db848d0804081625q79e5ce2bpd5e1411fa3b3c835@mail.gmail.com> References: <4232A186-6F90-4713-8723-0925E4BD286A@emerose.com> <3503.1200041404@critter.freebsd.dk> <86db848d0804081625q79e5ce2bpd5e1411fa3b3c835@mail.gmail.com> Message-ID: <86db848d0804081626k60b29c78n90cebfcb595f196c@mail.gmail.com> On Tue, Apr 8, 2008 at 4:25 PM, Michael S. Fischer wrote: > On Tue, Apr 8, 2008 at 4:18 PM, Ricardo Newbery wrote: > > +1 on stale-while-revalidate. I found this one to be real handy. > > Another +1 I should add a qualifier to my vote, that stale-while-revalidate generally is used to mask suboptimal backend performance and so I discourage it in favor of fixing the backend. --Michael From ric at digitalmarbles.com Tue Apr 8 23:34:00 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Tue, 8 Apr 2008 16:34:00 -0700 Subject: Two New HTTP Caching Extensions In-Reply-To: <86db848d0804081626k60b29c78n90cebfcb595f196c@mail.gmail.com> References: <4232A186-6F90-4713-8723-0925E4BD286A@emerose.com> <3503.1200041404@critter.freebsd.dk> <86db848d0804081625q79e5ce2bpd5e1411fa3b3c835@mail.gmail.com> <86db848d0804081626k60b29c78n90cebfcb595f196c@mail.gmail.com> Message-ID: <0B5B625A-5797-4C47-8B57-F0FB92756FD4@digitalmarbles.com> On Apr 8, 2008, at 4:26 PM, Michael S. Fischer wrote: > On Tue, Apr 8, 2008 at 4:25 PM, Michael S. 
Fischer
> wrote:
>> On Tue, Apr 8, 2008 at 4:18 PM, Ricardo Newbery
>> wrote:
>>> +1 on stale-while-revalidate. I found this one to be real handy.
>>
>> Another +1
>
> I should add a qualifier to my vote, that stale-while-revalidate
> generally is used to mask suboptimal backend performance and so I
> discourage it in favor of fixing the backend.
>
> --Michael

Of course the main premise of a reverse-proxy cache is to mask
suboptimal backend performance. :-)

Ric

From duja at torlen.net Wed Apr 9 07:42:31 2008
From: duja at torlen.net (duja at torlen.net)
Date: Wed, 9 Apr 2008 09:42:31 +0200
Subject: No subject
Message-ID: <38dgzq5c4ruspgc.090420080942@torlen.net>

>Hit/Miss status is already in the X-Varnish header, if it has two
>numbers it is a hit.

What does the numbers stand for?

>You can set your own header in vcl_recv along with the backend.
>Then in vcl_fetch, copy that header from req.foobar to obj.foobar
>and you should be all set.

I can do like this:

vcl_recv
    set req.backend = mwc;
    set req.http.X-Backend = "Director";

But not like this:

vcl_recv
    set req.backend = mwc;
    set req.http.X-Backend = req.backend;

I receive:

"Starting Varnish: varnishString representation of 'req.backend' not
implemented yet.
(/usr/local/etc/default.vcl Line 32 Pos 42)
set req.http.X-Backend = req.backend;
-----------------------------------------###########-
VCL compilation failed"

What I want know is which backend was used in the director. Is that
possible?

Here are my backends and the director:

backend windows {
    .host = "10.1.1.124";
    .port = "80";
}

director mwc random {
    {
        .backend = windows;
        .weight = 2;
    }
    {
        .backend = {
            .host = "10.1.1.125";
            .port = "80";
        }
        .weight = 8;
    }
}

// Erik

From phk at phk.freebsd.dk Wed Apr 9 09:09:36 2008
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 09 Apr 2008 09:09:36 +0000
Subject: No subject
In-Reply-To: Your message of "Wed, 09 Apr 2008 09:42:31 +0200."
<38dgzq5c4ruspgc.090420080942@torlen.net>
Message-ID: <43841.1207732176@critter.freebsd.dk>

In message <38dgzq5c4ruspgc.090420080942 at torlen.net>, duja at torlen.net writes:

>>Hit/Miss status is already in the X-Varnish header, if it has two
>>numbers it is a hit.
>
>What does the numbers stand for?

They are varnish transaction numbers. The first is the current
request's XID, the second is the XID of the request that created the
cached object.

>What I want know is which backend was used in the director. Is that possible?

Hmm, not presently I'm afraid.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From ssm at linpro.no Wed Apr 9 09:50:05 2008
From: ssm at linpro.no (Stig Sandbeck Mathisen)
Date: Wed, 09 Apr 2008 11:50:05 +0200
Subject: No subject
In-Reply-To: <43841.1207732176@critter.freebsd.dk> (Poul-Henning Kamp's
	message of "Wed, 09 Apr 2008 09:09:36 +0000")
References: <43841.1207732176@critter.freebsd.dk>
Message-ID: <7x8wznuzle.fsf@iostat.linpro.no>

On Wed, 09 Apr 2008 09:09:36 +0000, "Poul-Henning Kamp" said:

> Hmm, not presently I'm afraid.

If possible, you could add this header on the backends.

-- 
Stig Sandbeck Mathisen, Linpro

From duja at torlen.net Wed Apr 9 13:42:19 2008
From: duja at torlen.net (duja at torlen.net)
Date: Wed, 9 Apr 2008 15:42:19 +0200
Subject: No subject
Message-ID: 

>> Hmm, not presently I'm afraid.

>If possible, you could add this header on the backends.

Yepp, I will have to do something like that.

From mlp1 at ig.com.br Wed Apr 9 20:43:51 2008
From: mlp1 at ig.com.br (Marcelo L.
) Date: Wed, 9 Apr 2008 20:43:51 +0000 (UTC)
Subject: Varnish config/performance with Domino Webmail
References: <86d7e60d0804071702n12557b96taf9ce8634923c531@mail.gmail.com>
Message-ID: 

MARCELO LICASTRO PAGNI writes:
> Hi everyone,
>
> I am setting up a new server that will sit in a DMZ to serve as a
> reverse proxy for our company's Lotus Domino webmail. Having heard
> about Varnish, my choice couldn't be something else.
>
> I've set it up with the default configuration, but its performance
> showed to be very, very poor. I've tried some tweaks to the VCL config
> file, but it did not change. Performance is twice, three times worse
> than directly accessing the original server.
>
> Machine is an HP server DL320 G5p, Xeon dual-core 2,66GHz, 2GB RAM.
>
> I would appreciate some directions on what to do.
>
> Thank you,
> Marcelo L.
>
> ps. below is my changed VCL config file:
>
> backend default {
>     set backend.host = "172.16.251.2";
>     set backend.port = "80";
> }
>
> sub vcl_recv {
>     if (req.request == "GET" && req.http.cookie) {
>         lookup;
>     }
> }
>
> sub vcl_fetch {
>     if (obj.http.Set-Cookie) {
>         insert;
>     }
> }
>
> sub vcl_fetch {
>     if (obj.ttl < 120s) {
>         set obj.ttl = 120s;
>     }
> }
>
> sub vcl_fetch {
>     remove obj.http.Set-Cookie;
> }
>
> sub vcl_recv {
>     if (req.request == "GET" && req.url ~ "\.(gif|jpg|swf|css|js).*") {
>         lookup;
>     }
> }

Hi people, any comments on this? I really need some directions, some
guidance on what to do or where to search more... Please.

Thanks,
Marcelo L.
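Since the configuration above defines `sub vcl_fetch` three times, it is worth noting that VCL concatenates same-named subroutines in order, so the three blocks run as one; the intent is easier to see merged. A consolidated sketch (untested, keeping the backend address and the 1.x-era `set backend.host` syntax from the original post):

```vcl
backend default {
    set backend.host = "172.16.251.2";
    set backend.port = "80";
}

sub vcl_recv {
    /* cache GETs for static file types even when cookies are present */
    if (req.request == "GET" && req.url ~ "\.(gif|jpg|swf|css|js)") {
        lookup;
    }
    if (req.request == "GET" && req.http.cookie) {
        lookup;
    }
}

sub vcl_fetch {
    if (obj.http.Set-Cookie) {
        insert;
    }
    if (obj.ttl < 120s) {
        set obj.ttl = 120s;
    }
    /* the original also removes Set-Cookie unconditionally, which makes
       the insert branch above largely moot; kept to mirror the post */
    remove obj.http.Set-Cookie;
}
```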
From des at linpro.no Wed Apr 9 21:23:39 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Wed, 09 Apr 2008 23:23:39 +0200 Subject: Varnish config/performance with Domino Webmail In-Reply-To: (Marcelo L.'s message of "Wed\, 9 Apr 2008 20\:43\:51 +0000 \(UTC\)") References: <86d7e60d0804071702n12557b96taf9ce8634923c531@mail.gmail.com> Message-ID: <87ej9e90ys.fsf@des.linpro.no> Marcelo L. writes: > Hi people, any comments on this? I really need some directions, some > guidance on what to do or where to search more... Please. You haven't provided any information whatsoever on what you think is wrong. No logs, no performance numbers, nothing. DES -- Dag-Erling Sm?rgrav Senior Software Developer Linpro AS - www.linpro.no From michael at dynamine.net Wed Apr 9 22:03:29 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Wed, 9 Apr 2008 15:03:29 -0700 Subject: Two New HTTP Caching Extensions In-Reply-To: <0B5B625A-5797-4C47-8B57-F0FB92756FD4@digitalmarbles.com> References: <4232A186-6F90-4713-8723-0925E4BD286A@emerose.com> <3503.1200041404@critter.freebsd.dk> <86db848d0804081625q79e5ce2bpd5e1411fa3b3c835@mail.gmail.com> <86db848d0804081626k60b29c78n90cebfcb595f196c@mail.gmail.com> <0B5B625A-5797-4C47-8B57-F0FB92756FD4@digitalmarbles.com> Message-ID: <86db848d0804091503m7054db1ahd582273a8b9b3986@mail.gmail.com> On Tue, Apr 8, 2008 at 4:34 PM, Ricardo Newbery wrote: > > I should add a qualifier to my vote, that stale-while-revalidate > > generally is used to mask suboptimal backend performance and so I > > discourage it in favor of fixing the backend. > > Of course the main premise of a reverse-proxy cache is to mask suboptimal > backend performance. :-) Except, in this case, you are presumably already relieving your backend of a significant burden with your cache. if your backend is *still* unable to process requests to fulfill a request from the caching layer within a reasonable time, you're in serious trouble indeed. 
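On the stale-content point raised earlier in this thread: the mechanism Varnish exposes for serving stale objects while the backend is slow or unreachable is "grace". A hedged sketch, assuming a Varnish version that implements `req.grace` and `obj.grace` (not present in all releases of this era):

```vcl
sub vcl_recv {
    /* be willing to serve objects up to 1h past their TTL */
    set req.grace = 1h;
}

sub vcl_fetch {
    /* keep expired objects around for 1h so they can be served stale */
    set obj.grace = 1h;
}
```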
--Michael From ottolski at web.de Thu Apr 10 14:30:39 2008 From: ottolski at web.de (Sascha Ottolski) Date: Thu, 10 Apr 2008 16:30:39 +0200 Subject: mass purge causes high load? Message-ID: <200804101630.39759.ottolski@web.de> Hi, I just needed to get rid of about 27,000 stale URLs, that were cached as 404 or 302 due to a configuration error on the backends. So I did a url.purge in a loop, sleeping 0.1 seconds after each URL: for i in `cat notfound.txt.sorted` ; do varnishadm -T:81 url.purge $i; sleep 0.1; done However, after about half of it I needed to stop, cause the varnish servers have a high load (about 30, dropping only very slowly), and the response time is bad (more or less 20 seconds per request :-() May the purge be the cause? I stopped the purge 45 min. ago, and still the high load and slow responses. Any way to see what is going on inside? sometimes the load even goes up, I see a 50 now :-( The traffic that comes in is normal for this time of day, and usually the load stays below 3, response time way under a second. I don't like the idea of performing a restart, don't want to lose the cache... Thanks a lot, Sascha From des at linpro.no Sun Apr 13 10:08:55 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Sun, 13 Apr 2008 12:08:55 +0200 Subject: mass purge causes high load? In-Reply-To: <200804101630.39759.ottolski@web.de> (Sascha Ottolski's message of "Thu\, 10 Apr 2008 16\:30\:39 +0200") References: <200804101630.39759.ottolski@web.de> Message-ID: <87od8e3w3s.fsf@des.linpro.no> Sascha Ottolski writes: > I just needed to get rid of about 27,000 stale URLs, that were cached as > 404 or 302 due to a configuration error on the backends. Why couldn't you just wait for them to expire? > So I did a url.purge in a loop, sleeping 0.1 seconds after each URL: > > for i in `cat notfound.txt.sorted` ; do varnishadm -T:81 url.purge $i; > sleep 0.1; done That's a very, very bad idea. 
Varnish must now check every object in
cache against 27,000 regular expressions.

If you know the exact URL to purge, use an HTTP PURGE (see VCL code
examples in the vcl man page)

DES
-- 
Dag-Erling Smørgrav
Senior Software Developer
Linpro AS - www.linpro.no

From ottolski at web.de Sun Apr 13 12:19:05 2008
From: ottolski at web.de (Sascha Ottolski)
Date: Sun, 13 Apr 2008 14:19:05 +0200
Subject: mass purge causes high load?
In-Reply-To: <87od8e3w3s.fsf@des.linpro.no>
References: <200804101630.39759.ottolski@web.de> <87od8e3w3s.fsf@des.linpro.no>
Message-ID: <200804131419.05388.ottolski@web.de>

On Sunday, 13 April 2008 12:08:55, you wrote:
> Sascha Ottolski writes:
> > I just needed to get rid of about 27,000 stale URLs, that were
> > cached as 404 or 302 due to a configuration error on the backends.
>
> Why couldn't you just wait for them to expire?

If I only knew when this would happen... I set the default_ttl to 360
days...

> > So I did a url.purge in a loop, sleeping 0.1 seconds after each
> > URL:
> >
> > for i in `cat notfound.txt.sorted` ; do varnishadm -T:81 url.purge
> > $i; sleep 0.1; done
>
> That's a very, very bad idea. Varnish must now check every object in
> cache against 27,000 regular expressions.
>
> If you know the exact URL to purge, use an HTTP PURGE (see VCL code
> examples in the vcl man page)

I'm aware of this, but had expected that the semantics of both methods
would be identical. Especially as I did not pass regular expressions,
but the complete URLs. Well, at least that's what I thought :-)

After all, the good news is that the proxies slowly came down to normal
operation (took 3 or 4 hours), without crashing. However, since then I
have the feeling that the load is a bit higher than it used to be
before; now it seems to stay at around 1 even at low traffic periods.
Anyway, response time is excellent.

Thanks a lot,

Sascha

> DES

From des at linpro.no Mon Apr 14 11:43:11 2008
From: des at linpro.no (Dag-Erling =?utf-8?Q?Sm=C3=B8rgrav?=)
Date: Mon, 14 Apr 2008 13:43:11 +0200
Subject: mass purge causes high load?
In-Reply-To: <200804131414.34555.ottolski@web.de> (Sascha Ottolski's
	message of "Sun, 13 Apr 2008 14:14:34 +0200")
References: <200804101630.39759.ottolski@web.de>
	<87od8e3w3s.fsf@des.linpro.no> <200804131414.34555.ottolski@web.de>
Message-ID: <87wsn0ps2f.fsf@des.linpro.no>

[resent to varnish-misc]

Sascha Ottolski writes:
> Dag-Erling Smørgrav writes:
> > If you know the exact URL to purge, use an HTTP PURGE (see VCL code
> > examples in the vcl man page)
> I'm aware of this, but had expected that the semantics of both methods
> would be identical. Especially as I did not pass regular expressions,
> but the complete URLs. Well, at least that's what I thought :-)

No, the semantics are completely different.
With HTTP PURGE, you do > > a direct cache lookup, and set the object's TTL to 0 if it exists. > > With url.purge, you add an entry to a ban list, and every time an > > object is looked up in the cache, it is checked against all ban list > > entries that have arrived since the last time. This is the only way > > to implement regexp purging efficiently. > thanks very much for clarification. I guess the ban list gets smaller > everytime an object has been purged? Each ban list entry has a sequence number, and each object has a generation number. When a new object is inserted into the cache, its generation number is set to the sequence number of the newest ban list entry. For every cache hit, the object's generation number is compared to the sequence number of the last ban list entry. If they don't match, the object is checked against every ban list entry that has a sequence number higher than the object's generation number. If the object matches one of these entries, it is discarded, and processing continues as if the object had never been in cache. If it doesn't, its generation number is set to the sequence number of the last entry it was matched against. The only alternative to this algorithm would be to lock the cache and inspect every item, which would stop all request processing for several seconds or minutes, depending on the size of your cache and how much of it is resident; and even then, it would only work for hash.purge, not url.purge, as only the hash string is actually stored in the cache. 
DES -- Dag-Erling Sm?rgrav Senior Software Developer Linpro AS - www.linpro.no From f.engelhardt at 21torr.com Mon Apr 14 13:26:45 2008 From: f.engelhardt at 21torr.com (Florian Engelhardt) Date: Mon, 14 Apr 2008 15:26:45 +0200 Subject: Cookies in VCL In-Reply-To: <2oikjahtt5wixgp.080420081122@torlen.net> References: <2oikjahtt5wixgp.080420081122@torlen.net> Message-ID: <20080414152645.23598dce@21torr.com> On Tue, 8 Apr 2008 11:22:47 +0200 wrote: > A question about cookies in VCL. > > Is there a way of handling cookies in VCL? > > Like: > if(req.http.Cookies[userid] == "1234") > > or > > set req.http.Cookies[language] = "sv" Kind of. vcl_recv looks like this: sub vcl_recv { if (req.request != "GET" && req.request != "HEAD") { pipe; } if (req.http.Expect) { pipe; } if (req.http.Authenticate || req.http.Cookie) { pass; } lookup; } I changed it to this: sub vcl_recv { if (req.request != "GET" && req.request != "HEAD") { pipe; } if (req.http.Expect) { pipe; } if (req.http.Authenticate || req.http.Cookie ~ "PHPSESSID") { pass; } lookup; } The change ist, that if somewhere in the Cookie-Header we have a "PHPSESSID"-String i pass it. The Problem ist, that in our application we are using Cookies in JavaScript. So nearly every page will be "passed" by varnish, couse the browser will send the cookies set by javascript to the server. In the new vcl_recv it ignores all other cookies. The only problem is, if some cookie's value would be "PHPSESSID", than the request will also be "passed". Kind regards Flo From ottolski at web.de Mon Apr 14 13:26:08 2008 From: ottolski at web.de (Sascha Ottolski) Date: Mon, 14 Apr 2008 15:26:08 +0200 Subject: mass purge causes high load? 
In-Reply-To: <87skxopr28.fsf@des.linpro.no> References: <200804101630.39759.ottolski@web.de> <200804141346.52512.ottolski@web.de> <87skxopr28.fsf@des.linpro.no> Message-ID: <200804141526.09080.ottolski@web.de> Am Montag 14 April 2008 14:19:11 schrieb Dag-Erling Sm?rgrav: > Sascha Ottolski writes: > > Dag-Erling Sm?rgrav writes: > > > No, the semantics are completely different. With HTTP PURGE, you > > > do a direct cache lookup, and set the object's TTL to 0 if it > > > exists. With url.purge, you add an entry to a ban list, and every > > > time an object is looked up in the cache, it is checked against > > > all ban list entries that have arrived since the last time. This > > > is the only way to implement regexp purging efficiently. > > > > thanks very much for clarification. I guess the ban list gets > > smaller everytime an object has been purged? > > Each ban list entry has a sequence number, and each object has a > generation number. When a new object is inserted into the cache, its > generation number is set to the sequence number of the newest ban > list entry. > > For every cache hit, the object's generation number is compared to > the sequence number of the last ban list entry. If they don't match, > the object is checked against every ban list entry that has a > sequence number higher than the object's generation number. > > If the object matches one of these entries, it is discarded, and > processing continues as if the object had never been in cache. > > If it doesn't, its generation number is set to the sequence number of > the last entry it was matched against. > > The only alternative to this algorithm would be to lock the cache and > inspect every item, which would stop all request processing for > several seconds or minutes, depending on the size of your cache and > how much of it is resident; and even then, it would only work for > hash.purge, not url.purge, as only the hash string is actually stored > in the cache. > > DES Dag, thanks again. 
If I get it right, the ban list never shrinks, so I probably have 17,000 ban list entries hanging around. can I purge this list somehow, other than restarting the proxy? I suppose even if the list is not used any more, even the comparing the generation and sequence no. for each request adds a bit of overhead, doesn't it? Cheers, Sascha From phk at phk.freebsd.dk Mon Apr 14 13:42:23 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Apr 2008 13:42:23 +0000 Subject: mass purge causes high load? In-Reply-To: Your message of "Mon, 14 Apr 2008 15:26:08 +0200." <200804141526.09080.ottolski@web.de> Message-ID: <65070.1208180543@critter.freebsd.dk> In message <200804141526.09080.ottolski at web.de>, Sascha Ottolski writes: >thanks again. If I get it right, the ban list never shrinks, so I >probably have 17,000 ban list entries hanging around. can I purge this >list somehow, other than restarting the proxy? I suppose even if the >list is not used any more, even the comparing the generation and >sequence no. for each request adds a bit of overhead, doesn't it? The only overhead added by the list is the memory it consumes, but with 17000 entries, that also adds up of course. I have on my todo list somewhere to find a way to prune the list but until now that has not become such a priority that it has happened. It will at some point though. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From skye at F4.ca Mon Apr 14 18:55:27 2008 From: skye at F4.ca (Skye Poier Nott) Date: Mon, 14 Apr 2008 11:55:27 -0700 Subject: Varnish dumping cache? Message-ID: <965F34D7-3638-46A0-A049-7C51653F6FE5@F4.ca> Hello, I'm a new Varnish user (coming from Squid - glad to be rid of it) but I'm seeing some strange behaviour when I test load a Varnish server. 
I have a configuration of 4 machines with 250 http client test threads each requesting a set of 46 files from 2,000 virtual hosts (92,000 objects total). But while it's running, I see the following: # while ((1)); do date varnishstat -1 | grep n_object sleep 5 done n_object will climb steadily and then all of a sudden, it all goes away: n_objecthead 86765 . N struct objecthead Mon Apr 14 11:51:06 PDT 2008 n_object 86780 . N struct object n_objecthead 86782 . N struct objecthead Mon Apr 14 11:51:11 PDT 2008 n_object 358 . N struct object n_objecthead 358 . N struct objecthead Mon Apr 14 11:51:16 PDT 2008 n_object 358 . N struct object n_objecthead 358 . N struct objecthead I don't see anything in varnishlog -i Error when this happens but I'm not sure where to look. Here's my config: param.show 200 783 user nobody (65534) group nogroup (65533) default_ttl 86400 [seconds] thread_pools 1 [pools] thread_pool_max 1000 [threads] thread_pool_min 1 [threads] thread_pool_timeout 120 [seconds] overflow_max 100 [%] http_workspace 8192 [bytes] sess_timeout 5 [seconds] pipe_timeout 60 [seconds] send_timeout 600 [seconds] auto_restart on [bool] fetch_chunksize 128 [kilobytes] sendfile_threshold unlimited [bytes] vcl_trace off [bool] listen_address ":80" listen_depth 1024 [connections] srcaddr_hash 1049 [buckets] srcaddr_ttl 30 [seconds] backend_http11 off [bool] client_http11 off [bool] ping_interval 3 [seconds] storage is varnishd_storage="file,/var/cache/varnish,128G" (pre-alloced) on FreeBSD 6.3-RELEASE Thanks for helping this Varnish newbie. Great daemon! Skye From skye at F4.ca Mon Apr 14 19:20:03 2008 From: skye at F4.ca (Skye Poier Nott) Date: Mon, 14 Apr 2008 12:20:03 -0700 Subject: Varnish dumping cache? 
In-Reply-To: <965F34D7-3638-46A0-A049-7C51653F6FE5@F4.ca> References: <965F34D7-3638-46A0-A049-7C51653F6FE5@F4.ca> Message-ID: <6D94E230-6B99-4483-BDCD-E4EB4401B9E0@F4.ca> Update: I ran varnishd in foreground with -d and I'm seeing these periodically, which would explain the cache invalidation... Child not responding to ping Cache child died pid=23899 status=0x9 Clean child Child cleaned start child pid 23914 Child said (2, 23914): <> Cache child died pid=23914 status=0xb Clean child Child cleaned start child pid 23916 Child said (2, 23916): <> Cache child died pid=23916 status=0xb Clean child Child cleaned start child pid 23917 Child said (2, 23917): <> Child said (2, 23917): <> I'm running varnish-1.1.2 built out of ports on amd64...? # uname -a FreeBSD 6.3-RELEASE FreeBSD 6.3-RELEASE #0: Wed Jan 16 01:43:02 UTC 2008 root at palmer.cse.buffalo.edu:/usr/obj/usr/src/sys/SMP amd64 Skye On 14-Apr-08, at 11:55 AM, Skye Poier Nott wrote: > Hello, > > I'm a new Varnish user (coming from Squid - glad to be rid of it) but > I'm seeing some strange behaviour when I test load a Varnish server. > I have a configuration of 4 machines with 250 http client test threads > each requesting a set of 46 files from 2,000 virtual hosts (92,000 > objects total). But while it's running, I see the following: > > # while ((1)); do > date > varnishstat -1 | grep n_object > sleep 5 > done > > n_object will climb steadily and then all of a sudden, it all goes > away: > > n_objecthead 86765 . N struct objecthead > Mon Apr 14 11:51:06 PDT 2008 > n_object 86780 . N struct object > n_objecthead 86782 . N struct objecthead > Mon Apr 14 11:51:11 PDT 2008 > n_object 358 . N struct object > n_objecthead 358 . N struct objecthead > Mon Apr 14 11:51:16 PDT 2008 > n_object 358 . N struct object > n_objecthead 358 . N struct objecthead > > I don't see anything in varnishlog -i Error when this happens but I'm > not sure where to look. 
> Here's my config: > > param.show > 200 783 > user nobody (65534) > group nogroup (65533) > default_ttl 86400 [seconds] > thread_pools 1 [pools] > thread_pool_max 1000 [threads] > thread_pool_min 1 [threads] > thread_pool_timeout 120 [seconds] > overflow_max 100 [%] > http_workspace 8192 [bytes] > sess_timeout 5 [seconds] > pipe_timeout 60 [seconds] > send_timeout 600 [seconds] > auto_restart on [bool] > fetch_chunksize 128 [kilobytes] > sendfile_threshold unlimited [bytes] > vcl_trace off [bool] > listen_address ":80" > listen_depth 1024 [connections] > srcaddr_hash 1049 [buckets] > srcaddr_ttl 30 [seconds] > backend_http11 off [bool] > client_http11 off [bool] > ping_interval 3 [seconds] > > storage is varnishd_storage="file,/var/cache/varnish,128G" (pre-alloced) > on FreeBSD 6.3-RELEASE > > Thanks for helping this Varnish newbie. Great daemon! > > Skye > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc From mlp1 at ig.com.br Mon Apr 14 22:39:54 2008 From: mlp1 at ig.com.br (MARCELO LICASTRO PAGNI) Date: Mon, 14 Apr 2008 19:39:54 -0300 Subject: Varnish config/performance with Domino Webmail In-Reply-To: <87ej9e90ys.fsf@des.linpro.no> References: <86d7e60d0804071702n12557b96taf9ce8634923c531@mail.gmail.com> <87ej9e90ys.fsf@des.linpro.no> Message-ID: <86d7e60d0804141539g1be47ca4m7767340b4008e92d@mail.gmail.com> Sorry for that. I thought that maybe I was making some very basic beginner's error, easily detectable from my vcl config file. I will post whatever info is needed; I'd rather fix my Varnish setup than choose Apache or Squid as an alternative. It seems that just PIPE'd connections (those HTTP POST requests) take a long time to complete. Normal GETs go lightning fast, but POSTs are almost always very, very slow. What's weird is that, when accessing the server directly, POSTed requests work all right.
I'll post links for some stats and logs besides my vcl config file. I know external links are boring, but the log file is rather long. Varnish Stat: http://www.mscaetano.com.br/tmp/varnishstat.txt Varnish Log: http://www.mscaetano.com.br/tmp/varnishlog.txt VCL config file: http://www.mscaetano.com.br/tmp/default.vcl.txt Is there any more info I could provide to help in this problem assessment? Thank you very much, Marcelo L. 2008/4/9, Dag-Erling Sm?rgrav : > > Marcelo L. writes: > > Hi people, any comments on this? I really need some directions, some > > guidance on what to do or where to search more... Please. > > > You haven't provided any information whatsoever on what you think is > wrong. No logs, no performance numbers, nothing. > > DES > > -- > Dag-Erling Sm?rgrav > Senior Software Developer > Linpro AS - www.linpro.no > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ric at digitalmarbles.com Tue Apr 15 03:30:15 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Mon, 14 Apr 2008 20:30:15 -0700 Subject: Unprivileged user? Message-ID: I'm trying to understand the purpose of the "-u user" option for varnishd. It appears that even when starting up as root, and the child process dropping to "nobody", Varnish is still saving and serving from cache even though "nobody" doesn't have read/write access to the storage file owned by root. I'm guessing this is happening because Varnish is reading and writing to memory instead of the file storage? So I suppose my question is what functionality is missing if the effective user doesn't have read/ write privileges to the file storage? Is the backing file only accessed by the parent process? And if so, what is the purpose of the "-u user" option? Ric From perbu at linpro.no Tue Apr 15 06:03:02 2008 From: perbu at linpro.no (Per Andreas Buer) Date: Tue, 15 Apr 2008 08:03:02 +0200 Subject: Unprivileged user? 
In-Reply-To: References: Message-ID: <48044516.3040402@linpro.no> Ricardo Newbery skrev: > I'm trying to understand the purpose of the "-u user" option for > varnishd. It appears that even when starting up as root, and the > child process dropping to "nobody", Varnish is still saving and > serving from cache even though "nobody" doesn't have read/write access > to the storage file owned by root. In Unix, if you drop privileges, you still have access to all your open files. Access control happens when you open files. That should answer the rest of your questions too, I believe. Per. From ric at digitalmarbles.com Tue Apr 15 06:20:11 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Mon, 14 Apr 2008 23:20:11 -0700 Subject: Unprivileged user? In-Reply-To: <48044516.3040402@linpro.no> References: <48044516.3040402@linpro.no> Message-ID: On Apr 14, 2008, at 11:03 PM, Per Andreas Buer wrote: > Ricardo Newbery skrev: >> I'm trying to understand the purpose of the "-u user" option for >> varnishd. It appears that even when starting up as root, and the >> child process dropping to "nobody", Varnish is still saving and >> serving from cache even though "nobody" doesn't have read/write >> access >> to the storage file owned by root. > > In Unix, if you drop privileges, you still have access to all your > open > files. Access control happens when you open files. That should answer > the rest of your questions too, I believe. Hmm... maybe I'm missing something but this doesn't seem to answer the main question. If, as you seem to imply, Varnish is opening any files it needs while it's still "root", then what is the purpose of the "-u user" option? Ric From perbu at linpro.no Tue Apr 15 06:25:27 2008 From: perbu at linpro.no (Per Andreas Buer) Date: Tue, 15 Apr 2008 08:25:27 +0200 Subject: Unprivileged user? In-Reply-To: References: <48044516.3040402@linpro.no> Message-ID: <48044A57.8040004@linpro.no> Ricardo Newbery skrev: > Hmm... 
maybe I'm missing something but this doesn't seem to answer the > main question. If, as you seem to imply, Varnish is opening any files > it needs while it's still "root", then what is the purpose of the "-u > user" option? I'm guessing Varnish (like most Unix daemons) opens the file as root and then drops its privileges. That way, when Varnish deals with the untrusted data coming from the network it runs as an unprivileged user. So, if there is a buffer overflow in Varnish, the code won't run with root privileges. Per. From f.engelhardt at 21torr.com Tue Apr 15 06:25:36 2008 From: f.engelhardt at 21torr.com (Florian Engelhardt) Date: Tue, 15 Apr 2008 08:25:36 +0200 Subject: Unprivileged user? In-Reply-To: References: <48044516.3040402@linpro.no> Message-ID: <20080415082536.53fab746@21torr.com> On Mon, 14 Apr 2008 23:20:11 -0700 Ricardo Newbery wrote: > > On Apr 14, 2008, at 11:03 PM, Per Andreas Buer wrote: > > > Ricardo Newbery skrev: > >> I'm trying to understand the purpose of the "-u user" option for > >> varnishd. It appears that even when starting up as root, and the > >> child process dropping to "nobody", Varnish is still saving and > >> serving from cache even though "nobody" doesn't have read/write > >> access > >> to the storage file owned by root. > > > > In Unix, if you drop privileges, you still have access to all your > > open > > files. Access control happens when you open files. That should > > answer the rest of your questions too, I believe. > > Hmm... maybe I'm missing something but this doesn't seem to answer > the main question. If, as you seem to imply, Varnish is opening any > files it needs while it's still "root", then what is the purpose of > the "-u > user" option? That's the same thing in Apache, MySQL, ... Open every filehandle you need, then drop privileges. In case the software is being hacked, it cannot damage the system, only the opened file pointers and everything the user can do.
If the daemon would run as root, the hacker could do everything with your computer. /Flo From ric at digitalmarbles.com Tue Apr 15 06:48:57 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Mon, 14 Apr 2008 23:48:57 -0700 Subject: Unprivileged user? In-Reply-To: <20080415082536.53fab746@21torr.com> References: <48044516.3040402@linpro.no> <20080415082536.53fab746@21torr.com> Message-ID: On Apr 14, 2008, at 11:25 PM, Florian Engelhardt wrote: > On Mon, 14 Apr 2008 23:20:11 -0700 > Ricardo Newbery wrote: > >> >> On Apr 14, 2008, at 11:03 PM, Per Andreas Buer wrote: >> >>> Ricardo Newbery skrev: >>>> I'm trying to understand the purpose of the "-u user" option for >>>> varnishd. It appears that even when starting up as root, and the >>>> child process dropping to "nobody", Varnish is still saving and >>>> serving from cache even though "nobody" doesn't have read/write >>>> access >>>> to the storage file owned by root. >>> >>> In Unix, if you drop privileges, you still have access to all your >>> open >>> files. Access control happens when you open files. That should >>> answer the rest of your questions too, I believe. >> >> Hmm... maybe I'm missing something but this doesn't seem to answer >> the main question. If, as you seem to imply, Varnish is opening any >> files it needs while it's still "root", then what is the purpose of >> the "-u user" option? > > Thats the same thing in apache, mysql, ... > Open every filehandle you need, then drop privileges. In case the > software is beeing hacked, it can not damage the system, only the > opened file pointers and everything the user can do. If the daemon > would run as root, the hacker could do everything with your computer. > > /Flo Please reread my question. I know why privileges are dropped. That is not the question. Ric From ric at digitalmarbles.com Tue Apr 15 07:01:17 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Tue, 15 Apr 2008 00:01:17 -0700 Subject: Unprivileged user? 
In-Reply-To: <48044A57.8040004@linpro.no> References: <48044516.3040402@linpro.no> <48044A57.8040004@linpro.no> Message-ID: <26683944-DF74-443E-BD14-AF59BAB71050@digitalmarbles.com> On Apr 14, 2008, at 11:25 PM, Per Andreas Buer wrote: > Ricardo Newbery skrev: > >> Hmm... maybe I'm missing something but this doesn't seem to answer >> the >> main question. If, as you seem to imply, Varnish is opening any >> files >> it needs while it's still "root", then what is the purpose of the "-u >> user" option? > > I'm guessing Varnish (like most Unix daemons) opens the file as root > and > then drops its privileges. That way, when Varnish deals with the > untrusted data coming from the network it runs as an unprivileged > user. > > So, I there is a buffer overflow in Varnish, the code won't run with > root privileges. > > Per. Again, this is *not* my question. Of course dropping privileges is a standard practice for daemons that need temporary elevated privileges. But this does not explain the purpose that the "-u user" option serves in the Varnish case... other than perhaps to provide another option in case the standard default "nobody" is not available for some reason. In Apache, the less-privileged user still needs read access to the files it serves. In Squid, the less-privileged user still needs write access to the cache directory in order to create the cache storage. In Varnish, does the less-privileged user need access to anything? Ric From phk at phk.freebsd.dk Tue Apr 15 07:15:30 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 15 Apr 2008 07:15:30 +0000 Subject: Unprivileged user? In-Reply-To: Your message of "Mon, 14 Apr 2008 20:30:15 MST." Message-ID: <2963.1208243730@critter.freebsd.dk> In message , Ricardo N ewbery writes: >I'm trying to understand the purpose of the "-u user" option for >varnishd. 
It appears that even when starting up as root, and the >child process dropping to "nobody", Varnish is still saving and >serving from cache even though "nobody" doesn't have read/write access >to the storage file owned by root. The file is opened before the cache process drops to nobody, and in UNIX the access check is performed at open time and not at read/write time. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ric at digitalmarbles.com Tue Apr 15 07:25:24 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Tue, 15 Apr 2008 00:25:24 -0700 Subject: Unprivileged user? In-Reply-To: <2963.1208243730@critter.freebsd.dk> References: <2963.1208243730@critter.freebsd.dk> Message-ID: On Apr 15, 2008, at 12:15 AM, Poul-Henning Kamp wrote: > Ricardo Newbery writes: > >> I'm trying to understand the purpose of the "-u user" option for >> varnishd. It appears that even when starting up as root, and the >> child process dropping to "nobody", Varnish is still saving and >> serving from cache even though "nobody" doesn't have read/write >> access >> to the storage file owned by root. > > The file is opened before the cache process drops to nobody, and in > UNIX the access check is performed at open time and not at read/write > time. I must not be making myself clear. Let me try again... Assuming that "nobody" is an available user on your system, then is the "-u user" option for varnishd superfluous? Ric From michael at dynamine.net Tue Apr 15 07:31:42 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Tue, 15 Apr 2008 00:31:42 -0700 Subject: Unprivileged user? 
In-Reply-To: References: <2963.1208243730@critter.freebsd.dk> Message-ID: <86db848d0804150031u56d04f2el347f6200daad766d@mail.gmail.com> On Tue, Apr 15, 2008 at 12:25 AM, Ricardo Newbery wrote: > Assuming that "nobody" is an available user on your system, then is > the "-u user" option for varnishd superfluous? Who's to say that "nobody" is an unprivileged user? /etc/passwd: nobody:*:0:0:alias for root:... Well-engineered software doesn't make potentially false assumptions about the environment in which it runs. --Michael From phk at phk.freebsd.dk Tue Apr 15 07:35:20 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 15 Apr 2008 07:35:20 +0000 Subject: Unprivileged user? In-Reply-To: Your message of "Tue, 15 Apr 2008 00:25:24 MST." Message-ID: <3089.1208244920@critter.freebsd.dk> In message , Ricardo N ewbery writes: >Assuming that "nobody" is an available user on your system, then is >the "-u user" option for varnishd superfluous? Yes. You can confirm the uid nobody is used with the ps(1) command. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ric at digitalmarbles.com Tue Apr 15 08:07:43 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Tue, 15 Apr 2008 01:07:43 -0700 Subject: Unprivileged user? In-Reply-To: <3089.1208244920@critter.freebsd.dk> References: <3089.1208244920@critter.freebsd.dk> Message-ID: <55BE2DDF-5B05-4A4A-AB29-B26ACA3FFB3A@digitalmarbles.com> On Apr 15, 2008, at 12:35 AM, Poul-Henning Kamp wrote: > In message BC82-2F1A1B0CE10D at digitalmarbles.com>, Ricardo N > ewbery writes: > >> Assuming that "nobody" is an available user on your system, then is >> the "-u user" option for varnishd superfluous? > > Yes. Cool, thanks PHK. That's really all I wanted to know. 
Ric From ric at digitalmarbles.com Tue Apr 15 08:10:21 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Tue, 15 Apr 2008 01:10:21 -0700 Subject: Unprivileged user? In-Reply-To: <86db848d0804150031u56d04f2el347f6200daad766d@mail.gmail.com> References: <2963.1208243730@critter.freebsd.dk> <86db848d0804150031u56d04f2el347f6200daad766d@mail.gmail.com> Message-ID: <5D5446CF-7F86-4801-A469-0E6C7E83DA2F@digitalmarbles.com> On Apr 15, 2008, at 12:31 AM, Michael S. Fischer wrote: > On Tue, Apr 15, 2008 at 12:25 AM, Ricardo Newbery > wrote: >> Assuming that "nobody" is an available user on your system, then is >> the "-u user" option for varnishd superfluous? > > Who's to say that "nobody" is an unprivileged user? > > /etc/passwd: > > nobody:*:0:0:alias for root:... > > Well-engineered software doesn't make potentially false assumptions > about the environment in which it runs. > > --Michael Geez Michael... this is unnecessarily snarky. Anyone that redefines "nobody" in this way is just asking for trouble. But in any case, I'm not suggesting that this option is superfluous in the general case. I'm just trying to find out whether, in the ordinary scenario, I need to concern myself with the access privileges of the less-privileged user -- as is the case in many other apps that do this, like Apache or Varnish. Ric From phk at phk.freebsd.dk Tue Apr 15 08:16:08 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 15 Apr 2008 08:16:08 +0000 Subject: Unprivileged user? In-Reply-To: Your message of "Tue, 15 Apr 2008 00:31:42 MST." <86db848d0804150031u56d04f2el347f6200daad766d@mail.gmail.com> Message-ID: <3336.1208247368@critter.freebsd.dk> In message <86db848d0804150031u56d04f2el347f6200daad766d at mail.gmail.com>, "Mich ael S. Fischer" writes: >On Tue, Apr 15, 2008 at 12:25 AM, Ricardo Newbery > wrote: >> Assuming that "nobody" is an available user on your system, then is >> the "-u user" option for varnishd superfluous? 
> >Who's to say that "nobody" is an unprivileged user? > >/etc/passwd: > >nobody:*:0:0:alias for root:... > >Well-engineered software doesn't make potentially false assumptions >about the environment in which it runs. And they don't. Varnish for instance assumes that the administrator is not a total madman, who would do something as patently stupid as you propose above, under the general assumption that if he were, varnish would be the least of his troubles. Can we be a bit serious here? Thanks. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From duja at torlen.net Tue Apr 15 11:05:14 2008 From: duja at torlen.net (duja at torlen.net) Date: Tue, 15 Apr 2008 13:05:14 +0200 Subject: Error compiling last revision from trunk Message-ID: I compiled the latest revision from trunk (2629) and received this when I tried to start varnish: "./bin.XXObuUCq: undefined symbol: VRT_init_dir_simple" I then tried to recompile varnish and noticed this when I ran "make": "Making all in varnishd make[3]: Entering directory `/home/tvswe/varnish/trunk/varnish-cache/bin/varnishd' gcc -DHAVE_CONFIG_H -I. -I../.. -I../../include -DVARNISH_STATE_DIR='"/usr/local/var/varnish"' -g -O2 -MT varnishd-storage_malloc.o -MD -MP -MF .deps/varnishd-storage_malloc.Tpo -c -o varnishd-storage_malloc.o `test -f 'storage_malloc.c' || echo './'`storage_malloc.c storage_malloc.c:46: error: 'SIZE_T_MAX' undeclared here (not in a function)" / Erik From des at linpro.no Tue Apr 15 12:34:12 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Tue, 15 Apr 2008 14:34:12 +0200 Subject: Error compiling last revision from trunk In-Reply-To: (duja@torlen.net's message of "Tue\, 15 Apr 2008 13\:05\:14 +0200") References: Message-ID: <87lk3fnvp7.fsf@des.linpro.no> writes: > storage_malloc.c:46: error: 'SIZE_T_MAX'
undeclared here (not in a function)" This should be SIZE_MAX. Bad phk, no cookie! DES -- Dag-Erling Smørgrav Senior Software Developer Linpro AS - www.linpro.no From gaute at pht.no Tue Apr 15 13:47:54 2008 From: gaute at pht.no (Gaute Amundsen) Date: Tue, 15 Apr 2008 15:47:54 +0200 Subject: Current stable version? Message-ID: <200804151547.55104.gaute@pht.no> Hi we are currently running varnish-1.0.4-3el4.i386.rpm ( with a small patch ) We were planning to hold out for the next release, but our need for per-host purging is growing rapidly... Is it possible to say anything about how far off a release might be, or is there a particular SVN revision that is recommended in the meantime? Gaute -- Programmerer - Pixelhospitalet AS Tørkoppveien 10, 1570 Dilling Tlf. 24 12 97 81 - 9074 7344 From michael at dynamine.net Tue Apr 15 17:52:40 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Tue, 15 Apr 2008 10:52:40 -0700 Subject: Unprivileged user? In-Reply-To: <3336.1208247368@critter.freebsd.dk> References: <86db848d0804150031u56d04f2el347f6200daad766d@mail.gmail.com> <3336.1208247368@critter.freebsd.dk> Message-ID: <86db848d0804151052i3ef158cfjba3c8b21eed03314@mail.gmail.com> On Tue, Apr 15, 2008 at 1:16 AM, Poul-Henning Kamp wrote: > >Well-engineered software doesn't make potentially false assumptions > >about the environment in which it runs. > > And they don't. > > Varnish for instance assumes that the administrator is not a total > madman, who would do something as patently stupid as you propose > above, under the general assumption that if he were, varnish would > be the least of his troubles. I'm not saying that they would; I'm just saying that you can't count on user 'nobody' having the precise role that a security-conscious sysadmin would want. Perhaps the sysadmin might create a 'varnishd' user instead that also has limited access, and, hence, the -u option is quite useful. Assuming that the nonprivileged user is named 'nobody' could well be false.
I was simply providing the most extreme example to demonstrate a point. Best regards, --Michael From ssm at linpro.no Wed Apr 16 05:50:05 2008 From: ssm at linpro.no (Stig Sandbeck Mathisen) Date: Wed, 16 Apr 2008 07:50:05 +0200 Subject: Unprivileged user? In-Reply-To: <26683944-DF74-443E-BD14-AF59BAB71050@digitalmarbles.com> (Ricardo Newbery's message of "Tue, 15 Apr 2008 00:01:17 -0700") References: <48044516.3040402@linpro.no> <48044A57.8040004@linpro.no> <26683944-DF74-443E-BD14-AF59BAB71050@digitalmarbles.com> Message-ID: <7xd4oqe4c2.fsf@iostat.linpro.no> On Tue, 15 Apr 2008 00:01:17 -0700, Ricardo Newbery said: > In Varnish, does the less-privileged user need access to anything? After it has dropped root privileges, it needs at least: * Open new network connections (no problem unless you use MAC or a uid-matching firewall) * Read access to where you store your VCL files * Execute a C compiler * Write access to its cache directory, to store the compiled configuration * Write core dumps ...possibly more. -- Stig Sandbeck Mathisen, Linpro From phk at phk.freebsd.dk Wed Apr 16 06:53:35 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 16 Apr 2008 06:53:35 +0000 Subject: Unprivileged user? In-Reply-To: Your message of "Tue, 15 Apr 2008 10:52:40 MST." <86db848d0804151052i3ef158cfjba3c8b21eed03314@mail.gmail.com> Message-ID: <8264.1208328815@critter.freebsd.dk> In message <86db848d0804151052i3ef158cfjba3c8b21eed03314 at mail.gmail.com>, "Mich ael S. Fischer" writes: >> Varnish for instance assumes that the administrator is not a total >> madman, who would do something as patently stupid as you prospose >> above, under the general assumption that if he were, varnish would >> be the least of his troubles. > >I'm not saying that they would; I'm just saying that you can't count >on user 'nobody' having the precise role that a security-conscious >sysadmin would want. 
Which is why there is a -u argument, for people who muck up the configuration that has been standard on all decent UNIX'es for the last 15 years. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Wed Apr 16 06:56:37 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 16 Apr 2008 06:56:37 +0000 Subject: Unprivileged user? In-Reply-To: Your message of "Wed, 16 Apr 2008 07:50:05 +0200." <7xd4oqe4c2.fsf@iostat.linpro.no> Message-ID: <8358.1208328997@critter.freebsd.dk> In message <7xd4oqe4c2.fsf at iostat.linpro.no>, Stig Sandbeck Mathisen writes: >On Tue, 15 Apr 2008 00:01:17 -0700, Ricardo Newbery said: > >> In Varnish, does the less-privileged user need access to anything? > >After it has dropped root privileges, it needs at least: > >* Open new network connections (no problem unless you use MAC or a > uid-matching firewall) No, it accepts them only. >* Read access to where you store your VCL files No, the VCL files are read by the master process, which does not drop privilege. >* Execute a C compiler Same. >* Write access to its cache directory, to store the compiled > configuration Same. Please figure out how varnish really works before you accuse us of being incompetent. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From perbu at linpro.no Wed Apr 16 07:38:30 2008 From: perbu at linpro.no (Per Andreas Buer) Date: Wed, 16 Apr 2008 09:38:30 +0200 Subject: Unprivileged user?
In-Reply-To: <8358.1208328997@critter.freebsd.dk> References: <8358.1208328997@critter.freebsd.dk> Message-ID: <4805ACF6.8080200@linpro.no> Poul-Henning Kamp wrote: > In message <7xd4oqe4c2.fsf at iostat.linpro.no>, Stig Sandbeck Mathisen writes: >> On Tue, 15 Apr 2008 00:01:17 -0700, Ricardo Newbery said: >> >>> In Varnish, does the less-privileged user need access to anything? >> After it has dropped root privileges, it needs at least: >> >> * Open new network connections (no problem unless you use MAC or a >> uid-matching firewall) > > No, it accepts them only. Does the privileged process talk to the origin servers? > (..) > > Please figure out how varnish really works before you accuse us of > being incompetent. Please figure out who is calling you incompetent before you start accusing people of accusing you of being incompetent (puh!). ssm was only trying to be helpful - although I can see he probably failed in being that. :-) Per. From duja at torlen.net Wed Apr 16 07:53:12 2008 From: duja at torlen.net (duja at torlen.net) Date: Wed, 16 Apr 2008 09:53:12 +0200 Subject: Error compiling last revision from trunk Message-ID: >this should be SIZE_MAX. Could you fix this please? Or can I make subversion download a specific revision? / Erik From phk at phk.freebsd.dk Wed Apr 16 07:59:07 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 16 Apr 2008 07:59:07 +0000 Subject: Error compiling last revision from trunk In-Reply-To: Your message of "Wed, 16 Apr 2008 09:53:12 +0200." Message-ID: <9264.1208332747@critter.freebsd.dk> In message , duja at torlen.net writes: >>this should be SIZE_MAX. > >Could you fix this please? Sorry, I forgot to submit this change, done now. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence.
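For anyone hitting the same build error: SIZE_T_MAX is a BSD extension from <limits.h> that glibc does not provide, while C99's <stdint.h> defines SIZE_MAX on every conforming platform, hence the one-line fix above. A tiny illustrative use (the function name and the "0 means unlimited" convention are made up for this sketch):

```c
#include <stdint.h>
#include <stddef.h>

/* C99's <stdint.h> defines SIZE_MAX portably; SIZE_T_MAX is a BSD-ism
 * that glibc lacks, which is why trunk failed to build on Linux. */
static size_t effective_limit(size_t configured)
{
    /* Illustrative: treat 0 ("unlimited") as the largest size_t. */
    return configured == 0 ? SIZE_MAX : configured;
}
```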
From duja at torlen.net Wed Apr 16 08:10:04 2008 From: duja at torlen.net (duja at torlen.net) Date: Wed, 16 Apr 2008 10:10:04 +0200 Subject: Error compiling last revision from trunk Message-ID: >Sorry, I forgot to submit this change, done now. Thank you ;) From phk at phk.freebsd.dk Wed Apr 16 09:23:48 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 16 Apr 2008 09:23:48 +0000 Subject: Unprivileged user? In-Reply-To: Your message of "Wed, 16 Apr 2008 09:38:30 +0200." <4805ACF6.8080200@linpro.no> Message-ID: <9700.1208337828@critter.freebsd.dk> In message <4805ACF6.8080200 at linpro.no>, Per Andreas Buer writes: >Poul-Henning Kamp skrev: >>> * Open new network connections (no problem unless you use MAC or a >>> uid-matching firewall) >> >> No, it accepts them only. > >Does the privilegded prosess talk to the origin servers? No. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From anders at fupp.net Wed Apr 16 10:44:22 2008 From: anders at fupp.net (Anders Nordby) Date: Wed, 16 Apr 2008 12:44:22 +0200 Subject: Unprivileged user? In-Reply-To: <3089.1208244920@critter.freebsd.dk> References: <3089.1208244920@critter.freebsd.dk> Message-ID: <20080416104422.GA71627@fupp.net> Hi, On Tue, Apr 15, 2008 at 07:35:20AM +0000, Poul-Henning Kamp wrote: >>Assuming that "nobody" is an available user on your system, then is >>the "-u user" option for varnishd superfluous? > Yes. > > You can confirm the uid nobody is used with the ps(1) command. I disagree. Suppose you have another process on your system that runs as nobody, like Apache. And people have access to run CGIs and other types of scripts through this user. Would you want them to be able to do naughty things to your Varnish process (they might be able to if Apache and Varnish both run as nobody) as well? 
The option to specify which user to change to exists because people want to control which user a process runs as. There are perfectly valid reasons to run as a different user than the standard, especially in multi-user/non-dedicated setups. Thanks! :) Bye, -- Anders. From des at linpro.no Wed Apr 16 15:44:19 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Wed, 16 Apr 2008 17:44:19 +0200 Subject: Current stable version? In-Reply-To: <200804151547.55104.gaute@pht.no> (Gaute Amundsen's message of "Tue\, 15 Apr 2008 15\:47\:54 +0200") References: <200804151547.55104.gaute@pht.no> Message-ID: <87od893iuk.fsf@des.linpro.no> Gaute Amundsen writes: > we are currently running varnish-1.0.4-3el4.i386.rpm > ( with a small patch ) 1.1.2 has been out for, eh, four months now... DES -- Dag-Erling Smørgrav Senior Software Developer Linpro AS - www.linpro.no From michael at dynamine.net Wed Apr 16 15:45:26 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Wed, 16 Apr 2008 08:45:26 -0700 Subject: Unprivileged user? In-Reply-To: <8264.1208328815@critter.freebsd.dk> References: <86db848d0804151052i3ef158cfjba3c8b21eed03314@mail.gmail.com> <8264.1208328815@critter.freebsd.dk> Message-ID: <86db848d0804160845n48ba7c92obe1fbc62a5d4893f@mail.gmail.com> On Tue, Apr 15, 2008 at 11:53 PM, Poul-Henning Kamp wrote: > In message <86db848d0804151052i3ef158cfjba3c8b21eed03314 at mail.gmail.com>, "Michael S. Fischer" writes: > > >> Varnish for instance assumes that the administrator is not a total > >> madman, who would do something as patently stupid as you propose > >> above, under the general assumption that if he were, varnish would > >> be the least of his troubles. > > > >I'm not saying that they would; I'm just saying that you can't count > >on user 'nobody' having the precise role that a security-conscious > >sysadmin would want.
> > Which is why there is a -u argument, for people who muck up the > configuration that has been standard on all decent UNIX'es for > the last 15 years. Thus answering OP's question. QED. :-) --Michael From des at linpro.no Wed Apr 16 15:46:39 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Wed, 16 Apr 2008 17:46:39 +0200 Subject: Unprivileged user? In-Reply-To: <8358.1208328997@critter.freebsd.dk> (Poul-Henning Kamp's message of "Wed\, 16 Apr 2008 06\:56\:37 +0000") References: <8358.1208328997@critter.freebsd.dk> Message-ID: <87ej953iqo.fsf@des.linpro.no> "Poul-Henning Kamp" writes: > Stig Sandbeck Mathisen writes: > > After it has dropped root privileges, it needs at least: > > > > * Open new network connections (no problem unless you use MAC or a > > uid-matching firewall) > No, it accepts them only. Wrong; it initiates new connections to the backend servers. > Please figure out how varnish really works before you accuse us of > being incompetent. That was completely uncalled for. DES -- Dag-Erling Smørgrav Senior Software Developer Linpro AS - www.linpro.no From des at linpro.no Wed Apr 16 15:48:19 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Wed, 16 Apr 2008 17:48:19 +0200 Subject: Unprivileged user? In-Reply-To: <8264.1208328815@critter.freebsd.dk> (Poul-Henning Kamp's message of "Wed\, 16 Apr 2008 06\:53\:35 +0000") References: <8264.1208328815@critter.freebsd.dk> Message-ID: <87abjt3inw.fsf@des.linpro.no> "Poul-Henning Kamp" writes: > "Michael S. Fischer" writes: > > I'm not saying that they would; I'm just saying that you can't count > > on user 'nobody' having the precise role that a security-conscious > > sysadmin would want. > Which is why there is a -u argument, for people who muck up the > configuration that has been standard on all decent UNIX'es for > the last 15 years.
It is also for people who have a little bit of sense and understand that different daemons should use different unprivileged users when they drop their root privileges. DES -- Dag-Erling Smørgrav Senior Software Developer Linpro AS - www.linpro.no From ssm at linpro.no Thu Apr 17 06:44:08 2008 From: ssm at linpro.no (Stig Sandbeck Mathisen) Date: Thu, 17 Apr 2008 08:44:08 +0200 Subject: Unprivileged user? In-Reply-To: <8358.1208328997@critter.freebsd.dk> (Poul-Henning Kamp's message of "Wed, 16 Apr 2008 06:56:37 +0000") References: <8358.1208328997@critter.freebsd.dk> Message-ID: <7x7iexc75z.fsf@iostat.linpro.no> On Wed, 16 Apr 2008 06:56:37 +0000, "Poul-Henning Kamp" said: > In message <7xd4oqe4c2.fsf at iostat.linpro.no>, Stig Sandbeck Mathisen writes: >> * Read access to where you store your VCL files > No, the vcl files are read by the master process which does not drop > privilege. >> * Execute a C compiler > Same. >> * Write access to its cache directory, to store the compiled >> configuration > Same. In other words, I mixed up the parent and child process regarding configuration file handling and compiling. :/ -- Stig Sandbeck Mathisen, Linpro Any sufficiently advanced incompetence is indistinguishable from malice. From ab at turtle-entertainment.de Wed Apr 16 13:30:04 2008 From: ab at turtle-entertainment.de (Andre Braun) Date: Wed, 16 Apr 2008 15:30:04 +0200 Subject: Varnish Nagios plugin Documentation Message-ID: Hi folks, I want to monitor our varnish process with our Nagios system. I've found the Varnish Nagios plugin, but there is no documentation on what I have to do. Does somebody know where I can find documentation? Thank you for your help Kind Regards Andre Braun From calle at korjus.se Thu Apr 17 13:03:39 2008 From: calle at korjus.se (Calle Korjus) Date: Thu, 17 Apr 2008 15:03:39 +0200 Subject: Varnish crashing when system starts to swap Message-ID: We have an environment that serves lots of small dynamically backend-generated image files.
The total dataset is about 2TB but we're not looking to cache all of it, just ease the load on the backend machines. We have about 2000-2500 hits/s in total today and we are running 3 apaches with mod_caucho as frontends. We have installed varnish on the same servers as the Apache frontends and configured them to use the local Apache as backend. The machines are dual dual-core Opterons, so 4 cores per server, with 16GB of RAM, and we're running RHEL 4.2.

This is our varnish setup:

user varnish (201)
group varnish (201)
default_ttl 3600 [seconds]
thread_pools 1 [pools]
thread_pool_max 1000 [threads]
thread_pool_min 128 [threads]
thread_pool_timeout 60 [seconds]
overflow_max 100 [%]
rush_exponent 3 [requests per request]
sess_workspace 8192 [bytes]
obj_workspace 8192 [bytes]
sess_timeout 5 [seconds]
pipe_timeout 60 [seconds]
send_timeout 600 [seconds]
auto_restart on [bool]
fetch_chunksize 128 [kilobytes]
vcl_trace off [bool]
listen_address ":80"
listen_depth 1024 [connections]
srcaddr_hash 1049 [buckets]
srcaddr_ttl 30 [seconds]
backend_http11 off [bool]
client_http11 off [bool]
cli_timeout 5 [seconds]
ping_interval 3 [seconds]
lru_interval 3600 [seconds]
cc_command exec cc -fpic -shared -Wl,-x -o %o %s
max_restarts 4 [restarts]
max_esi_includes 5 [restarts]
cache_vbe_conns off [bool]
cli_buffer 8192 [bytes]
diag_bitmap 0x0 [bitmap]

This is our startup command:

/opt/varnish/sbin/varnishd -a :80 -p lru_interval 3600 -f /opt/varnish/conf/default.vcl -T 127.0.0.1:6082 -t 3600 -w 128,1000,60 -u varnish -g varnish -s file,/srv/varnish/varnish_storage.bin,30G -P /var/run/varnish.pid

Varnish looks fine until it's had about 1.5 million requests, then we can see the kswapd0 and kswapd1 start working and load average rises to about 200 and the machine gets totally unresponsive. Top shows a lot of CPU being spent on I/O waits and varnish child process restarts sometimes.
In the best case the process restarts and the server starts behaving within 5 minutes, but sometimes varnish dies completely. One thing we have noticed is that the reserved memory for varnish keeps rising and when it crashes it is usually around 14G. The varnish storage file is running on the same physical disk as the system and the swap, could that be the problem? Should varnish really allocate so much memory so that the system starts to swap to disk? Any suggestions or comments are welcome. Regards Calle Korjus From trey at propeller.com Thu Apr 17 15:48:38 2008 From: trey at propeller.com (Trey Long) Date: Thu, 17 Apr 2008 11:48:38 -0400 Subject: Upload Buffering and x-sendfile Message-ID: I apologize if this question has been answered, I searched the list the best I could. Does varnish support upload (and download) buffering? Since varnish handles all of the traffic going to and from my host I was wondering if it buffered the client when they were uploading a large post body or downloading a large portion of HTML. The benefit here is that my host is not waiting for them to upload on their slow connection and the same for download. Does varnish support x-sendfile type responses? It seems I might be able to program this using VCL but I was unsure so I thought I would add it to this question. Can Varnish intercept a header on the way to the client and redirect to a static file? Thanks, Trey From ppragin at SolutionSet.com Thu Apr 17 18:08:21 2008 From: ppragin at SolutionSet.com (Pavel Pragin) Date: Thu, 17 Apr 2008 11:08:21 -0700 Subject: help with purging Message-ID: Hello, Please help me get Purging working. I am really stuck. I followed the directions in this link: http://varnish.projects.linpro.no/wiki/VCLExamplePurging I added these lines to my VCL configuration file and restarted Varnish: sub vcl_hit { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged."; } } When I connect to Apache on port 80 and send this command nothing happens.
I need a way to Purge on the fly! [root at 155496-app3 varnish]# telnet 67.192.43.99 80 Trying 6.12.4.9... Connected to 6.12.4.9. Escape character is '^]'. PURGE / HTTP/1.1 Host: www.bla.net Thanks . PAVEL PRAGIN ppragin at solutionset.com T > 650.328.3900 M > 650.521.4377 F > 650.328.3901 SolutionSet The Brand Technology Company http://www.SolutionSet.com PA > 131 Lytton Ave., Palo Alto, CA 94301 SF > 85 Second St., San Francisco, CA 94105 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ric at digitalmarbles.com Thu Apr 17 18:53:37 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Thu, 17 Apr 2008 11:53:37 -0700 Subject: help with purging In-Reply-To: References: Message-ID: On Apr 17, 2008, at 11:08 AM, Pavel Pragin wrote: > Hello, > > Please help me get Purging working. I am really stuck. I followed > the directions in this link:http://varnish.projects.linpro.no/wiki/VCLExamplePurging > > I added these lines to my VCL configuration file and restarted > Varnish: > sub vcl_hit { > if (req.request == "PURGE") { > set obj.ttl = 0s; > error 200 "Purged."; > } > } > > When I connect to Apache on port 80 and send these command nothing > happens. I need a way to Purge on the fly! > > [root at 155496-app3 varnish]# telnet 67.192.43.99 80 > Trying 6.12.4.9... > Connected to 6.12.4.9. > Escape character is '^]'. > PURGE / HTTP/1.1 > Host: www.bla.net > > > Thanks > Did you add the other lines described in those directions? By the way, it may be worth noting that if you have another proxy (like Apache) in front of Varnish, enabling PURGE requests could potentially allow unauthorized purges. One way to fix this is to put Apache and Varnish on different IPs and make sure the Apache IP is not on the purge acl list. Another fix is just to remove localhost from the acl list. 
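[Editor's note] The wiki recipe that Pavel links pairs the vcl_hit snippet with a check in vcl_recv against a purge ACL; without the vcl_recv part, the default configuration never looks up non-GET/HEAD requests, so the vcl_hit code never runs. A minimal sketch along those lines (the ACL addresses are illustrative assumptions):

```
acl purge {
    "localhost";
    "192.0.2.0"/24;    /* illustrative admin network */
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        lookup;
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
}
```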
But in both cases you then have to send your purge requests straight to Varnish from an authorized IP and with a URL matching the form it would have coming from Apache. Ric From des at linpro.no Thu Apr 17 19:25:59 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Thu, 17 Apr 2008 21:25:59 +0200 Subject: help with purging In-Reply-To: (Pavel Pragin's message of "Thu\, 17 Apr 2008 11\:08\:21 -0700") References: Message-ID: <87ve2g1dx4.fsf@des.linpro.no> "Pavel Pragin" writes: > Please help me get Purging working. I am really stuck. I followed the > directions in this link: > http://varnish.projects.linpro.no/wiki/VCLExamplePurging So far so good... > I added these lines to my VCL configuration file and restarted Varnish: > > sub vcl_hit { > if (req.request == "PURGE") { > set obj.ttl = 0s; > error 200 "Purged."; > } > } So you *didn't* follow the instructions. Try again. DES -- Dag-Erling Smørgrav Senior Software Developer Linpro AS - www.linpro.no From jsd at cluttered.com Fri Apr 18 16:35:42 2008 From: jsd at cluttered.com (Jon Drukman) Date: Fri, 18 Apr 2008 09:35:42 -0700 Subject: regsub, string concatenation? Message-ID: i'm trying to rewrite all incoming URLs to include the http host header as part of the destination url. example: incoming: http://site1.com/someurl rewritten: http://originserver.com/site/site1.com/someurl incoming: http://site2.com/otherurl rewritten: http://originserver.com/site/site2.com/otherurl the originserver is parsing the original hostname out of the requested url. works great with one hardcoded host: set req.url = regsub(req.url, "^", "/site/site1.com"); i can't get it to use the submitted http host though... set req.url = regsub(req.url, "^", "/site/" + req.http.host); varnish complains about the plus sign. is there some way to do this kind of string concatenation in the replacement?
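[Editor's note] VCL at this time has no string concatenation operator inside a regsub() argument, but regsub's replacement argument can itself be a header, as phk points out later in this thread. That allows a two-step version of Jon's rewrite; the scratch header name is an arbitrary choice, and this sketch assumes header-valued replacements behave as described:

```
sub vcl_recv {
    /* build "/site/<host>" by prepending a literal to the Host header */
    set req.http.x-prefix = regsub(req.http.host, "^", "/site/");
    /* then prepend that header to the URL */
    set req.url = regsub(req.url, "^", req.http.x-prefix);
    unset req.http.x-prefix;
}
```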
-jsd- From varnish-list at itiva.com Fri Apr 18 18:20:22 2008 From: varnish-list at itiva.com (DHF) Date: Fri, 18 Apr 2008 11:20:22 -0700 Subject: Varnish crashing when system starts to swap In-Reply-To: References: Message-ID: <4808E666.7070907@itiva.com> Calle Korjus wrote: > This is our startup command: > > /opt/varnish/sbin/varnishd -a :80 -p lru_interval 3600 -f /opt/varnish/conf/default.vcl -T 127.0.0.1:6082 -t 3600 -w 128,1000,60 -u varnish -g varnish -s file,/srv/varnish/varnish_storage.bin,30G -P /var/run/varnish.pid > > Varnish looks fine until it's had about 1.5 million requests, then we can see the kswapd0 and kswapd1 start working and load average rises to about 200 and the machine gets totally unresponsive. Top shows a lot of CPU being spent on I/O waits and varnish child process restarts sometimes. > I would try lowering the storage file size to within your total system RAM, subtracting some memory for buffers and cache and Apache. See if it still spirals into swap hell. You could also try setting rlimits for the varnish user, though I don't know if settings in /etc/security/limits.conf apply to privilege-dropped processes. > The varnish storage file is running on the same physical disk as the system and the swap, could that be the problem? Should varnish really allocate so much memory so that the system starts to swap to disk? > I think what is happening is that your hit ratio is low and your storage size is quite large, so varnish has enough objects marked as hot that it's trying to hold them all in memory? I don't know for sure, I could be way off.
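[Editor's note] Concretely, the suggestion above amounts to shrinking the -s argument until the storage file fits in physical memory. On Calle's 16GB machines that might look like the following; the 12G figure is an illustrative guess, leaving headroom for Apache, buffers, and the page cache:

```
/opt/varnish/sbin/varnishd -a :80 \
    -p lru_interval 3600 \
    -f /opt/varnish/conf/default.vcl \
    -T 127.0.0.1:6082 -t 3600 -w 128,1000,60 \
    -u varnish -g varnish \
    -s file,/srv/varnish/varnish_storage.bin,12G \
    -P /var/run/varnish.pid
```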
I think if you restrict the storage size there will be increased disk activity as you churn the cache, but you won't be churning swap space as well, and you shouldn't exhaust the virtual memory of the system. I'd have to test that though. --Dave From varnish-list at itiva.com Fri Apr 18 18:45:49 2008 From: varnish-list at itiva.com (DHF) Date: Fri, 18 Apr 2008 11:45:49 -0700 Subject: regsub, string concatenation? In-Reply-To: References: Message-ID: <4808EC5D.3000001@itiva.com> Jon Drukman wrote: > i'm trying to rewrite all incoming URLs to include the http host header > as part of the destination url. example: > > incoming: http://site1.com/someurl > rewritten: http://originserver.com/site/site1.com/someurl > > incoming: http://site2.com/otherurl > rewritten: http://originserver.com/site/site2.com/someurl > > the originserver is parsing the original hostname out of the requested > url. works great with one hardcoded host: > > set req.url = regsub(req.url, "^", "/site/site1.com"); > > i can't get it to use the submitted http host though... > > set req.url = regsub(req.url, "^", "/site/" + req.http.host); > > varnish complains about the plus sign. is there some way to do this > kind of string concatenation in the replacement? > Try this: set req.url = "/site/" req.http.host "/" req.url; The extra / by itself might not be necessary. Set will allow you to concatenate strings but I'm not sure the regsub will. I think this will provide you what you are looking for, let me know. --Dave From bm at turtle-entertainment.de Fri Apr 18 10:19:50 2008 From: bm at turtle-entertainment.de (Bjoern Metzdorf) Date: Fri, 18 Apr 2008 12:19:50 +0200 Subject: Requests suddenly slow / fine after restart Message-ID: <480875C6.60503@turtle-entertainment.de> Hello everybody, we are running 2 varnish proxies with version 1.1.2 on debian etch amd64. They are balanced with LVS. 
The command line is as follows: /usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 1,3000,120 -s file,/var/lib/varnish/proxy1/varnish_storage.bin,100G The proxies cache 2 domains, which only serve static files (no cookies in use). There are no special expire options set in the varnish config; the backend servers (lighttpd) set them. Recently we had some problems with slow requests. Some static graphic files suddenly took several seconds to load instead of the usual milliseconds. After a restart of varnish everything was fine again and the very same graphic was fast again. Since then we restart the varnish processes every morning. But 2 days ago we again had the "slow delivery" problem. I have not seen any obvious reasons for that, but I have not yet looked deeper into this issue. Perhaps somebody experienced similar things and found a solution? Thank you Regards, Bjoern From ric at digitalmarbles.com Sun Apr 20 09:44:50 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Sun, 20 Apr 2008 02:44:50 -0700 Subject: default_ttl applied even when Expires exist Message-ID: Noticed some odd behavior. On a page with an already-expired Expires header (Expires: Sat, 1 Jan 2000 00:00:00 GMT) and no other cache control headers, a stock install of Varnish 1.1.2 appears to be applying the built-in default_ttl of 120 seconds when instead it should just immediately expire. There is nothing in the vcl doing this so it appears that Varnish is just ignoring the Expires header. Can anyone else confirm? Ric From timball at gmail.com Sun Apr 20 17:25:58 2008 From: timball at gmail.com (Timothy Ball) Date: Sun, 20 Apr 2008 13:25:58 -0400 Subject: varnish and logging Message-ID: I have a pretty basic setup w/ varnish load balancing in front of Apache. The problem I am experiencing is that the apache logs themselves seem *really* off (hit rates are nearly one order of magnitude low). So I examined the varnishlog(1) man pages...
Does anyone have a script that takes varnishlog output and munges it into something that looks combinedlog-ish? Queries to google-tube have not been useful. Just for completeness my configs look like this:

--snip--snip--snip--
# blatantly ripped off from supplied example++

backend default {
    set backend.host = "10.80.80.252";
    set backend.port = "8080";
}

sub vcl_recv {
    if (req.url ~ "\.(css|js|jpg|JPG|png|gif|ico|mov|flv|swf|feed|atom|rss2)") {
        lookup;
    }
    if (req.url ~ ".trackback.") {
        error 503 "Request type not allowed.";
    }
}

sub vcl_pass {
    pass;
}

sub vcl_hash {
    set req.hash += req.url;
    set req.hash += req.http.host;
    hash;
}

sub vcl_hit {
    if (!obj.cacheable) {
        pass;
    }
    deliver;
}

sub vcl_miss {
    fetch;
}

sub vcl_fetch {
    if (!obj.valid) {
        error;
    }
    if (!obj.cacheable) {
        pass;
    }
    if (obj.http.Set-Cookie) {
        pass;
    }
    insert;
}

sub vcl_deliver {
    deliver;
}

sub vcl_timeout {
    discard;
}
--snip--snip--snip--

TIA, --timball -- GPG key available on pgpkeys.mit.edu pub 1024D/511FBD54 2001-07-23 Timothy Lu Hu Ball Key fingerprint = B579 29B0 F6C8 C7AA 3840 E053 FE02 BB97 511F BD54 -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at dynamine.net Sun Apr 20 18:00:28 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Sun, 20 Apr 2008 11:00:28 -0700 Subject: varnish and logging In-Reply-To: References: Message-ID: <86db848d0804201100u5a35ef23rda451a26eee9ecbb@mail.gmail.com> On Sun, Apr 20, 2008 at 10:25 AM, Timothy Ball wrote: > Does anyone have a script that takes varnishlog output and munges it into > something that looks combinedlog-ish? Queries to google-tube have not been > useful. varnishncsa(1) comes in the box.
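[Editor's note] In other words, instead of post-processing varnishlog output, varnishncsa reads the same shared-memory log and emits NCSA-style lines directly. A minimal invocation might look like this; flag spellings are as documented in varnishncsa(1) of this era, so verify against your installed version:

```
# append NCSA-format entries to a log file as requests are served
varnishncsa -a -w /var/log/varnish/access.log

# or daemonize with a pidfile
varnishncsa -D -a -w /var/log/varnish/access.log -P /var/run/varnishncsa.pid
```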
--Michael From ric at digitalmarbles.com Sun Apr 20 19:28:20 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Sun, 20 Apr 2008 12:28:20 -0700 Subject: default_ttl applied even when Expires exist In-Reply-To: References: Message-ID: On Apr 20, 2008, at 2:44 AM, Ricardo Newbery wrote: > > Noticed some odd behavior. > > On page with an already-expired Expires header (Expires: Sat, 1 Jan > 2000 00:00:00 GMT) and no other cache control headers, a stock install > of Varnish 1.1.2 appears to be applying the built-in default_ttl of > 120 seconds when instead it should just immediately expire. There is > nothing in the vcl doing this so it appears that Varnish is just > ignoring the Expires header. > > Can anyone else confirm? > > Ric Answering my own question. I see in rfc2616.c that this behavior is intentional. Varnish apparently assumes a "clockless" origin server if the Expires date is not in the future and then applies the default ttl. The solution to this -- assuming you can't change the backend behavior -- appears to be to manually set a default_ttl = 0. Are there any potential issues with this solution? Ric From ric at digitalmarbles.com Mon Apr 21 00:47:13 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Sun, 20 Apr 2008 17:47:13 -0700 Subject: default_ttl applied even when Expires exist In-Reply-To: References: Message-ID: <8240BA9F-EAC7-4A9B-8128-265ABAD83B89@digitalmarbles.com> On Apr 20, 2008, at 12:28 PM, Ricardo Newbery wrote: > > On Apr 20, 2008, at 2:44 AM, Ricardo Newbery wrote: > >> >> Noticed some odd behavior. >> >> On page with an already-expired Expires header (Expires: Sat, 1 Jan >> 2000 00:00:00 GMT) and no other cache control headers, a stock >> install >> of Varnish 1.1.2 appears to be applying the built-in default_ttl of >> 120 seconds when instead it should just immediately expire. There is >> nothing in the vcl doing this so it appears that Varnish is just >> ignoring the Expires header. >> >> Can anyone else confirm? 
>> >> Ric > > > > Answering my own question. > > I see in rfc2616.c that this behavior is intentional. Varnish > apparently assumes a "clockless" origin server if the Expires date > is not in the future and then applies the default ttl. > > The solution to this -- assuming you can't change the backend > behavior -- appears to be to manually set a default_ttl = 0. Are > there any potential issues with this solution? > > Ric > > Regarding this behavior. I would like to suggest to the Varnish developers that this logic seems faulty. I guess it's reasonable to assume a bad backend clock if the Date header looks off... but the Expires header? At least one backend I'm familiar with uses an already-expired Expires date as a shorthand for "do not cache" and it seems that this is valid behavior according to RFC 2616. From RFC 2616 (14.9.3), Many HTTP/1.0 cache implementations will treat an Expires value that is less than or equal to the response Date value as being equivalent to the Cache-Control response directive "no-cache". If an HTTP/1.1 cache receives such a response, and the response does not include a Cache-Control header field, it SHOULD consider the response to be non-cacheable in order to retain compatibility with HTTP/1.0 servers. Even in the case of a "clockless" origin server, RFC 2616 allows for a past Expires date, From RFC 2616 (14.18.1), Some origin server implementations might not have a clock available. An origin server without a clock MUST NOT assign Expires or Last- Modified values to a response, unless these values were associated with the resource by a system or user with a reliable clock. It MAY assign an Expires value that is known, at or before server configuration time, to be in the past (this allows "pre-expiration" of responses without storing separate Expires values for each resource). 
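[Editor's note] Pending any change to the heuristic, the workaround described above (forcing default_ttl to zero) can be applied at start-up or on a running instance; the parameter syntax here follows the varnishd invocations shown elsewhere in this thread, so check it against your version:

```
# at start-up
varnishd ... -p default_ttl 0 ...

# or at runtime, through the management interface
varnishadm -T 127.0.0.1:6082 param.set default_ttl 0
```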
Ric From Phuwadon at sanookonline.co.th Mon Apr 21 02:27:03 2008 From: Phuwadon at sanookonline.co.th (Phuwadon Danrahan) Date: Mon, 21 Apr 2008 09:27:03 +0700 Subject: Varnishncsa options Message-ID: Hi, I'm running the system by using 1 varnish server and 2-3 backend servers with different domain names. I need to keep separate access logs for each domain and I could not find the proper options for this function. I had tried ' varnishncsa -i RxRequest -I "Host: mydomain.domain.com" ' but no luck. This option works well with varnishlog. Thanks. From phk at phk.freebsd.dk Mon Apr 21 05:33:09 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 21 Apr 2008 05:33:09 +0000 Subject: default_ttl applied even when Expires exist In-Reply-To: Your message of "Sun, 20 Apr 2008 17:47:13 MST." <8240BA9F-EAC7-4A9B-8128-265ABAD83B89@digitalmarbles.com> Message-ID: <76968.1208755989@critter.freebsd.dk> In message <8240BA9F-EAC7-4A9B-8128-265ABAD83B89 at digitalmarbles.com>, Ricardo Newbery writes: >> I see in rfc2616.c that this behavior is intentional. Varnish >> apparently assumes a "clockless" origin server if the Expires date >> is not in the future and then applies the default ttl. >Regarding this behavior. I would like to suggest to the Varnish >developers that this logic seems faulty. I guess it's reasonable to >assume a bad backend clock if the Date header looks off... but the >Expires header? That particular piece of code is taken pretty directly from RFC2616 with the addition of the default_ttl assumption. I'm not at all averse to changing this code, provided we can agree what the correct heuristics should be. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence.
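[Editor's note] The heuristic Ricardo argues for, trusting Expires even when it is at or before Date and clamping the result at zero, can be sketched outside C. This is an illustration of the proposed logic, not the actual rfc2616.c code:

```python
def retirement_age(default_ttl, h_date=None, h_expires=None):
    """TTL in seconds, given Date: and Expires: as epoch seconds.

    Proposed rule: when both headers are present, cap the TTL at
    Expires - Date, floored at zero, so a past (or equal) Expires
    means "do not cache" rather than falling back to default_ttl.
    """
    age = default_ttl
    if h_date is not None and h_expires is not None:
        # no "Date < Expires" precondition: a non-positive
        # difference simply clamps the TTL to zero
        age = max(0, min(age, h_expires - h_date))
    return age
```

With default_ttl = 120, an Expires 60 seconds after Date yields a TTL of 60, while an already-expired Expires yields 0 instead of the current fallback to 120.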
From phk at phk.freebsd.dk Mon Apr 21 06:19:29 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 21 Apr 2008 06:19:29 +0000 Subject: regsub, string concatenation? In-Reply-To: Your message of "Fri, 18 Apr 2008 09:35:42 MST." Message-ID: <80207.1208758769@critter.freebsd.dk> In message , Jon Drukman writes: >i'm trying to rewrite all incoming URLs to include the http host header >as part of the destination url. example: > > set req.url = regsub(req.url, "^", "/site/" + req.http.host); You can't do it directly right now, but this may be a feasible workaround: set req.http.foobar = "/site/"; set req.http.foo = regsub(req.url, "^", req.http.foobar); unset req.http.foobar; -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Mon Apr 21 06:41:34 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 21 Apr 2008 06:41:34 +0000 Subject: Varnish dumping cache? In-Reply-To: Your message of "Mon, 14 Apr 2008 12:20:03 MST." <6D94E230-6B99-4483-BDCD-E4EB4401B9E0@F4.ca> Message-ID: <80712.1208760094@critter.freebsd.dk> In message <6D94E230-6B99-4483-BDCD-E4EB4401B9E0 at F4.ca>, Skye Poier Nott writes: >Update: I ran varnishd in foreground with -d and I'm seeing these >periodically, which would explain the cache invalidation... > >Child not responding to ping >Cache child died pid=23899 status=0x9 This is the manager process not getting a reply from the child process and restarting it, assuming that it is not serving requests either. You need to find out why the child process does not reply to pings. The first thing to do is to increase the manager's timeout by increasing the "cli_timeout" parameter to see if the child process is wedged or just slow.
-- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Mon Apr 21 06:50:11 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 21 Apr 2008 06:50:11 +0000 Subject: Requests suddenly slow / fine after restart In-Reply-To: Your message of "Fri, 18 Apr 2008 12:19:50 +0200." <480875C6.60503@turtle-entertainment.de> Message-ID: <80823.1208760611@critter.freebsd.dk> In message <480875C6.60503 at turtle-entertainment.de>, Bjoern Metzdorf writes: >Recently we had some problems with slow requests. Some static graphic >files suddenly took several seconds to load instead of the usual >milliseconds. After a restart of varnish everything was fine again and >the very same graphic was fast again. You want to use some tool to graph your system activity so that you can see what system resource is causing your grief. Most likely it is disk-I/O, but without data it is impossible to be sure of that. A lot of varnish people seem to be partial to the "Munin" tool for this. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Mon Apr 21 07:08:23 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 21 Apr 2008 07:08:23 +0000 Subject: Varnish crashing when system starts to swap In-Reply-To: Your message of "Thu, 17 Apr 2008 15:03:39 +0200." Message-ID: <81618.1208761703@critter.freebsd.dk> In message , Calle Korjus writes: >Varnish looks fine until it's had about 1.5 million requests, then >we can see the kswapd0 and kswapd1 start working and load average >rises to about 200 and the machine gets totally unresponsive. Top >shows a lot of CPU being spent on I/O waits and varnish child >process restarts sometimes.
In best >case the process restarts and the server starts behaving within 5 >minutes but sometimes varnish dies completely. One thing we have >noticed is that the reserved memory for varnish keeps rising and >when it crashes it is usually around 14G. You didn't say which varnish version you are using, but we have fixed a number of memory leaks in -trunk recently. You should probably reduce the varnish storage file size to the amount of RAM you are willing to spend on Varnish when you run on the same machine as your backend. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ric at digitalmarbles.com Mon Apr 21 07:15:07 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Mon, 21 Apr 2008 00:15:07 -0700 Subject: default_ttl applied even when Expires exist In-Reply-To: <76968.1208755989@critter.freebsd.dk> References: <76968.1208755989@critter.freebsd.dk> Message-ID: On Apr 20, 2008, at 10:33 PM, Poul-Henning Kamp wrote: > In message <8240BA9F- > EAC7-4A9B-8128-265ABAD83B89 at digitalmarbles.com>, Ricardo N > ewbery writes: > >>> I see in rfc2616.c that this behavior is intentional. Varnish >>> apparently assumes a "clockless" origin server if the Expires date >>> is not in the future and then applies the default ttl. > >> Regarding this behavior. I would like to suggest to the Varnish >> developers that this logic seems faulty. I guess it's reasonable to >> assume a bad backend clock if the Date header looks off... but the >> Expires header? > > That particular piece of code is taken pretty directly from RFC2616 > with addition of the default_ttl assumption. > > I'm not at all adverse to changing this code, provided we can agree > what the correct heuristics should be. Well, if I parse the pseudocode correctly, it seems to be claiming to do the right thing. 
But the actual code following adds something extra to the heuristic which results in slightly different behavior. Using the 1.1.2 release, lines 84-86 in rfc2616.c, if (date && expires) retirement_age = max(0, min(retirement_age, Expires: - Date:)) But in lines 146-145, we have, if (h_date != 0 && h_expires != 0) { if (h_date < h_expires && h_expires - h_date < retirement_age) retirement_age = h_expires - h_date; } Which appears to impose an extra requirement that Expires must be greater than Date. Fix that (and enforce a floor of 0) and it seems like we can interpret Expires with a date in the past correctly. Ric From tietje at topconcepts.com Mon Apr 21 10:26:24 2008 From: tietje at topconcepts.com (Sven Tietje) Date: Mon, 21 Apr 2008 12:26:24 +0200 Subject: url.purge Message-ID: <000601c8a39a$27269470$9e00a8c0@stade.topconcepts.net> hi, i have got a problem with url.purge :D i already have searched the mailing list archive and the svn and i have found http://varnish.projects.linpro.no/browser/trunk/varnish-cache/include/cli.h?rev=1816 so, i try to purge all documents for a host by doing varnishadm -T 127.0.0.1:6082 url.purge "#http://mydomain.de/#$" or to delete a specific file of a host varnishadm -T 127.0.0.1:6082 url.purge "myfile.html#http://mydomain.de/#$" it doesn't work :-) using varnish-1.1 on sles. please help me ... i convinced my ceo/cto to use varnish instead of squid because of its mighty purging :D thanks sven From phk at phk.freebsd.dk Mon Apr 21 10:58:59 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 21 Apr 2008 10:58:59 +0000 Subject: url.purge In-Reply-To: Your message of "Mon, 21 Apr 2008 12:26:24 +0200."
<000601c8a39a$27269470$9e00a8c0@stade.topconcepts.net> Message-ID: <84147.1208775539@critter.freebsd.dk> In message <000601c8a39a$27269470$9e00a8c0 at stade.topconcepts.net>, "Sven Tietje " writes: >so, i try to purge all documents for a host by doing >varnishadm -T 127.0.0.1:6082 url.purge "#http://mydomain.de/#$" url.purge only operates on the URL part of the request, not on the Host: header part. If you need to select on both host+url you need to use hash.purge CLI command and the hash string, by default is built from URL first and host last, so the command to purge http://foo.bar.com/index.html would be: hash.purge "index.html#foo.bar.com#" -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ppragin at SolutionSet.com Mon Apr 21 14:25:58 2008 From: ppragin at SolutionSet.com (Pavel Pragin) Date: Mon, 21 Apr 2008 07:25:58 -0700 Subject: url.purge In-Reply-To: <84147.1208775539@critter.freebsd.dk> References: Your message of "Mon, 21 Apr 2008 12:26:24 +0200."<000601c8a39a$27269470$9e00a8c0@stade.topconcepts.net> <84147.1208775539@critter.freebsd.dk> Message-ID: Hello, Please help me get Purging working. I am really stuck. I followed the directions in this link: http://varnish.projects.linpro.no/wiki/VCLExamplePurging I added these lines to my VCL configuration file and restarted Varnish: sub vcl_hit { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged."; } } When I connect to Apache on port 80 and send these command nothing happens. I need a way to Purge on the fly! [root at 155496-app3 varnish]# telnet 67.192.43.99 80 Trying 6.12.4.9... Connected to 6.12.4.9. Escape character is '^]'. 
PURGE / HTTP/1.1 Host: www.bla.net Thanks -----Original Message----- From: varnish-misc-bounces at projects.linpro.no [mailto:varnish-misc-bounces at projects.linpro.no] On Behalf Of Poul-Henning Kamp Sent: Monday, April 21, 2008 3:59 AM To: Sven Tietje Cc: varnish-misc at projects.linpro.no Subject: Re: url.purge In message <000601c8a39a$27269470$9e00a8c0 at stade.topconcepts.net>, "Sven Tietje " writes: >so, i try to purge all documents for a host by doing >varnishadm -T 127.0.0.1:6082 url.purge "#http://mydomain.de/#$" url.purge only operates on the URL part of the request, not on the Host: header part. If you need to select on both host+url you need to use hash.purge CLI command and the hash string, by default is built from URL first and host last, so the command to purge http://foo.bar.com/index.html would be: hash.purge "index.html#foo.bar.com#" -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. _______________________________________________ varnish-misc mailing list varnish-misc at projects.linpro.no http://projects.linpro.no/mailman/listinfo/varnish-misc From tietje at topconcepts.com Mon Apr 21 14:35:54 2008 From: tietje at topconcepts.com (Sven Tietje) Date: Mon, 21 Apr 2008 16:35:54 +0200 Subject: url.purge In-Reply-To: <90347.1208787370@critter.freebsd.dk> References: Your message of "Mon, 21 Apr 2008 14:14:31 +0200." <001001c8a3a9$41e92ca0$9e00a8c0@stade.topconcepts.net> <90347.1208787370@critter.freebsd.dk> Message-ID: <000701c8a3bd$027b98a0$9e00a8c0@stade.topconcepts.net> >>> isn`t it implemented in varnish-1.1? >> >> No, sorry. >> >> I can't remember if it's in 1.2 http://varnish.projects.linpro.no/browser/branches/1.2/include/cli.h looks like that. am i right? >ok, i`ll give svn-trunk a try. 
will it be possible to purge all documents of >a specified host and to purge all files in folder "x" of a specified host, >for example all styles? > >hash.purge "/styles/*#myhost.com#"? > yes. Only, you should use ".*" as wildcard, as it is a regular expression ok - thanks. sounds great and will make life so easy :D From phk at phk.freebsd.dk Mon Apr 21 14:40:33 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 21 Apr 2008 14:40:33 +0000 Subject: url.purge In-Reply-To: Your message of "Mon, 21 Apr 2008 07:25:58 MST." Message-ID: <90501.1208788833@critter.freebsd.dk> In message , "Pavel Pragin" writes: >directions in this link: >http://varnish.projects.linpro.no/wiki/VCLExamplePurging > >I added these lines to my VCL configuration file and restarted Varnish: >sub vcl_hit { > if (req.request == "PURGE") { > set obj.ttl = 0s; > error 200 "Purged."; > } >} That is not enough; you need to include the _entire_ example. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ppragin at SolutionSet.com Mon Apr 21 14:43:03 2008 From: ppragin at SolutionSet.com (Pavel Pragin) Date: Mon, 21 Apr 2008 07:43:03 -0700 Subject: url.purge In-Reply-To: <90501.1208788833@critter.freebsd.dk> References: Your message of "Mon, 21 Apr 2008 07:25:58 MST." <90501.1208788833@critter.freebsd.dk> Message-ID: I tried the whole example, but it still doesn't work.
Here is my whole vcl file:

backend default {
    set backend.host = "127.0.0.1";
    set backend.port = "9090";
}

sub vcl_recv {
    if (req.request == "GET" && req.url ~ "\.(gif|jpg|swf|css)$") {
        lookup;
    }
    if (req.request != "GET" && req.request != "HEAD") {
        pipe;
    }
    if (req.http.Expect) {
        pipe;
    }
    if (req.http.Authenticate || req.http.Cookie) {
        pass;
    }
    remove req.http.X-Forwarded-For;
    set req.http.X-Forwarded-For = client.ip;
}

sub vcl_pipe { pipe; }

sub vcl_pass { pass; }

sub vcl_hash {
    set req.hash += req.url;
    set req.hash += req.http.host;
    hash;
}

sub vcl_hit {
    if (!obj.cacheable) {
        pass;
    }
    deliver;
}

sub vcl_miss {
    if (req.http.user-agent ~ "spider") {
        error 503 "Not in cache";
    }
    fetch;
}

sub vcl_fetch {
    if (!obj.valid) {
        error;
    }
    if (!obj.cacheable) {
        pass;
    }
    if (obj.http.Set-Cookie) {
        pass;
    }
    insert;
}

sub vcl_deliver { deliver; }

sub vcl_timeout { discard; }

sub vcl_discard { discard; }

acl purge { "localhost"; }

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        lookup;
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        error 404 "Not in cache.";
    }
}

From phk at phk.freebsd.dk Mon Apr 21 14:49:16 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 21 Apr 2008 14:49:16 +0000 Subject: url.purge In-Reply-To: Your message of "Mon, 21 Apr 2008 07:43:03 MST." Message-ID: <90582.1208789356@critter.freebsd.dk> In message , "Pavel Pragin" writes: >I tried the whole example it still doesn't work. >Here is my whole vcl file: Order is important here... Your first vcl_recv {} sends the PURGE request to pipe before the second copy ever gets at it. > sub vcl_recv { [...] > if (req.request != "GET" && req.request != "HEAD") { > pipe; > } [...]
> } >sub vcl_recv { > if (req.request == "PURGE") { > if (!client.ip ~ purge) { > error 405 "Not allowed."; > } > lookup; > } >} -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ppragin at SolutionSet.com Mon Apr 21 14:52:50 2008 From: ppragin at SolutionSet.com (Pavel Pragin) Date: Mon, 21 Apr 2008 07:52:50 -0700 Subject: url.purge In-Reply-To: <90582.1208789356@critter.freebsd.dk> References: Your message of "Mon, 21 Apr 2008 07:43:03 MST." <90582.1208789356@critter.freebsd.dk> Message-ID: Hello, Can you please help me put it in the right order? I will be very grateful. Thanks -----Original Message----- From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On Behalf Of Poul-Henning Kamp Sent: Monday, April 21, 2008 7:49 AM To: Pavel Pragin Cc: Sven Tietje; varnish-misc at projects.linpro.no Subject: Re: url.purge In message , "Pavel Pragin" writes: >I tried the whole example it still doesn't work. >Here is my whole vcl file: Order is important here... Your first vcl_recv {} sends the PURGE request to pipe before the second copy ever gets at it. > sub vcl_recv { [...] > if (req.request != "GET" && req.request != "HEAD") { > pipe; > } [...] > } >sub vcl_recv { > if (req.request == "PURGE") { > if (!client.ip ~ purge) { > error 405 "Not allowed."; > } > lookup; > } >} -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Mon Apr 21 14:55:59 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 21 Apr 2008 14:55:59 +0000 Subject: url.purge In-Reply-To: Your message of "Mon, 21 Apr 2008 07:52:50 MST."
Message-ID: <90692.1208789759@critter.freebsd.dk> In message , "Pavel Pragin" writes: >Hello, > >Can you please help me put it the right order. I will be very great >full. Put the purge stuff first. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ppragin at SolutionSet.com Mon Apr 21 15:00:56 2008 From: ppragin at SolutionSet.com (Pavel Pragin) Date: Mon, 21 Apr 2008 08:00:56 -0700 Subject: url.purge In-Reply-To: <90692.1208789759@critter.freebsd.dk> References: Your message of "Mon, 21 Apr 2008 07:52:50 MST." <90692.1208789759@critter.freebsd.dk> Message-ID: I am getting this now. Its says nothing to purge? [root at 155493-app1 ~]# telnet localhost 80 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. PURGE / HTTP/1.1 Host: beta.manyone.net HTTP/1.0 404 Not Found Server: Varnish Retry-After: 0 Content-Type: text/html; charset=utf-8 Content-Length: 425 Date: Mon, 21 Apr 2008 15:00:09 GMT X-Varnish: 497484035 Age: nan Via: 1.1 varnish Connection: keep-alive 404 Not Found

Error 404 Not Found

Not in cache.

Guru Meditation:

XID: 497484035

Varnish
Connection closed by foreign host. -----Original Message----- From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On Behalf Of Poul-Henning Kamp Sent: Monday, April 21, 2008 7:56 AM To: Pavel Pragin Cc: Sven Tietje; varnish-misc at projects.linpro.no Subject: Re: url.purge In message , "Pavel Pragin" writes: >Hello, > >Can you please help me put it the right order. I will be very great >full. Put the purge stuff first. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Mon Apr 21 15:02:12 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 21 Apr 2008 15:02:12 +0000 Subject: url.purge In-Reply-To: Your message of "Mon, 21 Apr 2008 08:00:56 MST." Message-ID: <90788.1208790132@critter.freebsd.dk> In message , "Pavel Pragin" writes: >I am getting this now. Its says nothing to purge? > >[root at 155493-app1 ~]# telnet localhost 80 >Trying 127.0.0.1... >Connected to localhost. >Escape character is '^]'. >PURGE / HTTP/1.1 >Host: beta.manyone.net > >HTTP/1.0 404 Not Found That means there was no cached object to purge. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ppragin at SolutionSet.com Mon Apr 21 15:04:00 2008 From: ppragin at SolutionSet.com (Pavel Pragin) Date: Mon, 21 Apr 2008 08:04:00 -0700 Subject: url.purge In-Reply-To: <90788.1208790132@critter.freebsd.dk> References: Your message of "Mon, 21 Apr 2008 08:00:56 MST." <90788.1208790132@critter.freebsd.dk> Message-ID: I am pretty sure there is: 282624 . . bytes allocated 5368426496 . . bytes free Does this purge all cache for beta.manyone.net? 
PURGE / HTTP/1.1 Host: beta.manyone.net Thank a lot Pavel -----Original Message----- From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On Behalf Of Poul-Henning Kamp Sent: Monday, April 21, 2008 8:02 AM To: Pavel Pragin Cc: Sven Tietje; varnish-misc at projects.linpro.no Subject: Re: url.purge In message , "Pavel Pragin" writes: >I am getting this now. Its says nothing to purge? > >[root at 155493-app1 ~]# telnet localhost 80 >Trying 127.0.0.1... >Connected to localhost. >Escape character is '^]'. >PURGE / HTTP/1.1 >Host: beta.manyone.net > >HTTP/1.0 404 Not Found That means there was no cached object to purge. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Mon Apr 21 15:04:47 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 21 Apr 2008 15:04:47 +0000 Subject: url.purge In-Reply-To: Your message of "Mon, 21 Apr 2008 08:04:00 MST." Message-ID: <90808.1208790287@critter.freebsd.dk> In message , "Pavel Pragin" writes: >I am pretty sure there is: >282624 . . bytes allocated >5368426496 . . bytes free > > >Does this purge all cache for beta.manyone.net? >PURGE / HTTP/1.1 >Host: beta.manyone.net No, it only purges the "/" document. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ppragin at SolutionSet.com Mon Apr 21 15:05:47 2008 From: ppragin at SolutionSet.com (Pavel Pragin) Date: Mon, 21 Apr 2008 08:05:47 -0700 Subject: url.purge In-Reply-To: <90808.1208790287@critter.freebsd.dk> References: Your message of "Mon, 21 Apr 2008 08:04:00 MST." <90808.1208790287@critter.freebsd.dk> Message-ID: Hello, How can I purge all the cache using this? 
Thanks -----Original Message----- From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On Behalf Of Poul-Henning Kamp Sent: Monday, April 21, 2008 8:05 AM To: Pavel Pragin Cc: Sven Tietje; varnish-misc at projects.linpro.no Subject: Re: url.purge In message , "Pavel Pragin" writes: >I am pretty sure there is: >282624 . . bytes allocated >5368426496 . . bytes free > > >Does this purge all cache for beta.manyone.net? >PURGE / HTTP/1.1 >Host: beta.manyone.net No, it only purges the "/" document. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Mon Apr 21 15:10:11 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 21 Apr 2008 15:10:11 +0000 Subject: url.purge In-Reply-To: Your message of "Mon, 21 Apr 2008 08:05:47 MST." Message-ID: <90829.1208790611@critter.freebsd.dk> In message , "Pavel Pragin" writes: >How can I purge all the cache using this? Then you need to use the more advanced hash.purge functionality, I'm not sure if we have a cook-book example for that. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ppragin at SolutionSet.com Mon Apr 21 15:18:24 2008 From: ppragin at SolutionSet.com (Pavel Pragin) Date: Mon, 21 Apr 2008 08:18:24 -0700 Subject: url.purge In-Reply-To: <90829.1208790611@critter.freebsd.dk> References: Your message of "Mon, 21 Apr 2008 08:05:47 MST." <90829.1208790611@critter.freebsd.dk> Message-ID: Hello, Here is what I am trying to accomplish. We want to be able to clear all varnish cache from the php application. It may be using a php function or something. So far the only way to clear varnish cache has been to restart Varnish. What would be the best way? 
Thanks -----Original Message----- From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On Behalf Of Poul-Henning Kamp Sent: Monday, April 21, 2008 8:10 AM To: Pavel Pragin Cc: Sven Tietje; varnish-misc at projects.linpro.no Subject: Re: url.purge In message , "Pavel Pragin" writes: >How can I purge all the cache using this? Then you need to use the more advanced hash.purge functionality, I'm not sure if we have a cook-book example for that. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ppragin at SolutionSet.com Mon Apr 21 17:42:30 2008 From: ppragin at SolutionSet.com (Pavel Pragin) Date: Mon, 21 Apr 2008 10:42:30 -0700 Subject: how to flush all cache In-Reply-To: <90501.1208788833@critter.freebsd.dk> References: Your message of "Mon, 21 Apr 2008 07:25:58 MST." <90501.1208788833@critter.freebsd.dk> Message-ID: Hello, Is there a way to flush all cache from varnish without restarting it? Thanks From phk at phk.freebsd.dk Mon Apr 21 17:44:18 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 21 Apr 2008 17:44:18 +0000 Subject: how to flush all cache In-Reply-To: Your message of "Mon, 21 Apr 2008 10:42:30 MST." Message-ID: <95786.1208799858@critter.freebsd.dk> In message , "Pavel Pragin" writes: >Hello, > >Is there a way to flush all cache from varnish without restarting it? url.purge . -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
From ppragin at SolutionSet.com Mon Apr 21 17:52:52 2008 From: ppragin at SolutionSet.com (Pavel Pragin) Date: Mon, 21 Apr 2008 10:52:52 -0700 Subject: how to flush all cache In-Reply-To: <95786.1208799858@critter.freebsd.dk> References: Your message of "Mon, 21 Apr 2008 10:42:30 MST." <95786.1208799858@critter.freebsd.dk> Message-ID: Hello, Thanks for that. Where is this run from? Using http request or from the admin port? Thanks -----Original Message----- From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On Behalf Of Poul-Henning Kamp Sent: Monday, April 21, 2008 10:44 AM To: Pavel Pragin Cc: varnish-misc at projects.linpro.no Subject: Re: how to flush all cache In message , "Pavel Pragin" writes: >Hello, > >Is there a way to flush all cache from varnish without restarting it? url.purge . -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ppragin at SolutionSet.com Mon Apr 21 17:59:07 2008 From: ppragin at SolutionSet.com (Pavel Pragin) Date: Mon, 21 Apr 2008 10:59:07 -0700 Subject: how to flush all cache In-Reply-To: <95786.1208799858@critter.freebsd.dk> References: Your message of "Mon, 21 Apr 2008 10:42:30 MST." <95786.1208799858@critter.freebsd.dk> Message-ID: Hello. Does this mean its cleared? [root at 155493-app1 varnish]# telnet localhost 6082 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. url.purge . 200 8 PURGE . I don't see any change in the varnish stats? Thanks -----Original Message----- From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On Behalf Of Poul-Henning Kamp Sent: Monday, April 21, 2008 10:44 AM To: Pavel Pragin Cc: varnish-misc at projects.linpro.no Subject: Re: how to flush all cache In message , "Pavel Pragin" writes: >Hello, > >Is there a way to flush all cache from varnish without restarting it? url.purge . 
-- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From perbu at linpro.no Mon Apr 21 18:00:04 2008 From: perbu at linpro.no (Per Andreas Buer) Date: Mon, 21 Apr 2008 20:00:04 +0200 Subject: how to flush all cache In-Reply-To: References: Your message of "Mon, 21 Apr 2008 10:42:30 MST." <95786.1208799858@critter.freebsd.dk> Message-ID: <480CD624.1000207@linpro.no> Hi Pavel. Please consult the FAQ before asking on the mailing list. Thanks! Pavel Pragin skrev: > Hello, > > Thanks for that. Where is this run from? Using http request or from the > admin port? > > Thanks > > > -----Original Message----- > From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On Behalf > Of Poul-Henning Kamp > Sent: Monday, April 21, 2008 10:44 AM > To: Pavel Pragin > Cc: varnish-misc at projects.linpro.no > Subject: Re: how to flush all cache > > In message > m>, "Pavel Pragin" writes: >> Hello, >> >> Is there a way to flush all cache from varnish without restarting it? > > url.purge . > From ppragin at SolutionSet.com Mon Apr 21 18:04:33 2008 From: ppragin at SolutionSet.com (Pavel Pragin) Date: Mon, 21 Apr 2008 11:04:33 -0700 Subject: how to flush all cache In-Reply-To: <480CD624.1000207@linpro.no> References: Your message of "Mon, 21 Apr 2008 10:42:30MST." <95786.1208799858@critter.freebsd.dk> <480CD624.1000207@linpro.no> Message-ID: Hello, I looked at the FAQ and its insufficient. I would love to contribute to it once I get this working. I have been contributing to the Amanda project and I am a believer in open source and taking the time to help others. Varnish is great , but documentation is inadequate to say the least. 
Thanks -----Original Message----- From: varnish-misc-bounces at projects.linpro.no [mailto:varnish-misc-bounces at projects.linpro.no] On Behalf Of Per Andreas Buer Sent: Monday, April 21, 2008 11:00 AM Cc: varnish-misc at projects.linpro.no Subject: Re: how to flush all cache Hi Pavel. Please consult the FAQ before asking on the mailing list. Thanks! Pavel Pragin skrev: > Hello, > > Thanks for that. Where is this run from? Using http request or from the > admin port? > > Thanks > > > -----Original Message----- > From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On Behalf > Of Poul-Henning Kamp > Sent: Monday, April 21, 2008 10:44 AM > To: Pavel Pragin > Cc: varnish-misc at projects.linpro.no > Subject: Re: how to flush all cache > > In message > m>, "Pavel Pragin" writes: >> Hello, >> >> Is there a way to flush all cache from varnish without restarting it? > > url.purge . > _______________________________________________ varnish-misc mailing list varnish-misc at projects.linpro.no http://projects.linpro.no/mailman/listinfo/varnish-misc From phk at phk.freebsd.dk Mon Apr 21 18:08:59 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 21 Apr 2008 18:08:59 +0000 Subject: how to flush all cache In-Reply-To: Your message of "Mon, 21 Apr 2008 10:59:07 MST." Message-ID: <95946.1208801339@critter.freebsd.dk> In message , "Pavel Pragin" writes: >Hello. > >Does this mean its cleared? >[root at 155493-app1 varnish]# telnet localhost 6082 >Trying 127.0.0.1... >Connected to localhost. >Escape character is '^]'. >url.purge . >200 8 >PURGE . > > >I don't see any change in the varnish stats? You wont see any immediate effect, the objects are only thrown out as we encounter them during our normal business. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
From ppragin at SolutionSet.com Mon Apr 21 18:10:34 2008 From: ppragin at SolutionSet.com (Pavel Pragin) Date: Mon, 21 Apr 2008 11:10:34 -0700 Subject: how to flush all cache In-Reply-To: <95946.1208801339@critter.freebsd.dk> References: Your message of "Mon, 21 Apr 2008 10:59:07 MST." <95946.1208801339@critter.freebsd.dk> Message-ID: Will this have same effect: varnishadm -T 127.0.0.1:6082 url.purge "." thanks a lot -----Original Message----- From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On Behalf Of Poul-Henning Kamp Sent: Monday, April 21, 2008 11:09 AM To: Pavel Pragin Cc: varnish-misc at projects.linpro.no Subject: Re: how to flush all cache In message , "Pavel Pragin" writes: >Hello. > >Does this mean its cleared? >[root at 155493-app1 varnish]# telnet localhost 6082 >Trying 127.0.0.1... >Connected to localhost. >Escape character is '^]'. >url.purge . >200 8 >PURGE . > > >I don't see any change in the varnish stats? You wont see any immediate effect, the objects are only thrown out as we encounter them during our normal business. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Mon Apr 21 18:11:48 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 21 Apr 2008 18:11:48 +0000 Subject: how to flush all cache In-Reply-To: Your message of "Mon, 21 Apr 2008 11:10:34 MST." Message-ID: <95977.1208801508@critter.freebsd.dk> In message , "Pavel Pragin" writes: >Will this have same effect: >varnishadm -T 127.0.0.1:6082 url.purge "." yes -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
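The varnishadm invocation confirmed above can also be scripted directly against the management port, which is what an application would do instead of shelling out. The helper below is a hypothetical sketch, not part of Varnish; it assumes the 1.x CLI (no authentication) on the default admin port 6082, and that the reply fits in one recv:

```python
import socket

def parse_cli_status(reply):
    # The CLI answers "<status> <body-length>" on the first line,
    # followed by the body, e.g. "200 8\nPURGE .*".
    status_line, _, body = reply.partition("\n")
    code, length = status_line.split()
    return int(code), int(length), body

def purge_all(host="127.0.0.1", port=6082, pattern=".*"):
    # Equivalent of: varnishadm -T 127.0.0.1:6082 url.purge ".*"
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(("url.purge %s\n" % pattern).encode("ascii"))
        reply = s.recv(4096).decode("ascii", "replace")
    code, _, _ = parse_cli_status(reply)
    return code == 200
```

A PHP application could do the same thing with fsockopen against the admin port, or simply exec varnishadm.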
From jsd at cluttered.com Mon Apr 21 18:14:29 2008 From: jsd at cluttered.com (Jon Drukman) Date: Mon, 21 Apr 2008 11:14:29 -0700 Subject: regsub, string concatenation? In-Reply-To: <80207.1208758769@critter.freebsd.dk> References: <80207.1208758769@critter.freebsd.dk> Message-ID: Poul-Henning Kamp wrote: > In message , Jon Drukman writes: >> i'm trying to rewrite all incoming URLs to include the http host header >> as part of the destination url. example: >> >> set req.url = regsub(req.url, "^", "/site/" + req.http.host); > > You can't do it directly right now, but this may be a feasible > workaround: > > set req.http.foobar = "/site/"; > set req.http.foo = regsub(req.url, "^", req.http.foobar); > unset req.http.foobar; > Thanks for the ideas. I ended up just moving the original host request into a different header and modifying the origin server to look for that header instead of looking in the URL. -jsd- From perbu at linpro.no Mon Apr 21 18:22:45 2008 From: perbu at linpro.no (Per Andreas Buer) Date: Mon, 21 Apr 2008 20:22:45 +0200 Subject: how to flush all cache In-Reply-To: References: Your message of "Mon, 21 Apr 2008 10:42:30 MST." <95786.1208799858@critter.freebsd.dk> <480CD624.1000207@linpro.no> Message-ID: <480CDB75.8000804@linpro.no> Well, as far as I can see the following entry would answer your question perfectly; http://varnish.projects.linpro.no/wiki/FAQ#HowcanIforcearefreshonaobjectcachedbyvarnish It explains both purging via the command line and HTTP PURGE. I agree that the documentation is a bit on the sparse side. Help is always welcome. :-) Per. Pavel Pragin skrev: > Hello, > > I looked at the FAQ and it's insufficient. I would love to contribute to > it once I get this working. I have been contributing to the > Amanda project and I am a believer in open source and taking the time to > help others. Varnish is great, but documentation is inadequate to say > the least.
> > Thanks > > > -----Original Message----- > From: varnish-misc-bounces at projects.linpro.no > [mailto:varnish-misc-bounces at projects.linpro.no] On Behalf Of Per > Andreas Buer > Sent: Monday, April 21, 2008 11:00 AM > Cc: varnish-misc at projects.linpro.no > Subject: Re: how to flush all cache > > Hi Pavel. > > Please consult the FAQ before asking on the mailing list. > > Thanks! > > > Pavel Pragin skrev: >> Hello, >> >> Thanks for that. Where is this run from? Using http request or from > the >> admin port? >> >> Thanks >> >> >> -----Original Message----- >> From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On Behalf >> Of Poul-Henning Kamp >> Sent: Monday, April 21, 2008 10:44 AM >> To: Pavel Pragin >> Cc: varnish-misc at projects.linpro.no >> Subject: Re: how to flush all cache >> >> In message >> > m>, "Pavel Pragin" writes: >>> Hello, >>> >>> Is there a way to flush all cache from varnish without restarting it? >> url.purge . >> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc From ppragin at SolutionSet.com Mon Apr 21 18:27:15 2008 From: ppragin at SolutionSet.com (Pavel Pragin) Date: Mon, 21 Apr 2008 11:27:15 -0700 Subject: how to flush all cache In-Reply-To: <480CDB75.8000804@linpro.no> References: Your message of "Mon, 21 Apr 200810:42:30MST." <95786.1208799858@critter.freebsd.dk><480CD624.1000207@linpro.no> <480CDB75.8000804@linpro.no> Message-ID: Hello, Do you think there is a diff between these two: url.purge .* 200 9 PURGE .* url.purge . 200 8 PURGE . 
Thanks -----Original Message----- From: varnish-misc-bounces at projects.linpro.no [mailto:varnish-misc-bounces at projects.linpro.no] On Behalf Of Per Andreas Buer Sent: Monday, April 21, 2008 11:23 AM Cc: varnish-misc at projects.linpro.no Subject: Re: how to flush all cache Well, as far as I can see the following entry would answer your question perfectly; http://varnish.projects.linpro.no/wiki/FAQ#HowcanIforcearefreshonaobject cachedbyvarnish It explains both purging via the command line and HTTP PURGE. I agree that the documentation is a bit on the sparse side. Help is always welcome. :-) Per. Pavel Pragin skrev: > Hello, > > I looked at the FAQ and its insufficient. I would love to contribute to > it once I get this working. I have been contributing to the > Amanda project and I am a believer in open source and taking the time to > help others. Varnish is great , but documentation is inadequate to say > the least. > > Thanks > > > -----Original Message----- > From: varnish-misc-bounces at projects.linpro.no > [mailto:varnish-misc-bounces at projects.linpro.no] On Behalf Of Per > Andreas Buer > Sent: Monday, April 21, 2008 11:00 AM > Cc: varnish-misc at projects.linpro.no > Subject: Re: how to flush all cache > > Hi Pavel. > > Please consult the FAQ before asking on the mailing list. > > Thanks! > > > Pavel Pragin skrev: >> Hello, >> >> Thanks for that. Where is this run from? Using http request or from > the >> admin port? >> >> Thanks >> >> >> -----Original Message----- >> From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On Behalf >> Of Poul-Henning Kamp >> Sent: Monday, April 21, 2008 10:44 AM >> To: Pavel Pragin >> Cc: varnish-misc at projects.linpro.no >> Subject: Re: how to flush all cache >> >> In message >> > m>, "Pavel Pragin" writes: >>> Hello, >>> >>> Is there a way to flush all cache from varnish without restarting it? >> url.purge . 
>> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc _______________________________________________ varnish-misc mailing list varnish-misc at projects.linpro.no http://projects.linpro.no/mailman/listinfo/varnish-misc From jsd at cluttered.com Mon Apr 21 18:30:42 2008 From: jsd at cluttered.com (Jon Drukman) Date: Mon, 21 Apr 2008 11:30:42 -0700 Subject: stale delivery and prefetch sanity check... In-Reply-To: <4074.1200047445@critter.freebsd.dk> References: <4074.1200047445@critter.freebsd.dk> Message-ID: Poul-Henning Kamp wrote: > If any of you have time, your comments to this outline for degraded > mode and prefetching would be appreciated. Looks like a great start to me. Is there an approximate timeframe (very rough is OK) for when we might see these features? > Prefetch > -------- > > Prefetching is easy to dispatch with: at some VCL determined time > before the TTL expires, we try to refresh the object from the backend > so that it never grows stale. would you replace the current object as soon as the prefetch succeeds, or wait until the current one expires? > The simplest solution is probably to replay the headers used to > fetch the object in the first place, but this may wrongly account > the fetch to a particular client/cookie/user/account. > > The alternative is to filter headers to the bare minimum, respecting > Vary:, and hope that gives the expected result. yeah it seems like either of these choices could be very tricky. maybe make it configurable? > For the lack of any better idea, I think all prefetching will look like > it happend from a client with IP# 127.0.0.2 How about a header? X-Varnish-Prefetch: true and don't mess with the IP at all. > Degraded mode > ------------- > > Degraded mode is the intentional serving of technically stale objects > instead of returning errors. 
oh man, i really need this :) > The condition for returning stale content is: > > * Client must be marked as accepting degraded objects (VCL: > "client.degraded = true", default true) before lookup. > > * Object must be within it's timelimit for degraded mode (VCL: > "obj.stale_time = 1h", default 30 seconds). as long as the timelimit can be set to something ridiculously high. > * An attempt to fetch the object from the backend must be in progress > or recently (VCL: "backend.backoff = 1m", default 15 seconds) have failed. are you planning on having this work asynchronously? (ie: object expires, stale object is served to all requestors while the new one is retrieved from the origin, if the origin fails to serve a new one, the old one continues to serve while the stale_time window is still open). -jsd- From perbu at linpro.no Mon Apr 21 18:34:52 2008 From: perbu at linpro.no (Per Andreas Buer) Date: Mon, 21 Apr 2008 20:34:52 +0200 Subject: how to flush all cache In-Reply-To: References: Your message of "Mon, 21 Apr 200810:42:30MST." <95786.1208799858@critter.freebsd.dk><480CD624.1000207@linpro.no> <480CDB75.8000804@linpro.no> Message-ID: <480CDE4C.7050800@linpro.no> Yes - but the semantics are the same. There are many ways to skin a cat - if you are into stuff like that. Personally I like "url.purge .*" because it clearly states its a regular expression. Per. Pavel Pragin skrev: > Hello, > > Do you think there is a diff between these two: > > url.purge .* > 200 9 > PURGE .* > > url.purge . > 200 8 > PURGE . 
> > Thanks > > > -----Original Message----- > From: varnish-misc-bounces at projects.linpro.no > [mailto:varnish-misc-bounces at projects.linpro.no] On Behalf Of Per > Andreas Buer > Sent: Monday, April 21, 2008 11:23 AM > Cc: varnish-misc at projects.linpro.no > Subject: Re: how to flush all cache > > Well, as far as I can see the following entry would answer your question > > perfectly; > http://varnish.projects.linpro.no/wiki/FAQ#HowcanIforcearefreshonaobject > cachedbyvarnish > > It explains both purging via the command line and HTTP PURGE. > > I agree that the documentation is a bit on the sparse side. Help is > always welcome. :-) > > Per. > > > Pavel Pragin skrev: >> Hello, >> >> I looked at the FAQ and its insufficient. I would love to contribute > to >> it once I get this working. I have been contributing to the >> Amanda project and I am a believer in open source and taking the time > to >> help others. Varnish is great , but documentation is inadequate to say >> the least. >> >> Thanks >> >> >> -----Original Message----- >> From: varnish-misc-bounces at projects.linpro.no >> [mailto:varnish-misc-bounces at projects.linpro.no] On Behalf Of Per >> Andreas Buer >> Sent: Monday, April 21, 2008 11:00 AM >> Cc: varnish-misc at projects.linpro.no >> Subject: Re: how to flush all cache >> >> Hi Pavel. >> >> Please consult the FAQ before asking on the mailing list. >> >> Thanks! >> >> >> Pavel Pragin skrev: >>> Hello, >>> >>> Thanks for that. Where is this run from? Using http request or from >> the >>> admin port? >>> >>> Thanks >>> >>> >>> -----Original Message----- >>> From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On > Behalf >>> Of Poul-Henning Kamp >>> Sent: Monday, April 21, 2008 10:44 AM >>> To: Pavel Pragin >>> Cc: varnish-misc at projects.linpro.no >>> Subject: Re: how to flush all cache >>> >>> In message >>> >> m>, "Pavel Pragin" writes: >>>> Hello, >>>> >>>> Is there a way to flush all cache from varnish without restarting > it? 
>>> url.purge . >>> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at projects.linpro.no >> http://projects.linpro.no/mailman/listinfo/varnish-misc > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc From armdan20 at gmail.com Tue Apr 22 14:08:43 2008 From: armdan20 at gmail.com (andan andan) Date: Tue, 22 Apr 2008 16:08:43 +0200 Subject: Security doubt about Varnish and firewall. Message-ID: Hi all. We are testing Varnish in order to deploy it into a production enviroment. We have a security doubt: Should we install Varnish inside or outside firewall? For better performance, we consider that the best choice is outside, but for obvious security reasons, the better is putting it into a DMZ. Any suggestions? Somebody has Varnish outside the firewall? Thanks in advance. Best regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From varnish-list at itiva.com Tue Apr 22 18:38:03 2008 From: varnish-list at itiva.com (DHF) Date: Tue, 22 Apr 2008 11:38:03 -0700 Subject: Security doubt about Varnish and firewall. In-Reply-To: References: Message-ID: <480E308B.2080403@itiva.com> andan andan wrote: > We have a security doubt: Should we install Varnish inside or outside > firewall? I run varnish on a many linux boxes with Netfilter default log and drop rules and have not seen a performance problem. > For better performance, we consider that the best choice is outside, > but for > obvious security reasons, the better is putting it into a DMZ. This depends on your particular environment. What kind of hardware are you using? What kind of firewall is it? How much traffic can the firewall handle? How much traffic do you usually see to the backend server? Where is the backend server located? What is your reason for using a reverse proxy? 
What is the expected hit ratio on the cache? What kind of content are you delivering? Do you have any network operations tasks that require you to collect data from the server in a fashion that requires it to be behind the firewall? If the backend server is through the firewall, it could be beneficial to have your varnish box outside the firewall and you could restrict access to the backend server to only the varnish servers ip or an internal ip on a seperate network. Then run iptables or ipfw on the varnish server itself > Any suggestions? Somebody has Varnish outside the firewall? I have found no reason to not use ipfw or iptables on deployed servers, the benefit in my opinion out weighs the performance loss. With a minimal ruleset the performance impact is so small its hard to measure until you reach huge packets per second, or connections a second ( assuming your hardware isn't a few years away from collecting a pension ). I have never seen a production box reach the limits of iptables packets per second because whatever process is on the box ( apache, varnish, squid, mysql, etc ) will have long ago melted down into a pile of smoldering ruin, due to high load and iptables performance becomes irrelevant. --Dave From skye at F4.ca Tue Apr 22 20:37:02 2008 From: skye at F4.ca (Skye Poier Nott) Date: Tue, 22 Apr 2008 13:37:02 -0700 Subject: Varnish dumping cache? In-Reply-To: <80712.1208760094@critter.freebsd.dk> References: <80712.1208760094@critter.freebsd.dk> Message-ID: <97FE299C-1F8E-4527-8044-372C094D80FD@F4.ca> Thanks, I'll do as you suggest and see what happens. If it's wedged, then I should do what, attach with gdb and get a backtrace? Thanks, Skye On 20-Apr-08, at 11:41 PM, Poul-Henning Kamp wrote: > In message <6D94E230-6B99-4483-BDCD-E4EB4401B9E0 at F4.ca>, Skye Poier > Nott writes > : >> Update: I ran varnishd in foreground with -d and I'm seeing these >> periodically, which would explain the cache invalidation... 
>> >> Child not responding to ping >> Cache child died pid=23899 status=0x9 > > This is the manager process not getting a reply from the child > process and restarting it, assuming that it is not serving > requests either. > > You need to find out why the child process does not reply to pings. > > The first thing to do is to increase the managers timeout by > increasing > the "cli_timeout" parameter to see if the child process is wedged > or just slow. > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by > incompetence. > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc From f.engelhardt at 21torr.com Wed Apr 23 10:14:14 2008 From: f.engelhardt at 21torr.com (Florian Engelhardt) Date: Wed, 23 Apr 2008 12:14:14 +0200 Subject: Munin-Plugin Message-ID: <20080423121414.601bfc58@21torr.com> Hello, i am using munin to monitor the server load, cpu usage, filesystems ... and also the varnish stats (hit/miss ratio, client request, backend requests, ...). The Problem ist, that this plugin reports every two or three days 14 Million hits per second the the munin-node, and therefor the graphic generated is unusable and wrong. At the same time this "peak" is shown, nor the eth-traffic neither the cpu usage increase, they are all at the same normal level as in the period befor. The only thing that changes is the mem usage is going down a few megabytes, and the hit/miss ratio is at 0% and increasing afterwards. See attached munin-graphics Any idea of what is going wrong? /Flo -------------- next part -------------- A non-text attachment was scrubbed... 
Name: lighttpd-memory-day.png Type: image/png Size: 49677 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: lighttpd-varnish_client_req-day.png Type: image/png Size: 17776 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: lighttpd-varnish_ratio-day.png Type: image/png Size: 20235 bytes Desc: not available URL: From ssm at linpro.no Wed Apr 23 14:15:42 2008 From: ssm at linpro.no (Stig Sandbeck Mathisen) Date: Wed, 23 Apr 2008 16:15:42 +0200 Subject: Munin-Plugin In-Reply-To: <20080423121414.601bfc58@21torr.com> (Florian Engelhardt's message of "Wed, 23 Apr 2008 12:14:14 +0200") References: <20080423121414.601bfc58@21torr.com> Message-ID: <7xlk34k67l.fsf@iostat.linpro.no> On Wed, 23 Apr 2008 12:14:14 +0200, Florian Engelhardt said: > Any idea of what is going wrong? If you're using COUNTER in your munin plugin, you'll get large spikes when the counter you're tracking resets. If this is the case, you should use DERIVE instead for the peaking graph. If the varnish worker process restarts, it will start out empty (shown as the hit/miss graph dropping to 0% hit, and rising, then flattening out at the previous level after a while), and the counters will reset (shown as the large spike when COUNTER is used by rrdtool update). See http://munin.projects.linpro.no/wiki/HowToWritePlugins#DERIVEDvs.COUNTER -- Stig Sandbeck Mathisen, Linpro From wichert at wiggy.net Wed Apr 23 15:32:25 2008 From: wichert at wiggy.net (Wichert Akkerman) Date: Wed, 23 Apr 2008 17:32:25 +0200 Subject: unprocessed requests Message-ID: <480F5689.8070806@wiggy.net> We are seeing some very weird behaviour with requests that seem to be getting send to the backend server and returned to the browser without any VCL processing happening. I've submitted a ticket with all the information we have at http://varnish.projects.linpro.no/ticket/232 . 
I'm a bit at a loss how to debug this further. Is there something we can take a look at? Unfortunately I do not have root access to the relevant machines, so I can not get a network dump. Wichert. -- Wichert Akkerman It is simple to make things. http://www.wiggy.net/ It is hard to make things simple. From max.clark at gmail.com Wed Apr 23 17:25:37 2008 From: max.clark at gmail.com (Max Clark) Date: Wed, 23 Apr 2008 10:25:37 -0700 Subject: Dependance (or better non-dependance) on the Origin Message-ID: <2fa1e1780804231025k445b1f8cj87700890f79c4a6a@mail.gmail.com> Hello, With other reverse proxy caches we constantly see issues where the proxy is dependent on the Origin - even if the requested object is in the cache if the Origin is not available the proxy will not respond to requests. How does Varnish handle the connections to the Origin? Is it possible to have Varnish serve requests if the Origin is down? Thanks, Max From max.clark at gmail.com Wed Apr 23 17:34:14 2008 From: max.clark at gmail.com (Max Clark) Date: Wed, 23 Apr 2008 10:34:14 -0700 Subject: make varnish still respond if backend dead In-Reply-To: <200804040951.29493.ottolski@web.de> References: <200804040951.29493.ottolski@web.de> Message-ID: <2fa1e1780804231034t6f8093exab2a57bca4203424@mail.gmail.com> Did you find a solution to this? On Fri, Apr 4, 2008 at 12:51 AM, Sascha Ottolski wrote: > Hi, > > sorry if this is FAQ: what can I do to make varnish respond to request > if it's backend is dead. should return cache hits, of course, and > a "proxy error" or something for a miss. > > and how can I prevent varnish to cache "404" for objects it couldn't > fetch due to a dead backend? at least I think that is what happened, as > varnish reported 404 for URLs that definetely exist; the dead backend > seems to be the only logical explanation why varnish could think it's > not. > > oh, and is there a way to put the local hostname in a header? 
I have two > proxies, load balanced by LVS, so using server.ip reports the same IP > on both nodes. > > > Thanks, Sascha > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > From max.clark at gmail.com Wed Apr 23 17:48:15 2008 From: max.clark at gmail.com (Max Clark) Date: Wed, 23 Apr 2008 10:48:15 -0700 Subject: Dirty Caching Message-ID: <2fa1e1780804231048w76f6ad6w138bf023d368515@mail.gmail.com> In July of 2007 there was a discussion on Dirty Caching - or serving stale content if the backend is down: http://projects.linpro.no/pipermail/varnish-misc/2007-July/000590.html Whatever happened with this? Is this feature available with Varnish? How do I configure it? Thanks, Max From ssm at linpro.no Thu Apr 24 04:52:29 2008 From: ssm at linpro.no (Stig Sandbeck Mathisen) Date: Thu, 24 Apr 2008 06:52:29 +0200 Subject: unprocessed requests In-Reply-To: <480F5689.8070806@wiggy.net> (Wichert Akkerman's message of "Wed, 23 Apr 2008 17:32:25 +0200") References: <480F5689.8070806@wiggy.net> Message-ID: <7xabjjkg6q.fsf@iostat.linpro.no> On Wed, 23 Apr 2008 17:32:25 +0200, Wichert Akkerman said: > We are seeing some very weird behaviour with requests that seem to > be getting send to the backend server and returned to the browser > without any VCL processing happening. I've submitted a ticket with > all the information we have at > http://varnish.projects.linpro.no/ticket/232 . Try changing: ,---- | if (req.request != "GET" && req.request != "HEAD") { | pipe; | } `---- to ,---- | if (req.request != "GET" && req.request != "HEAD") { | set req.http.connection = "close"; | pipe; | } `---- Do this for all "pipe;"'s in your vcl. If one request goes is directed to "pipe", and the connection is kept open, other requests will go through the same connection without being inspected touched by varnish. 
This is especially visible after a login in zope (or plone), which is done via the POST method. The "zope-plone" example VCL does not reflect this, unfortunately. > I'm a bit at a loss how to debug this further. Is there something we > can take a look at? Unfortunately I do not have root access to the > relevant machines, so I can not get a network dump. As a non-privileged user, you may still have access to run "varnishlog", to see details of your client and backend traffic, as well as what happens with your request through VCL. -- Stig Sandbeck Mathisen, Linpro From ssm at linpro.no Thu Apr 24 05:33:23 2008 From: ssm at linpro.no (Stig Sandbeck Mathisen) Date: Thu, 24 Apr 2008 07:33:23 +0200 Subject: unprocessed requests In-Reply-To: <7xabjjkg6q.fsf@iostat.linpro.no> (Stig Sandbeck Mathisen's message of "Thu, 24 Apr 2008 06:52:29 +0200") References: <480F5689.8070806@wiggy.net> <7xabjjkg6q.fsf@iostat.linpro.no> Message-ID: <7x63u7keak.fsf@iostat.linpro.no> On Thu, 24 Apr 2008 06:52:29 +0200, Stig Sandbeck Mathisen said: > If one request goes is directed to "pipe", and the connection is > kept open, other requests will go through the same connection > without being inspected touched by varnish. This is especially > visible after a login in zope (or plone), which is done via the POST > method. On further inspection of source and tickets, using "pipe" and 'req.http.connection = "close";' should be done for 1.1.2. For 1.2, which includes the "2115" changeset, you should be able to use "pass" instead of "pipe" for POST, enabling connection reuse. My relevant vcl for 1.2 : sub vcl_recv { # Normalize Host: header to limit cache usage if (req.http.host ~ "^(www.)?fnord.no") { set req.http.host = "fnord.no"; set req.backend = zope_195; set req.url = "/VirtualHostBase/http/fnord.no:80/Sites/fnord.no/VirtualHostRoot" req.url; } elsif () { # [...] 
} else { error 404 "Unknown virtual host" } if (req.request == "POST") { pass; } if (req.request != "GET" && req.request != "HEAD") { # [...] } # [...] } A few short tests on one of my own sites shows it to not break immediately, at least. :) -- Stig Sandbeck Mathisen, Linpro From pnasrat at googlemail.com Thu Apr 24 08:10:58 2008 From: pnasrat at googlemail.com (Paul Nasrat) Date: Thu, 24 Apr 2008 09:10:58 +0100 Subject: Dirty Caching In-Reply-To: <2fa1e1780804231048w76f6ad6w138bf023d368515@mail.gmail.com> References: <2fa1e1780804231048w76f6ad6w138bf023d368515@mail.gmail.com> Message-ID: On 23 Apr 2008, at 18:48, Max Clark wrote: > In July of 2007 there was a discussion on Dirty Caching - or serving > stale content if the backend is down: > > http://projects.linpro.no/pipermail/varnish-misc/2007-July/000590.html > > Whatever happened with this? Is this feature available with Varnish? > How do I configure it? I've been looking at the Degraded mode page and am trying to get vcl parser to be able to handle some of the pseudo-code described here: http://varnish.projects.linpro.no/wiki/Degraded Hopefully I should have a patch set shortly that allows set to contain arithmetic operations with variables. It should be fairly easy to write some vcl to do stale serving based on headers eg http://www.mnot.net/blog/2007/12/12/stale Paul From ottolski at web.de Thu Apr 24 08:28:09 2008 From: ottolski at web.de (Sascha Ottolski) Date: Thu, 24 Apr 2008 10:28:09 +0200 Subject: make varnish still respond if backend dead In-Reply-To: <2fa1e1780804231034t6f8093exab2a57bca4203424@mail.gmail.com> References: <200804040951.29493.ottolski@web.de> <2fa1e1780804231034t6f8093exab2a57bca4203424@mail.gmail.com> Message-ID: <200804241028.09899.ottolski@web.de> Am Mittwoch 23 April 2008 19:34:14 schrieb Max Clark: > Did you find a solution to this? 
not really, I hope to have found a workaround by adding this to my vcl: sub vcl_fetch { remove obj.http.X-Varnish-Host; set obj.http.X-Varnish-Host = "myhostname"; if (obj.status == 404) { set obj.ttl = 7200s; } } so 404 still may happen, but are cached shorter than my default. Cheers, Sascha > > On Fri, Apr 4, 2008 at 12:51 AM, Sascha Ottolski wrote: > > Hi, > > > > sorry if this is FAQ: what can I do to make varnish respond to > > request if it's backend is dead. should return cache hits, of > > course, and a "proxy error" or something for a miss. > > > > and how can I prevent varnish to cache "404" for objects it > > couldn't fetch due to a dead backend? at least I think that is what > > happened, as varnish reported 404 for URLs that definetely exist; > > the dead backend seems to be the only logical explanation why > > varnish could think it's not. > > > > oh, and is there a way to put the local hostname in a header? I > > have two proxies, load balanced by LVS, so using server.ip reports > > the same IP on both nodes. > > > > > > Thanks, Sascha > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at projects.linpro.no > > http://projects.linpro.no/mailman/listinfo/varnish-misc From bm at turtle-entertainment.de Thu Apr 24 09:06:45 2008 From: bm at turtle-entertainment.de (Bjoern Metzdorf) Date: Thu, 24 Apr 2008 11:06:45 +0200 Subject: Requests suddenly slow / fine after restart In-Reply-To: <80823.1208760611@critter.freebsd.dk> References: <80823.1208760611@critter.freebsd.dk> Message-ID: <48104DA5.4090300@turtle-entertainment.de> Poul-Henning Kamp wrote: > In message <480875C6.60503 at turtle-entertainment.de>, Bjoern Metzdorf writes: > >> Recently we had some problems with slow requests. Some static graphic >> files suddenly took several seconds to load instead of the usual >> milliseconds. After a restart of varnish everything was fine again and >> the very same graphic was fast again. 
> 
> You want to use some tool to graph your system activity so that you can
> see what system resource is causing your grief.
> 
> Most likely it is disk-I/O, but without data it is impossible to be
> sure of that.
> 
> A lot of varnish people seem be partial to the "Munin" tool for this.
> 

We are graphing cache hits and client requests with cacti and have the
machines in ganglia as well. I will have a look at the graphs the next
time this happens.

Regards,
Bjoern

From phk at phk.freebsd.dk  Fri Apr 25 05:58:38 2008
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Fri, 25 Apr 2008 05:58:38 +0000
Subject: Dirty Caching
In-Reply-To: Your message of "Thu, 24 Apr 2008 09:10:58 +0100."
Message-ID: <5073.1209103118@critter.freebsd.dk>

In message , Paul Nasrat writes:

>On 23 Apr 2008, at 18:48, Max Clark wrote:
>> In July of 2007 there was a discussion on Dirty Caching - or serving
>> stale content if the backend is down:
>>
>> http://projects.linpro.no/pipermail/varnish-misc/2007-July/000590.html
>>
>> Whatever happened with this? Is this feature available with Varnish?
>> How do I configure it?

Well it happened, but ended up being called "grace mode" instead, since
that better describes what it does.

To enable:

	in vcl_recv:	set req.grace = 2m;
	in vcl_fetch:	set obj.grace = 2m;

Then if we encounter an expired object, but another client is already
busy pulling it from the backend, and it is not expired by more than
the lower of the two fields above, we serve the old object.

>Hopefully I should have a patch set shortly that allows set to contain
>arithmetic operations with variables.

Sounds cool :-)

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
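[Archive note: the two settings phk describes can be put together into a
small VCL sketch. The 2m values and the vcl_recv/vcl_fetch placement
follow his description; treat this as an untested illustration, not a
drop-in configuration.]

```vcl
sub vcl_recv {
    # Per-request knob: this client is willing to accept an object
    # up to 2 minutes past its TTL.
    set req.grace = 2m;
}

sub vcl_fetch {
    # Per-object knob: keep the object around for 2 minutes past
    # its TTL so it can be served stale.
    set obj.grace = 2m;
}
```

A stale object is only served while another client is already fetching a
fresh copy, and only if it is expired by less than the lower of the two
grace values.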
From phk at phk.freebsd.dk Fri Apr 25 06:53:08 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 25 Apr 2008 06:53:08 +0000 Subject: Dependance (or better non-dependance) on the Origin In-Reply-To: Your message of "Wed, 23 Apr 2008 10:25:37 MST." <2fa1e1780804231025k445b1f8cj87700890f79c4a6a@mail.gmail.com> Message-ID: <5279.1209106388@critter.freebsd.dk> In message <2fa1e1780804231025k445b1f8cj87700890f79c4a6a at mail.gmail.com>, "Max Clark" writes: >How does Varnish handle the connections to the Origin? Is it possible >to have Varnish serve requests if the Origin is down? Varnish does not contact the backend if the object it has in the cache is good to be served. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From pnasrat at googlemail.com Fri Apr 25 07:07:29 2008 From: pnasrat at googlemail.com (Paul Nasrat) Date: Fri, 25 Apr 2008 08:07:29 +0100 Subject: Wiki editing Message-ID: <8177532E-2B2C-4D79-9E24-3A31F3124C7F@googlemail.com> Would it be possible to have a wiki editing bit set on my trac account - pnasrat so I can update some of the information (eg Degraded->Grace mode) and add some docs. Paul From phk at phk.freebsd.dk Fri Apr 25 07:09:03 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 25 Apr 2008 07:09:03 +0000 Subject: Wiki editing In-Reply-To: Your message of "Fri, 25 Apr 2008 08:07:29 +0100." <8177532E-2B2C-4D79-9E24-3A31F3124C7F@googlemail.com> Message-ID: <5421.1209107343@critter.freebsd.dk> In message <8177532E-2B2C-4D79-9E24-3A31F3124C7F at googlemail.com>, Paul Nasrat writes: >Would it be possible to have a wiki editing bit set on my trac account >- pnasrat so I can update some of the information (eg Degraded->Grace >mode) and add some docs. 
Done -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From wichert at wiggy.net Fri Apr 25 08:37:34 2008 From: wichert at wiggy.net (Wichert Akkerman) Date: Fri, 25 Apr 2008 10:37:34 +0200 Subject: unprocessed requests In-Reply-To: <7x63u7keak.fsf@iostat.linpro.no> References: <480F5689.8070806@wiggy.net> <7xabjjkg6q.fsf@iostat.linpro.no> <7x63u7keak.fsf@iostat.linpro.no> Message-ID: <4811984E.8030807@wiggy.net> Stig Sandbeck Mathisen wrote: > On Thu, 24 Apr 2008 06:52:29 +0200, Stig Sandbeck Mathisen said: > >> If one request goes is directed to "pipe", and the connection is >> kept open, other requests will go through the same connection >> without being inspected touched by varnish. This is especially >> visible after a login in zope (or plone), which is done via the POST >> method. > > On further inspection of source and tickets, using "pipe" and > 'req.http.connection = "close";' should be done for 1.1.2. For 1.2, > which includes the "2115" changeset, you should be able to use "pass" > instead of "pipe" for POST, enabling connection reuse. That indeed seems to have fixed things. Thanks! Might I suggest adding a warning to vcl(7) about this? It wouldn't surprise me if others run into the same thing and you could least tell them to RTFM then :) Wichert. -- Wichert Akkerman It is simple to make things. http://www.wiggy.net/ It is hard to make things simple. From armdan20 at gmail.com Mon Apr 28 10:38:03 2008 From: armdan20 at gmail.com (andan andan) Date: Mon, 28 Apr 2008 12:38:03 +0200 Subject: PURGE http method and compression. Message-ID: Hi all. According to: http://varnish.projects.linpro.no/wiki/FAQ/Compression, for two requests with different "Accept-encoding", the cache stores two objets. But, to purge an URL, how affect this ? we must send two PURGE requests with the two "Accept-encoding" ? 
Thanks in advance. Best regards. From phk at phk.freebsd.dk Mon Apr 28 10:40:24 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 28 Apr 2008 10:40:24 +0000 Subject: PURGE http method and compression. In-Reply-To: Your message of "Mon, 28 Apr 2008 12:38:03 +0200." Message-ID: <6276.1209379224@critter.freebsd.dk> In message , "anda n andan" writes: >Hi all. > >According to: http://varnish.projects.linpro.no/wiki/FAQ/Compression, >for two requests with different "Accept-encoding", the cache stores >two objets. > >But, to purge an URL, how affect this ? we must send two PURGE >requests with the two "Accept-encoding" ? Depends how you do the purge. If you use the url.purge or hash.purge VCL primitives all matching objects will be purged. If you use the "set obj.ttl = 0s" way of purging, only the single object you found in the cache will get purged. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From wichert at wiggy.net Mon Apr 28 10:43:00 2008 From: wichert at wiggy.net (Wichert Akkerman) Date: Mon, 28 Apr 2008 12:43:00 +0200 Subject: PURGE http method and compression. In-Reply-To: <6276.1209379224@critter.freebsd.dk> References: <6276.1209379224@critter.freebsd.dk> Message-ID: <4815AA34.5040300@wiggy.net> Poul-Henning Kamp wrote: > In message , "anda > n andan" writes: >> Hi all. >> >> According to: http://varnish.projects.linpro.no/wiki/FAQ/Compression, >> for two requests with different "Accept-encoding", the cache stores >> two objets. >> >> But, to purge an URL, how affect this ? we must send two PURGE >> requests with the two "Accept-encoding" ? > > Depends how you do the purge. > > If you use the url.purge or hash.purge VCL primitives all matching > objects will be purged. > > If you use the "set obj.ttl = 0s" way of purging, only the single > object you found in the cache will get purged. 
That's interesting. So the default VCL for HTTP PURGE requests will not
purge all variants? How can a backend know which variants there are so it
can purge all of them?

Wichert.

-- 
Wichert Akkerman        It is simple to make things.
http://www.wiggy.net/   It is hard to make things simple.

From phk at phk.freebsd.dk  Mon Apr 28 11:21:24 2008
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Mon, 28 Apr 2008 11:21:24 +0000
Subject: PURGE http method and compression.
In-Reply-To: Your message of "Mon, 28 Apr 2008 12:43:00 +0200."
	<4815AA34.5040300@wiggy.net>
Message-ID: <6419.1209381684@critter.freebsd.dk>

In message <4815AA34.5040300 at wiggy.net>, Wichert Akkerman writes:

>That's interesting. So the default VCL for HTTP PURGE requests will not
>purge all variants? How can a backend know which variants there are so it
>can purge all of them?

It probably can't, so if your backend returns "Vary:" headers, you
should use the url.purge or hash.purge method.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From Phuwadon at sanookonline.co.th  Mon Apr 28 11:36:27 2008
From: Phuwadon at sanookonline.co.th (Phuwadon Danrahan)
Date: Mon, 28 Apr 2008 18:36:27 +0700
Subject: PURGE http method and compression.
Message-ID: 

In my environment, we have the vcl_hash that uses "Accept-Encoding" as
one of hash keys.

sub vcl_hash {
    set req.hash += req.url;
    set req.hash += req.http.host;
    if (req.http.Accept-Encoding ~ "gzip") {
        set req.hash += "gzip";
    }
    else if (req.http.Accept-Encoding ~ "deflate") {
        set req.hash += "deflate";
    }
    hash;
}

So, Varnish may have 3 hash versions of the same url. First is just for
no gzip/deflate client. Second is for gzip client. And third is for
deflate client.

If we need to purge the URL from backoffice (web server), we just purge
3 times for each URL by using CURL.
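[Archive note: purging over HTTP with curl as described here assumes the
PURGE method is handled somewhere in VCL. A commonly seen sketch of that
handling, in the style of the VCL elsewhere in this thread, is below —
untested here, and the exact subroutines available depend on the Varnish
version in use.]

```vcl
sub vcl_recv {
    if (req.request == "PURGE") {
        # Look the object up under the same hash a GET would use, so
        # the Accept-Encoding keys in vcl_hash select the right variant.
        lookup;
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        # Expire the single variant we hit ("set obj.ttl = 0s" purging).
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        error 404 "Not in cache.";
    }
}
```

Each of the three curl requests (no Accept-Encoding, gzip, deflate) then
hits one of the three hash variants and expires it.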
-----Original Message-----
From: varnish-misc-bounces at projects.linpro.no
[mailto:varnish-misc-bounces at projects.linpro.no] On Behalf Of Poul-Henning Kamp
Sent: Monday, April 28, 2008 6:21 PM
To: Wichert Akkerman
Cc: varnish-misc at projects.linpro.no
Subject: Re: PURGE http method and compression.

In message <4815AA34.5040300 at wiggy.net>, Wichert Akkerman writes:

>That's interesting. So the default VCL for HTTP PURGE requests will not
>purge all variants? How can a backend know which variants there are so it
>can purge all of them?

It probably can't, so if your backend returns "Vary:" headers, you
should use the url.purge or hash.purge method.

-- 
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
_______________________________________________
varnish-misc mailing list
varnish-misc at projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc

From phk at phk.freebsd.dk Mon Apr 28 13:22:49 2008
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Mon, 28 Apr 2008 13:22:49 +0000
Subject: PURGE http method and compression.
In-Reply-To: Your message of "Mon, 28 Apr 2008 18:36:27 +0700."
Message-ID: <6906.1209388969@critter.freebsd.dk>

In message , "Phuwadon Danrahan" writes:

>In my environment, we have a vcl_hash that uses "Accept-Encoding" as
>one of the hash keys.
>
>sub vcl_hash {
>    set req.hash += req.url;
>    set req.hash += req.http.host;
>    if (req.http.Accept-Encoding ~ "gzip") {
>        set req.hash += "gzip";
>    }
>    else if (req.http.Accept-Encoding ~ "deflate") {
>        set req.hash += "deflate";
>    }
>    hash;
>}
>
>So, Varnish may have 3 hash versions of the same URL. The first is for
>clients with no gzip/deflate, the second for gzip clients, and the
>third for deflate clients.
>
>If we need to purge the URL from the backoffice (web server), we just
>purge 3 times for each URL by using curl.
You only have to do one purge if you use hash.purge: just let the
regexp match everything up to, but not including, the gzip/deflate
suffix.

-- 
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From ppragin at SolutionSet.com Mon Apr 28 21:31:26 2008
From: ppragin at SolutionSet.com (Pavel Pragin)
Date: Mon, 28 Apr 2008 14:31:26 -0700
Subject: redirect in varnish
In-Reply-To: <95977.1208801508@critter.freebsd.dk>
References: Your message of "Mon, 21 Apr 2008 11:10:34 MST." <95977.1208801508@critter.freebsd.dk>
Message-ID: 

Hello,

Can I add a rule to the VCL configuration file to redirect from one URL
to another? So if the user goes to http://www.bla.com it will redirect
to http://www.bla1.com. What will this rule look like, and will it be
in the "sub vcl_recv" section? I would love an example.

Thanks

From phk at phk.freebsd.dk Mon Apr 28 21:47:16 2008
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Mon, 28 Apr 2008 21:47:16 +0000
Subject: redirect in varnish
In-Reply-To: Your message of "Mon, 28 Apr 2008 14:31:26 MST."
Message-ID: <9571.1209419236@critter.freebsd.dk>

In message , "Pavel Pragin" writes:

>Hello,
>
>Can I add a rule to the VCL configuration file to redirect from one URL
>to another? So if the user goes to http://www.bla.com it will redirect
>to http://www.bla1.com. What will this rule look like, and will it be
>in the "sub vcl_recv" section? I would love an example.

if (req.http.host == "www.bla.com") {
    set req.http.host = "www.bla1.com";
}

-- 
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
From ppragin at SolutionSet.com Mon Apr 28 21:56:24 2008
From: ppragin at SolutionSet.com (Pavel Pragin)
Date: Mon, 28 Apr 2008 14:56:24 -0700
Subject: redirect in varnish
In-Reply-To: <9571.1209419236@critter.freebsd.dk>
References: Your message of "Mon, 28 Apr 2008 14:31:26 MST." <9571.1209419236@critter.freebsd.dk>
Message-ID: 

Hello,

I added the example to the config, but the redirect is not happening.
When I access the site from the browser it does not redirect.

Thanks

First part of config:

sub vcl_recv {
    if (req.http.host == "www.tibcocommunity.com") {
        set req.http.host = "www.yahoo.com";
    }
    if (req.request == "GET" && req.url ~ "\.(gif|jpg|swf|css)$") {
        lookup;
    }
    if (req.request != "GET" && req.request != "HEAD") {
        pipe;
    }
    if (req.http.Expect) {
        pipe;
    }
    if (req.http.Authenticate || req.http.Cookie) {
        pass;
    }
    remove req.http.X-Forwarded-For;
    set req.http.X-Forwarded-For = client.ip;
}

-----Original Message-----
From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On Behalf Of Poul-Henning Kamp
Sent: Monday, April 28, 2008 2:47 PM
To: Pavel Pragin
Cc: varnish-misc at projects.linpro.no
Subject: Re: redirect in varnish

In message , "Pavel Pragin" writes:

>Hello,
>
>Can I add a rule to the VCL configuration file to redirect from one URL
>to another? So if the user goes to http://www.bla.com it will redirect
>to http://www.bla1.com. What will this rule look like, and will it be
>in the "sub vcl_recv" section? I would love an example.

if (req.http.host == "www.bla.com") {
    set req.http.host = "www.bla1.com";
}

-- 
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
From ppragin at SolutionSet.com Mon Apr 28 22:10:28 2008
From: ppragin at SolutionSet.com (Pavel Pragin)
Date: Mon, 28 Apr 2008 15:10:28 -0700
Subject: redirect in varnish
In-Reply-To: 
References: Your message of "Mon, 28 Apr 2008 14:31:26 MST." <9571.1209419236@critter.freebsd.dk>
Message-ID: 

Hello,

At least the address in the URL bar does not change, and the new site
doesn't come up. I simplified my config file for testing:

backend default {
    set backend.host = "127.0.0.1";
    set backend.port = "8081";
}

sub vcl_recv {
    if (req.http.host == "www.tibcocommunity.com") {
        set req.http.host = "www.yahoo.com";
    }
}

-----Original Message-----
From: varnish-misc-bounces at projects.linpro.no
[mailto:varnish-misc-bounces at projects.linpro.no] On Behalf Of Pavel Pragin
Sent: Monday, April 28, 2008 2:56 PM
To: Poul-Henning Kamp
Cc: varnish-misc at projects.linpro.no
Subject: RE: redirect in varnish

Hello,

I added the example to the config, but the redirect is not happening.
When I access the site from the browser it does not redirect.

Thanks

First part of config:

sub vcl_recv {
    if (req.http.host == "www.tibcocommunity.com") {
        set req.http.host = "www.yahoo.com";
    }
    if (req.request == "GET" && req.url ~ "\.(gif|jpg|swf|css)$") {
        lookup;
    }
    if (req.request != "GET" && req.request != "HEAD") {
        pipe;
    }
    if (req.http.Expect) {
        pipe;
    }
    if (req.http.Authenticate || req.http.Cookie) {
        pass;
    }
    remove req.http.X-Forwarded-For;
    set req.http.X-Forwarded-For = client.ip;
}

-----Original Message-----
From: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] On Behalf Of Poul-Henning Kamp
Sent: Monday, April 28, 2008 2:47 PM
To: Pavel Pragin
Cc: varnish-misc at projects.linpro.no
Subject: Re: redirect in varnish

In message , "Pavel Pragin" writes:

>Hello,
>
>Can I add a rule to the VCL configuration file to redirect from one URL
>to another. So if the user goes to http://www.bla.com it will redirect
>to http://www.bla1.com.
What will this rule look like and will it be in
>the "sub vcl_recv" section. I would love an example.

if (req.http.host == "www.bla.com") {
    set req.http.host = "www.bla1.com";
}

-- 
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
_______________________________________________
varnish-misc mailing list
varnish-misc at projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc

From ppragin at SolutionSet.com Mon Apr 28 23:14:24 2008
From: ppragin at SolutionSet.com (Pavel Pragin)
Date: Mon, 28 Apr 2008 16:14:24 -0700
Subject: its me again
Message-ID: 

Hello,

I am sorry to be so annoying, but I have a deadline and time is
ticking! I am trying to redirect from one URL to another using the
rules below, and I am not having any luck. The original page is still
coming up with no redirection of any kind. Maybe you can look at the
log and see why it's not working. It looks like "TxHeader" is picking
up the new host it's supposed to go to, but the request is not getting
redirected. And I also notice some "404" errors. I must be missing the
URL part of the request. I am not sure. Thank you for your help ahead
of time!
1)This is my Varnish configuration: backend default { set backend.host = "127.0.0.1"; set backend.port = "8081"; } sub vcl_recv { if (req.http.host == "www.tibcocommunity.com") { set req.http.host = " www.tibcommunity.com "; } } 2) Varnishlog output when accessing the site: [root at 165248-web1 varnish]# varnishlog 0 CLI Rd ping 0 CLI Wr 0 200 PONG 1209423896 0 CLI Rd ping 0 CLI Wr 0 200 PONG 1209423899 0 CLI Rd ping 0 CLI Wr 0 200 PONG 1209423902 0 WorkThread 0x6450c110 start 13 SessionOpen c 69.28.122.122 31857 13 ReqStart c 69.28.122.122 31857 438756787 13 RxRequest c GET 13 RxURL c / 13 RxProtocol c HTTP/1.1 13 RxHeader c Host: www.tibcocommunity.com 13 RxHeader c User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14 13 RxHeader c Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plai n;q=0.8,image/png,*/*;q=0.5 13 RxHeader c Accept-Language: en-us,en;q=0.5 13 RxHeader c Accept-Encoding: gzip,deflate 13 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 13 RxHeader c Keep-Alive: 300 13 RxHeader c Connection: keep-alive 13 VCL_call c recv 13 VCL_return c lookup 13 VCL_call c hash 13 VCL_return c hash 13 VCL_call c miss 13 VCL_return c fetch 16 BackendOpen b default 127.0.0.1 33962 127.0.0.1 8081 16 BackendXID b 438756787 13 Backend c 16 default 16 TxRequest b GET 16 TxURL b / 16 TxProtocol b HTTP/1.1 16 TxHeader b User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14 16 TxHeader b Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plai n;q=0.8,image/png,*/*;q=0.5 16 TxHeader b Accept-Language: en-us,en;q=0.5 16 TxHeader b Accept-Encoding: gzip,deflate 16 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 16 TxHeader b host: www.tibcommunity.com 16 TxHeader b X-Varnish: 438756787 16 TxHeader b X-Forwarded-for: 69.28.122.122 16 RxProtocol b HTTP/1.1 16 RxStatus b 200 16 RxResponse b OK 16 RxHeader b 
Date: Mon, 28 Apr 2008 23:05:05 GMT 16 RxHeader b Server: Apache/2.0.52 (Red Hat) 16 RxHeader b Last-Modified: Mon, 28 Apr 2008 17:15:42 GMT 16 RxHeader b ETag: "a383dd-234-10bf8b80" 16 RxHeader b Accept-Ranges: bytes 16 RxHeader b Content-Length: 564 16 RxHeader b Connection: close 16 RxHeader b Content-Type: text/html; charset=UTF-8 13 ObjProtocol c HTTP/1.1 13 ObjStatus c 200 13 ObjResponse c OK 13 ObjHeader c Date: Mon, 28 Apr 2008 23:05:05 GMT 13 ObjHeader c Server: Apache/2.0.52 (Red Hat) 13 ObjHeader c Last-Modified: Mon, 28 Apr 2008 17:15:42 GMT 13 ObjHeader c ETag: "a383dd-234-10bf8b80" 13 ObjHeader c Content-Type: text/html; charset=UTF-8 16 BackendClose b default 13 TTL c 438756787 RFC 120 1209423905 1209423905 0 0 0 13 VCL_call c fetch 13 VCL_return c insert 13 Length c 564 13 VCL_call c deliver 13 VCL_return c deliver 13 TxProtocol c HTTP/1.1 13 TxStatus c 200 13 TxResponse c OK 13 TxHeader c Server: Apache/2.0.52 (Red Hat) 13 TxHeader c Last-Modified: Mon, 28 Apr 2008 17:15:42 GMT 13 TxHeader c ETag: "a383dd-234-10bf8b80" 13 TxHeader c Content-Type: text/html; charset=UTF-8 13 TxHeader c Content-Length: 564 13 TxHeader c Date: Mon, 28 Apr 2008 23:05:05 GMT 13 TxHeader c X-Varnish: 438756787 13 TxHeader c Age: 0 13 TxHeader c Via: 1.1 varnish 13 TxHeader c Connection: keep-alive 13 ReqEnd c 438756787 1209423905.541680098 1209423905.542206049 0.003940105 0.000486851 0.000039101 0 StatAddr 69.28.122.122 0 0 1 1 0 0 1 297 564 0 CLI Rd ping 0 CLI Wr 0 200 PONG 1209423905 13 ReqStart c 69.28.122.122 31857 438756788 13 RxRequest c GET 13 RxURL c /keycard.jpg 13 RxProtocol c HTTP/1.1 13 RxHeader c Host: www.tibcocommunity.com 13 RxHeader c User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14 13 RxHeader c Accept: image/png,*/*;q=0.5 13 RxHeader c Accept-Language: en-us,en;q=0.5 13 RxHeader c Accept-Encoding: gzip,deflate 13 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 13 RxHeader c Keep-Alive: 
300 13 RxHeader c Connection: keep-alive 13 RxHeader c Referer: http://www.tibcocommunity.com/ 13 VCL_call c recv 13 VCL_return c lookup 13 VCL_call c hash 13 VCL_return c hash 13 VCL_call c miss 13 VCL_return c fetch 16 BackendOpen b default 127.0.0.1 33963 127.0.0.1 8081 16 BackendXID b 438756788 13 Backend c 16 default 16 TxRequest b GET 16 TxURL b /keycard.jpg 16 TxProtocol b HTTP/1.1 16 TxHeader b User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14 16 TxHeader b Accept: image/png,*/*;q=0.5 16 TxHeader b Accept-Language: en-us,en;q=0.5 16 TxHeader b Accept-Encoding: gzip,deflate 16 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 16 TxHeader b Referer: http://www.tibcocommunity.com/ 16 TxHeader b host: www.tibcommunity.com 16 TxHeader b X-Varnish: 438756788 16 TxHeader b X-Forwarded-for: 69.28.122.122 16 RxProtocol b HTTP/1.1 16 RxStatus b 200 16 RxResponse b OK 16 RxHeader b Date: Mon, 28 Apr 2008 23:05:05 GMT 16 RxHeader b Server: Apache/2.0.52 (Red Hat) 16 RxHeader b Last-Modified: Mon, 28 Apr 2008 17:15:42 GMT 16 RxHeader b ETag: "a383ef-aefa-10bf8b80" 16 RxHeader b Accept-Ranges: bytes 16 RxHeader b Content-Length: 44794 16 RxHeader b Connection: close 16 RxHeader b Content-Type: image/jpeg 13 ObjProtocol c HTTP/1.1 13 ObjStatus c 200 13 ObjResponse c OK 13 ObjHeader c Date: Mon, 28 Apr 2008 23:05:05 GMT 13 ObjHeader c Server: Apache/2.0.52 (Red Hat) 13 ObjHeader c Last-Modified: Mon, 28 Apr 2008 17:15:42 GMT 13 ObjHeader c ETag: "a383ef-aefa-10bf8b80" 13 ObjHeader c Content-Type: image/jpeg 16 BackendClose b default 13 TTL c 438756788 RFC 120 1209423905 1209423905 0 0 0 13 VCL_call c fetch 13 VCL_return c insert 13 Length c 44794 13 VCL_call c deliver 13 VCL_return c deliver 13 TxProtocol c HTTP/1.1 13 TxStatus c 200 13 TxResponse c OK 13 TxHeader c Server: Apache/2.0.52 (Red Hat) 13 TxHeader c Last-Modified: Mon, 28 Apr 2008 17:15:42 GMT 13 TxHeader c ETag: "a383ef-aefa-10bf8b80" 13 TxHeader c 
Content-Type: image/jpeg 13 TxHeader c Content-Length: 44794 13 TxHeader c Date: Mon, 28 Apr 2008 23:05:05 GMT 13 TxHeader c X-Varnish: 438756788 13 TxHeader c Age: 0 13 TxHeader c Via: 1.1 varnish 13 TxHeader c Connection: keep-alive 13 ReqEnd c 438756788 1209423905.645124912 1209423906.011852026 0.102918863 0.000420094 0.366307020 0 StatAddr 69.28.122.122 0 0 1 2 0 0 2 583 45358 13 ReqStart c 69.28.122.122 31857 438756789 13 RxRequest c GET 13 RxURL c /favicon.ico 13 RxProtocol c HTTP/1.1 13 RxHeader c Host: www.tibcocommunity.com 13 RxHeader c User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14 13 RxHeader c Accept: image/png,*/*;q=0.5 13 RxHeader c Accept-Language: en-us,en;q=0.5 13 RxHeader c Accept-Encoding: gzip,deflate 13 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 13 RxHeader c Keep-Alive: 300 13 RxHeader c Connection: keep-alive 13 VCL_call c recv 13 VCL_return c lookup 13 VCL_call c hash 13 VCL_return c hash 13 VCL_call c miss 13 VCL_return c fetch 16 BackendOpen b default 127.0.0.1 33964 127.0.0.1 8081 16 BackendXID b 438756789 13 Backend c 16 default 16 TxRequest b GET 16 TxURL b /favicon.ico 16 TxProtocol b HTTP/1.1 16 TxHeader b User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14 16 TxHeader b Accept: image/png,*/*;q=0.5 16 TxHeader b Accept-Language: en-us,en;q=0.5 16 TxHeader b Accept-Encoding: gzip,deflate 16 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 16 TxHeader b host: www.tibcommunity.com 16 TxHeader b X-Varnish: 438756789 16 TxHeader b X-Forwarded-for: 69.28.122.122 16 RxProtocol b HTTP/1.1 16 RxStatus b 404 16 RxResponse b Not Found 16 RxHeader b Date: Mon, 28 Apr 2008 23:05:06 GMT 16 RxHeader b Server: Apache/2.0.52 (Red Hat) 16 RxHeader b Content-Length: 296 16 RxHeader b Connection: close 16 RxHeader b Content-Type: text/html; charset=iso-8859-1 13 ObjProtocol c HTTP/1.1 13 ObjStatus c 404 13 ObjResponse c 
Not Found
13 ObjHeader c Date: Mon, 28 Apr 2008 23:05:06 GMT
13 ObjHeader c Server: Apache/2.0.52 (Red Hat)
13 ObjHeader c Content-Type: text/html; charset=iso-8859-1
16 BackendClose b default
13 TTL c 438756789 RFC 120 1209423906 1209423906 0 0 0
13 VCL_call c fetch
13 VCL_return c insert
13 Length c 296
13 VCL_call c deliver
13 VCL_return c deliver
13 TxProtocol c HTTP/1.1
13 TxStatus c 404
13 TxResponse c Not Found
13 TxHeader c Server: Apache/2.0.52 (Red Hat)
13 TxHeader c Content-Type: text/html; charset=iso-8859-1
13 TxHeader c Content-Length: 296
13 TxHeader c Date: Mon, 28 Apr 2008 23:05:06 GMT
13 TxHeader c X-Varnish: 438756789
13 TxHeader c Age: 0
13 TxHeader c Via: 1.1 varnish
13 TxHeader c Connection: keep-alive
13 ReqEnd c 438756789 1209423906.226600885 1209423906.226854086 0.214748859 0.000235081 0.000018120
0 StatAddr 69.28.122.122 0 1 1 3 0 0 3 817 45654
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From simon at darkmere.gen.nz Tue Apr 29 04:27:41 2008
From: simon at darkmere.gen.nz (Simon Lyall)
Date: Tue, 29 Apr 2008 16:27:41 +1200 (NZST)
Subject: its me again
In-Reply-To: 
References: 
Message-ID: 

On Mon, 28 Apr 2008, Pavel Pragin wrote:
> if (req.http.host == "www.tibcocommunity.com") {
>     set req.http.host = " www.tibcommunity.com ";
> }

I'd watch your spaces in the example above; that might be confusing
things. My config has:

if (req.http.host ~ "^simon.darkmere.gen.nz$") {
    set req.http.host = "www.darkmere.gen.nz";
    set req.backend = darkmere;
}

Note the lack of spaces in between my quotes. Perhaps try to add a
header or something instead of setting a new req.http.host, to make
sure your test is working.

> 2) Varnishlog output when accessing the site:

Try varnishncsa

-- 
Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/
"To stay awake all night adds a day to your life" - Stilgar | eMT.
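A note on why the config in this thread rewrites rather than redirects: "set req.http.host" only changes the Host header Varnish sends to the backend (visible as the "TxHeader b host:" lines in the log), so the browser is never told to go anywhere else. To send the client an actual 302, the response has to be synthesized. The following is a sketch against Varnish 2.0-era VCL, not something posted in this thread: status 750 is an arbitrary internal marker, and the hostnames are taken from Pavel's test config.

```vcl
sub vcl_recv {
    if (req.http.host == "www.tibcocommunity.com") {
        # hand off to vcl_error, which builds the redirect response
        error 750 "Moved";
    }
}

sub vcl_error {
    if (obj.status == 750) {
        # preserve the requested path on the new host
        set obj.http.Location = "http://www.yahoo.com" req.url;
        set obj.status = 302;
        set obj.response = "Found";
        deliver;
    }
}
```

With this in place the client receives "302 Found" with a Location header, and the address bar changes, which is what Pavel was expecting.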
From eden at hulu.com Tue Apr 29 07:47:36 2008
From: eden at hulu.com (Eden Li)
Date: Tue, 29 Apr 2008 15:47:36 +0800
Subject: Varnish crashing when system starts to swap
In-Reply-To: <81618.1208761703@critter.freebsd.dk>
References: <81618.1208761703@critter.freebsd.dk>
Message-ID: <1dd361e10804290047h5443152hdede46f152040420@mail.gmail.com>

On Mon, Apr 21, 2008 at 3:08 PM, Poul-Henning Kamp wrote:
> In message , Calle Korjus writes:
>
> >Varnish looks fine until it's had about 1.5 million requests, then
> >we can see kswapd0 and kswapd1 start working, and load average
> >rises to about 200 and the machine gets totally unresponsive. Top
> >shows a lot of cpu being spent on i/o waits, and the varnish child
> >process restarts sometimes. In the best case the process restarts
> >and the server starts behaving within 5 minutes, but sometimes
> >varnish dies completely. One thing we have noticed is that the
> >reserved memory for varnish keeps rising, and when it crashes it is
> >usually around 14G.
>
> You didn't say which varnish version you are using, but we have fixed
> a number of memory leaks in -trunk recently.

I've just run into this exact issue with varnish 1.1.2, except that
varnish was running for over a month until kswap started acting up.
Also, our backends are not on the same machine.

I tried running different revisions of varnish from trunk on our
machines (r2634 and r2436 on Linux CentOS 5.0), but it keeps asserting
when daemonizing the process (varnishd.c:557). Using an export of the
unstable branch kept at
http://varnish.projects.linpro.no/svn/branches/1.2 seems to be
working. Do you know if this branch has some of the memory leak fixes
you mention here?
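For anyone wanting to reproduce Eden's setup or pin a known-good build, the svn commands are straightforward. A sketch: the 1.2 branch URL is quoted in the message above, while the trunk URL and the revision number are assumptions for illustration, not confirmed in this thread.

```sh
# Export the 1.2 branch Eden reports as working (URL from the message above).
svn export http://varnish.projects.linpro.no/svn/branches/1.2 varnish-1.2

# Or pin trunk at a specific revision (URL and revision are illustrative).
svn checkout -r 2617 http://varnish.projects.linpro.no/svn/trunk varnish-trunk
```

Pinning a revision this way also makes bug reports reproducible, which was Michael Fischer's point earlier in the month about cutting proper releases.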
From ay at vg.no Tue Apr 29 14:12:25 2008
From: ay at vg.no (Audun Ytterdal)
Date: Tue, 29 Apr 2008 16:12:25 +0200
Subject: Varnish performance tips on linux
Message-ID: <48172CC9.9000303@vg.no>

I've been dealing with a few varnish problems on Linux boxes over the
last months. I do not have write access to the wiki, so I'll write my
notes down here: problems first, then how I fixed some of them. There
might be some ugly stuff here, and even something that is just plain
wrong. It works for me; I'd love feedback.

Problems I've run into:

* Crashes when the cache runs full (1.1.2)
* Network performance issues
* CPU in IO-wait -> high load -> slower cache
* Thread pile-up under heavy load -> load > 60

Crashes when cache runs full
----------------------------

My fix: Upgrade to trunk. There have been a lot of fixes in trunk to
deal with this. Trunk is usually pretty stable, with some exceptions.
(I'm running varnish-1.1.2-trunk2543.)

Network performance issues
--------------------------

My fix: Tune kernel parameters. Linux has an increasing number of TCP
autotuning parameters, especially after 2.6.17, but you still want to
look into the following:

net.ipv4.ip_local_port_range = 1024 65536
#Defines the local port range that is used by TCP and UDP to choose
#the local port. The first number is the first, the second the last
#local port number. The default value depends on the amount of memory
#available on the system: > 128MB 32768 - 61000, < 128MB 1024 - 4999
#or even less. This number limits the number of active connections
#this system can issue simultaneously to systems not supporting TCP
#extensions (timestamps). With tcp_tw_recycle enabled, the range
#1024 - 4999 is enough to issue up to 2000 connections per second to
#systems supporting timestamps.
net.core.rmem_max=16777216
#This setting changes the maximum network receive buffer.

net.core.wmem_max=16777216
#The same thing for the send buffer.

net.ipv4.tcp_rmem=4096 87380 16777216
#This sets the kernel's minimum, default, and maximum TCP receive
#buffer sizes. You might be surprised, seeing the maximum of 16M,
#that many Unix-like operating systems still have a maximum of 256K!

net.ipv4.tcp_wmem=4096 65536 16777216
#A similar setting for the TCP send buffer. Note that the
#default value is a little lower. Don't worry about this;
#the send buffer size is less important than the receive buffer.

net.ipv4.tcp_fin_timeout = 3
#Time to hold a socket in state FIN-WAIT-2 if it was closed by our
#side. The peer can be broken and never close its side, or even die
#unexpectedly. The default value is 60 seconds. The usual value used
#in 2.2 was 180 seconds; you may restore it, but remember that if your
#machine is even an underloaded web server, you risk overflowing
#memory with lots of dead sockets. FIN-WAIT-2 sockets are less
#dangerous than FIN-WAIT-1, because they eat at most 1.5 kilobytes of
#memory, but they tend to live longer.

net.ipv4.tcp_tw_recycle = 1
#Allow reuse of TIME-WAIT sockets for new connections when it is safe
#from the protocol viewpoint. The default value is 0.

net.core.netdev_max_backlog = 30000
#Maximum number of packets queued on the input side when the
#interface receives packets faster than the kernel can process them.
#Applies to non-NAPI devices only. The default value is 1000.

net.ipv4.tcp_no_metrics_save=1
#This removes an odd behavior in the 2.6 kernels, whereby the kernel
#stores the slow start threshold for a client between TCP sessions.
#This can cause undesired results, as a single period of congestion
#can affect many subsequent connections. I recommend that you disable
#it.

net.core.somaxconn = 262144
#Limit of socket listen() backlog, known in userspace as SOMAXCONN.
#Defaults to 128.
See also tcp_max_syn_backlog for additional tuning for TCP sockets.

net.ipv4.tcp_syncookies = 0
# Send out syncookies when the syn backlog queue of a socket
# overflows. This is to protect against the common "syn flood attack".
# Disabled (0) by default, and should stay disabled.

net.ipv4.tcp_max_orphans = 262144
#Maximal number of TCP sockets not attached to any user file handle,
#held by the system. If this number is exceeded, orphaned connections
#are reset immediately and a warning is printed.

net.ipv4.tcp_max_syn_backlog = 262144
#Maximal number of remembered connection requests which have still not
#received an acknowledgment from the connecting client. The default
#value is 1024 for systems with more than 128 MB of memory, and 128
#for low-memory machines.

net.ipv4.tcp_synack_retries = 2
#Number of times SYNACKs for a passive TCP connection attempt will be
#retransmitted. Should not be higher than 255. The default value is 5,
#which corresponds to ~180 seconds.

net.ipv4.tcp_syn_retries = 2
#Number of times initial SYNs for an active TCP connection attempt
#will be retransmitted. Should not be higher than 255. The default
#value is 5, which corresponds to ~180 seconds.

CPU in IO-wait
--------------

For some reason, newer kernels (>2.6.9 at least) are much more
aggressive about writing the varnish mmap'ed datafile down to disk.
On older kernels an "iostat -x 1" would give you an almost idling
disk. If "iostat -x 1" gives you up to 100% usage in the last column,
you have vm-io-trouble. I've tried to adjust a lot of the settings
under vm and disk, but I can't seem to get it to behave like old
Linux kernels. (Newer FreeBSD kernels seem to have a similar problem.)

My fix: Use the malloc backend instead of an mmap'ed datafile. To do
this you have to add a pretty large swap: just scrap the filesystem
you used to store your datafile on, give it to swap, and add
"-s malloc,30G" for a "datafile" of 30G on swap.
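For illustration, a varnishd invocation using malloc storage might look like the following. Only "-s malloc,30G" comes from the text above; the listen address, backend address and admin port are placeholders, not values from this thread.

```sh
# malloc storage backed by swap instead of an mmap'ed file
varnishd -a :80 -b 127.0.0.1:8081 -s malloc,30G -T 127.0.0.1:6082
```

The trade-off is that paging is now driven by the VM system's swap policy rather than by writeback of a file-backed mapping, which is exactly the behavior change being sought here.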
High Load, slow backends, Thread Pile-up
----------------------------------------

In my case I have 3 varnishes talking to the same backend for a
specific service, each varnish serving 4000 hits/s, so about 12000
hits/s. Most requests are hitting the same page. The backend is pretty
quick, answering in about 10 milliseconds, but that is unfortunately
too slow in some cases.

When a client asks for an object that is new or has gone stale,
varnish queues up the requests while it retrieves the object from the
backend. During the 10 milliseconds it takes to retrieve the object
from the backend, 40 new requests are queued up waiting for the same
object, each using up a thread. So what happens if the backend at some
point takes maybe 1 full second to answer? Thread pile-up.

It's a bit tricky to solve this, but we have some options. The first
one is to enable the object grace period; this allows varnish to serve
a stale but cacheable object to clients while it retrieves a new
object from the backend:

set obj.grace = 30s;

at the top of both vcl_recv and vcl_fetch. This helps a lot for stale
objects, but it is still a problem for new objects not yet in the
cache. There is unfortunately no good way of dealing with this except
"warming" up the cache by requesting the object before you "publish"
it.

-- 
Audun

*****************************************************************
Denne fotnoten bekrefter at denne e-postmeldingen ble skannet av
MailSweeper og funnet fri for virus.
*****************************************************************
This footnote confirms that this email message has been swept by
MailSweeper for the presence of computer viruses.
*****************************************************************

From ay at vg.no Tue Apr 29 15:52:31 2008
From: ay at vg.no (Audun Ytterdal)
Date: Tue, 29 Apr 2008 17:52:31 +0200
Subject: Varnish performance tips on linux
In-Reply-To: <48172CC9.9000303@vg.no>
References: <48172CC9.9000303@vg.no>
Message-ID: <4817443F.4040004@vg.no>

Audun Ytterdal wrote:
> > It's a bit tricky to solve this, but we have some options.
> > The first one is to enable the object grace period; this allows
> > varnish to serve a stale but cacheable object to clients while
> > it retrieves a new object from the backend:
> >
> > set obj.grace = 30s;
> >
> > at the top of both vcl_recv and vcl_fetch

A bit too quick here.

in vcl_recv: "set req.grace = 30s;"
in vcl_fetch: "set obj.grace = 30s;"

:-)

-- 
Audun

From perbu at linpro.no Tue Apr 29 16:40:38 2008
From: perbu at linpro.no (Per Andreas Buer)
Date: Tue, 29 Apr 2008 18:40:38 +0200
Subject: Varnish performance tips on linux
In-Reply-To: <48172CC9.9000303@vg.no>
References: <48172CC9.9000303@vg.no>
Message-ID: <48174F86.20008@linpro.no>

Audun Ytterdal skrev:
> I do not have write access to the wiki

Now you have full access to the Wiki. :-)

Per.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 252 bytes
Desc: OpenPGP digital signature
URL:
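Putting Audun's correction together with his original advice, a minimal sketch of the grace setup reads as follows (30s is the thread's example value; the rest of each subroutine is elided):

```vcl
sub vcl_recv {
    # this request may be served a stale object up to 30s past its TTL
    set req.grace = 30s;
}

sub vcl_fetch {
    # keep fetched objects around 30s beyond their TTL for grace delivery
    set obj.grace = 30s;
}
```

Both settings are needed: obj.grace keeps the stale object in storage, while req.grace says the request is willing to accept it while a fresh copy is being fetched.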