From manueltxo at gmail.com Sun Apr 1 12:35:38 2012 From: manueltxo at gmail.com (Manu Campos) Date: Sun, 1 Apr 2012 14:35:38 +0200 Subject: More than one varnish Message-ID: Hi, I'm a newbie with Varnish, and I don't know what would be the best way of using Varnish with more than one box. We have a load balancer, which balances the traffic between two Varnish boxes, and each of the Varnish boxes has three backends configured. Is there any way to sync the cached objects between the two Varnishes? Cheers, Manu.
From jc.bedier at gmail.com Sun Apr 1 12:52:39 2012 From: jc.bedier at gmail.com (Jean-Christian BEDIER) Date: Sun, 1 Apr 2012 14:52:39 +0200 Subject: More than one varnish In-Reply-To: References: Message-ID: Did you use a URI hash in your load-balancing rules? That would prevent caching the same object redundantly on two or more Varnish servers. On Sun, Apr 1, 2012 at 2:35 PM, Manu Campos wrote: > Hi, I'm a newbie with Varnish, and I don't know what would be the best way of using Varnish with more than one box. > > We have a load balancer, which balances the traffic between two Varnish boxes, and each of the Varnish boxes has three backends > configured. Is there any way to sync the cached objects between the two Varnishes? > > Cheers, > Manu. > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
From manueltxo at gmail.com Sun Apr 1 13:03:28 2012 From: manueltxo at gmail.com (Manu Campos) Date: Sun, 1 Apr 2012 15:03:28 +0200 Subject: More than one varnish In-Reply-To: References: Message-ID: On 01/04/2012, at 14:52, Jean-Christian BEDIER wrote: > Did you use a URI hash in your load-balancing rules? That would prevent > caching the same object redundantly on two or more Varnish servers. Not right now; we are using the Brightbox load-balancing service, and I'm afraid we can't change this. Isn't there another way to use more than one Varnish server?
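There is no built-in way to synchronize two Varnish caches, but duplicate caching can be avoided by making the routing deterministic. When the front balancer cannot hash (as with Brightbox here), a similar effect is available inside Varnish itself: the hash director in Varnish 2.1/3.0 picks a backend from the request hash, so a given URL always takes the same path. A minimal sketch, assuming a hypothetical second tier of cache nodes (the hostnames are invented, not from this thread):

```vcl
# Hypothetical: each front Varnish hashes requests across a second
# cache tier, so any given object is cached on only one tier-2 node.
backend cache_a { .host = "cache-a.internal"; .port = "80"; }
backend cache_b { .host = "cache-b.internal"; .port = "80"; }

director cache_tier hash {
    { .backend = cache_a; .weight = 1; }
    { .backend = cache_b; .weight = 1; }
}

sub vcl_recv {
    set req.backend = cache_tier;
}
```

The hash director keys on the same hash used for the cache lookup (URL and Host by default), so both front boxes route a given URL to the same node; the caches are not synchronized, each object simply lives in one place.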
> On Sun, Apr 1, 2012 at 2:35 PM, Manu Campos wrote: >> Hi, I'm a newbie with Varnish, and I don't know what would be the best way of using Varnish with more than one box. >> >> We have a load balancer, which balances the traffic between two Varnish boxes, and each of the Varnish boxes has three backends >> configured. Is there any way to sync the cached objects between the two Varnishes? >> >> Cheers, >> Manu. >> >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
From jc.bedier at gmail.com Sun Apr 1 13:05:16 2012 From: jc.bedier at gmail.com (Jean-Christian BEDIER) Date: Sun, 1 Apr 2012 15:05:16 +0200 Subject: More than one varnish In-Reply-To: References: Message-ID: I don't know how much bandwidth will be in use, but HAProxy is able to do this job in uri mode: http://code.google.com/p/haproxy-docs/wiki/balance On Sun, Apr 1, 2012 at 3:03 PM, Manu Campos wrote: > > > On 01/04/2012, at 14:52, Jean-Christian BEDIER wrote: > >> Did you use a URI hash in your load-balancing rules? That would prevent >> caching the same object redundantly on two or more Varnish servers. > > Not right now; we are using the Brightbox load-balancing service, and I'm afraid we can't change this. > Isn't there another way to use more than one Varnish server? > > > > >> On Sun, Apr 1, 2012 at 2:35 PM, Manu Campos wrote: >>> Hi, I'm a newbie with Varnish, and I don't know what would be the best way of using Varnish with more than one box. >>> >>> We have a load balancer, which balances the traffic between two Varnish boxes, and each of the Varnish boxes has three backends >>> configured. Is there any way to sync the cached objects between the two Varnishes? >>> >>> Cheers, >>> Manu.
>>> >>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
From fatblowfish at gmail.com Mon Apr 2 05:51:26 2012 From: fatblowfish at gmail.com (fatblowfish) Date: Mon, 2 Apr 2012 17:51:26 +1200 Subject: Set beresp.ttl based on Cache Control header In-Reply-To: References: Message-ID: On Fri, Mar 30, 2012 at 11:43 AM, Roberto O. Fernández Crisial <roberto.fernandezcrisial at gmail.com> wrote: > Hi fatblowfish (?), > > Maybe you should try with something like: > > if (req.url ~ REGEX) { set beresp.ttl = TIME; } > > I see... thanks! I found something [0] that sort of seems like what I want to achieve. Still, I wish the feature I wanted was in VCL... [0] - https://www.varnish-cache.org/trac/wiki/VCLExampleExtendingCacheControl > In the vcl_fetch subroutine. > > All the best, > Roberto (a.k.a. @rofc) > On Thu, Mar 29, 2012 at 7:34 PM, fatblowfish wrote: >> Hi, >> >> I'm pretty much a Varnish 3 newb... >> >> Would it be a good idea to set beresp.ttl for a specific URL based on >> the Cache-Control header, if present and set to public? What could be the >> tradeoff for doing that? >> >> Any hints/examples are appreciated too. >> >> Thanks! >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > >
From jeroen.ooms at stat.ucla.edu Mon Apr 2 05:55:56 2012 From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms) Date: Sun, 1 Apr 2012 22:55:56 -0700 Subject: caching http POST Message-ID: My back-end server does RPC stuff, mostly with HTTP POST, and always sends appropriate Cache-Control headers in the response. I would like to use Varnish to do caching. However, I noticed that caching POST requests might not be as easy as caching GET. If I return 'lookup' in vcl_recv for an HTTP POST, it seems to be replaced with an HTTP GET.
Is there any way Varnish can be used to cache POST requests? Or does Varnish only cache by URL? Naturally, the body of the POST request should match that of the cached response for a hit... Thanks, Jeroen
From apj at mutt.dk Mon Apr 2 06:46:58 2012 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Mon, 2 Apr 2012 08:46:58 +0200 Subject: caching http POST In-Reply-To: References: Message-ID: <20120402064658.GW12685@nerd.dk> On Sun, Apr 01, 2012 at 10:55:56PM -0700, Jeroen Ooms wrote: > > Is there any way Varnish can be used to cache POST requests? No. > Or does Varnish only cache by URL? No, but it does not cache by POST body. -- Andreas
From bedis9 at gmail.com Mon Apr 2 07:52:38 2012 From: bedis9 at gmail.com (Baptiste) Date: Mon, 2 Apr 2012 09:52:38 +0200 Subject: More than one varnish In-Reply-To: References: Message-ID: Well, in HAProxy you have two interesting balance methods for caching:
- balance uri (as mentioned previously)
- balance url_param
The url_param method allows you to stick a parameter value to a backend server, which may be useful for websites loading images from a database with this kind of URL: /image.php?imageid=1 /image.php?imageid=2 /image.php?imageid=3 With "balance uri", all of these objects would be loaded from the same server. With "balance url_param imageid", each object may be loaded by a different backend. Note you can combine both methods with "hash-type" to improve the hit rate when servers go up and down. Cheers
From johnson at nmr.mgh.harvard.edu Mon Apr 2 16:20:37 2012 From: johnson at nmr.mgh.harvard.edu (Chris Johnson) Date: Mon, 2 Apr 2012 12:20:37 -0400 (EDT) Subject: Moving from 2.1.5 -> 3.0.2 Message-ID: Hi. Still running 2.1.5 in production. Have a new server we set up and want to update to 3.0.2. Last time I looked at this I recall there were some VCL changes in between. We have made a couple of configuration tweaks in the VCL init file.
I have a vague recollection of there being some variable changes and few other things. Is there any quick list of the VCL changes? I see the language wiki. I need to know specifically what needs changing to what. BTW, new system is CentOS 6.2. ------------------------------------------------------------------------------- http://help.nmr.mgh.harvard.edu/ http://faq.nmr.mgh.harvard.edu/ ------------------------------------------------------------------------------- Chris Johnson |Internet: johnson at nmr.mgh.harvard.edu Systems Administrator |Web: http://www.nmr.mgh.harvard.edu/~johnson NMR Center |Voice: 617.726.0949 Mass. General Hospital |FAX: 617.726.7422 149 (2301) 13th Street |Life stinks. If you're very lucky, sometimes Charlestown, MA., 02129 USA |it stinks a little less. Me ------------------------------------------------------------------------------- The information in this e-mail is intended only for the person to whom it is addressed. If you believe this e-mail was sent to you in error and the e-mail contains patient information, please contact the Partners Compliance HelpLine at http://www.partners.org/complianceline . If the e-mail was sent to you in error but does not contain patient information, please contact the sender and properly dispose of the e-mail. From kokoniimasu at gmail.com Mon Apr 2 16:51:23 2012 From: kokoniimasu at gmail.com (kokoniimasu) Date: Tue, 3 Apr 2012 01:51:23 +0900 Subject: Moving from 2.1.5 -> 3.0.2 In-Reply-To: References: Message-ID: Hi Chris. Refer to this url. https://www.varnish-cache.org/docs/trunk/installation/upgrade.html Hope this helps, -- Syohei Tanaka(@xcir) http://xcir.net/ (:3[__]) 2012?4?3?1:20 Chris Johnson : > Hi. > > Still running 2.1.5 in production. Have a new server we set up > and want to update to 3.0.2. Last time I looked at this I recall > there were some VCL changes in between. We have made a couple of > configuration tweaks in the VCL init file. 
I have a vague > recollection of there being some variable changes and few other > things. > > Is there any quick list of the VCL changes? I see the language > wiki. I need to know specifically what needs changing to what. > > BTW, new system is CentOS 6.2. > > ------------------------------------------------------------------------------- > http://help.nmr.mgh.harvard.edu/ http://faq.nmr.mgh.harvard.edu/ > ------------------------------------------------------------------------------- > Chris Johnson |Internet: johnson at nmr.mgh.harvard.edu > Systems Administrator |Web: > http://www.nmr.mgh.harvard.edu/~johnson > NMR Center |Voice: 617.726.0949 > Mass. General Hospital |FAX: 617.726.7422 > 149 (2301) 13th Street |Life stinks. If you're very lucky, sometimes > Charlestown, MA., 02129 USA |it stinks a little less. Me > ------------------------------------------------------------------------------- > > > The information in this e-mail is intended only for the person to whom it is > addressed. If you believe this e-mail was sent to you in error and the > e-mail > contains patient information, please contact the Partners Compliance > HelpLine at > http://www.partners.org/complianceline . If the e-mail was sent to you in > error > but does not contain patient information, please contact the sender and > properly > dispose of the e-mail. > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From roberto.fernandezcrisial at gmail.com Mon Apr 2 17:09:02 2012 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Mon, 2 Apr 2012 14:09:02 -0300 Subject: Moving from 2.1.5 -> 3.0.2 In-Reply-To: References: Message-ID: Hi Chris, I recommend to take a look at https://www.varnish-cache.org/trac/browser/doc/changes.rst, in particular to "Changes from 2.1.5 to 3.0 beta 1" (#VCL) Regards, Roberto (a.k.a. 
@rofc) On Mon, Apr 2, 2012 at 1:20 PM, Chris Johnson wrote: > Hi. > > Still running 2.1.5 in production. Have a new server we set up > and want to update to 3.0.2. Last time I looked at this I recall > there were some VCL changes in between. We have made a couple of > configuration tweaks in the VCL init file. I have a vague > recollection of there being some variable changes and few other > things. > > Is there any quick list of the VCL changes? I see the language > wiki. I need to know specifically what needs changing to what. > > BTW, new system is CentOS 6.2. > > ------------------------------------------------------------------------------- > http://help.nmr.mgh.harvard.edu/ http://faq.nmr.mgh.harvard.edu/ > ------------------------------------------------------------------------------- > Chris Johnson |Internet: johnson at nmr.mgh.harvard.edu > Systems Administrator |Web: http://www.nmr.mgh.harvard.edu/~johnson > NMR Center |Voice: 617.726.0949 > Mass. General Hospital |FAX: 617.726.7422 > 149 (2301) 13th Street |Life stinks. If you're very lucky, sometimes > Charlestown, MA., 02129 USA |it stinks a little less. Me > ------------------------------------------------------------------------------- > > The information in this e-mail is intended only for the person to whom it is addressed. If you believe this e-mail was sent to you in error and the e-mail contains patient information, please contact the Partners Compliance HelpLine at http://www.partners.org/complianceline . If the e-mail was sent to you in error but does not contain patient information, please contact the sender and properly dispose of the e-mail. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jim.hayter at gmail.com Mon Apr 2 18:58:33 2012 From: jim.hayter at gmail.com (Jim Hayter) Date: Mon, 2 Apr 2012 14:58:33 -0400 Subject: Upgrading 2.0.5 to 3.0.0 Message-ID: With 2.0.5 I was monitoring/graphing the values from "varnishstat -1 -f uptime,cache_hit,client_req,n_lru_nuked,sma_nobj,sma_nbytes". I am using memory cache. It is not clear to me what values in 3.0.0 correspond to sma_nobj and sma_nbytes. I'd appreciate some clarification. Thanks, Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From kai at ich-geh-kaputt.de Mon Apr 2 19:12:38 2012 From: kai at ich-geh-kaputt.de (Kai Moritz) Date: Mon, 02 Apr 2012 21:12:38 +0200 Subject: Does varnish support "If-None-Match"-requests? Message-ID: <4F79FA26.5030203@ich-geh-kaputt.de> Because I do not want to teach my backend-server, how to purge cached pages, I gave my pages a relatively short max-age (about 300s), so that caches regularly have to ask the server, if the page has changed. I also set a strong ETag-header on all pages, so that these requests can be done with a "If-None-Matches"-header. The ETag only changes, when the content really changes, so that most requests can be answred with a 304 Not-Modified. This setup works well with browser-caches and squid & co. But it seems as if varnish always issues a non-conditional request, when the page was expired since the last cache-hit. Does varnish ignore the ETag-header if max-age and/or Expires are set or is this a configuration issue? Greetings Kai Moritz From n.j.saunders at gmail.com Mon Apr 2 19:25:12 2012 From: n.j.saunders at gmail.com (Neil Saunders) Date: Mon, 2 Apr 2012 20:25:12 +0100 Subject: Round robin weighted 'collections' of backends? Message-ID: Hi all. I'd like to determine if it's possible, via hack or otherwise, to preferentially round robin 'collections' of backends. For context: We run services in EC2. 
We have 3 web heads, one per availability zone (a,b,c). For one of our services we have 9 backends, 3 per availability zone (1a,2a,3a,1b,2b,3b,1c,2c,3c). All Varnish instances know about all backends. What I'd like to do is to have each Varnish web head round robin across the healthy backends in its own AZ, only "falling back" to backends in another AZ in the event of all local services being sick. I.e. the Varnish in AZ a round robins 1a,2a,3a, but can fall back to 1b,2b,3b,1c,2c,3c in the event of sickness, and so forth. Any advice gratefully received! Kind Regards, Neil Saunders.
From jim.hayter at gmail.com Mon Apr 2 21:47:30 2012 From: jim.hayter at gmail.com (Jim Hayter) Date: Mon, 2 Apr 2012 17:47:30 -0400 Subject: Upgrading 2.0.5 to 3.0.0 In-Reply-To: References: Message-ID: Looking at the values reported by varnishstat, I think I want SMA.s0.nobj and SMA.s0.nbytes. Would someone please confirm this? Jim On Mon, Apr 2, 2012 at 2:58 PM, Jim Hayter wrote: > With 2.0.5 I was monitoring/graphing the values from "varnishstat -1 -f > uptime,cache_hit,client_req,n_lru_nuked,sma_nobj,sma_nbytes". I am using > memory cache. It is not clear to me what values in 3.0.0 correspond to > sma_nobj and sma_nbytes. I'd appreciate some clarification. > > Thanks, > Jim > >
From rtshilston at gmail.com Mon Apr 2 21:59:54 2012 From: rtshilston at gmail.com (Rob S) Date: Mon, 2 Apr 2012 22:59:54 +0100 Subject: Round robin weighted 'collections' of backends? In-Reply-To: References: Message-ID: <51306753-A98C-4EE7-A13B-A34DC8860A4D@gmail.com> Yes, this is entirely possible and something we use in production. From memory, the essence of the config is:

set req.backend = primarypool;
if (!req.backend.healthy) {
    set req.backend = secondarypool;
}

Where both primary and secondary pools are round-robin directors.
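A fuller sketch of the pattern Rob describes, with health probes so that req.backend.healthy has something to report (Varnish 3 syntax; the backend names, addresses, and probe thresholds are made up for illustration):

```vcl
# Shared probe definition; the thresholds here are arbitrary examples.
probe healthcheck {
    .url = "/";
    .interval = 3s;
    .timeout = 1s;
    .window = 5;
    .threshold = 3;
}

# Same-AZ backends go in the primary pool, cross-AZ in the fallback.
backend a1 { .host = "10.0.1.1"; .port = "80"; .probe = healthcheck; }
backend a2 { .host = "10.0.1.2"; .port = "80"; .probe = healthcheck; }
backend b1 { .host = "10.0.2.1"; .port = "80"; .probe = healthcheck; }
backend b2 { .host = "10.0.2.2"; .port = "80"; .probe = healthcheck; }

director primarypool round-robin {
    { .backend = a1; }
    { .backend = a2; }
}

director secondarypool round-robin {
    { .backend = b1; }
    { .backend = b2; }
}

sub vcl_recv {
    set req.backend = primarypool;
    # For a director, .healthy is true while at least one member is up,
    # so the fallback only triggers when the whole local pool is sick.
    if (!req.backend.healthy) {
        set req.backend = secondarypool;
    }
}
```

Each Varnish instance would carry its own VCL with its local AZ's backends in primarypool and the remote ones in secondarypool.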
Rob On 2 Apr 2012, at 20:28, Neil Saunders wrote: > Hi all. > > I'd like to determine if it's possible, via hack or otherwise, to preferentially round robin 'collections' of backends. > > For context: > > We run services in EC2. > > We have 3 web heads, one per availability zone (a,b,c) > > For one of our services we have 9 backends, 3 per availability zone (1a,2a,3a,1b,2b,3b,1c,2c,3c). > > All varnish instances know about all backends. > > What I'd like to do is have to have each varnish web head in each availability zone round robin healthy backends in the same AZ, only "falling back" to web services in another AZ in the event of all services being sick. > > Ie Varnish in AZ 1 round robins 1a,2a,3a, but can fall back to 1b,2b,3b,1c,2c,3c in the event of sickness, and so forth. > > Any advice gratefully received! > > Kind Regards, > > Neil Saunders. > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From zawierta at gmail.com Tue Apr 3 12:52:01 2012 From: zawierta at gmail.com (=?ISO-8859-2?Q?Rafa=B3_Zawierta?=) Date: Tue, 3 Apr 2012 14:52:01 +0200 Subject: URL rewrite with Varnish Message-ID: Hello, Is it possible to handle such case: my webapp is running on http://10.0.0.10:8888/content/site/EN.html and whole site is on base url: http://10.0.0.10:8888/content/site/. I want to make my site available via Varnish on url: http://mysite.mydomain.com/ - I want to remove whole stuff after / from url. Rule: sub vcl_recv { if (req.http.host ~ "^(www\.)?mysite\.mydomain\.com$" ) { set req.url = regsub(req.url, "^/content/site/", "/"); } } isn't working at all. Regards R. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From j.gareth.light at gmail.com Tue Apr 3 13:07:17 2012 From: j.gareth.light at gmail.com (James Light) Date: Tue, 3 Apr 2012 09:07:17 -0400 Subject: Fwd: Re: URL rewrite with Varnish In-Reply-To: References: Message-ID: On Apr 3, 2012 8:53 AM, "Rafa? Zawierta" wrote: > > Hello, > > Is it possible to handle such case: my webapp is running on http://10.0.0.10:8888/content/site/EN.html and whole site is on base url: http://10.0.0.10:8888/content/site/. > > I want to make my site available via Varnish on url: http://mysite.mydomain.com/ - I want to remove whole stuff after / from url. > > Rule: > sub vcl_recv { > if (req.http.host ~ "^(www\.)?mysite\.mydomain\.com$" ) { > set req.url = regsub(req.url, "^/content/site/", "/"); > } > } > > isn't working at all. > > Regards > R. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc Is your backend configured to respond to requests to that hostname and does it have a way to know that "/content/site" is aliased to "/" ? Sorry if I'm missing something but it seems like a rewrite rule on the backend is more appropriate for this sort of thing, no? -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Tue Apr 3 13:25:48 2012 From: straightflush at gmail.com (AD) Date: Tue, 3 Apr 2012 09:25:48 -0400 Subject: URL rewrite with Varnish In-Reply-To: References: Message-ID: You should be able to easily rewrite the req.http.header value in vcl_recv if (req.http.host == "mysite.domain.com") { set req.http.host = "10.0.0.10"; } To Rafal's point, this will pass the Host header of 10.0.0.10 to your backend. Make sure you backend is configured to be port 8888 backend default { .host = "127.0.0.1"; .port = "8888"; } On Tue, Apr 3, 2012 at 9:07 AM, James Light wrote: > On Apr 3, 2012 8:53 AM, "Rafa? 
Zawierta" wrote: > > > > Hello, > > > > Is it possible to handle such case: my webapp is running on > http://10.0.0.10:8888/content/site/EN.html and whole site is on base url: > http://10.0.0.10:8888/content/site/. > > > > I want to make my site available via Varnish on url: > http://mysite.mydomain.com/ - I want to remove whole stuff after / from > url. > > > > Rule: > > sub vcl_recv { > > if (req.http.host ~ "^(www\.)?mysite\.mydomain\.com$" ) { > > set req.url = regsub(req.url, "^/content/site/", "/"); > > } > > } > > > > isn't working at all. > > > > Regards > > R. > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > Is your backend configured to respond to requests to that hostname and > does it have a way to know that "/content/site" is aliased to "/" ? > Sorry if I'm missing something but it seems like a rewrite rule on the > backend is more appropriate for this sort of thing, no? > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zawierta at gmail.com Tue Apr 3 13:35:31 2012 From: zawierta at gmail.com (=?ISO-8859-2?Q?Rafa=B3_Zawierta?=) Date: Tue, 3 Apr 2012 15:35:31 +0200 Subject: URL rewrite with Varnish In-Reply-To: References: Message-ID: W dniu 3 kwietnia 2012 15:25 u?ytkownik AD napisa?: > You should be able to easily rewrite the req.http.header value in vcl_recv > > if (req.http.host == "mysite.domain.com") { > set req.http.host = "10.0.0.10"; > } > > To Rafal's point, this will pass the Host header of 10.0.0.10 to your > backend. 
Make sure you backend is configured to be port 8888 > > backend default { > .host = "127.0.0.1"; > .port = "8888"; > } > > On Tue, Apr 3, 2012 at 9:07 AM, James Light wrote: > >> On Apr 3, 2012 8:53 AM, "Rafa? Zawierta" wrote: >> > >> > Hello, >> > >> > Is it possible to handle such case: my webapp is running on >> http://10.0.0.10:8888/content/site/EN.html and whole site is on base >> url: http://10.0.0.10:8888/content/site/. >> > >> > I want to make my site available via Varnish on url: >> http://mysite.mydomain.com/ - I want to remove whole stuff after / from >> url. >> > >> > Rule: >> > sub vcl_recv { >> > if (req.http.host ~ "^(www\.)?mysite\.mydomain\.com$" ) { >> > set req.url = regsub(req.url, "^/content/site/", "/"); >> > } >> > } >> > >> > isn't working at all. >> > >> > Regards >> > R. >> > Sorry AD, but I'm not sure if your tip is helpful. Once again: my backend server has ip 10.0.0.10, port 8888. It runs multiple apps, so if I type http://10.0.0.10:8888 i get default site http://10.0.0.10:8888/content/default/EN.html. Therefore I'd like varnish to point me to /content/site/EN.html AND to remove "content/site/" from URL. That's why passing http.host won't work at all. Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From straightflush at gmail.com Tue Apr 3 13:48:22 2012 From: straightflush at gmail.com (AD) Date: Tue, 3 Apr 2012 09:48:22 -0400 Subject: URL rewrite with Varnish In-Reply-To: References: Message-ID: Does your backend answer for the hostname mysite.domain.com in apache/nginx ? You didnt indicate that you have virtualhosts setup and what the hostname header needs to be in order for your origin to respond properly. On Tue, Apr 3, 2012 at 9:35 AM, Rafa? 
Zawierta wrote: > > > W dniu 3 kwietnia 2012 15:25 u?ytkownik AD napisa?: > > You should be able to easily rewrite the req.http.header value in vcl_recv >> >> if (req.http.host == "mysite.domain.com") { >> set req.http.host = "10.0.0.10"; >> } >> >> To Rafal's point, this will pass the Host header of 10.0.0.10 to your >> backend. Make sure you backend is configured to be port 8888 >> >> backend default { >> .host = "127.0.0.1"; >> .port = "8888"; >> } >> >> On Tue, Apr 3, 2012 at 9:07 AM, James Light wrote: >> >>> On Apr 3, 2012 8:53 AM, "Rafa? Zawierta" wrote: >>> > >>> > Hello, >>> > >>> > Is it possible to handle such case: my webapp is running on >>> http://10.0.0.10:8888/content/site/EN.html and whole site is on base >>> url: http://10.0.0.10:8888/content/site/. >>> > >>> > I want to make my site available via Varnish on url: >>> http://mysite.mydomain.com/ - I want to remove whole stuff after / from >>> url. >>> > >>> > Rule: >>> > sub vcl_recv { >>> > if (req.http.host ~ "^(www\.)?mysite\.mydomain\.com$" ) { >>> > set req.url = regsub(req.url, "^/content/site/", "/"); >>> > } >>> > } >>> > >>> > isn't working at all. >>> > >>> > Regards >>> > R. >>> >> > > Sorry AD, but I'm not sure if your tip is helpful. > Once again: my backend server has ip 10.0.0.10, port 8888. It runs > multiple apps, so if I type http://10.0.0.10:8888 i get default site > http://10.0.0.10:8888/content/default/EN.html. Therefore I'd like varnish > to point me to /content/site/EN.html AND to remove "content/site/" from > URL. > That's why passing http.host won't work at all. > > Regards > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... 
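For the record, the rewrite in the original rule goes the wrong direction for this setup: the client sends the short URL, so the /content/site prefix has to be added on the way to the backend, not removed. A sketch using the host regex and paths from this thread (whether the backend also needs a particular Host header is an open assumption):

```vcl
sub vcl_recv {
    if (req.http.host ~ "^(www\.)?mysite\.mydomain\.com$") {
        # Client asks for /EN.html; the app serves it at
        # /content/site/EN.html, so prepend the prefix.
        set req.url = regsub(req.url, "^/", "/content/site/");
    }
}
```

Note that links generated by the application will still contain /content/site/ unless the backend emits root-relative links or the responses are rewritten too.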
URL: From varnish at mm.quex.org Tue Apr 3 14:01:55 2012 From: varnish at mm.quex.org (Michael Alger) Date: Tue, 3 Apr 2012 22:01:55 +0800 Subject: URL rewrite with Varnish In-Reply-To: References: Message-ID: <20120403140155.GA15777@grum.quex.org> On Tue, Apr 03, 2012 at 03:35:31PM +0200, Rafa? Zawierta wrote: > W dniu 3 kwietnia 2012 15:25 u?ytkownik AD napisa?: > > > On Tue, Apr 3, 2012 at 9:07 AM, James Light wrote: > > > >> On Apr 3, 2012 8:53 AM, "Rafa? Zawierta" wrote: > >> > > >> > Hello, > >> > > >> > Is it possible to handle such case: my webapp is running on > >> > http://10.0.0.10:8888/content/site/EN.html and whole site is on base > >> > url: http://10.0.0.10:8888/content/site/. > >> > > >> > I want to make my site available via Varnish on url: > >> > http://mysite.mydomain.com/ - I want to remove whole stuff after > >> > / from url. > >> > > >> > Rule: > >> > sub vcl_recv { > >> > if (req.http.host ~ "^(www\.)?mysite\.mydomain\.com$" ) { > >> > set req.url = regsub(req.url, "^/content/site/", "/"); > >> > } > >> > } > >> > > >> > isn't working at all. > >> > > >> > Regards > >> > R. > >> > > Sorry AD, but I'm not sure if your tip is helpful. > Once again: my backend server has ip 10.0.0.10, port 8888. It runs multiple > apps, so if I type http://10.0.0.10:8888 i get default site > http://10.0.0.10:8888/content/default/EN.html. Therefore I'd like varnish > to point me to /content/site/EN.html AND to remove "content/site/" from > URL. > That's why passing http.host won't work at all. I think your original example is doing the reverse of what you want. You want to ADD /content/site/ to the incoming request from the client, rather than remove it. So, client requests / or /EN.html from your server; you want to do something like set req.url = "/content/site" + req.url; # (not entirely sure of the syntax here... 
I think in 2.1 you can just # put set req.url = "prefix" req.url; or you can use a regsub with # "^" as the pattern to replace) in your VCL in order to prepend "/content/site" to the request the client sent, so that when it's sent to the backend it will look like a request for /content/site/ or /content/site/EN.html. You may also need to rewrite the req.http.Host header to what your backend is expecting, if it's looking for a specific hosts entry. The only complication you'll have is that if your backend generates URLs, it will do so using the full path (/content/site/) - depending on what the server is what application (if any) it's running, you might be able to override that somehow. From listas at kurtkraut.net Tue Apr 3 14:25:12 2012 From: listas at kurtkraut.net (Kurt Kraut) Date: Tue, 3 Apr 2012 11:25:12 -0300 Subject: How to ignore chars after question mark in URL? Message-ID: Hi, I need to cache URLs of the same file but the front-end developer generates a random string, e.g.: www.kurtkraut.net/index.css?234234432 www.kurtkraut.net/index.css?654323256 www.kurtkraut.net/index.css?905837831 How can I get a HIT for www.kurtkraut.net/index.css not matter the presence or not of the question mark and the random string after the question mark? Is it possible to ignore the presence of the question mark and the rest of the URL? Thanks in advance, Kurt Kraut -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Tue Apr 3 14:38:09 2012 From: r at roze.lv (Reinis Rozitis) Date: Tue, 3 Apr 2012 17:38:09 +0300 Subject: How to ignore chars after question mark in URL? In-Reply-To: References: Message-ID: <443D0B6004F548CC9533B6F5824B79DA@DD21> > Is it possible to ignore the presence of the question mark and the rest of the URL? Yes. https://www.varnish-cache.org/docs/trunk/faq/general.html 'How do I instruct varnish to ignore the query parameters and only cache one instance of an object?' 
sub vcl_recv { set req.url = regsub(req.url, "\?.*", ""); } From contact at jpluscplusm.com Tue Apr 3 17:39:21 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 3 Apr 2012 18:39:21 +0100 Subject: How to ignore chars after question mark in URL? In-Reply-To: References: Message-ID: On 3 April 2012 15:25, Kurt Kraut wrote: > Hi, > > I need to cache URLs of the same file but the front-end developer generates > a random string, e.g.: > > www.kurtkraut.net/index.css?234234432 > www.kurtkraut.net/index.css?654323256 > www.kurtkraut.net/index.css?905837831 > > How can I get a HIT for www.kurtkraut.net/index.css not matter the presence > or not of the question mark and the random string after the question mark? > Is it possible to ignore the presence of the question mark and the rest of > the URL? In my experience, developers often do this deliberately as a method of versioning their CSS and JS. They'll append a "random" string (often a unix timestamp) at deployment time, rendering that particular version of the resource cachable until the next deployment. Are you absolutely sure that this isn't what your devs are doing? Because if they are, you'd be doing *exactly* the wrong thing by explicitly ignoring the query-string, and may well break the application during a random deploy, sometime in the future. But possibly not during *each* deployment, leading to nicely intermittent bug reports depending on the amount each CSS/JS has changed, and the state of the user's browser cache :-) HTH, Jonathan -- Jonathan Matthews London, Oxford, UK http://www.jpluscplusm.com/contact.html From justindavis at mail.utexas.edu Tue Apr 3 18:59:46 2012 From: justindavis at mail.utexas.edu (Justin Davis) Date: Tue, 3 Apr 2012 13:59:46 -0500 Subject: Backend health polling with client or round-robin Message-ID: Hello all. Is "/server-status?auto" an appropriate URL to use for backend health polling with a client or round-robin director? 
We recently had one of our two load balanced servers go belly-up, during which Varnish continued attempting to serve content from the failed node. Would a small file be preferable? It's my understanding that a backend node is marked as unhealthy if the health check times out or returns a status other than 200 more than the configured threshold times per configured window. My backend declaration: backend web1 { .host = "XX.XX.XX.XX"; .port = "http"; .probe = { .url = "/server-status?auto"; .timeout = 34 ms; .interval = 3 s; .window = 10; .threshold = 8; } } backend web2 { .host = "XX.XX.XX.XX"; .port = "http"; .probe = { .url = "/server-status?auto"; .timeout = 34 ms; .interval = 3 s; .window = 10; .threshold = 8; } } director web client { { .backend = web1; .weight = 1; } { .backend = web2; .weight = 1; } } Thank you all for any assistance, direction, pointers, consideration or words of encouragement. Cheers, Justin -- Justin Davis Liberal Arts ITS Senior Systems Administrator justindavis at mail.utexas.edu From hugo.cisneiros at gmail.com Tue Apr 3 19:19:48 2012 From: hugo.cisneiros at gmail.com (Hugo Cisneiros (Eitch)) Date: Tue, 3 Apr 2012 16:19:48 -0300 Subject: Backend health polling with client or round-robin In-Reply-To: References: Message-ID: On Tue, Apr 3, 2012 at 3:59 PM, Justin Davis wrote: > Is "/server-status?auto" an appropriate URL to use for backend health polling with a client or round-robin director? > > We recently had one of our two load balanced servers go belly-up, during which Varnish continued attempting to serve content from the failed node. > > Would a small file be preferable? It's my understanding that a backend node is marked as unhealthy if the health check times out or returns a status other than 200 more than the configured threshold times per configured window. From what I understand, it depends on what is running on your web server. For example, if it's only static files, a small file that returns HTTP 200 is ok.
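For the static-file case, a probe aimed at a tiny dedicated file might look like the following sketch (the address, the /health.txt path and the 500 ms timeout are hypothetical illustration values, not taken from the thread):

```vcl
backend static1 {
    .host = "192.0.2.10";          # hypothetical backend address
    .port = "http";
    .probe = {
        .url = "/health.txt";      # tiny static file that always returns 200
        .timeout = 500 ms;         # looser than 34 ms, so a slow-but-alive node doesn't flap
        .interval = 3 s;
        .window = 10;
        .threshold = 8;            # 8 of the last 10 probes must succeed
    }
}
```

With .window = 10 and .threshold = 8, the node is marked sick as soon as 3 of the last 10 probes fail, and varnishlog's Backend_health records show the transitions.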
If it's a dynamic system, the recommended way is a page that checks if the components of the system are ok (like 'is it using the database successfully?'; 'are the permissions on the server-managed folder ok?', and so on). Sometimes a system can go all wrong, but a static page will still return HTTP 200 for varnish, thus varnish will still be delivering requests to the server/system. -- []'s Hugo www.devin.com.br From zs at enternewmedia.com Wed Apr 4 00:50:58 2012 From: zs at enternewmedia.com (Zachary Stern) Date: Tue, 3 Apr 2012 20:50:58 -0400 Subject: varnishadm param.show meanings? Message-ID: Hey there, Is there somewhere I can get a good definition of each parameter that comes up when I run a param.show during a varnishadm session? I've searched the wiki and the 3.0.2 docs and not really found good results. Any advice offered would be greatly appreciated. Thanks! -Zachary -- zachary alex stern | systems architect o: 212.731.2033 | f: 212.202.6488 | zs at enternewmedia.com 60-62 e. 11th street, 4th floor | new york, ny | 10003 www.enternewmedia.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From juergen at sitesquad.net Wed Apr 4 01:31:32 2012 From: juergen at sitesquad.net (Juergen Schreck) Date: Tue, 3 Apr 2012 20:31:32 -0500 Subject: Varnish and Load Balancing Message-ID: <7E9B4536-7480-49A7-9B6E-F7396BD95926@sitesquad.net> Hello - I'm currently researching my options to implement varnish into our hosting recipe. We provide load-balanced hosting services for Magento. We run a software load-balancer that handles all of our dedicated and VPS nodes. We've optimized each Magento instance such that we're using Apache/mod_php for the dynamic requests and nginx on a separate port and cookieless domain for all static assets. So we're not employing any reverse proxy methods at this point. Apache does all the php stuff and we're using memcached to help it out on the backend.
I could see a Varnish server picking up a lot of the slack of what we do there and improving upon it. I don't know that I'd want it to cache assets, because that'll be moving to a CDN. But with the Page Cache Powered By Varnish Magento module, I think it could work very nicely. What I'm pondering is whether I should still run load-balancers in front of a Varnish server or let the Varnish built-in balancer do that work. I'm also not sure how much a single Varnish server could provide with multiple dual-node vps backends and how I would scale it as the number of sites/vps grows. Any suggestions on that layout? Right now it seems to me like I'd have some redundant puzzle pieces? Would appreciate your thoughts to help me sort it out. Thanks, Juergen From juergen at sitesquad.net Wed Apr 4 04:03:10 2012 From: juergen at sitesquad.net (Juergen Schreck) Date: Tue, 3 Apr 2012 23:03:10 -0500 Subject: Varnish and Load Balancing In-Reply-To: <4F7BB36A.9000803@spechal.com> References: <7E9B4536-7480-49A7-9B6E-F7396BD95926@sitesquad.net> <4F7BB36A.9000803@spechal.com> Message-ID: Aside from the application differences, we have a setup very similar to the one illustrated here (minus the Varnish servers currently): http://www.lullabot.com/articles/varnish-multiple-web-servers-drupal The websites are deployed sync'd on at least two nodes in the server farm and the load-balancer does its thing for each site's nodes. We have two balancers set up with Linux-HA failover. My understanding is that Varnish's built-in load balancer would do the same work, plus it would also provide the cache benefit. So wouldn't Varnish actually replace the load-balancers then? Why would I want to load balance varnish servers? Thanks, Juergen On Apr 3, 2012, at 9:35 PM, Travis Crowder wrote: > > > On 4/3/2012 8:31 PM, Juergen Schreck wrote: >> Hello - >> >> I'm currently researching my options to implement varnish into our hosting recipe. >> >> We provide load-balanced hosting services for Magento.
We run a software load-balancer that handles all of our dedicated and VPS nodes. We've optimized each Magento instance such that we're using Apache/mod_php for the dynamic requests and nginx on a separate port and cookieless domain for all static assets. So we're not employing any reverse proxy methods at this point. Apache does all the php stuff and we're using memcached to help it out on the backend. >> >> I could see a Varnish server picking up a lot of the slack of what we do there and improving upon it. I don't know that I'd want it to cache assets, because that'll be moving to a CDN. But with the Page Cache Powered By Varnish Magento module, I think it could work very nicely. >> >> What I'm pondering is whether I should still run load-balancers in front of a Varnish server or let the Varnish built-in balancer do that work. >> >> I'm also not sure how much a single Varnish server could provide with multiple dual-node vps backends and how I would scale it as the number of sites/vps grows. >> >> Any suggestions on that layout? Right now it seems to me like I'd have some redundant puzzle pieces? Would appreciate your thoughts to help me sort it out. >> >> Thanks, >> Juergen >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > We currently use Varnish 3 to load balance several backend Drupal multi-sites (20 so far, about 70 by the end of the year) using the round robin director and it handles upwards of 2000 connections per second using memory based storage with a 0.01 load on the server. We use an F5 in front of Varnish to fail-over to a backup instance in case there is ever a problem and after several months of running, Varnish hasn't needed to go down or crashed once. > > Scaling is pretty easy horizontally. If we spin up another node, we just add it to the VCL and reload.
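The round-robin layout described above can be sketched in Varnish 3 VCL roughly like this (backend names and addresses are hypothetical):

```vcl
backend node1 { .host = "192.0.2.11"; .port = "80"; }  # hypothetical nodes
backend node2 { .host = "192.0.2.12"; .port = "80"; }

director web_pool round-robin {
    { .backend = node1; }
    { .backend = node2; }
    # Scaling out: declare node3 above, add it here, then reload the VCL.
}

sub vcl_recv {
    set req.backend = web_pool;
}
```

Reloading via varnishadm (vcl.load followed by vcl.use) picks up the new node without restarting varnishd or dropping the cache.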
> > HTH, > Travis From perbu at varnish-software.com Wed Apr 4 07:47:19 2012 From: perbu at varnish-software.com (Per Buer) Date: Wed, 4 Apr 2012 09:47:19 +0200 Subject: varnishadm param.show meanings? In-Reply-To: References: Message-ID: Hi. On Wed, Apr 4, 2012 at 2:50 AM, Zachary Stern wrote: > Hey there, > > Is there somewhere I can get a good definition of each parameter that > comes up when I run a param.show during a varnishadm session? > Sure. Watch this: varnish> param.show shortlived 200 shortlived 10.000000 [s] Default is 10.0 Objects created with TTL shorter than this are always put in transient storage. I've searched the wiki and the 3.0.2 docs and not really found good > results. > If you want it all in one place you can do an awk/sh hack or grep in the source. It's all in one place and should be easy to find. -- Per Buer, CEO Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer *Varnish makes websites fly!* Whitepapers | Video | Twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From apj at mutt.dk Wed Apr 4 08:05:07 2012 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Wed, 4 Apr 2012 10:05:07 +0200 Subject: varnishadm param.show meanings? In-Reply-To: References: Message-ID: <20120404080507.GY12685@nerd.dk> On Tue, Apr 03, 2012 at 08:50:58PM -0400, Zachary Stern wrote: > > Is there somewhere I can get a good definition of each parameter that comes > up when I run a param.show during a varnishadm session? param.show -l param.show man varnishd All are generated from the same source. -- Andreas From ahooper at bmjgroup.com Wed Apr 4 14:14:10 2012 From: ahooper at bmjgroup.com (Alex Hooper) Date: Wed, 4 Apr 2012 15:14:10 +0100 Subject: segfaults with varnish 2.1 on Ubuntu Message-ID: <4F7C5732.6020606@bmjgroup.com> Hi, We've been running varnish for the last few years on a RHEL box and it has been very stable -- I don't recall ever having to restart the service.
The version we ran there was: # varnishd -V varnishd (varnish-2.1.3 SVN 5049:5055) Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS It was run with these params: DAEMON_OPTS="-a :80 \ -T localhost:6082 \ -f /usr/local/etc/varnish/bmjgroup.vcl \ -s file,/var/varnish \ -n /var/varnish" and the system was - OS: Red Hat Enterprise Linux Server release 5.5 (Tikanga) - RAM: 16 GB We have recently migrated to a virtualised platform and are now running varnish on a VM running Ubuntu 10.04.3 LTS. This also has 16GB RAM. The version is slightly lower than the previous (as the preference is to use the packaged version rather than roll our own): $ varnishd -V varnishd (varnish-2.1 SVN ) Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS It is started with these params: NFILES=131072 # (used for ulimit -n) MEMLOCK=82000 # (used for ulimit -l) DAEMON_OPTS="-a :80 \ -T localhost:6082 \ -f /etc/varnish/bmjgroup.vcl \ -s file,/var/lib/varnish/$INSTANCE \ -n varnish-01" We are encountering a problem with the new instance which will run fine for about a week and then start to give random errors, generally just resetting the connection to clients. At this point, trying to examine the running process with varnishlog or varnishstat results in the tool instantly segfaulting. Restarting the server fixes everything until it happens again. I'm a little unsure as to the best place to start looking as I try to fix this (admittedly partly as I'm dealing with a large mopping-up exercise after migrating our entire production environment) so wondered whether anyone might have any suggestions. Cheers, Alex. -- Alex Hooper Operations Team Leader, BMJ Group, BMA House, London WC1H 9JR Tel: +44 (0) 20 7383 6049 http://group.bmj.com/ _______________________________________________________________________ The BMJ Group is one of the world's most trusted providers of medical information for doctors, researchers, health care workers and patients group.bmj.com.
This email and any attachments are confidential. If you have received this email in error, please delete it and kindly notify us. If the email contains personal views then the BMJ Group accepts no responsibility for these statements. The recipient should check this email and attachments for viruses because the BMJ Group accepts no liability for any damage caused by viruses. Emails sent or received by the BMJ Group may be monitored for size, traffic, distribution and content. BMJ Publishing Group Limited trading as BMJ Group. A private limited company, registered in England and Wales under registration number 03102371. Registered office: BMA House, Tavistock Square, London WC1H 9JR, UK. _______________________________________________________________________ From hugo.cisneiros at gmail.com Thu Apr 5 17:08:39 2012 From: hugo.cisneiros at gmail.com (Hugo Cisneiros (Eitch)) Date: Thu, 5 Apr 2012 14:08:39 -0300 Subject: segfaults with varnish 2.1 on Ubuntu In-Reply-To: <4F7C5732.6020606@bmjgroup.com> References: <4F7C5732.6020606@bmjgroup.com> Message-ID: On Wed, Apr 4, 2012 at 11:14 AM, Alex Hooper wrote: > We have recently migrated to a virtualised platform and are now running > varnish on a VM running Ubuntu 10.04.3 LTS. This also has 16GB RAM. The > version is slightly lower than the previous (as the preference is to use the > packaged version rather than roll our own): Maybe that's not your case, but I also prefer using packaged versions; I already use varnish 3.x on CentOS 5.6 and Ubuntu 10.04 using packages from varnish's site. The packages are also high quality work :) And there are packages for 2.1.5 there too. I think you should give it a try :) -- []'s Hugo www.devin.com.br From elodieuse at gmail.com Thu Apr 5 22:03:29 2012 From: elodieuse at gmail.com (=?ISO-8859-1?Q?=C9lodie_BOSSIER?=) Date: Fri, 06 Apr 2012 00:03:29 +0200 Subject: pragma no-cache HTML only with beresp.http.Pragma ?
Message-ID: <4F7E16B1.8080301@gmail.com> Greetings, I would like to take into consideration the cache information from an HTML page only, for example this: <meta http-equiv="Pragma" content="no-cache"> Hello world ! And this is a part of my default.vcl (only a debug part): sub vcl_fetch { if ( beresp.http.Pragma ~ "no-cache" ) { set beresp.http.X-Cacheable = "found no-cache !"; return(hit_for_pass); } But it doesn't work; it seems Varnish only sees header information (sent by PHP, for example), and nothing from the HTML page itself. Do you have an idea how to take the HTML into consideration with Varnish 3.0, please? Thanks so much, Elodie. From thierry.magnien at sfr.com Fri Apr 6 12:18:12 2012 From: thierry.magnien at sfr.com (MAGNIEN, Thierry) Date: Fri, 6 Apr 2012 12:18:12 +0000 Subject: pragma no-cache HTML only with beresp.http.Pragma ? In-Reply-To: <4F7E16B1.8080301@gmail.com> References: <4F7E16B1.8080301@gmail.com> Message-ID: <5D103CE839D50E4CBC62C9FD7B83287C02977C@EXCN015.encara.local.ads> Hi, Varnish deals with HTTP headers, not HTML content. The meta tags tell your browser not to cache the page, but if the web server does not add the corresponding HTTP headers in its response, varnish will have no clue about what to do, and will apply the default TTL. Regards, Thierry -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On behalf of Élodie BOSSIER Sent: Friday, 6 April 2012 00:03 To: varnish-misc at varnish-cache.org Subject: pragma no-cache HTML only with beresp.http.Pragma ? Greetings, I would like to take into consideration the cache information from an HTML page only, for example this: <meta http-equiv="Pragma" content="no-cache"> Hello world ! And this is a part of my default.vcl (only a debug part): sub vcl_fetch { if ( beresp.http.Pragma ~ "no-cache" ) { set beresp.http.X-Cacheable = "found no-cache !"; return(hit_for_pass); } But it doesn't work; it seems Varnish only sees header information (sent by PHP, for example), and nothing from the HTML page itself.
Do you have an idea how to take the HTML into consideration with Varnish 3.0, please? Thanks so much, Elodie. _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From carrot at carrotis.com Fri Apr 6 15:53:11 2012 From: carrot at carrotis.com (Calvin Park) Date: Sat, 7 Apr 2012 00:53:11 +0900 Subject: How to keep object even TTL expired. Message-ID: Hello * Environment. - varnish3.0.2 When the TTL expires, varnish unconditionally deletes the object and then re-fetches it from the origin, even if it is the same object. I want to keep the object if it is the same thing; I tried to use 'beresp.keep', but it's not working. Please show me a sample VCL. From perbu at varnish-software.com Sat Apr 7 19:25:44 2012 From: perbu at varnish-software.com (Per Buer) Date: Sat, 7 Apr 2012 21:25:44 +0200 Subject: How to keep object even TTL expired. In-Reply-To: References: Message-ID: Hi Calvin, On Fri, Apr 6, 2012 at 5:53 PM, Calvin Park wrote: > > * Environment. - varnish3.0.2 > > When the TTL expires, varnish unconditionally deletes the object and then > re-fetches it from the origin, even if it is the same object. > I want to keep the object if it is the same thing; I tried to use > 'beresp.keep', but it's not working. > Varnish doesn't do this (yet). > Please show me a sample VCL. > You might want to go over the docs or the "Varnish book". -- Per Buer Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer *Varnish makes websites fly!* Whitepapers | Video | Twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From kelvin1111111 at gmail.com Tue Apr 10 10:36:59 2012 From: kelvin1111111 at gmail.com (Kelvin Loke) Date: Tue, 10 Apr 2012 18:36:59 +0800 Subject: Varnish as Redirection Server Message-ID: I am using Varnish purely for redirection, to cater for the scenario below: 1. If http://www.abc.com/url1, redirect to http://www.abc.com/url2 2.
If http://www.xyz.com/url1, redirect to http://www.xyz.com/url2 I tried to put the req.http.host variable inside, but Varnish throws an error when restarting; I am wondering how to actually get the variable into the error message? ========================================= sub vcl_recv { if (req.url ~ "/url1") { error 750 "http://" req.http.host "/url2"; } error 404 "NOT FOUND"; } sub vcl_error { if (obj.status == 750) { set obj.http.Location = obj.response; set obj.status = 301; return(deliver); } } ========================================= From contact at jpluscplusm.com Tue Apr 10 10:43:19 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 10 Apr 2012 11:43:19 +0100 Subject: Varnish as Redirection Server In-Reply-To: References: Message-ID: On 10 April 2012 11:36, Kelvin Loke wrote: > I am using Varnish purely for redirection, to cater for the scenario below: This may be the wrong list on which to state this opinion but, if you're really *only* doing redirection, I'd look at using nginx for this task. YMMV, as may other list members, but this does seem to me to be using Varnish to its own disadvantage. > 1. If http://www.abc.com/url1, redirect to http://www.abc.com/url2 > 2. If http://www.xyz.com/url1, redirect to http://www.xyz.com/url2 > > I tried to put the req.http.host variable inside, but Varnish throws an > error when restarting You're going to get more help here if you post the error and its context.
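For the archive: the VCL above fails to compile because VCL joins strings with explicit + operators; a hedged sketch of the same redirect with that one change (same status codes and URLs as the question):

```vcl
sub vcl_recv {
    if (req.url ~ "/url1") {
        # Strings are concatenated with "+" in VCL.
        error 750 "http://" + req.http.host + "/url2";
    }
    error 404 "NOT FOUND";
}

sub vcl_error {
    if (obj.status == 750) {
        set obj.http.Location = obj.response;  # reuse the reason string as the target
        set obj.status = 301;
        return(deliver);
    }
}
```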
Jonathan -- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html From kelvin1111111 at gmail.com Tue Apr 10 10:53:48 2012 From: kelvin1111111 at gmail.com (Kelvin Loke) Date: Tue, 10 Apr 2012 18:53:48 +0800 Subject: Varnish as Redirection Server In-Reply-To: References: Message-ID: I have actually just figured it out, I need a "+" sign :) error 750 "http://" + req.http.host + "/url2"; On Tue, Apr 10, 2012 at 6:43 PM, Jonathan Matthews wrote: > On 10 April 2012 11:36, Kelvin Loke wrote: >> I am using Varnish purely for redirection, to cater for the scenario below: > > This may be the wrong list on which to state this opinion but, if > you're really *only* doing redirection, I'd look at using nginx for > this task. YMMV, as may other list members, but this does seem to me > to be using Varnish to its own disadvantage. > >> 1. If http://www.abc.com/url1, redirect to http://www.abc.com/url2 >> 2. If http://www.xyz.com/url1, redirect to http://www.xyz.com/url2 >> >> I tried to put the req.http.host variable inside, but Varnish throws an >> error when restarting > > You're going to get more help here if you post the error and its context. > > Jonathan > -- > Jonathan Matthews > Oxford, London, UK > http://www.jpluscplusm.com/contact.html > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From nbubingo at gmail.com Wed Apr 11 06:03:53 2012 From: nbubingo at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Wed, 11 Apr 2012 14:03:53 +0800 Subject: Varnish can't start occasionally Message-ID: Varnish occasionally fails to start on our server.
The /var/log/messages shows: Apr 10 07:07:31 detailbeta028054.cm4 /home/admin/varnish/cache[32416]: Manager got SIGINT Apr 10 07:07:32 detailbeta028054.cm4 /home/admin/varnish/cache[23091]: Platform: Linux,2.6.18-164.el5xen,x86_64,-sfile,-smalloc,-hcritbit Apr 10 07:07:32 detailbeta028054.cm4 varnishd[23092]: Not running as root, no priv-sep Apr 10 07:07:32 detailbeta028054.cm4 /home/admin/varnish/cache[23091]: child (23092) Started Apr 10 07:07:32 detailbeta028054.cm4 /home/admin/varnish/cache[23091]: Pushing vcls failed: Apr 10 15:07:32 detailbeta028054.cm4 CLI communication error (hdr) Apr 10 07:07:32 detailbeta028054.cm4 /home/admin/varnish/cache[23091]: Child (23092) died status=2 Apr 10 07:07:32 detailbeta028054.cm4 /home/admin/varnish/cache[23091]: Child (-1) said Not running as root, no priv-sep Apr 10 07:07:32 detailbeta028054.cm4 /home/admin/varnish/cache[23091]: Child (-1) said Child starts Apr 10 07:07:32 detailbeta028054.cm4 /home/admin/varnish/cache[23091]: Child (-1) said SMF.s0 mmap'ed 0 bytes of 1073741824 This problem occurs rarely. My varnishd start command is like this: /opt/taobao/install/varnish/sbin/varnishd -P /home/admin/varnish/pid/varnish.pid -a :8888 -f /home/admin/varnish/conf/varnish.vcl -n /home/admin/varnish/cache -T 0.0.0.0:2000 -t 120 -w 5,1000,120 -u admin -g admin -s file,/home/admin/varnish/cache/varnish_storage.bin,1G Can anyone give me some tips? Thanks. From nbubingo at gmail.com Wed Apr 11 09:10:13 2012 From: nbubingo at gmail.com (=?GB2312?B?0qbOsLHz?=) Date: Wed, 11 Apr 2012 17:10:13 +0800 Subject: Varnish can't start occasionally In-Reply-To: References: Message-ID: I found the reason that caused this problem. In our automatic start shell script, there is such a line: main >/dev/null 2>&1 <&- & The "<&-" in the shell script closes STDIN. It seems the varnish CLI interface needs STDIN open, so the child process can't start at the beginning.
You may reproduce this problem like this: /path/to/varnish/init.d/varnish start <&- This problem may be specific to our servers. It's not a bug. Hope this helps someone else who encounters the same problem. 2012/4/11 ??? : > Varnish occasionally fails to start on our server. The /var/log/messages shows: > > Apr 10 07:07:31 detailbeta028054.cm4 /home/admin/varnish/cache[32416]: > Manager got SIGINT > Apr 10 07:07:32 detailbeta028054.cm4 /home/admin/varnish/cache[23091]: > Platform: Linux,2.6.18-164.el5xen,x86_64,-sfile,-smalloc,-hcritbit > Apr 10 07:07:32 detailbeta028054.cm4 varnishd[23092]: Not running as > root, no priv-sep > Apr 10 07:07:32 detailbeta028054.cm4 /home/admin/varnish/cache[23091]: > child (23092) Started > Apr 10 07:07:32 detailbeta028054.cm4 /home/admin/varnish/cache[23091]: > Pushing vcls failed: > Apr 10 15:07:32 detailbeta028054.cm4 CLI communication error (hdr) > Apr 10 07:07:32 detailbeta028054.cm4 /home/admin/varnish/cache[23091]: > Child (23092) died status=2 > Apr 10 07:07:32 detailbeta028054.cm4 /home/admin/varnish/cache[23091]: > Child (-1) said Not running as root, no priv-sep > Apr 10 07:07:32 detailbeta028054.cm4 /home/admin/varnish/cache[23091]: > Child (-1) said Child starts > Apr 10 07:07:32 detailbeta028054.cm4 /home/admin/varnish/cache[23091]: > Child (-1) said SMF.s0 mmap'ed 0 bytes of 1073741824 > > This problem occurs rarely. > > My varnishd start command is like this: > /opt/taobao/install/varnish/sbin/varnishd -P > /home/admin/varnish/pid/varnish.pid -a :8888 -f > /home/admin/varnish/conf/varnish.vcl -n /home/admin/varnish/cache -T > 0.0.0.0:2000 -t 120 -w 5,1000,120 -u admin -g admin -s > file,/home/admin/varnish/cache/varnish_storage.bin,1G > > Can anyone give me some tips? > > Thanks.
From mib at electronic-minds.de Thu Apr 12 17:46:30 2012 From: mib at electronic-minds.de (Michael Borejdo) Date: Thu, 12 Apr 2012 17:46:30 +0000 Subject: Problem with ESI, gzip, Accept-Encoding-Normalisation and 404-Status Message-ID: Hello List, I am facing this problem: I have a page with a few esi:includes, which gets cached and served correctly. My 404-Page has a similar structure. (not getting cached) I am using the "Accept-Encoding"-Normalisation-snippet and remove the accept-encoding header for images et al. So far so good. Now I am requesting a non-existent image (http://example.com/foo.jpg). Varnish sees the .jpg and removes the accept-encoding header. Since the resource is missing, my 404-page is served. (I do not rewrite the uri). The 404-Page is served uncompressed (since we unset the accept-encoding) but contains gzipped esi:include blocks (now showing binary characters/data in the browser). What am I doing wrong? Thanks Michael From sta at netimage.dk Fri Apr 13 06:21:10 2012 From: sta at netimage.dk (=?ISO-8859-1?Q?S=F8ren_Thing_Andersen?=) Date: Fri, 13 Apr 2012 08:21:10 +0200 Subject: Problem with ESI, gzip, Accept-Encoding-Normalisation and 404-Status In-Reply-To: References: Message-ID: <4F87C5D6.40405@netimage.dk> Hi Michael. > The 404-Page is served uncompressed (since we unset the accept-encoding) but contains gzipped esi:include blocks > (now showing binary characters/data in the browser) I don't think you are doing anything wrong - except perhaps using version 2.x? I had the same issue - it was fixed in 3.0.1: https://www.varnish-cache.org/trac/ticket/1029#comment:4 /Thing From mib at electronic-minds.de Fri Apr 13 07:02:54 2012 From: mib at electronic-minds.de (Michael Borejdo) Date: Fri, 13 Apr 2012 07:02:54 +0000 Subject: AW: Problem with ESI, gzip, Accept-Encoding-Normalisation and 404-Status In-Reply-To: <4F87C5D6.40405@netimage.dk> References: <4F87C5D6.40405@netimage.dk> Message-ID: Hello, I am using varnish-3.0.2 revision cbf1284.
Thanks Michael -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On behalf of Søren Thing Andersen Sent: Friday, 13 April 2012 08:21 To: varnish-misc at varnish-cache.org Subject: Re: Problem with ESI, gzip, Accept-Encoding-Normalisation and 404-Status Hi Michael. > The 404-Page is served uncompressed (since we unset the > accept-encoding) but contains gzipped esi:include blocks (now showing > binary characters/data in the browser) I don't think you are doing anything wrong - except perhaps using version 2.x? I had the same issue - it was fixed in 3.0.1: https://www.varnish-cache.org/trac/ticket/1029#comment:4 /Thing _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From aaron.desouza at roh.org.uk Fri Apr 13 11:04:37 2012 From: aaron.desouza at roh.org.uk (Aaron de Souza) Date: Fri, 13 Apr 2012 12:04:37 +0100 Subject: Elastic Load Balancer as backend to Varnish Message-ID: <2090CE17E3960E41A21FB92477DB77C40364EC5737@VARESCO.roh.org.uk> Hi, I'm using Varnish 3.0.2 and my backend is an Amazon ELB so we can use the auto scaling features of AWS to add and remove instances from the load balancer. The problem is that if the backend load balancer is in multiple availability zones then it is given multiple IP addresses, and varnish doesn't like that: Message from VCC-compiler: Backend host "#######.eu-west-1.elb.amazonaws.com": resolves to multiple IPv4 addresses. Only one address is allowed. Please specify which exact address you want to use, we found these: 176.34.#.# 54.247.#.# ('input' Line 14 Pos 13) .host = "###########.eu-west-1.elb.amazonaws.com"; ------------############################################################- Any suggestions on how to get around this problem?
Thanks Aaron -------------- next part -------------- An HTML attachment was scrubbed... URL: From kelvin1111111 at gmail.com Fri Apr 13 11:13:42 2012 From: kelvin1111111 at gmail.com (Kelvin Loke) Date: Fri, 13 Apr 2012 19:13:42 +0800 Subject: Cache HTTP 404 Page Message-ID: I am not sure if this is workable: is there a way for Varnish to cache the 404 page? There is an HTTP flood attack on the website, and it always targets invalid URLs; I do not want this traffic to go to the backend server. Of course I could do more at the network layer, but unfortunately that's out of my control and change requests always take a long time :) From apj at mutt.dk Fri Apr 13 11:33:51 2012 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Fri, 13 Apr 2012 13:33:51 +0200 Subject: Cache HTTP 404 Page In-Reply-To: References: Message-ID: <20120413113351.GA12685@nerd.dk> On Fri, Apr 13, 2012 at 07:13:42PM +0800, Kelvin Loke wrote: > I am not sure if this is workable: is there a way for Varnish to cache the 404 page? Varnish will do so by default. If it doesn't, it's because you have VCL or backend headers (Expires or Cache-Control) preventing the caching. -- Andreas From kelvin1111111 at gmail.com Fri Apr 13 11:54:19 2012 From: kelvin1111111 at gmail.com (Kelvin Loke) Date: Fri, 13 Apr 2012 19:54:19 +0800 Subject: Cache HTTP 404 Page In-Reply-To: <20120413113351.GA12685@nerd.dk> References: <20120413113351.GA12685@nerd.dk> Message-ID: Thanks Andreas, just realized that the backend server sets "Age: 0", this might be the reason why Varnish didn't cache the 404 page. By the way, if Varnish caches 404s by default, what is the default TTL for this? Is there a way to set a custom TTL for 404 caching? On Fri, Apr 13, 2012 at 7:33 PM, Andreas Plesner Jacobsen wrote: > On Fri, Apr 13, 2012 at 07:13:42PM +0800, Kelvin Loke wrote: > >> I am not sure if this is workable: is there a way for Varnish to cache the 404 page? > > Varnish will do so by default.
If it doesn't, it's because you have VCL or > backend headers (Expires or Cache-Control) preventing the caching. > > Thanks Andreas, just realized that the backend server sets "Age: 0", > this might be the reason why Varnish didn't cache the 404 page. No. Age is used to provide clients an indication of how long an object has been stored in a cache. > By the way, if Varnish caches 404s by default, what is the default > TTL for this? Is there a way to set a custom TTL for 404 caching? Same as for every other object: derived from cache-control (max-age, s-maxage), expires or default_ttl. Please fix your quoting.
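Setting such a custom TTL for 404s is a small vcl_fetch hook; a minimal sketch (the 60-second value is an arbitrary example, not a recommendation):

```vcl
sub vcl_fetch {
    if (beresp.status == 404) {
        # Cache negative responses briefly, overriding the backend's headers.
        set beresp.ttl = 60 s;
    }
}
```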
-- Andreas From apj at mutt.dk Fri Apr 13 12:33:57 2012 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Fri, 13 Apr 2012 14:33:57 +0200 Subject: Cache HTTP 404 Page In-Reply-To: <20120413122656.GB12685@nerd.dk> References: <20120413113351.GA12685@nerd.dk> <20120413122656.GB12685@nerd.dk> Message-ID: <20120413123357.GC12685@nerd.dk> On Fri, Apr 13, 2012 at 02:26:56PM +0200, Andreas Plesner Jacobsen wrote: > > Same as for every other object: derived from cache-control (max-age, s-maxage), > expires or default_ttl Or of course set by VCL -- Andreas From vampire.mr at gmail.com Fri Apr 6 09:45:59 2012 From: vampire.mr at gmail.com (Maksim Ryabchenko) Date: Fri, 6 Apr 2012 11:45:59 +0200 Subject: problem with varnish (FetchError) Message-ID: Hi, When I try to open one of my web pages (for example http://81.23.10.255/blogs/260 ) I see a 503 error. This web page differs from the others in that it contains a nonexistent image from a non-existent domain ( http://stage.ibusiness.ru/ow_userfiles/plugins/base/64-51-e1307300732878.jpg ). What should I do so that the page opens? Log file attached. Thanks Best regards, Maxim Ryabchenko -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705090 1.0 13 TxRequest - GET 13 TxURL - /blogs/260 13 TxProtocol - HTTP/1.1 13 TxHeader - Host: 81.23.10.255 13 TxHeader - Cache-Control: max-age=0 13 TxHeader - Authorization: Basic cG1hOmRidkNJOEdidnZZanZPVTRKbU5h 13 TxHeader - User-Agent: Mozilla/5.0 (Windows NT 6.0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11 13 TxHeader - Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 13 TxHeader - Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4 13 TxHeader - Accept-Charset: windows-1251,utf-8;q=0.7,*;q=0.3 13 TxHeader - Cookie: __utma=173066986.1187527007.1333625932.1333697236.1333705664.5; __utmc=173066986; __utmz=173066986.1333625932.1.1.utmcsr=(direct)|utmccn=(direct)|ut 13 TxHeader - Accept-Encoding: gzip 13 TxHeader - X-Forwarded-For: 176.36.111.51, 176.36.111.51 13 TxHeader - X-Varnish: 900698867 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705093 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705096 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705099 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705102 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705105 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705108 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705111 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705114 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705117 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705120 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705123 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705126 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705129 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705132 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705135 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705138 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705141 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705144 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705147 1.0 0 CLI - Rd ping 0 CLI - 
Wr 200 19 PONG 1333705150 1.0 13 BackendClose - default 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705153 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705156 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705159 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705162 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705165 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705168 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705171 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705174 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705177 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705180 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705183 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705186 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705189 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705192 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705195 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705198 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705201 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705204 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705207 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1333705210 1.0 13 BackendOpen b default 127.0.0.1 33985 127.0.0.1 9099 13 TxRequest b GET 13 TxURL b /blogs/260 13 TxProtocol b HTTP/1.1 13 TxHeader b Host: 81.23.10.255 13 TxHeader b Cache-Control: max-age=0 13 TxHeader b Authorization: Basic cG1hOmRidkNJOEdidnZZanZPVTRKbU5h 13 TxHeader b User-Agent: Mozilla/5.0 (Windows NT 6.0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11 13 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 13 TxHeader b Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4 13 TxHeader b Accept-Charset: windows-1251,utf-8;q=0.7,*;q=0.3 13 TxHeader b Cookie: __utma=173066986.1187527007.1333625932.1333697236.1333705664.5; __utmc=173066986; __utmz=173066986.1333625932.1.1.utmcsr=(direct)|utmccn=(direct)|ut 13 TxHeader b Accept-Encoding: gzip 13 TxHeader b X-Forwarded-For: 176.36.111.51, 
176.36.111.51 13 TxHeader b X-Varnish: 900698867 13 BackendClose b default 12 SessionOpen c 176.36.111.51 57241 :80 12 ReqStart c 176.36.111.51 57241 900698867 12 RxRequest c GET 12 RxURL c /blogs/260 12 RxProtocol c HTTP/1.1 12 RxHeader c Host: 81.23.10.255 12 RxHeader c Connection: keep-alive 12 RxHeader c Cache-Control: max-age=0 12 RxHeader c Authorization: Basic cG1hOmRidkNJOEdidnZZanZPVTRKbU5h 12 RxHeader c User-Agent: Mozilla/5.0 (Windows NT 6.0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11 12 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 12 RxHeader c Accept-Encoding: gzip,deflate,sdch 12 RxHeader c Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4 12 RxHeader c Accept-Charset: windows-1251,utf-8;q=0.7,*;q=0.3 12 RxHeader c Cookie: __utma=173066986.1187527007.1333625932.1333697236.1333705664.5; __utmc=173066986; __utmz=173066986.1333625932.1.1.utmcsr=(direct)|utmccn=(direct)|ut 12 VCL_call c recv pass 12 VCL_call c hash 12 Hash c /blogs/260 12 Hash c 81.23.10.255 12 VCL_return c hash 12 VCL_call c pass pass 12 Backend c 13 default default 12 FetchError c http first read error: -1 11 (No error recorded) 12 Backend c 13 default default 12 FetchError c http first read error: -1 11 (No error recorded) 12 VCL_call c error deliver 12 VCL_call c deliver deliver 12 TxProtocol c HTTP/1.1 12 TxStatus c 503 12 TxResponse c Service Unavailable 12 TxHeader c Server: Varnish 12 TxHeader c Content-Type: text/html; charset=utf-8 12 TxHeader c Retry-After: 5 12 TxHeader c Content-Length: 435 12 TxHeader c Accept-Ranges: bytes 12 TxHeader c Date: Fri, 06 Apr 2012 09:40:10 GMT 12 TxHeader c X-Varnish: 900698867 12 TxHeader c Age: 120 12 TxHeader c Via: 1.1 varnish 12 TxHeader c Connection: close 12 Length c 435 12 ReqEnd c 900698867 1333705090.718406439 1333705210.729175329 0.000053167 120.010689974 0.000078917 12 SessionClose c error 12 StatSess c 176.36.111.51 57241 120 1 1 0 1 0 258 435 13 BackendOpen b 
default 127.0.0.1 33993 127.0.0.1 9099
13 TxRequest b GET
13 TxURL b /favicon.ico
13 TxProtocol b HTTP/1.1
13 TxHeader b Host: 81.23.10.255
13 TxHeader b Authorization: Basic cG1hOmRidkNJOEdidnZZanZPVTRKbU5h
13 TxHeader b Accept: */*
13 TxHeader b User-Agent: Mozilla/5.0 (Windows NT 6.0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11
13 TxHeader b Accept-Lan

From esbjorn at bazooka.se  Wed Apr 11 11:26:42 2012
From: esbjorn at bazooka.se (Esbjörn Eriksson)
Date: Wed, 11 Apr 2012 13:26:42 +0200
Subject: "Large" Binary Files triggering 503 Response
Message-ID: 

I got this exact same error. Solved it by doing a check on req.url for
the troublesome URL and piping the response.

if (req.url ~ "/troublesome/") {
    return (pipe);
}

--
Esbjörn Eriksson | http://bazooka.se | 08 31 70 00

From robert.worley at gmail.com  Thu Apr 12 15:26:12 2012
From: robert.worley at gmail.com (Rob Worley)
Date: Thu, 12 Apr 2012 16:26:12 +0100
Subject: Ban Expressions and Grace Mode
Message-ID: 

I'm using "smart bans" to expire large chunks of cache content. In good
conditions everything works great, but I want to serve stale content from
the cache if the backend is down. It appears that Varnish evaluates the
ban before the fetch operation fails, presumably meaning there's nothing
left to serve to the client?

Any advice on handling this requirement, short of adding more backends?
At this stage I'm not even sure that grace works with ban expressions. My
VCL is very similar to this from the Varnish Book, but it's not currently
working:
https://www.varnish-software.com/static/book/Saving_a_request.html#example-evil-backend-hack

I read suggestions re: only accepting ban requests when the backend is
healthy, but I don't see how that will help, given that they are applied
"just in time".

Steps to reproduce:

1. Warm the cache using wget or whatever. Response is good and Varnish is
caching as expected.
2. Kill the "normal" backend process.
3.
Issue the ban expression (note that ban_lurker_sleep=0).
4. Hit the URL with wget again and receive a 503 from Varnish (log output
follows).

TIA for any help/advice,
Rob

3 SessionOpen c 127.0.0.1 64767 0.0.0.0:8080
3 ReqStart c 127.0.0.1 64767 1914142754
3 RxRequest c GET
3 RxURL c /
3 RxProtocol c HTTP/1.0
3 RxHeader c User-Agent: Wget/1.12 (darwin10.6.0)
3 RxHeader c Accept: */*
3 RxHeader c Host: xxxx.dev
3 RxHeader c Connection: Keep-Alive
3 VCL_call c recv 6 41.3 7 42.5 9 48.5 12 59.3 14 64.3 16 70.3 18 75.3 20 82.3 26 97.3 lookup
3 VCL_call c hash 1 24.3
3 Hash c /
3 Hash c xxxx.dev
3 VCL_return c hash
3 ExpBan c 1914142751 was banned
3 VCL_call c miss 30 110.3 32 113.1 67 99.5 fetch
3 FetchError c no backend connection
3 VCL_call c error 47 163.3 48 164.5 restart
3 VCL_call c recv 6 41.3 8 44.5 10 50.5 12 59.3 14 64.3 16 70.3 18 75.3 20 82.3 26 97.3 lookup
3 VCL_call c hash 1 24.3
3 Hash c /
3 Hash c xxxx.dev
3 VCL_return c hash
3 VCL_call c miss 30 110.3 32 113.1 67 99.5 fetch
3 FetchError c no backend connection
3 VCL_call c error 47 163.3 49 167.3 deliver
3 VCL_call c deliver 42 153.3 44 157.3 46 160.1 71 116.5 deliver
3 TxProtocol c HTTP/1.1
3 TxStatus c 503
3 TxResponse c Service Unavailable
3 TxHeader c Server: Varnish
3 TxHeader c Content-Type: text/html; charset=utf-8
3 TxHeader c Retry-After: 30
3 TxHeader c Content-Length: 383
3 TxHeader c Accept-Ranges: bytes
3 TxHeader c Date: Thu, 12 Apr 2012 14:28:03 GMT
3 TxHeader c X-Varnish: 1914142754
3 TxHeader c Age: 0
3 TxHeader c Via: 1.1 varnish
3 TxHeader c Connection: close
3 Length c 383
3 ReqEnd c 1914142754 1334240883.726897955 1334240883.727870941 0.000333071 0.000754118 0.000218868
3 SessionClose c error
3 StatSess c 127.0.0.1 64767 0 1 1 0 0 0 258 383
-------------- next part --------------
An HTML attachment was scrubbed...
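[For reference, the grace pattern from the Varnish Book example cited above boils down to roughly the following Varnish 3 sketch. Host/port and the grace values are placeholders, and this is not tested against smart bans; note that grace only keeps *expired* objects available, while an object invalidated by a ban may simply no longer be eligible to serve, which would match the ExpBan/FetchError sequence in the log:]

```vcl
backend default {
    .host = "127.0.0.1";   # placeholder
    .port = "8080";
    .probe = { .url = "/"; .interval = 5s; }
}

sub vcl_recv {
    if (req.backend.healthy) {
        set req.grace = 30s;   # short grace while the backend is up
    } else {
        set req.grace = 1h;    # serve long-stale content when it is down
    }
}

sub vcl_fetch {
    set beresp.grace = 1h;     # keep objects around this long past TTL
}
```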
URL: 

From hugo.cisneiros at gmail.com  Fri Apr 13 13:34:55 2012
From: hugo.cisneiros at gmail.com (Hugo Cisneiros (Eitch))
Date: Fri, 13 Apr 2012 10:34:55 -0300
Subject: Elastic Load Balancer as backend to Varnish
In-Reply-To: <2090CE17E3960E41A21FB92477DB77C40364EC5737@VARESCO.roh.org.uk>
References: <2090CE17E3960E41A21FB92477DB77C40364EC5737@VARESCO.roh.org.uk>
Message-ID: 

On Fri, Apr 13, 2012 at 8:04 AM, Aaron de Souza wrote:
> Hi,
>
> I'm using Varnish 3.0.2 and my backend is an Amazon ELB so we can use the
> auto scaling features of AWS to add and remove instances from the load
> balancer.
>
> The problem is that if the backend load balancer is in multiple
> availability zones then it is given multiple IP addresses, and varnish
> doesn't like that

This won't be possible using Varnish, at least the usual way. When
varnish compiles its VCL (on startup or on vcl.load), it resolves the DNS
names from the backend definitions and keeps them that way. Because
varnish is very much concerned about performance, it won't do DNS lookups
at run-time and can't accept multiple IP addresses for one backend.

If you look through this mailing list's history, you'll see a few people
(including me) wanting to do that, and some crazy solutions, like
recompiling and loading the VCL from time to time with multiple backends
and a director. But sadly there's no good and secure way to do this. :(

--
[]'s
Hugo
www.devin.com.br

From drtaber at northcarolina.edu  Fri Apr 13 14:08:45 2012
From: drtaber at northcarolina.edu (Douglas R Taber)
Date: Fri, 13 Apr 2012 14:08:45 +0000
Subject: "Large" Binary Files triggering 503 Response
In-Reply-To: 
References: 
Message-ID: <5C9C6B6F-99D6-4262-8A39-5D3A99F6CA4D@northcarolina.edu>

On Apr 11, 2012, at 7:26 AM, Esbjörn Eriksson wrote:

> I got this exact same error. Solved it by doing a check on req.url for the troublesome url and piping the response.
>
> if (req.url ~ "/troublesome/") {
>     return (pipe);
> }
>
> --
> Esbjörn Eriksson | http://bazooka.se | 08 31 70 00

We found that our developers were manually setting a value for the
Content-Length header, which was causing varnish to 503 on us. They were
calculating it in code based on the file system size, causing it to be
incorrect. Removing that manually set header solved the issue.

From kokoniimasu at gmail.com  Fri Apr 13 14:13:52 2012
From: kokoniimasu at gmail.com (kokoniimasu)
Date: Fri, 13 Apr 2012 23:13:52 +0900
Subject: problem with varnish (FetchError)
In-Reply-To: 
References: 
Message-ID: 

Hi, Maksim

Is your backend healthy? The connection is OK, but the response is bad:
it may have exceeded the first_byte_timeout or between_bytes_timeout
(the default for both parameters is 60 sec).

--
Syohei Tanaka(@xcir)
http://xcir.net/
(:3[__])

On 6 April 2012 at 18:45, Maksim Ryabchenko wrote:
> Hi
> When I try to open one of my web pages (for example
> http://81.23.10.255/blogs/260 ) I see a 503 error. This page differs
> from the others in that it contains a nonexistent image from a
> non-existent domain
> (http://stage.ibusiness.ru/ow_userfiles/plugins/base/64-51-e1307300732878.jpg).
> What should I do so that the page opens?
> Log file attached.
> Thanks
>
> Best regards, Maxim Ryabchenko

From shahab371 at gmail.com  Sun Apr 15 17:14:36 2012
From: shahab371 at gmail.com (shahab bakhtiyari)
Date: Sun, 15 Apr 2012 19:14:36 +0200
Subject: ignoring the "uncacheable" header
Message-ID: 

Hi everybody

I am doing some benchmarking tests using the polygraph tool. Polygraph
simulates both client and server processes to benchmark proxies.
When running with varnish, I get a large number of errors saying "hit on
uncacheable objects". It seems that varnish ignores (at least some of)
the uncacheable headers. I am just using the default configuration and am
not familiar with VCL. I would really appreciate it if you could tell me
how to solve the problem.

Regards
Shahab
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From contact at jpluscplusm.com  Sun Apr 15 17:25:15 2012
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Sun, 15 Apr 2012 18:25:15 +0100
Subject: ignoring the "uncacheable" header
In-Reply-To: 
References: 
Message-ID: 

On 15 April 2012 18:14, shahab bakhtiyari wrote:
> Hi everybody
>
> I am doing some benchmarking tests using polygraph tool. Polygraph simulates
> both client and server processes to benchmark proxies. When running with
> varnish, I get large number of errors telling "hit on uncacheable objects".
> It seems that varnish ignores(at least some of) uncacheable headers. I am
> just using the default configurations and not familiar with vcl. Would
> really appreciate if you tell me how to solve the problem

Without more information, I'd guess your problem is the same as that
experienced in this thread:
https://www.varnish-cache.org/lists/pipermail/varnish-misc/2012-March/021826.html
Read the thread through.

Then change your default_ttl setting to 0, or override it in VCL.

Jonathan
--
Jonathan Matthews
Oxford, London, UK
http://www.jpluscplusm.com/contact.html

From fatblowfish at gmail.com  Mon Apr 16 01:00:48 2012
From: fatblowfish at gmail.com (fatblowfish)
Date: Mon, 16 Apr 2012 13:00:48 +1200
Subject: backend polling basic auth
Message-ID: 

Hi,

For the backend health polling, I had to use the .request parameter to
specify the basic auth credential necessary to authenticate with the
backend; I'd definitely get a 401 if I just used .url plainly. For
example (using a bogus URL and password, of course)...
.request =
    "GET /foo/i_is_healthy HTTP/1.1"
    "Host: 127.0.0.5"
    "Authorization: Basic ZmF0Ymxvd2Zpc2g6c29sb25nYW5kdGhhbmtzZm9yYWxsdGhlZmlzaA=="
    "Connection: close";

My question is: is there a more elegant approach to this? I'd rather
*not* have the base64-encoded string in a(ny) file on the server.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shahab371 at gmail.com  Mon Apr 16 18:04:09 2012
From: shahab371 at gmail.com (shahab bakhtiyari)
Date: Mon, 16 Apr 2012 20:04:09 +0200
Subject: ignoring the "uncacheable" header
In-Reply-To: 
References: 
Message-ID: 

Thank you very much Jonathan

I set -t 0 in the /etc/default/varnish file and the problem was partially
solved. Now I don't get the "hit on uncacheable objects" errors, but
there are still many "hit on reload request" errors. Do you have any clue
how to fix that?

Regards
Shahab

On 15 April 2012 19:25, Jonathan Matthews wrote:
> On 15 April 2012 18:14, shahab bakhtiyari wrote:
> > Hi everybody
> >
> > I am doing some benchmarking tests using polygraph tool. Polygraph
> > simulates both client and server processes to benchmark proxies. When
> > running with varnish, I get large number of errors telling "hit on
> > uncacheable objects". It seems that varnish ignores(at least some of)
> > uncacheable headers. I am just using the default configurations and not
> > familiar with vcl. Would really appreciate if you tell me how to solve
> > the problem
>
> Without more information, I'd guess your problem is the same as that
> experienced in this thread:
> https://www.varnish-cache.org/lists/pipermail/varnish-misc/2012-March/021826.html
> Read the thread through.
>
> Then change your default_ttl setting to 0, or override it in VCL.
>
> Jonathan
> --
> Jonathan Matthews
> Oxford, London, UK
> http://www.jpluscplusm.com/contact.html

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mgervais at agaetis.fr  Tue Apr 17 07:46:14 2012
From: mgervais at agaetis.fr (Mickaël GERVAIS)
Date: Tue, 17 Apr 2012 09:46:14 +0200
Subject: Varnish crash every 30 min...
Message-ID: <4F8D1FC6.2020107@agaetis.fr>

Hi all,

I'm using varnish 2, and after some investigation I've seen that it
crashes every 30 minutes and is restarted by the watchdog. I've found
this log:

Apr 12 18:37:16 alpinixv varnishd[30061]: Child (30064) Panic message:
Assert error in VRT_IP_string(), cache_vrt.c line 888:
Condition((p = WS_Alloc(sp->http->ws, len)) != 0) not true.
thread = (cache-worker)
ident = Linux,2.6.33,x86_64,-sfile,-hcritbit,epoll
Backtrace:
0x423e76: /usr/sbin/varnishd [0x423e76]
0x42bde0: /usr/sbin/varnishd(VRT_IP_string+0x120) [0x42bde0]
0x7f8dd71f0644: ./vcl.OaGbHZXp.so [0x7f8dd71f0644]
0x4288a6: /usr/sbin/varnishd(VCL_deliver_method+0x46) [0x4288a6]
0x4136ff: /usr/sbin/varnishd [0x4136ff]
0x414509: /usr/sbin/varnishd(CNT_Session+0x369) [0x414509]
0x426298: /usr/sbin/varnishd [0x426298]
0x42557d: /usr/sbin/varnishd [0x42557d]
0x7f8fddc0573d: /lib64/libpthread.so.0 [0x7f8fddc0573d]
0x7f8fdd4e0d1d: /lib64/libc.so.6(clone+0x6d) [0x7f8fdd4e0d1d]
sp = 0x7f8dd2adf008 { fd = 137, id = 137, xid = 835416094,
client = 92.90.20.11 45272, step = STP_DELIVER, handling = deliver,
restarts = 0, esis = 0 ws = 0x7f8dd2adf080 { overflow id = "sess", {s,f
Apr 12 18:37:16 alpinixv varnishd[30061]: child (31129) Started
Apr 12 18:37:16 alpinixv varnishd[30061]: Child (31129) said
Apr 12 18:37:16 alpinixv varnishd[30061]: Child (31129) said Child starts
Apr 12 18:37:16 alpinixv varnishd[30061]: Child (31129) said managed to mmap 8589934592 bytes of 8589934592

Here is the code:

log "[Deliver] (" client.ip ")" req.url " (" obj.hits ")";

So we have commented out this line, and now we get restarts without the
panic; here is the log:

Apr 15 19:08:37 alpinixv varnishd[13463]: Child (32470) not responding to CLI, killing it.
Apr 15 19:08:38 alpinixv varnishd[13463]: Child (32470) died signal=3
Apr 15 19:08:38 alpinixv varnishd[13463]: child (29841) Started

Any suggestions would be appreciated!

Thanks a lot
Mickael
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arthur.lutz at logilab.fr  Tue Apr 17 15:46:48 2012
From: arthur.lutz at logilab.fr (Arthur Lutz)
Date: Tue, 17 Apr 2012 17:46:48 +0200
Subject: Logging expired objects/URLs
Message-ID: <4F8D9068.9040604@logilab.fr>

Hi list,

I'm trying to log the URLs that expire in my varnish cache (version 2.1).
My first approximation was to use the following command:

varnishlog -u 2>&1 | grep -v RFC --line-buffered | grep -E "(ExpKill|TxURL|TTL)" --line-buffered > /tmp/logvarnish

(I couldn't find a successful combination of -i and -I to produce the
same log.)

My idea was that I could then correlate between TxURL and the object id
found in TTL and ExpKill, but under heavy load it's not that simple;
there is no simple way of getting the object id from previous lines. I
then started a python script to obtain the URLs correlated to have
expired, but the results are not great. I even have an option to do a
urlopen and check the "Age" in the headers (can share if needed).

Any tips?

Arthur

--
Arthur Lutz - LOGILAB, Paris (France).
Formations - http://www.logilab.fr/formations
Développements - http://www.logilab.fr/services
Gestion de connaissances - http://www.cubicweb.org/
Conférence Web Sémantique (mai 2012) - http://www.semweb.pro

From shahab371 at gmail.com  Wed Apr 18 09:05:18 2012
From: shahab371 at gmail.com (shahab bakhtiyari)
Date: Wed, 18 Apr 2012 11:05:18 +0200
Subject: ignoring the "uncacheable" header
In-Reply-To: 
References: 
Message-ID: 

Hi everybody

Isn't there any answer to my previous question out there, about how to
tell varnish not to serve "Reload" requests from the cache?

However, I have 2 further questions and would really appreciate it if
somebody could help; I tried to find the answers in the wiki, without any
success.

1. How do I configure varnish to authenticate against an external server
(like OpenLDAP)? Is it possible at all?
2. How does varnish deal with SSL requests? Does one need further
configuration for that, or is it done by default?

Best regards
Shahab

On 16 April 2012 20:04, shahab bakhtiyari wrote:
> Thank you very much Jonathan
>
> I set -t 0 in the /etc/default/varnish file and the problem was partially
> solved. Now I don't get the "hit on uncacheable objects" errors, but there
> are still many "hit on reload request" errors. Do you have any clue how
> to fix that?
>
> Regards
> Shahab
>
> On 15 April 2012 19:25, Jonathan Matthews wrote:
>> On 15 April 2012 18:14, shahab bakhtiyari wrote:
>> > Hi everybody
>> >
>> > I am doing some benchmarking tests using polygraph tool. Polygraph
>> > simulates both client and server processes to benchmark proxies. When
>> > running with varnish, I get large number of errors telling "hit on
>> > uncacheable objects". It seems that varnish ignores(at least some of)
>> > uncacheable headers. I am just using the default configurations and
>> > not familiar with vcl.
>> > Would really appreciate if you tell me how to solve the problem
>>
>> Without more information, I'd guess your problem is the same as that
>> experienced in this thread:
>> https://www.varnish-cache.org/lists/pipermail/varnish-misc/2012-March/021826.html
>> Read the thread through.
>>
>> Then change your default_ttl setting to 0, or override it in VCL.
>>
>> Jonathan
>> --
>> Jonathan Matthews
>> Oxford, London, UK
>> http://www.jpluscplusm.com/contact.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lampe at hauke-lampe.de  Wed Apr 18 17:44:59 2012
From: lampe at hauke-lampe.de (Hauke Lampe)
Date: Wed, 18 Apr 2012 19:44:59 +0200
Subject: ignoring the "uncacheable" header
In-Reply-To: 
References: 
Message-ID: <4F8EFD9B.2000000@hauke-lampe.de>

Hello Shahab

On 18.04.2012 11:05, shahab bakhtiyari wrote:

> Isn't there any answer to my previous question out there, about how to
> tell varnish not to serve "Reload" requests from the cache?

Use VCL to force a cache miss if the client sets Cache-Control: no-cache.
See https://www.varnish-cache.org/trac/wiki/VCLExampleEnableForceRefresh

> 1. How to configure varnish to authenticate from an external server (like
> openldap), is it possible at all?

varnish doesn't authenticate users (except for ACL lists of IP
addresses). You could program a vmod and query it from within VCL.
Authentication should be handled by the backend, IMHO.

> 2. How varnish deals with ssl requests,

It doesn't. Use an SSL proxy in front of varnish. nginx or pound come to
mind.

Hauke.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 198 bytes
Desc: OpenPGP digital signature
URL: 

From shahab371 at gmail.com  Thu Apr 19 11:26:40 2012
From: shahab371 at gmail.com (shahab bakhtiyari)
Date: Thu, 19 Apr 2012 13:26:40 +0200
Subject: ignoring the "uncacheable" header
In-Reply-To: <4F8EFD9B.2000000@hauke-lampe.de>
References: <4F8EFD9B.2000000@hauke-lampe.de>
Message-ID: 

Hi again,

Thanks for your answer, but I got the following error:

Message from VCC-compiler:
Unknown variable 'req.hash_always_miss'
At: (input Line 21 Pos 11)
          set req.hash_always_miss = true;
----------####################--------
Running VCC-compiler failed, exit 1
VCL compilation failed

My varnish version is varnish-3.0.1.

regards

On 18 April 2012 19:44, Hauke Lampe wrote:
> Hello Shahab
>
> On 18.04.2012 11:05, shahab bakhtiyari wrote:
>
> > Isn't there any answer to my previous question out there, about how to
> > tell varnish not to serve "Reload" requests from the cache?
>
> Use VCL to force a cache miss if the client sets Cache-Control: no-cache.
> See https://www.varnish-cache.org/trac/wiki/VCLExampleEnableForceRefresh
>
> > 1. How to configure varnish to authenticate from an external server (like
> > openldap), is it possible at all?
>
> varnish doesn't authenticate users (except for ACL lists of IP
> addresses). You could program a vmod and query it from within VCL.
> Authentication should be handled by the backend, IMHO.
>
> > 2. How varnish deals with ssl requests,
>
> It doesn't. Use an SSL proxy in front of varnish. nginx or pound come to
> mind.
>
> Hauke.
-------------- next part --------------
An HTML attachment was scrubbed...
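[For reference, the force-refresh pattern from the wiki page cited in this thread boils down to roughly the following sketch. req.hash_always_miss is only writable in vcl_recv, so a compile error like the one above can result from setting it in another subroutine; this is an illustration, not a drop-in fix for the reported error:]

```vcl
sub vcl_recv {
    # Force a miss (fetch from the backend and replace the cached
    # object) when the client explicitly asks for a reload.
    if (req.http.Cache-Control ~ "no-cache") {
        set req.hash_always_miss = true;
    }
}
```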
URL: 

From j.begumisa at gmail.com  Fri Apr 20 01:09:29 2012
From: j.begumisa at gmail.com (Joseph Begumisa)
Date: Thu, 19 Apr 2012 18:09:29 -0700
Subject: Varnish 2.1.5 Assert Error
Message-ID: 

Does anyone have an idea what the panic message below means? I've looked
at the list archives and tickets, and the closest issue I could find was
a defect whereby a long string overflows obj.ws. I'm not too sure that is
what is happening here, but if it helps, the URL in the "err_reason"
below (which I have shortened) was actually 300 characters long. Any help
would be appreciated. Thanks.

---
Apr 19 22:54:16 cache013 Y.Y.Y[4368]: child (28917) Started
Apr 19 22:54:16 cache013 Y.Y.Y[4368]: Child (28917) said
Apr 19 22:54:16 cache013 Y.Y.Y[4368]: Child (28917) said Child starts
Apr 19 22:54:20 cache013 Y.Y.Y[4368]: Child (28917) died signal=6
Apr 19 22:54:20 cache013 Y.Y.Y[4368]: Child (28917) Panic message:
Assert error in WS_Release(), cache_ws.c line 193:
Condition(bytes <= ws->e - ws->f) not true.
thread = (cache-worker)
ident = Linux,2.6.18-164.11.1.el5,x86_64,-smalloc,-hclassic,epoll
Backtrace:
0x423f86: pan_ic+b6
0x42f105: WS_Release+f5
0x4294b6: vrt_assemble_string+f6
0x42d8d5: VRT_SetHdr+f5
0x2aaaac503ee0: _end+2aaaabe90358
0x428846: VCL_error_method+46
0x4137d9: cnt_error+159
0x414271: CNT_Session+321
0x426378: wrk_do_cnt_sess+b8
0x42567e: wrk_thread_real+32e
sp = 0x2aac6e302008 { fd = 161, id = 161, xid = 1694570931,
client = X.X.X.X 51048, step = STP_ERROR, handling = deliver,
err_code = 750, err_reason = /a/b.html
---

Regards,
Joseph

From n.j.saunders at gmail.com  Sat Apr 21 08:25:06 2012
From: n.j.saunders at gmail.com (Neil Saunders)
Date: Sat, 21 Apr 2012 09:25:06 +0100
Subject: Sharing a cache between multiple Varnish servers
Message-ID: 

Hi all -

A two-part question on cache sharing:

a) I've got 3 web servers, each with a 3.5 GB memory cache.
I'd like them to share a cache but don't want to use the experimental
persistent storage backend. Are there any other options?

b) We run a cache-warming script to ensure a certain set of URLs is
always cached, but at the moment the script makes requests to all 3 web
heads to ensure cache consistency. I see that Varnish supports PUT
operations: would it be feasible for the cache warmer to request content
from webhead 1 and make a PUT request to servers 2 & 3? I've searched
high and low for documentation on this but can't find anything.

All help greatly appreciated!

Neil
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From perbu at varnish-software.com  Sat Apr 21 10:36:02 2012
From: perbu at varnish-software.com (Per Buer)
Date: Sat, 21 Apr 2012 12:36:02 +0200
Subject: Sharing a cache between multiple Varnish servers
In-Reply-To: 
References: 
Message-ID: 

Hi Neil.

On Sat, Apr 21, 2012 at 10:25 AM, Neil Saunders wrote:
> Hi all -
>
> A two part question on cache sharing:
>
> a) I've got 3 web servers each with a 3.5Gb memory cache. I'd like them to
> share a cache but don't want to use the experimental persistant storage
> backend - Are there any other options?

I don't think what you have in mind would work. Varnish requires an
explicit lock on the files it manages. Sharing a cache between Varnish
instances won't ever work.

What I would recommend you do is to hash incoming requests based on URL,
so each time the same URL is hit it is served from the same server. That
way you don't duplicate the content between caches. Varnish can do this,
F5's can do it, and haproxy should be able to do this as well.

> b) We run a cache warming script to ensure a certain set of URL's are
> always cached, but at the moment the script requests to all 3 web heads to
> ensure cache consistency - I see that Varnish supports PUT operations -
> Would it be feasible for the cache warmer to request content from webhead 1
> and make a "PUT request to servers 2 & 3?
> I've searched high and low for
> documentation on this but can't find anything.

No. Varnish requires a client requesting the data. But my solution above
would take care of that.

--
Per Buer
Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer
*Varnish makes websites fly!*
Whitepapers | Video | Twitter

From contact at jpluscplusm.com  Sat Apr 21 11:08:44 2012
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Sat, 21 Apr 2012 12:08:44 +0100
Subject: Sharing a cache between multiple Varnish servers
In-Reply-To: 
References: 
Message-ID: 

On 21 April 2012 09:25, Neil Saunders wrote:
> Hi all -
>
> A two part question on cache sharing:
>
> a) I've got 3 web servers each with a 3.5Gb memory cache. I'd like them to
> share a cache but don't want to use the experimental persistant storage
> backend - Are there any other options?

As Per's said, this isn't possible.

> b) We run a cache warming script to ensure a certain set of URL's are always
> cached, but at the moment the script requests to all 3 web heads to ensure
> cache consistency - I see that Varnish supports PUT operations - Would it be
> feasible for the cache warmer to request content from webhead 1 and make a
> "PUT request to servers 2 & 3? I've searched high and low for documentation
> on this but can't find anything.

If Per's suggestion of hashing content across the caches doesn't fit in
with what you're trying to do, how about this:

Put some logic in your VCL that looks out for an "X-Cache-Warming: True"
(or whatever) request header, which your warming script will explicitly
set.

If this header is present, then use VCL to get Varnish to switch over to
another cache as its backend, such that instead of just going
cache1->origin, the request instead goes cache1->cache2->cache3->origin.
You'll need this logic on N-1 of your caches (the last one doesn't need to know it's part of this scheme), and it should enable you to make 1 request to warm N caches. You might even be able to abstract this so it works for cache flushes, too, and not just warming operations. HTH, Jonathan -- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html From n.j.saunders at gmail.com Sat Apr 21 13:29:53 2012 From: n.j.saunders at gmail.com (Neil Saunders) Date: Sat, 21 Apr 2012 14:29:53 +0100 Subject: Sharing a cache between multiple Varnish servers In-Reply-To: References: Message-ID: <1597836141382543290@unknownmsgid> Sent from my iPhone On 21 Apr 2012, at 12:09, Jonathan Matthews wrote: > On 21 April 2012 09:25, Neil Saunders wrote: >> Hi all - >> >> A two part question on cache sharing: >> >> a) I've got 3 web servers each with a 3.5Gb memory cache. I'd like them to >> share a cache but don't want to use the experimental persistant storage >> backend - Are there any other options? > > As Per's said, this isn't possible. > >> b) We run a cache warming script to ensure a certain set of URL's are always >> cached, but at the moment the script requests to all 3 web heads to ensure >> cache consistency - I see that Varnish supports PUT operations - Would it be >> feasible for the cache warmer to request content from webhead 1 and make a >> "PUT request to servers 2 & 3? I've searched high and low for documentation >> on this but can't find anything. > > If Per's suggestion of hashing content across the caches doesn't fit > in with what you're trying to do, how about this. > > Put some logic in your VCL that looks out for a "X-Cache-Warming: > True" (or whatever) request header, which your warming script will > explicitly set. > > If this header is present, then use VCL to get Varnish to switch over > to another cache as its backend such that, instead of just going > cache1->origin, the request instead goes > cache1->cache2->cache3->origin.
You'll need this logic on N-1 of your > caches (the last one doesn't need to know it's part of this scheme) , > and it should enable you to make 1 request to warm N caches. > > You might even be able abstract this so it works for cache flushes, > too, and not just warming operations. > > HTH, > Jonathan > -- > Jonathan Matthews > Oxford, London, UK > http://www.jpluscplusm.com/contact.html > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc Excellent suggestions - thank you both. From bedis9 at gmail.com Sat Apr 21 13:42:16 2012 From: bedis9 at gmail.com (Baptiste) Date: Sat, 21 Apr 2012 15:42:16 +0200 Subject: Sharing a cache between multiple Varnish servers In-Reply-To: References: Message-ID: > > What I would recommend you do is to hash incoming requests based on URL > so each time the same URL is hit it is served from the same server. That > way you don't duplicate the content between caches. Varnish can do this, > F5's can do it, haproxy should be able to do this as well. > Hey, Actually any "layer 7" load-balancer can do it. By the way, HAProxy does it even better than F5 ;) cheers -------------- next part -------------- An HTML attachment was scrubbed... URL: From rainer at ultra-secure.de Sat Apr 21 14:47:27 2012 From: rainer at ultra-secure.de (Rainer Duffner) Date: Sat, 21 Apr 2012 16:47:27 +0200 Subject: Sharing a cache between multiple Varnish servers In-Reply-To: References: Message-ID: <20120421164727.07ce30b1@linux-wb36.example.org> Am Sat, 21 Apr 2012 09:25:06 +0100 schrieb Neil Saunders : > Hi all - > > A two part question on cache sharing: > > a) I've got 3 web servers each with a 3.5Gb memory cache. I'd like > them to share a cache but don't want to use the experimental > persistant storage backend - Are there any other options? I think Java-software like ehcache (or some advanced derivative of it) can do that.
But AFAIK, it requires close integration with the app that is cached. I don't think you can just bolt ehcache on top of stuff like you can do with varnish. Rainer From glenn at mp3lyrics.org Mon Apr 23 09:13:51 2012 From: glenn at mp3lyrics.org (glenn at mp3lyrics.org) Date: Mon, 23 Apr 2012 11:13:51 +0200 Subject: varnishd restarting randomly Message-ID: We've been running varnish on www.mp3lyrics.com for many years, and there has been a problem with varnish restarting at random intervals. Sometimes varnish can work for several days before it automatically restarts (but it's rare), and sometimes it restarts up to 10 times per day (more "normal"). It does not seem to have anything to do with traffic amount. This is starting to get annoying and we've reached the point where we want to either fix the problem with varnish or get another cache machine. Last night varnish didn't restart, but it seems it kind of froze. It didn't fetch data from the webserver, and didn't respond to http-requests from the net (this is the first time I've experienced a varnish freeze like this). When it froze, all I had to do was to manually restart it, then it started running "normally" again. I am unsure whether varnish responds slowly to http-requests from the net during the automatic restarts or not, but I have come across situations when varnish is starting to respond slowly, then I restart it and it starts responding quickly again. This behavior seems totally random, but I have noticed that varnish can seem slow when very many url.purge commands have been sent to it. However, the automatic restarting does not seem to have anything to do with the amount of url.purges sent to varnish (but I can be wrong). mp3lyrics is a fairly big site, it has several million (cacheable) pages, about 200.000 unique users per day, about 300.000 pageviews, and about 4 million varnish requests per day.
The server is in no way overloaded, apache alone is very capable of handling the requests, but a cache machine is of course wanted anyway if it works correctly. Currently varnish has been given 16GB of ram, I've tried with different amounts, but the problem seems to be the same. I need some help to figure out where to start looking for the reason for this. If someone could tell me what kind of server type / configuration and varnish stats / configuration data is needed to start searching for the reason I'll be happy to provide it. (or should this request go to the varnish-bugs mailing list?) - Glenn-Erik Sandbakken From glennerik at glennerik.com Mon Apr 23 11:05:49 2012 From: glennerik at glennerik.com (Glenn-Erik Sandbakken) Date: Mon, 23 Apr 2012 13:05:49 +0200 Subject: varnishd restarting randomly In-Reply-To: References: Message-ID: There's nothing about grace in the vcl configuration used, so if grace is not default we're not using grace. I have never experienced any problems with other applications/processes running on www.mp3lyrics.com. If it's due to bad ram then I guess the only solution is switching to a new server if we want to continue using varnish? Attached is the default.vcl varnish configuration used. - Glenn-Erik (ps I changed my varnish mailing list email address) 2012/4/23 Jonathan Matthews : > Hey Glenn - > > I'm not replying on-list as I'm in a rush, not able to give you > anything definitive, and you sound like you're in a jam, > operationally. > Here're my 2 ideas for you from my sysadmin PoV: > > 1) Dodgy RAM (or other hardware; but most likely RAM). My #1 > suggestion from what you've described. > > 2) Are you using "grace"? It has some undocumented (except on the > list) flaw where it gobbles more and more and more RAM, eventually > leading to something going wrong somewhere. > > Anyway, HTH.
> Jonathan > > On 23 April 2012 10:13, glenn at mp3lyrics.org wrote: >> We've been running varnish on www.mp3lyrics.com for many years, >> there has been a problem with varnish restarting at random intervals. >> Some times varnish can work for several days before it automatically >> restarts (but its rare), and some times it restarts up to 10 times per >> day (more "normal"). >> It does not seem to have anything to do with traffic amount. >> This is starting to get annoying and we've reached the point where we >> want to either fix the problem with varnish or get another cache >> machine. >> Last night varnish didn't restart, but it seems it kind of froze. It >> didn't fetch data from the webserver, and didn't respond to >> http-requsts from the net (this is the first time I've experienced a >> varnish freeze like this) >> When it froze, all I had to do was to manually restart it, then it >> started running "normally" again. >> >> I am unsure whether varnish responds slowly to http-requests from the >> net during the automatic restarts or not, but I have come across >> situations when varnish is starting to respond slowly, then I restart >> it and it starts responding quick again. >> This behavior seems totally random, but I have noticed that varnish >> can seem slow when very many url.purge commands have been sent to it. >> However, the automatic restarting does not seem to have anything to do >> with the amount of url.purges sent to varnish (but I can be wrong). >> >> mp3lyrics is a fairly big site, it has several millions (cacheable) >> pages, about 200.000 unique users per day, about 300.000 pageviews, >> and about 4mill varnish requests per day. >> The server is in no way overloaded, apache alone is very capable of >> handling the requests, but a cache machine is off course wanted anyway >> if it works correctly. >> Currently varnish has been given 16GB of ram, I've tried with >> different amounts, but the problem seems to be the same. 
>> >> I need some help to figure out where to start looking for the reason to this. >> If someone could tell me what kind of server type / configuration and >> varnish stats / configuration data is needed to start searching for >> the reason I'll be happy to provide it. >> >> (or should this request go to the varnish-bugs mailinglist?) >> >> - Glenn-Erik Sandbakken >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > -- > Jonathan Matthews > Oxford, London, UK > http://www.jpluscplusm.com/contact.html -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: default.vcl Type: application/octet-stream Size: 6247 bytes Desc: not available URL: From lasse.karstensen at gmail.com Mon Apr 23 12:37:12 2012 From: lasse.karstensen at gmail.com (Lasse Karstensen) Date: Mon, 23 Apr 2012 14:37:12 +0200 Subject: varnishd restarting randomly In-Reply-To: References: Message-ID: <20120423123711.GA30274@yankee.samfundet.no> Glenn-Erik Sandbakken: > Theres nothing about grace in the vcl configuration used, so if grace is > not default we're not using grace. > I have never experienced any problems with other applications/processes > running on www.mp3lyrics.com > If it's due to bad ram then I guess the only solution is switching to a new > server if we wan't to continue using varnish? > Attached is the default.vcl varnish configuration used. Hi. What version of Varnish are you running? Are you running distribution binaries, or something you've compiled yourself? The management process should log to syslog when the child dies. Can you supply an example of these lines, please. Is there any debugging information available when you run "panic.show" in varnishadm/the management console? 
-- Lasse Karstensen Varnish Software AS From apj at mutt.dk Mon Apr 23 14:03:04 2012 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Mon, 23 Apr 2012 16:03:04 +0200 Subject: varnishd restarting randomly In-Reply-To: References: Message-ID: <20120423140304.GD12685@nerd.dk> On Mon, Apr 23, 2012 at 01:05:49PM +0200, Glenn-Erik Sandbakken wrote: > > 2) Are you using "grace"? It has some undocumented (except on the > > list) flaw where it gobbles more and more and more RAM, eventually > > leading to something going wrong somewhere. I'd like to hear more about this, since I can't recognize what you're describing, and if there's a bug, it should be in the tracker. -- Andreas From glennerik at glennerik.com Mon Apr 23 15:48:16 2012 From: glennerik at glennerik.com (Glenn-Erik Sandbakken) Date: Mon, 23 Apr 2012 17:48:16 +0200 Subject: varnishd restarting randomly In-Reply-To: <20120423140304.GD12685@nerd.dk> References: <20120423140304.GD12685@nerd.dk> Message-ID: We're running varnish version 2.0.2 on FreeBSD 7.0. Varnish has been installed with the freebsd ports method, and upgraded once. It is several years since we upgraded varnish, and it was upgraded in the effort of fixing this problem, without luck. There is no panic.show option in the varnishadm console:

panic.show
101 44
Unknown request.

I will upgrade varnish with the ports method on the server asap, and hope it remedies the situation. BTW: The mp3lyrics.com server where varnish is running is a 64Bit XEON with 32GB ram and 8 processors. The much less powerful server we used prior to the current was a 32Bit XEON with 8GB ram (using a PAE kernel method to overcome the 4GB address limit) with 2 processors. On the previous server we were also running varnish and this was not a problem! One of the main reasons for the server switch was to get more ram for varnish. Usually varnish operates at very low cpu usage (even though traffic is high), but sometimes it uses 100% or more cpu for several minutes.
I'm not sure what varnish is working on during these high cpu usage periods, but it seems that this is when it sometimes restarts (but not always, sometimes it reduces the cpu usage without restarting). In addition, during the high cpu usage periods, sometimes the web response from varnish on mp3lyrics is slow (seemingly due to varnish working), and some times it seems quick despite the high cpu usage. I have no idea for a reason to any of this strange behavior, I used to think it was normal that varnish periodically used much cpu to create indexes or whatever, but there's clearly something that is not working right when it restarts so often. - Glenn-Erik http://www.mp3lyrics.com/ 2012/4/23 Andreas Plesner Jacobsen > On Mon, Apr 23, 2012 at 01:05:49PM +0200, Glenn-Erik Sandbakken wrote: > > > > 2) Are you using "grace"? It has some undocumented (except on the > > > list) flaw where it gobbles more and more and more RAM, eventually > > > leading to something going wrong somewhere. > > I'd like to hear more about this, since I can't recognize what you're > describing, and if there's a bug, it should be in the tracker. > > -- > Andreas > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Mon Apr 23 16:04:52 2012 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 23 Apr 2012 17:04:52 +0100 Subject: varnishd restarting randomly In-Reply-To: <20120423140304.GD12685@nerd.dk> References: <20120423140304.GD12685@nerd.dk> Message-ID: On 23 April 2012 15:03, Andreas Plesner Jacobsen wrote: > On Mon, Apr 23, 2012 at 01:05:49PM +0200, Glenn-Erik Sandbakken wrote: > >> > 2) Are you using "grace"? 
It has some undocumented (except on the >> > list) flaw where it gobbles more and more and more RAM, eventually >> > leading to something going wrong somewhere. > > I'd like to hear more about this, since I can't recognize what you're > describing, and if there's a bug, it should be in the tracker. https://www.varnish-cache.org/lists/pipermail/varnish-misc/2012-March/021886.html was the thread I was thinking of. Jonathan -- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html From apj at mutt.dk Mon Apr 23 16:24:18 2012 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Mon, 23 Apr 2012 18:24:18 +0200 Subject: varnishd restarting randomly In-Reply-To: References: <20120423140304.GD12685@nerd.dk> Message-ID: <20120423162418.GE12685@nerd.dk> On Mon, Apr 23, 2012 at 05:04:52PM +0100, Jonathan Matthews wrote: > > > >> > 2) Are you using "grace"? It has some undocumented (except on the > >> > list) flaw where it gobbles more and more and more RAM, eventually > >> > leading to something going wrong somewhere. > > > > I'd like to hear more about this, since I can't recognize what you're > > describing, and if there's a bug, it should be in the tracker. > > https://www.varnish-cache.org/lists/pipermail/varnish-misc/2012-March/021886.html > was the thread I was thinking of. It's not a flaw as such. It's working as designed: The object is kept in memory for the duration of grace. Rogier's patch is an optimization where graced objects can be discarded sooner, if there are alternatives that can cover delivery of all variants. I'm not even sure that Per has analyzed the situation correctly. I'm much more inclined to believe that the reason for the unbounded memory growth is that the TTL of the objects is less than the shortlived parameter, thus they end up in the Transient store, which has no limit by default. So I really believe that mixing grace into this is a wild goose chase. 
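The Transient-store behaviour described above can also be bounded explicitly instead of being left at its unlimited default. The following is only a sketch, not something from this thread: it assumes Varnish 3.x syntax, where storage backends can be named on the command line, and the sizes are purely illustrative.

=========================================================
# Give the Transient store an explicit ceiling instead of
# the unbounded default (sizes here are only examples):
varnishd ... -s malloc,3G -s Transient=malloc,100M

# Alternatively, keep short-TTL objects out of Transient by
# lowering the shortlived threshold:
varnishadm param.set shortlived 0
=========================================================

Either way, the Transient counters in varnishstat should then show whether that store is where the memory went.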
-- Andreas From apj at mutt.dk Mon Apr 23 18:30:07 2012 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Mon, 23 Apr 2012 20:30:07 +0200 Subject: The grace config eat the memory greatly In-Reply-To: References: Message-ID: <20120423183007.GF12685@nerd.dk> On Fri, Mar 30, 2012 at 11:50:34AM +0200, Per Buer wrote: > > > (..) > > Half an hour later, the total resident memory is about 3.7 G. It's 3 > > times more than the configured size. > > This is a known weakness. DocWilco has provided a patch that I think is > accepted in master. Hopefully it will be out in a future release. I'm > guessing it should not be too hard to backport the patch if you need to. I don't believe that's what's at play here. I think it's a case of TTL below the shortlived parameter, and ending up in the Transient store, which is unbounded by default. Also, remember that you're configuring the size of the store, not the size of varnishd. There will be some overhead for threads and housekeeping. -- Andreas From m.khurram.abbas at gmail.com Tue Apr 24 07:45:55 2012 From: m.khurram.abbas at gmail.com (Khurram Abbas) Date: Tue, 24 Apr 2012 08:45:55 +0100 Subject: No subject Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicole4pt at gmail.com Tue Apr 24 21:50:01 2012 From: nicole4pt at gmail.com (Nicole H.) Date: Tue, 24 Apr 2012 14:50:01 -0700 Subject: Few questions regarding varnish with FreeBSD Message-ID: Hello, I have a few questions about using varnish on FreeBSD: 1) Is it better to use malloc or to create a memory disk? 2) Any idea why a FreeBSD server running varnish takes forever to shut down or restart if rebooted? When performing say a shutdown -r now / -h now, the system will stay at "All Buffers Synced"; once I let it run it took an hour to finally reach Uptime: TIME / The operating system has halted. (for -h) 3) I have a cache server with 3 disks 135 gigs ea and 4 Gigs ram. The disks are only 8G full ea.
Why would my n_lru_nuked (381451) be so high? I understand this to be "The number of objects varnish has had to evict from cache before they expired to make room for other content" Why would this happen when I still have so much room? Thanks for any help Thanks Nicole -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Tue Apr 24 21:55:00 2012 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 24 Apr 2012 21:55:00 +0000 Subject: Few questions regarding varnish with FreeBSD In-Reply-To: Your message of "Tue, 24 Apr 2012 14:50:01 MST." Message-ID: <1639.1335304500@critter.freebsd.dk> In message , "Nicole H." writes: >1) Is it better to use malloc or to create a memory disk? Malloc is better. >2) Any idea why a FreeBSD server running varnish takes forever to shutdown >or restart if rebooted? > When performing say a shutdown -r now / -h now The system will stay at >"All Buffers Synced" once I let it run and it took an hour to finally >reach Uptime: TIME / The operating system has halted.(for -h) Interesting, haven't seen that one before. What happens if you run the "sync" command before shutdown? >3) I have a cache server with 3 disks 135 gigs ea and 4 Gigs ram. The >disks are only 8G full ea. > Why would my n_lru_nuked (381451 ) be so high? If this is a 32bit system, you are limited by the 32bit address space and Varnish can probably only get about 3GB of address space for caching. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence.
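For reference, the two storage setups compared above look like this on the varnishd command line. This is only a sketch: the listen address, backend, path and sizes are placeholders, not values from this thread.

=========================================================
# Cache in anonymous memory (the recommended option):
varnishd -a :80 -b localhost:8080 -s malloc,3G

# Cache in an mmap()ed file instead; note that on a 32bit
# system both variants are still capped by the roughly 3GB
# of usable address space:
varnishd -a :80 -b localhost:8080 -s file,/cache1/varnish-cache.bin,100G
=========================================================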
From janfrode at tanso.net Wed Apr 25 11:57:53 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 25 Apr 2012 13:57:53 +0200 Subject: retry on 503 -- but not right away please Message-ID: <20120425115753.GA30971@dibs.tanso.net> We have a backend apache server that will generate 503 errors on some files because of a TOCTOU problem while replacing the files. Unfortunately we see no way of fixing the updates of these files to be atomic, and avoid the problem -- so I was hoping varnish might be able to solve it for us. I tried to have varnish retry on 503 using:

=========================================================
sub vcl_error {
    # retry on errors
    if (obj.status == 503) {
        if (req.restarts < 12) {
            restart;
        }
    }
}
=========================================================

but it didn't seem to have the desired effect. We still get the 503's. Maybe varnish is trying again too quickly. Is it possible to either insert a short delay before restarting here? Or possibly send the client an HTTP 307 pointing at the same URL to cause a delay that way? Any other ideas? -jf From anders at comoyo.com Wed Apr 25 12:44:37 2012 From: anders at comoyo.com (Anders Daljord Morken) Date: Wed, 25 Apr 2012 14:44:37 +0200 Subject: retry on 503 -- but not right away please In-Reply-To: <20120425115753.GA30971@dibs.tanso.net> References: <20120425115753.GA30971@dibs.tanso.net> Message-ID: I think you want https://www.varnish-cache.org/trac/wiki/VCLExampleSaintMode =) See also https://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html - at M -- "Sooner or later we're going to run out of things to go wrong, and then it'll work great." - Jamie Hyneman, The Mythbusters -------------- next part -------------- An HTML attachment was scrubbed...
URL: From janfrode at tanso.net Wed Apr 25 12:58:36 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 25 Apr 2012 14:58:36 +0200 Subject: retry on 503 -- but not right away please In-Reply-To: References: <20120425115753.GA30971@dibs.tanso.net> Message-ID: <20120425125836.GA31986@dibs.tanso.net> On Wed, Apr 25, 2012 at 02:44:37PM +0200, Anders Daljord Morken wrote: > I think you want https://www.varnish-cache.org/trac/wiki/VCLExampleSaintMode > =) Unfortunately I don't think that helps. I don't have a stale version of these files when it happens (or the stale version would have been outdated) and I only have one backend server. > See also > https://www.varnish-cache.org/docs/trunk/tutorial/handling_misbehaving_servers.html The "Known limitations on grace- and saint mode" on that page says: "If your request fails while it is being fetched you're thrown into vcl_error. vcl_error has access to a rather limited set of data so you can't enable saint mode or grace mode here." -jf From apj at mutt.dk Wed Apr 25 13:08:35 2012 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Wed, 25 Apr 2012 15:08:35 +0200 Subject: retry on 503 -- but not right away please In-Reply-To: <20120425115753.GA30971@dibs.tanso.net> References: <20120425115753.GA30971@dibs.tanso.net> Message-ID: <20120425130835.GG12685@nerd.dk> On Wed, Apr 25, 2012 at 01:57:53PM +0200, Jan-Frode Myklebust wrote: > > if (obj.status == 503) { > if ( req.restarts < 12 ) { > restart; > } > > but it didn't seem to have the desired effect. We still get the 503's. Maybe > varnish is trying again too quickly.. Is it possible to either insert a > short delay before restarting here? Or possibly send the client a HTTP 307 > pointing at the same URL to cause a delay that way? The only effect of the above is probably that you're DoSing the backend :) And unless you changed the default, max_restarts will kick in before you reach 12. > Any other ideas? 
I don't understand why you're seeing this 503 in vcl_error. A backend responding with a 503 would just hit the regular fetch->deliver path, not vcl_error, unless you explicitly return error somewhere in your vcl, so maybe your initial analysis of the problem is wrong? I think my preferred solution would be to return some synthetic page with meta refresh (and that would require you to use error, since that's the only place you can do synthetic currently). -- Andreas From janfrode at tanso.net Wed Apr 25 13:38:13 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 25 Apr 2012 15:38:13 +0200 Subject: retry on 503 -- but not right away please In-Reply-To: <20120425130835.GG12685@nerd.dk> References: <20120425115753.GA30971@dibs.tanso.net> <20120425130835.GG12685@nerd.dk> Message-ID: <20120425133813.GA32526@dibs.tanso.net> On Wed, Apr 25, 2012 at 03:08:35PM +0200, Andreas Plesner Jacobsen wrote: > > > > The only effect of the above is probably that you're DoSing the backend :) It's been happening only a few times a day (eight 503's on a cache that served 1.7 million requests yesterday), so I think that small DoS might be acceptable :-) > And unless you changed the default, max_restarts will kick in before you reach 12. Oh.. didn't know about that one... > > > Any other ideas? > > I don't understand why you're seeing this 503 in vcl_error. A backend > responding with a 503 would just hit the regular fetch->deliver path, not > vcl_error, unless you explicitly return error somewhere in your vcl, so maybe > your initial analysis of the problem is wrong? I'm uncertain where the error is coming from (backend or varnish), but I think I understand why we get it. The file served is PUT on the backend apache server using webdav (mod_dav), so every now and then varnish will request a file that is in the process of being written. Then f.ex. the file length will change between apache stat'ing it, and reading it -- so apache or varnish will bail out with a 503.
> I think my preferred solution would be to return some synthetic page with meta > refresh (and that would require you to use error, since that's the only place > you can do synthetic currently). It's not html-files being served, so I don't know if the client will understand meta refresh. I'm wondering if maybe we need to solve this on the backend then.. write the files to a temp-name and use incron to move them to the right place (we don't have much control over the client writing the files). -jf From apj at mutt.dk Wed Apr 25 14:11:01 2012 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Wed, 25 Apr 2012 16:11:01 +0200 Subject: retry on 503 -- but not right away please In-Reply-To: <20120425133813.GA32526@dibs.tanso.net> References: <20120425115753.GA30971@dibs.tanso.net> <20120425130835.GG12685@nerd.dk> <20120425133813.GA32526@dibs.tanso.net> Message-ID: <20120425141101.GH12685@nerd.dk> On Wed, Apr 25, 2012 at 03:38:13PM +0200, Jan-Frode Myklebust wrote: > > I don't understand why you're seeing this 503 in vcl_error. A backend > > responding with a 503 would just hit the regular fetch->deliver path, not > > vcl_error, unless you explicitly return error somewhere in your vcl, so maybe > > your initial analysis of the problem is wrong? > > I'm uncertain where the error is coming from (backend or varnish), but Please investigate using varnishlog. Something like varnishlog -m TxStatus:503 -- Andreas From nicole4pt at gmail.com Wed Apr 25 21:15:21 2012 From: nicole4pt at gmail.com (Nicole H.) Date: Wed, 25 Apr 2012 14:15:21 -0700 Subject: Few questions regarding varnish with FreeBSD In-Reply-To: <1639.1335304500@critter.freebsd.dk> References: <1639.1335304500@critter.freebsd.dk> Message-ID: On Tue, Apr 24, 2012 at 2:55 PM, Poul-Henning Kamp wrote: > In message oAA_0MN9h4TYH14qYBUJPKE8CBdc5dR80MQA at mail.gmail.com>, "Nicole H." writes: > > >1) Is it better to use malloc or to create a memory disk? > > Malloc is better. 
> > Thanks > >2) Any idea why a FreeBSD server running varnish takes forever to shutdown > >or restart if rebooted? > > When performing say a shutdown -r now / -h now The system will stay at > >"All Buffers Synced" once I let it run and it took an hour to finally > >reach Uptime: TIME / The operating system has halted.(for -h) > > Interesting, havn't seen that one before. > > What happens if you run the "sync" command before shutdown ? > > I have tried that to no avail. This occurs on systems running FreeBSD 8.X and 9.0.

Syslogd: exiting on signal 15
Waiting (max 60 seconds) for system process 'vnlru' to stop... done
Waiting (max 60 seconds) for system process 'bufdaemon' to stop... done
Waiting (max 60 seconds) for system process 'syncer' to stop...Syncing disks, vnodes remaining...2 2 2 1 1 0 0 0 done
All buffers synced.

And that's where it will stay for a very very long time. Is there a way to find out what it's hanging on? What is left for the server to do after that point? ---- Another odd thing is when it boots back up, it says

Starting varnishd
WARNING: (-sfile) file size reduced to 86100456243 (80% of available disk space)
WARNING: (-sfile) file size reduced to 86100456243 (80% of available disk space)
WARNING: (-sfile) file size reduced to 86100456243 (80% of available disk space)

Each disk is 135Gigs formatted. Each shows 24G of space used via df or only 19% Capacity. They are mounted ufs, local, noatime, soft-updates (I know soft updates don't help but they were already there) They are dedicated for use by varnish. However rc.conf settings are varnishd_storage="file,/cache1/varnish-cache.bin,65%" ( I added other entries for a varnish_storage2 and 3) There is no malloc storage on this particular server since there is only 4G of RAM. I have tried adding some but it did not seem to make any difference. So why does it say reduced to 80%? > >3) I have a cache server with 3 disks 135 gigs ea and 4 Gigs ram. The > >disks are only 8G full ea.
> > Why would my n_lru_nuked (381451 ) be so high? > > If this is a 32bit system, you are limited by the 32bit address space > and Varnish can probably only get about 3GB of address space for caching. > > It's running 64 bit software. I basically took a few servers that we had been running squid on and tried running Varnish. Squid reverse proxy always did fine and never seemed to push the system's resources. Is it because it's limited by the 4 gigs of ram or by the number of objects? How can I tell how many objects are stored? Thanks! Nicole > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > -- --- Nicole Melody Harrington - UNIX Systems Mistress admin1 at picturetrail.com -- http://www.picturetrail.com Obey or the button gets it! -------------- next part -------------- An HTML attachment was scrubbed... URL: From chaokovsky.lee at gmail.com Thu Apr 26 08:28:01 2012 From: chaokovsky.lee at gmail.com (Chaos Lee) Date: Thu, 26 Apr 2012 16:28:01 +0800 Subject: unscripted Message-ID: <5D1A8059-5200-44D2-8AA3-FCF16A73E19B@gmail.com> Sent from my iPhone From mkadusale at novenix.net Thu Apr 26 11:13:58 2012 From: mkadusale at novenix.net (Myles Kadusale) Date: Thu, 26 Apr 2012 07:13:58 -0400 Subject: How to cache a page and ignore its cookie? Message-ID: Good Day! I am having a problem with my site because I am using cookies for all my pages. I need to use cookies in order to know if the user has already visited the page in the past so that I would have meaningful data in my SiteCatalyst. Is it possible to set up Varnish to ignore the cookie and still cache the pages? Thank you in advance. Thanks and Regards, Myles From mkadusale at novenix.net Thu Apr 26 11:19:38 2012 From: mkadusale at novenix.net (Myles Kadusale) Date: Thu, 26 Apr 2012 07:19:38 -0400 Subject: Clarification about Varnish Message-ID: Good Day!
I would like to clarify something about varnish's behavior.

I have a page, and it's including a javascript file. Now the page content does not change, or it's a static page, while the javascript file's content is always changing. Would varnish still cache the page and serve the page using the cache?

Thanks,
Myles

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From perbu at varnish-software.com Thu Apr 26 11:38:40 2012
From: perbu at varnish-software.com (Per Buer)
Date: Thu, 26 Apr 2012 13:38:40 +0200
Subject: Clarification about Varnish
In-Reply-To:
References:
Message-ID:

Hi Myles,

On Thu, Apr 26, 2012 at 1:19 PM, Myles Kadusale wrote:
> Good Day!
>
> I would like to clarify something about varnish's behavior.
>
> I have a page, and it's including a javascript file.
> Now the page content does not change, or it's a static page, while the
> javascript file's content is always changing.
> Would varnish still cache the page and serve the page using the cache?

Varnish would cache both the web page and the Javascript file according to headers or configuration. Set a long TTL for the HTML page and a short TTL for the Javascript file and you'll be fine.

--
Per Buer
Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer
*Varnish makes websites fly!*
Whitepapers | Video | Twitter

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From perbu at varnish-software.com Thu Apr 26 11:39:26 2012
From: perbu at varnish-software.com (Per Buer)
Date: Thu, 26 Apr 2012 13:39:26 +0200
Subject: How to cache a page and ignore its cookie?
In-Reply-To:
References:
Message-ID:

Hi

On Thu, Apr 26, 2012 at 1:13 PM, Myles Kadusale wrote:
> Good Day!
>
> I am having a problem with my site because I am using cookies for all my
> pages.
> I need to use cookies in order to know if the user has already visited the
> page in the past, so that I would have meaningful data in my SiteCatalyst.
>
> Is it possible to set up Varnish to ignore the cookie and still cache the
> pages?

Yes, this is possible. It is covered in the documentation, which you should have read before posting here.

--
Per Buer
Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer
*Varnish makes websites fly!*
Whitepapers | Video | Twitter

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From martin.heigermoser at googlemail.com Fri Apr 27 09:47:30 2012
From: martin.heigermoser at googlemail.com (Martin Heigermoser)
Date: Fri, 27 Apr 2012 11:47:30 +0200
Subject: Remove cookie on some pages only
Message-ID: <4F9A6B32.4040207@googlemail.com>

Hello,

I'm pretty new to varnish, so forgive me if this is handled in the documentation. I could not find any info, though.

I need to remove a cookie on some pages while passing it on the other pages. This cookie changes some information in the backend application but is not user specific (think of regions) and should be cached if possible. I'm not quite sure if this is possible at all, so any hints would be appreciated.

I thought of sending out an HTTP header on all pages that need the cookie and removing it on all other pages. From my understanding, the correct place to remove a cookie would be vcl_recv, while the place to read a header from the backend would be vcl_fetch. Is this correct, or am I wrong already?

I'm using the bundled varnish 2.1.5 on ubuntu, if that makes any difference.

Are there any ways to do this?

Thanks,
Martin

-------------- next part --------------
An HTML attachment was scrubbed...
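The cookie handling asked about in the two messages above (caching despite an analytics cookie, and stripping a cookie on some pages only) is normally done by rewriting req.http.Cookie in vcl_recv. A minimal sketch, in Varnish 2.1/3.0-style syntax; the cookie name "region" and the URL pattern are placeholders, not taken from the thread:

```vcl
sub vcl_recv {
    # Hypothetical: pages under /pages/ should be cached without the
    # (placeholder) "region" cookie, while other URLs keep it.
    if (req.url ~ "^/pages/" && req.http.Cookie) {
        # Remove only the "region" cookie, preserving any others.
        set req.http.Cookie = regsuball(req.http.Cookie, "(^|; *)region=[^;]*", "");
        # If no cookies remain, drop the header entirely so the default
        # VCL will serve the page from cache.
        if (req.http.Cookie ~ "^ *$") {
            unset req.http.Cookie;
        }
    }
}
```

For the simpler case where the cookie is only read by client-side analytics (as is typical with SiteCatalyst), a plain `unset req.http.Cookie;` on cacheable URLs is enough. Note that the default VCL also refuses to cache backend responses that carry a Set-Cookie header, so that side may need attention too.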
URL:

From n.j.saunders at gmail.com Fri Apr 27 16:31:29 2012
From: n.j.saunders at gmail.com (Neil Saunders)
Date: Fri, 27 Apr 2012 17:31:29 +0100
Subject: Sharing a cache between multiple Varnish servers
In-Reply-To: <20120421164727.07ce30b1@linux-wb36.example.org>
References: <20120421164727.07ce30b1@linux-wb36.example.org>
Message-ID:

On Sat, Apr 21, 2012 at 3:47 PM, Rainer Duffner wrote:
> On Sat, 21 Apr 2012 09:25:06 +0100, Neil Saunders wrote:
>
> > Hi all -
> >
> > A two part question on cache sharing:
> >
> > a) I've got 3 web servers, each with a 3.5 GB memory cache. I'd like
> > them to share a cache, but don't want to use the experimental
> > persistent storage backend - are there any other options?
>
> I think Java software like ehcache (or some advanced derivative of
> it) can do that.
> But AFAIK, it requires close integration with the app that is cached.
> I don't think you can just bolt ehcache on top of stuff like you can do
> with varnish.
>
> Rainer

Hi all -

I've tried implementing the web server chaining suggested above but have run into a dead end.

Broad configuration: on all servers except the "last" in the chain I've defined:

backend next_web_server {
    .host = "webX.domain.com";
    .port = "80";
}

And have added the following to vcl_recv:

if (req.http.X-Cache-Warming ~ "true") {
    set req.backend = next_web_server;
    set req.hash_always_miss = true;
    return(lookup);
}
set req.backend = system;

And have the following vcl_hash:

sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    }
    return(hash);
}

My issue is that all "real" requests (i.e. those without the X-Cache-Warming header, made after cache warming) are cache missing on all web servers except the last one (the one that's actually going to the real backend during cache warming) - it's like the backend name is being explicitly hashed, but I can't see anything that would indicate what's going on in the documentation.

Any help appreciated!
Ta,
Neil

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From shahab371 at gmail.com Sat Apr 28 11:28:39 2012
From: shahab371 at gmail.com (shahab bakhtiyari)
Date: Sat, 28 Apr 2012 13:28:39 +0200
Subject: varnish and ram size
Message-ID:

Hi

I have configured varnish to use 1 GB of cache:

-s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G

but it seems that varnish aggressively uses the whole 4 GB of RAM of the machine and then crashes, followed by a restart. Here is the log:

Apr 28 13:18:47 varnish varnishd[2952]: Child (5852) not responding to ping, killing it.
Apr 28 13:18:49 varnish varnishd[2952]: Child (5852) not responding to ping, killing it.
Apr 28 13:18:49 varnish varnishd[2952]: Child (5852) died signal=3
Apr 28 13:18:49 varnish varnishd[2952]: Child cleanup complete
Apr 28 13:18:49 varnish varnishd[2952]: child (9522) Started
Apr 28 13:18:49 varnish varnishd[2952]: Child (9522) said
Apr 28 13:18:49 varnish varnishd[2952]: Child (9522) said Child starts
Apr 28 13:18:49 varnish varnishd[2952]: Child (9522) said managed to mmap 6275375104 bytes of 6275375104

What am I doing wrong? Is there a way to prohibit that, or at least tell varnish not to use the whole cache?

Thank you in advance

Shahab B.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From n.j.saunders at gmail.com Mon Apr 30 11:49:17 2012
From: n.j.saunders at gmail.com (Neil Saunders)
Date: Mon, 30 Apr 2012 12:49:17 +0100
Subject: Varnish implicitly hashing backend
Message-ID:

Hi all -

I'm attempting to implement a suggestion provided as a solution to another question I posted regarding cache warming.

Long story short, I have 6 webservers that I'm pre-warming with 60,000 URLs via a script.
I had previously been sending each request to each web server, but it was suggested that it would be much quicker, and indeed more elegant, to set a header (X-Cache-Warming in this case) that, if set, would cause the web server to use the next web server as the backend, until the request reached the last web server, where it would be fetched via the actual backend. Goal: make a single request on the first web server to warm all 6.

The issue I'm seeing is that, following cache warming, I get cache misses on actual requests on all web servers except the last in the chain, which would imply to me that Varnish implicitly hashes the backend name used.

A summary of the VCL used: on all servers except the "last" in the chain I've defined:

backend system {
    .host = "system1.domain.com";
    .port = "80";
}

backend next_web_server {
    .host = "webX.domain.com";
    .port = "80";
}

And have added the following to vcl_recv for all web servers except the last:

# On all webservers except the last START
if (req.http.X-Cache-Warming ~ "true") {
    set req.backend = next_web_server;
    set req.hash_always_miss = true;
    return(lookup);
}
# On all webservers except the last END
set req.backend = system;

And have the following vcl_hash:

sub vcl_hash {
    hash_data(req.url);
    hash_data(req.http.host);
    return(hash);
}

Any help would be very much appreciated, even if only a "yes this is how it works, no there's no workaround" :)

Cheers,
Neil

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From contact at jpluscplusm.com Mon Apr 30 13:18:24 2012
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Mon, 30 Apr 2012 14:18:24 +0100
Subject: Varnish implicitly hashing backend
In-Reply-To:
References:
Message-ID:

On 30 April 2012 12:49, Neil Saunders wrote:
> Hi all -
>
> I'm attempting to implement a suggestion provided as a solution to another
> question I posted regarding cache warming.
>
> Long story short, I have 6 webservers that I'm pre-warming with 60,000 URLs via a
> script.
> I had previously been sending each request to each web server, but
> it was suggested that it would be much quicker, and indeed more elegant, to
> set a header (X-Cache-Warming in this case) that, if set, would
> cause the web server to use the next web server as the backend, until the
> request reached the last web server, where it would be fetched via the actual backend.
> Goal: make a single request on the first web server to warm all 6.
>
> The issue I'm seeing is that, following cache warming, I get cache misses on
> actual requests on all web servers except the last in the chain, which would
> imply to me that Varnish implicitly hashes the backend name used.

I don't have an answer for this, but here's a thought: try explicitly logging the "req.hash" value as late as possible in the vcl_* chain (I don't know when/where it's an acceptable variable to query) to see what it produces.

https://www.varnish-cache.org/docs/3.0/reference/vcl.html#the-hash-director says it uses just this as the key. I don't know if it contains an opaque lookup key, or something more useful.

Perhaps comparing a single request's req.hash value between multiple chained caches in your setup will show something interesting ...

Jonathan
--
Jonathan Matthews
Oxford, London, UK
http://www.jpluscplusm.com/contact.html
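One way to act on Jonathan's suggestion, assuming Varnish 3.0 (where req.hash is no longer directly readable from VCL, but the bundled std vmod provides std.log()): log the exact values fed to hash_data() and compare them across the chained caches. A sketch, reusing the hash inputs from Neil's vcl_hash:

```vcl
import std;

sub vcl_hash {
    hash_data(req.url);
    hash_data(req.http.host);
    # Writes a VCL_Log record into the shared memory log; watch it with:
    #   varnishlog -i VCL_Log
    std.log("hash input: " + req.http.host + req.url);
    return (hash);
}
```

If the logged inputs turn out identical on every server while only the last one gets hits, the hash key itself is not the problem, and the difference is more likely in the stored objects (for example, Vary or Age headers added by the intermediate Varnish instances).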