From rrauenza at gmail.com Mon May 2 19:45:57 2011
From: rrauenza at gmail.com (Rich Rauenzahn)
Date: Mon, 2 May 2011 12:45:57 -0700
Subject: *.keep and if-modified-since
Message-ID: 

I can't seem to get this to work (keeping a file around if its
if-modified-since hasn't changed) -- I'm also having a hard time
validating whether it is working or not.

Right now I observe that the file is taking longer to download, so I
assume it is pulling the entire file again from the apache backend.
(I also see new entries in the apache log -- but I'm unclear if those
would also occur with the if-modified-since header.) I'm also unclear
what to look for in the varnish log.

This is what I have right now for my vcl:

backend build_download1 {
    .host = "build-download1";
    .port = "80";
}

sub vcl_recv {
    #set req.ttl = 1d;
    #set req.grace = 1h;
    #set req.keep = 365d;
}

sub vcl_fetch {
    set beresp.ttl = 1m;  # lower to 1m for testing.
    set beresp.grace = 1h;
    set beresp.keep = 365d;
}

sub vcl_hit {
    #set obj.ttl = 1d;
    #set obj.grace = 1h;
    #set obj.keep = 365d;
}

I've looked at the diagram of the vcl subroutines, and I'm still having
trouble figuring out where to put the timeouts.

What I want is for an object never to be thrown out if the file has not
been modified on the backend (if it is removed, I want it to expire,
obviously). For grace, I want to always serve an old file rather than
wait (so 1 hour should be enough). For TTL, I want to keep objects
around for 12 hours before checking the if-modified timestamp. I also
would want to set the frequency at which it checks if-modified after
that as well -- so I would want to reset the TTL (maybe to 6 hours?)
after an if-modified check.

Anyone have some advice or further documentation to look at?
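For reference, the goals described above map onto VCL roughly as follows. This is only a sketch: the ttl and grace values mirror the stated targets, and beresp.keep is only honored once the experimental if-modified-since patch discussed later in this thread is applied; a stock build of this era ignores the conditional-refresh behavior.

```vcl
sub vcl_fetch {
    # Serve from cache for 12 hours before revalidating against the backend.
    set beresp.ttl = 12h;

    # Prefer serving a stale object over making the client wait for a fetch.
    set beresp.grace = 1h;

    # Keep the expired object body around so it can be refreshed with a
    # conditional (If-Modified-Since / If-None-Match) request instead of a
    # full re-fetch. Only takes effect with the experimental IMS patch.
    set beresp.keep = 365d;
}
```

The "re-check every 6 hours after the first revalidation" part would amount to resetting the TTL when the backend answers 304; whether that is controllable from VCL depends on the patch version, so this sketch leaves it open.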
I've looked at: http://www.varnish-cache.org/trac/wiki/BackendConditionalRequests http://www.varnish-cache.org/trac/wiki/VCL http://www.varnish-cache.org/trac/wiki/VCLExampleDefault Thanks, Rich From geoff at uplex.de Mon May 2 21:42:55 2011 From: geoff at uplex.de (Geoff Simmons) Date: Mon, 02 May 2011 23:42:55 +0200 Subject: *.keep and if-modified-since In-Reply-To: References: Message-ID: <4DBF255F.2080504@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 5/2/11 9:45 PM, Rich Rauenzahn wrote: > I can't seem to get this to work (keeping a file around if its > if-modified-since hasn't changed) -- I'm also having a hard time > validating whether it is working or not. Have you applied the patch that was sent up to varnish-dev? The latest one can be found here: http://www.varnish-cache.org/lists/pipermail/varnish-dev/2011-March/002842.html This feature is not currently in the Varnish trunk, you have to apply the patch. I've been working with the Varnish committers towards getting it into a 3.0.x version, and parts of it are now in the trunk (the keep attribute in VCL). The full functionality is not going to be in 3.0, because the committers are not yet satisfied with the way it's configured. As I've understood the discussion, it's mainly because the feature adds a keep interval that is concurrent with the grace interval for an object whose TTL has expired, and it adds some control over the stale_obj for managing conditional requests, but there's nothing similar for grace. Keep and grace could be handled more consistently, but would require some thought and discussion, and the committers are focusing their energy on getting 3.0 out the door. Real Soon Now there should be an experimental branch in the trunk that includes the IMS feature. It's on my to do list, as soon as I get some customer work out of the way. 
(*grumble*) > Right now I observe that the file is taking longer to download, so I > assume it is pulling the entire file again from the apache backend. > (I also see new entries in the apache log -- but I'm unclear if those > would also occur with the if-modified-since header.) I'm also > unclear what to look for in the varnish log. If you're using the patch, there are a few ways you can verify the IMS feature: - - varnishstat will show non-zero values for fetch_304, if Varnish has sent conditional requests to the backend and received 304 responses; and the patch adds the counter cond_not_validated, which counts the non-304 responses from the backend for conditional requests from Varnish. - - If Varnish receives a 304 response from the backend, then it normally sends the status "200 Ok Not Modified" back to the client. (Unless Varnish decides for other reasons not to send a 200 status; for example, if the client had sent a conditional request, then Varnish may respond with 304 if the object qualifies.) - - Apache logs show status 304 (field %s, which is in most of the predefined custom formats). You could also use %{If-Modified-Since}i and %{If-None-Match}i in your own custom log format to get the contents of those headers. - - Snoop the request/response traffic > This is what I have right now for my vcl: > > backend build_download1 { > .host = "build-download1"; > .port = "80"; > } > > sub vcl_recv { > #set req.ttl = 1d; > #set req.grace = 1h; > #set req.keep = 365d; > } > > sub vcl_fetch { > set beresp.ttl = 1m; # lower to 1m for testing. > set beresp.grace = 1h; > set beresp.keep = 365d; > } > > sub vcl_hit { > #set obj.ttl = 1d; > #set obj.grace = 1h; > #set obj.keep = 365d; > } > > I've looked at the diagram of the vcl subroutines, and I'm still > having trouble figuring out where to put the timeouts. 
> > What I want is for an object never to be thrown out if the file has > not been modified on the backend (if it is removed, I want it to > expire, obviously) For that, you seem to be doing the right thing -- set beresp.keep in vcl_fetch(). It's probably what most people will want to do. req.keep in vcl_recv() sets an upper bound for the keep time just in the current session (like req.ttl and req.grace). That is, if Varnish finds a stale_obj with a keep time stored in cache that is greater than req.keep, then Varnish will act as if req.keep is the keep time, but won't change the keep time stored in cache. You can set obj.keep in vcl_hit() if you want to change the keep time of an object that is already in cache after it gets hit (also like obj.ttl and obj.grace). Please let us know how it goes -- any feedback will help to move the feature along. Thanks, Geoff - -- UPLEX Systemoptimierung Schwanenwik 24 22087 Hamburg http://uplex.de/ Mob: +49-176-63690917 -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.14 (Darwin) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJNvyVeAAoJEOUwvh9pJNURTUwP/1rYCzzFk9S0lK1sa+AIWJUa g/DD1UROIfxZKgr0t33ooJjRcOt/xLzuTh3IwOoiodoPBPEljtS+V1+aKtQFeJei p5AZDNVfefrbOTFrA9bP1w2S9Do1t/sMLxPBNTcV23MMWHPB4uolqr9YSC0FLYc+ JVLwDBCJnfidp89GXuIcL5LsPeZ3GlzSPdk38tMTFdvz3P72yewORmWfw3Wrt2o0 dtNMtXDNZ4w81tAAbgqGQehlD1edKyJszCXz55yb2Z3ZI6dqm7vJZmQ3n66lm5+K 1OR6ttujQqbQGn/74w5xxA4XbH1+MgNg8uUHOqAPuBg9MQhfw0FZsHQwnVloGx/4 /5ftdZt71sgAIkEA0sYUefwiaa3ctmWC7QYK2HWyKwRgP66N5JFEDBwQ3sQUAFBA voEmIIceNeCYAPxLeCMauKciVv15XBPKSMz15+zt7nSuQ0khzTcXutsHkwqm0Cf7 wqZuvBUgJSJ1j2TqhH1o61WgVMUOATEeYJZg/vONfKIdNjyFTJ2bXayHicoYLZyI N7XjtWWqJsfQJwKmQF31thDBhMy8LQRXRbrL/UlyMfAa61ULJEfOjhvaFJ40Jsez Ls+IiJi/Il1zecsL7hpibkl6LDWtMoTQxq4MpJGcpea3bcIsu6eQ/pfQueNbOzj6 SqsH9O70EKRb+rSZBTPF =FS/5 -----END PGP SIGNATURE----- From rrauenza at gmail.com Mon May 2 21:54:57 2011 From: rrauenza at gmail.com (Rich Rauenzahn) Date: Mon, 2 May 2011 
14:54:57 -0700 Subject: *.keep and if-modified-since In-Reply-To: <4DBF255F.2080504@uplex.de> References: <4DBF255F.2080504@uplex.de> Message-ID: > Have you applied the patch that was sent up to varnish-dev? The latest > one can be found here: I have not -- I only pulled a git clone last week. Do you think we're talking about weeks or months for it to get incorporated into mainline? Thanks for the pointer on 304 -- that's what I should be looking for in my apache logs! From michel.andre-de-la-porte at ubisoft.com Mon May 2 21:55:31 2011 From: michel.andre-de-la-porte at ubisoft.com (Michel Andre de la Porte) Date: Mon, 2 May 2011 17:55:31 -0400 Subject: Indexes with Varnish Message-ID: <4DBF2853.8080201@ubisoft.com> Hi, I am trying to get httpd's autoindex module to work through varnish. Right now, it's partially working, as in i get the normal layout Name Last Modified Size Description Parent Directory However, no directories are ever displayed. I've tested apache directly and it is showing the full listing as it's supposed to, however varnish is not. I've tried removing that directory from the caching rules, to no avail. Any help would be appreciated. Thanks Michel From geoff at uplex.de Tue May 3 08:39:18 2011 From: geoff at uplex.de (Geoff Simmons) Date: Tue, 03 May 2011 10:39:18 +0200 Subject: *.keep and if-modified-since In-Reply-To: <4DBF255F.2080504@uplex.de> References: <4DBF255F.2080504@uplex.de> Message-ID: <4DBFBF36.40302@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Following up my own mail, since I forgot something obvious: > If you're using the patch, there are a few ways you can verify the IMS > feature: In varnishlog you can look for a TxHeader containing If-Modified-Since and/or If-None-Match. Certainly a better idea than snooping request/response traffic. 
Best,
Geoff

-- 
** * * UPLEX - Nils Goroll Systemoptimierung

Schwanenwik 24
22087 Hamburg

Tel +49 40 2880 5731
Mob +49 176 636 90917
Fax +49 40 42949753

http://uplex.de

From geoff at uplex.de Tue May 3 09:01:02 2011
From: geoff at uplex.de (Geoff Simmons)
Date: Tue, 03 May 2011 11:01:02 +0200
Subject: *.keep and if-modified-since
In-Reply-To: 
References: <4DBF255F.2080504@uplex.de>
Message-ID: <4DBFC44E.6090502@uplex.de>

On 05/ 2/11 11:54 PM, Rich Rauenzahn wrote:
>
> Do you think we're
> talking about weeks or months for it to get incorporated into
> mainline?

Not before 3.0 is released.

After that, it's hard to say. phk and the team will probably want a
solution for configuring keep and grace that they're comfortable with,
and the result will need to be tested sufficiently. With luck, that can
be closer to weeks than months, and any testing and feedback we can get
will certainly help to get it closer.
Best,
Geoff

-- 
** * * UPLEX - Nils Goroll Systemoptimierung

Schwanenwik 24
22087 Hamburg

Tel +49 40 2880 5731
Mob +49 176 636 90917
Fax +49 40 42949753

http://uplex.de

From mhettwer at team.mobile.de Tue May 3 09:18:09 2011
From: mhettwer at team.mobile.de (Hettwer, Marian)
Date: Tue, 3 May 2011 10:18:09 +0100
Subject: *.keep and if-modified-since
In-Reply-To: <4DBFC44E.6090502@uplex.de>
Message-ID: 

On 03.05.11 11:01, "Geoff Simmons" wrote:
>On 05/ 2/11 11:54 PM, Rich Rauenzahn wrote:
>>
>> Do you think we're
>> talking about weeks or months for it to get incorporated into
>> mainline?
>
>Not before 3.0 is released.
>
>After that, it's hard to say. phk and the team will probably want a
>solution for configuring keep and grace that they're comfortable with,
>and the result will need to be tested sufficiently. With luck, that can
>be closer to weeks than months, and any testing and feedback we can get
>will certainly help to get it closer.

I'm looking forward to help testing.
Conditional gets to the backend is _the_ feature I'm waiting for :) I'll keep a close eye to that mailing list and will help testing where I can. Keep up the good work folks. Regards, Marian From perbu at varnish-software.com Tue May 3 09:24:11 2011 From: perbu at varnish-software.com (Per Buer) Date: Tue, 3 May 2011 11:24:11 +0200 Subject: Indexes with Varnish In-Reply-To: <4DBF2853.8080201@ubisoft.com> References: <4DBF2853.8080201@ubisoft.com> Message-ID: Hi Michael. On Mon, May 2, 2011 at 11:55 PM, Michel Andre de la Porte < michel.andre-de-la-porte at ubisoft.com> wrote: > Hi, > > I am trying to get httpd's autoindex module to work through varnish. Right > now, it's partially working, as in i get the normal layout > > Name Last Modified Size Description > Parent Directory > > However, no directories are ever displayed. I've tested apache directly and > it is showing the full listing as it's supposed to, however varnish is not. > I've tried removing that directory from the caching rules, to no avail. > Could you show us a relevant piece of varnishlog? Per. -- Per Buer, CEO Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer *Varnish makes websites fly!* Whitepapers | Video | Twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From kpettijohn at tarot.com Tue May 3 19:49:31 2011 From: kpettijohn at tarot.com (Kevin Pettijohn) Date: Tue, 3 May 2011 19:49:31 +0000 Subject: Varnish mobile redirect Message-ID: <0F881959-632D-4C5D-87AD-E4C7F1D6146B@tarot.com> Hello Everyone, With some help from Tom at mobiledrupal.com I am trying to get a mobile device redirect setup. The redirects are working in varnish but I am also trying to make it so if a mobile user clicks a link on our mobile site with /?nomobi=true appended it will then not be redirected and will be passed through to our main site where a cookie will be set to keep them on our main site. 
Currently my problem is that I can't seem to get around the redirects
that I have in place with req.url /?nomobi=true and a pass call.

Here is my VCL for the mobile setup:

sub device_detection {
    # Default to thinking it's a PC
    set req.http.X-Device = "pc";

    # Add all possible agent strings
    # These are the most popular agent strings
    if (req.http.User-Agent ~ "iP(hone|od)" ||
        req.http.User-Agent ~ "Android" ||
        req.http.User-Agent ~ "Symbian" ||
        req.http.User-Agent ~ "^BlackBerry" ||
        req.http.User-Agent ~ "^SonyEricsson" ||
        req.http.User-Agent ~ "^Nokia" ||
        req.http.User-Agent ~ "^SAMSUNG" ||
        req.http.User-Agent ~ "^LG" ||
        req.http.User-Agent ~ " webOS") {
        set req.http.X-Device = "mobile";
    }

    # These are some more obscure agent strings
    if (req.http.User-Agent ~ "^PalmSource") {
        set req.http.X-Device = "mobile";
    }

    if (req.http.X-Device == "mobile" && req.url ~ "^/?nomobi=true$") {
        return (pass);
    }

    # Decide if we need redirection
    if (req.http.X-Device == "mobile" && req.url !~ "^/?nomobi=true$") {
        if (req.http.host !~ "mv2.example.com" && req.url !~ "^/?nomobi=true$") {
            error 750 "mv2.example.com";
        } elseif (req.http.host ~ "mv2.example.com" && req.url !~ "^/?nomobi=true$") {
            error 750 "v2.example.com";
        }
    }
}

sub vcl_recv {
    call device_detection;
}

sub vcl_error {
    if (obj.status == 750) {
        if (obj.response ~ "mv2.example.com") {
            set obj.http.Location = "http://mv2.example.com" req.url;
        } elsif (obj.response ~ "v2.example.com") {
            set obj.http.Location = "http://v2.example.com" req.url;
        }
        set obj.status = 302;
    }
}

I am fairly new to varnish so forgive me if I am going about this all
wrong. Any insight would be much appreciated!

-kp
__________________
Kevin Pettijohn
Operations & IT
The Daily Insight Group
kpettijohn at tarot.com

From skyaoj at gmail.com Tue May 3 22:55:44 2011
From: skyaoj at gmail.com (Yaojian)
Date: Wed, 4 May 2011 06:55:44 +0800
Subject: Ban spambot using varnish
Message-ID: 

Hi,

Varnish + Apache + Drupal 6.17, these days suffering from spam bots.
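An editorial side note on the mobile-redirect VCL above, since the nomobi exemption is the part that never fires: in a regular expression an unescaped `?` is a quantifier (it makes the preceding `/` optional), not a literal question mark, so `"^/?nomobi=true$"` can never match the actual URL `/?nomobi=true`. A minimal sketch of the corrected test, reusing the names from the original:

```vcl
sub vcl_recv {
    # "\?" matches a literal question mark; an unescaped "?" in a regex
    # only makes the preceding token optional.
    if (req.http.X-Device == "mobile" && req.url ~ "^/\?nomobi=true$") {
        return (pass);
    }
}
```

The negated checks in the redirect logic (`req.url !~ "^/?nomobi=true$"`) would need the same escaping.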
The varnish's ACL is loaded when it starts and I am looking for a way to avoid restarting varnish after adding a new robot. thx Yaojian From apj at mutt.dk Tue May 3 23:17:39 2011 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Wed, 4 May 2011 01:17:39 +0200 Subject: Ban spambot using varnish In-Reply-To: References: Message-ID: <20110503231739.GC960@nerd.dk> On Wed, May 04, 2011 at 06:55:44AM +0800, Yaojian wrote: > Varnish + Apache + Drupal 6.17 , these days suffering from spam bots. > > The varnish's ACL is loaded when it starts and I am looking for a way > to avoid restarting varnish after adding a new robot. You can use vcl.load, vcl.use and vcl.discard to load new VCL on the fly. -- Andreas From cristian.baciu at softlandro.com Tue May 3 13:14:36 2011 From: cristian.baciu at softlandro.com (cristian.baciu at softlandro.com) Date: Tue, 3 May 2011 16:14:36 +0300 Subject: Allow beresp to set only certain cookies Message-ID: <000001cc0994$0e567a60$2b036f20$@softlandro.com> Hi, I want to allow the backend to set only certain cookies (in the same time I want to remove PHP session cookie). How can I do this? Kind Regards, Cristian -------------- next part -------------- An HTML attachment was scrubbed... URL: From michel.andre-de-la-porte at ubisoft.com Tue May 3 14:11:24 2011 From: michel.andre-de-la-porte at ubisoft.com (Michel Andre de la Porte) Date: Tue, 3 May 2011 10:11:24 -0400 Subject: Indexes with Varnish In-Reply-To: References: <4DBF2853.8080201@ubisoft.com> Message-ID: <4DC00D0C.8060304@ubisoft.com> An HTML attachment was scrubbed... URL: From shib4u at gmail.com Wed May 4 09:45:07 2011 From: shib4u at gmail.com (Shibashish) Date: Wed, 4 May 2011 15:15:07 +0530 Subject: Allow beresp to set only certain cookies In-Reply-To: <000001cc0994$0e567a60$2b036f20$@softlandro.com> References: <000001cc0994$0e567a60$2b036f20$@softlandro.com> Message-ID: (allow only cookes named like fbs_,_twitter_sess,tt,tsk and remove all other cookies) in vcl_recv.... 
if (!req.http.Cookie ~ "fbs_|_twitter_sess|tt|tsk") { unset req.http.Cookie; } in vcl_fetch... if (!beresp.http.set-Cookie ~ "fbs_|_twitter_sess|tt|tsk") { unset beresp.http.set-Cookie; } ShiB. while ( ! ( succeed = try() ) ); On Tue, May 3, 2011 at 6:44 PM, wrote: > Hi, > > > > I want to allow the backend to set only certain cookies (in the same time > I want to remove PHP session cookie). > > How can I do this? > > > > Kind Regards, > > Cristian > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristian.baciu at softlandro.com Wed May 4 09:50:50 2011 From: cristian.baciu at softlandro.com (cristian.baciu at softlandro.com) Date: Wed, 4 May 2011 12:50:50 +0300 Subject: Allow beresp to set certain cookies Message-ID: <000901cc0a40$c0f491a0$42ddb4e0$@softlandro.com> Hi, I want to allow the backend to set only certain cookies (in the same time I want to remove PHP session cookie). How can I do this? Kind Regards, Cristian -------------- next part -------------- An HTML attachment was scrubbed... URL: From chm0dz at gmail.com Wed May 4 17:46:33 2011 From: chm0dz at gmail.com (Cristiano Fernandes (chm0d)) Date: Wed, 4 May 2011 14:46:33 -0300 Subject: [Workaround] Backend SICK!!! (HELP!!!) Message-ID: Whats is worng? vcl_fetch. # MAGICMARKER - if error !=200 fetch on cache. if (req.restarts == 0 && req.url ~ "^/$" && obj.status != 200) { C{ syslog(LOG_INFO, "HIT on vcl_error and config magicmarker on (%s) (%s) (%s)", VRT_r_req_url(sp), VRT_r_obj_response(sp), VRT_r_req_xid(sp)); }C set req.http.magicmarker = "sick"; restart; } ???? HELP!!!! -- _______ ? Cristiano Fernandes Google is my shepherd, no want shall I know -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chm0dz at gmail.com Wed May 4 17:48:24 2011 From: chm0dz at gmail.com (Cristiano Fernandes (chm0d)) Date: Wed, 4 May 2011 14:48:24 -0300 Subject: [Workaround] Backend SICK!!! (HELP!!!) In-Reply-To: References: Message-ID: Ops... sorry vcl_fetch and other is vcl_error if (req.http.host == "www.domain.com" && req.url ~ "^/$" && beresp.status != 200) { C{ syslog(LOG_INFO, "Pass on vcl_fetch with stats != 200 para ativar saintmode (%s) (%s)", VRT_r_req_url(sp), VRT_r_req_xid(sp)); }C set beresp.saintmode = 30s; restart; } set beresp.grace = 30m; :) On Wed, May 4, 2011 at 2:46 PM, Cristiano Fernandes (chm0d) < chm0dz at gmail.com> wrote: > Whats is worng? > > vcl_fetch. > > > # MAGICMARKER - if error !=200 fetch on cache. > if (req.restarts == 0 && req.url ~ "^/$" && obj.status != 200) { > C{ > syslog(LOG_INFO, "HIT on vcl_error and config > magicmarker on (%s) (%s) (%s)", VRT_r_req_url(sp), VRT_r_obj_response(sp), > VRT_r_req_xid(sp)); > }C > set req.http.magicmarker = "sick"; > restart; > } > > ???? HELP!!!! > > -- > _______ > ? Cristiano Fernandes > Google is my shepherd, no want shall I know > -- _______ ? Cristiano Fernandes Google is my shepherd, no want shall I know -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbrownfield at google.com Wed May 4 19:44:49 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Wed, 4 May 2011 12:44:49 -0700 Subject: [Workaround] Backend SICK!!! (HELP!!!) In-Reply-To: References: Message-ID: The first thing I see wrong is that you aren't explaining your problem. You might find that you can figure out the issue yourself if you work through a full explanation. -- kb On Wed, May 4, 2011 at 10:48, Cristiano Fernandes (chm0d) wrote: > Ops... 
sorry > > vcl_fetch and other is vcl_error > > > if (req.http.host == "www.domain.com" && req.url ~ "^/$" && > beresp.status != 200) { > C{ > syslog(LOG_INFO, "Pass on vcl_fetch with stats != > 200 para ativar saintmode (%s) (%s)", VRT_r_req_url(sp), VRT_r_req_xid(sp)); > }C > set beresp.saintmode = 30s; > restart; > } > set beresp.grace = 30m; > > :) > > > On Wed, May 4, 2011 at 2:46 PM, Cristiano Fernandes (chm0d) < > chm0dz at gmail.com> wrote: > >> Whats is worng? >> >> vcl_fetch. >> >> >> # MAGICMARKER - if error !=200 fetch on cache. >> if (req.restarts == 0 && req.url ~ "^/$" && obj.status != 200) { >> C{ >> syslog(LOG_INFO, "HIT on vcl_error and config >> magicmarker on (%s) (%s) (%s)", VRT_r_req_url(sp), VRT_r_obj_response(sp), >> VRT_r_req_xid(sp)); >> }C >> set req.http.magicmarker = "sick"; >> restart; >> } >> >> ???? HELP!!!! >> >> -- >> _______ >> ? Cristiano Fernandes >> Google is my shepherd, no want shall I know >> > > > > -- > _______ > ? Cristiano Fernandes > Google is my shepherd, no want shall I know > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From checker at d6.com Wed May 4 23:55:13 2011 From: checker at d6.com (Chris Hecker) Date: Wed, 04 May 2011 16:55:13 -0700 Subject: varnishncsa not logging POST requests? Message-ID: <4DC1E761.9020306@d6.com> I couldn't figure out why POSTs weren't showing up in my varnish logs, but from looking at the source, it looks like varnishncsa is hard coded to ignore "pipe"? The code is slightly different between 2.0.6 (sorry, I'm on an old version) and current, but both look like they toast the current logline if the session close is a "pipe" or "error" without a flag check to keep the txn. My VCL pipes POSTs, as do most, I assume. 
Is there any way to get these into the log short of modifying the
source?

Thanks,
Chris

From dredd422 at gmail.com Wed May 4 23:57:23 2011
From: dredd422 at gmail.com (Daniel Sell)
Date: Wed, 4 May 2011 16:57:23 -0700
Subject: Ordering of query string parameters
Message-ID: 

My webpage serves different content based on the query string
parameters, and the order of the parameters doesn't matter.

For example, these two URLs result in the same content:

/index.php?a=1&b=2
/index.php?b=2&a=1

The URL generation is out of my control, and there are many possible
parameters.

As far as I can tell, Varnish will generate different hashes for these
two URLs. Is there anything I can do?

Thanks,
Dan

From checker at d6.com Thu May 5 00:16:35 2011
From: checker at d6.com (Chris Hecker)
Date: Wed, 04 May 2011 17:16:35 -0700
Subject: varnishncsa not logging POST requests?
In-Reply-To: <4DC1E761.9020306@d6.com>
References: <4DC1E761.9020306@d6.com>
Message-ID: <4DC1EC63.4000508@d6.com>

Ah, it looks like I should be 'pass'ing POSTs, not 'pipe'ing them.
This makes them show up in the log.

Is this the recommended code
(http://open.blogs.nytimes.com/2010/09/15/using-varnish-so-news-doesnt-break-your-server/)?

# Pass any requests that Varnish does not understand straight to the back end.
if (req.request != "GET" && req.request != "HEAD" &&
    req.request != "PUT" && req.request != "POST" &&
    req.request != "TRACE" && req.request != "OPTIONS" &&
    req.request != "DELETE") {
    return(pipe);
}
/* Non-RFC2616 or CONNECT which is weird. */

# Pass anything other than GET and HEAD directly.
if (req.request != "GET" && req.request != "HEAD") {
    return(pass); /* We deal only with GET and HEAD by default */
}

Chris

On 2011/05/04 16:55, Chris Hecker wrote:
>
> I couldn't figure out why POSTs weren't showing up in my varnish logs,
> but from looking at the source, it looks like varnishncsa is hard coded
> to ignore "pipe"?
The code is slightly different between 2.0.6 (sorry, > I'm on an old version) and current, but both look like they toast the > current logline if the session close is a "pipe" or "error" without a > flag check to keep the txn. > > My VCL pipes POSTs, as do most, I assume. Is there any way to get these > into the log short of modifing the source? > > Thanks, > Chris > > From phk at phk.freebsd.dk Thu May 5 05:54:21 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 05 May 2011 05:54:21 +0000 Subject: Ordering of query string parameters In-Reply-To: Your message of "Wed, 04 May 2011 16:57:23 MST." Message-ID: <4686.1304574861@critter.freebsd.dk> In message , Daniel Sell wr ites: >For example, these two URLs result in the same content: > >/index.php?a=1&b=2 >/index.php?b=2&a=1 > >The URL generation is out of my control, and there are many possible >parameters. > >As far as I can tell, Varnish will generate different hashes for these two >URLs. Is there anything I can do? If you know the parameters are from a finite set, you can rewrite vcl_hash to do them one at a time in a specific order: hash_data (the url) hash_data (param 'a') hash_data (param 'b') hash_data (param 'c') A Vmod which sorts the params alphabetically might be a good idea if this is a general problem -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From jon at fido.net Thu May 5 13:49:54 2011 From: jon at fido.net (Jon Morby) Date: Thu, 05 May 2011 14:49:54 +0100 Subject: assertion error in 2.1.5 Message-ID: <4DC2AB02.8000102@fido.net> Hi We're seeing a number of segfaults / assertion errors in varnish Panic message: Assert error in HSH_Lookup(), cache_hash.c line 376: Condition((o)->magic == 0x32851d42) not true. 
thread = (cache-worker) ident = Linux,2.6.18-238.9.1.el5,x86_64,-sfile,-hcritbit,epoll Backtrace: 0x424446: /usr/sbin/varnishd [0x424446] 0x41e097: /usr/sbin/varnishd(HSH_Lookup+0x317) [0x41e097] 0x4121a0: /usr/sbin/varnishd [0x4121a0] 0x414744: /usr/sbin/varnishd(CNT_Session+0x4a4) [0x414744] 0x426898: /usr/sbin/varnishd [0x426898] 0x425b7d: /usr/sbin/varnishd [0x425b7d] 0x3bf580673d: /lib64/libpthread.so.0 [0x3bf580673d] 0x3bf4cd44bd: /lib64/libc.so.6(clone+0x6d) [0x3bf4cd44bd] sp = 0x2aaaeb642008 { fd = 114, id = 114, xid = 1781230731, client = 80.163.34.184 53244, step = STP_LOOKUP, handling = hash, restarts = 0, esis = 0 ws = 0x2aaaeb642080 { id = "sess", {s,f,r,e} = {0x2aaaeb642cd8,+488,(nil),+65536}, }, http[req] = { ws = 0x2aaaeb642080[sess] "GET", "/skin/frontend/default/bk-denmark/f Is this a known issue, or can we provide more info (if so what?) in order to help track it down further client_conn 19077 7.58 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 61788 24.55 Client requests received cache_hit 1666 0.66 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 55025 21.86 Cache misses backend_conn 60071 23.87 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused fetch_head 3 0.00 Fetch head fetch_length 45421 18.05 Fetch with Length fetch_chunked 9747 3.87 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 2 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 121 . N struct sess_mem n_sess 17 . N struct sess n_object 53856 . N struct object n_vampireobject 0 . 
N unresurrected objects n_objectcore 53856 . N struct objectcore n_objecthead 53920 . N struct objecthead n_smf 107673 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 1 . N large free smf n_vbe_conn 2 . N struct vbe_conn n_wrk 20 . N worker threads n_wrk_create 42 0.02 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 422 0.17 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 1 . N backends n_expired 1169 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 1587 . N LRU moved objects n_deathrow 0 . N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 49906 19.83 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 19075 7.58 Total Sessions s_req 61788 24.55 Total Requests s_pipe 4949 1.97 Total pipe s_pass 148 0.06 Total pass s_fetch 55173 21.92 Total fetch s_hdrbytes 15551147 6178.45 Total header bytes s_bodybytes 483089050 191930.49 Total body bytes sess_closed 5683 2.26 Session Closed sess_pipeline 12 0.00 Session Pipeline sess_readahead 6 0.00 Session Read Ahead sess_linger 56242 22.34 Session Linger sess_herd 53038 21.07 Session herd shm_records 4306688 1711.04 SHM records shm_writes 290053 115.24 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 746 0.30 SHM MTX contention shm_cycles 2 0.00 SHM cycles through buffer sm_nreq 110143 43.76 allocator requests sm_nobj 107672 . outstanding allocations sm_balloc 826855424 . bytes allocated sm_bfree 246886400 . bytes free sma_nreq 0 0.00 SMA allocator requests sma_nobj 0 . SMA outstanding allocations sma_nbytes 0 . SMA outstanding bytes sma_balloc 0 . SMA bytes allocated sma_bfree 0 . SMA bytes free sms_nreq 0 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . 
SMS outstanding bytes sms_balloc 0 . SMS bytes allocated sms_bfree 0 . SMS bytes freed backend_req 55172 21.92 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 1 . N total active purges n_purge_add 1 0.00 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 0 0.00 N objects tested n_purge_re_test 0 0.00 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 56684 22.52 HCB Lookups without lock hcb_lock 54986 21.85 HCB Lookups with lock hcb_insert 54986 21.85 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 2517 1.00 Client uptime backend_retry 0 0.00 Backend conn. retry dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) being called as /usr/sbin/varnishd -n pavodo -P /var/run/varnish-pv.pid -a 93.188.179.124:80 -f /etc/varnish/pavodo.vcl -T 127.0.0.1:6083 -t 120 -w 4,1000,120 -u varnish -g varnish -S /etc/varnish/secret -s file,/var/lib/varnish/varnish_storage.bin,1G From kongfranon at gmail.com Thu May 5 14:26:35 2011 From: kongfranon at gmail.com (Mike Franon) Date: Thu, 5 May 2011 10:26:35 -0400 Subject: Multiple instances of varnish Message-ID: Has anyone run multiple instances of varnish on the same server and how is the performance overall? I am looking into it, because I want to serve up the same url, but depending if it is a bot go to a different backend. I am trying to figure out the best way to do that. 
Thanks From pom at dmsp.de Thu May 5 16:07:57 2011 From: pom at dmsp.de (Stefan Pommerening) Date: Thu, 05 May 2011 18:07:57 +0200 Subject: Cache usage (varnishstat) in persitent mode (2.1.5) Message-ID: <4DC2CB5D.8060605@dmsp.de> Hi all, I am using varnish 2.1.5 and switched to persistent mode for testing purposes. Usually I am monitoring cache usage by calling varnishstat frequently (cron job) and I save the values for sm_balloc and sm_bfree in a rrdtool db file. After switching from -s file to -s persistent it seems that these values aren't fed with reasonable values any more (staying at zero). Is this still normal for 2.1.5 or am I doing it wrong? ^^ Any advice would be great. Thanks. Stefan -- *Dipl.-Inform. Stefan Pommerening Informatik-B?ro: IT-Dienste & Projekte, Consulting & Coaching* http://www.dmsp.de From pom at dmsp.de Thu May 5 16:22:05 2011 From: pom at dmsp.de (Stefan Pommerening) Date: Thu, 05 May 2011 18:22:05 +0200 Subject: Multiple instances of varnish In-Reply-To: References: Message-ID: <4DC2CEAD.6080403@dmsp.de> Am 05.05.2011 16:26, schrieb Mike Franon: > Has anyone run multiple instances of varnish on the same server and > how is the performance overall? > > I am looking into it, because I want to serve up the same url, but > depending if it is a bot go to a different backend. > > I am trying to figure out the best way to do that. Hi Mike, although I am not completely understanding your motivation I still can state that my current customer had two varnish instances running on several machines. Therefore it works :) The reason was not to run out of file descriptors (I was told...) Meanwhile I migrated every two instances to a single instance per machine. Well, performance (with two instances) was ok when you have cpu in mind. On the other hand it's not very effective to have the same objects in two different caches because virtual memory (and/or file cache of course) isn't shared between both instances. 
This results in a lower hit rate and double the number of backend requests.

Technically you have to set up two instances using different instance names via the -n parameter and of course different ports to listen on (client req and cli).

A reasonable setup would be to split domain names at the load balancer and feed them to different varnish instances - those could run on the same hardware. On the other hand you double your overhead memory usage (cache fragmentation, spare threads and such).

I personally see no reason to run more than one varnish instance on the same hardware, and I think you can do everything in VCL.

Stefan

--
*Dipl.-Inform. Stefan Pommerening
Informatik-Büro: IT-Dienste & Projekte, Consulting & Coaching*
http://www.dmsp.de
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kongfranon at gmail.com Thu May 5 16:52:00 2011
From: kongfranon at gmail.com (Mike Franon)
Date: Thu, 5 May 2011 12:52:00 -0400
Subject: Multiple instances of varnish
In-Reply-To: <4DC2CEAD.6080403@dmsp.de>
References: <4DC2CEAD.6080403@dmsp.de>
Message-ID:

Well, what I am trying to do is this: the site is www.testingv.com. At the F5 I check whether the request is coming from a bot, and if it is, I send it to a pool that points to a different virtual host on the Apache server, using a different port specifically for bot requests. Otherwise, if www.testingv.com is a normal browser request, it is sent to the main virtual host on port 80.

I am trying to see if there is a way, using a single instance, to have

1) Varnish using VCL recognize that it is a bot

2) If it is a bot request, varnish will then use a different backend.

Thanks

On Thu, May 5, 2011 at 12:22 PM, Stefan Pommerening wrote:
> On 05.05.2011 16:26, Mike Franon wrote:
>
> Has anyone run multiple instances of varnish on the same server and
> how is the performance overall?
> > I am looking into it, because I want to serve up the same url, but > depending if it is a bot go to a different backend. > > I am trying to figure out the best way to do that. > > Hi Mike, > although I am not completely understanding your motivation I still can state > that my current customer had two varnish instances running on several > machines. > Therefore it works :) > > The reason was not to run out of file descriptors (I was told...) > Meanwhile I migrated every two instances to a single instance per machine. > > Well, performance (with two instances) was ok when you have cpu in mind. > On the other hand it's not very effective to have the same objects in two > different caches > because virtual memory (and/or file cache of course) isn't shared between > both instances. > This results in a lower hitrate and the double number of backend requests. > > Technically you have to setup two instances using different instance names > using the -n parameter and of course different ports to listen on (client > req and cli). > > A reasonable setup would be to split domain names at the load balancer and > feed > them to different varnish instances - those could be run on same hardware. > On the other hand you double your overhead memory usage (cache > fragmentation, > spare threads and such). > > I personally see no reason why to run more than one varnish instance on the > same > hardware and I think you can do everything in VCL. > > Stefan > > -- > > Dipl.-Inform. 
Stefan Pommerening
> Informatik-Büro: IT-Dienste & Projekte, Consulting & Coaching
> http://www.dmsp.de
>

From drais at icantclick.org Thu May 5 17:00:34 2011
From: drais at icantclick.org (david raistrick)
Date: Thu, 5 May 2011 13:00:34 -0400 (EDT)
Subject: Multiple instances of varnish
In-Reply-To:
References: <4DC2CEAD.6080403@dmsp.de>
Message-ID:

On Thu, 5 May 2011, Mike Franon wrote:

> I am trying to see if there is a way with using a single instance
>
> 1) Varnish using VCL to recognize that it is a bot
>
> 2) If it is a bot request, varnish will then use a different backend.

Sure. You don't need multiple varnishes for this.

Define your backend, create a rule that matches on the headers you're matching on, set the backend.

I don't have anything that matches on user agents (which is what I assume you're looking at on the F5), and I'm not going to look at the docs to find out how to match on them, but an example that does the same thing for URIs:

sub vcl_recv {
        #send webservices to its own backend
        if (req.url ~ "^/ws/.*$") {
          set req.backend = default_81;
        }
}

--
david raistrick        http://www.netmeister.org/news/learn2quote.html
drais at icantclick.org             http://www.expita.com/nomime.html

From kongfranon at gmail.com Thu May 5 18:09:46 2011
From: kongfranon at gmail.com (Mike Franon)
Date: Thu, 5 May 2011 14:09:46 -0400
Subject: Multiple instances of varnish
In-Reply-To:
References: <4DC2CEAD.6080403@dmsp.de>
Message-ID:

Great, thanks - exactly what I was looking for. I can look at the http.User-Agent header and match against it to send to that backend, and just disable the rule on the F5 altogether.

On Thu, May 5, 2011 at 1:00 PM, david raistrick wrote:
> On Thu, 5 May 2011, Mike Franon wrote:
>
>> I am trying to see if there is a way with using a single instance
>>
>> 1) Varnish using VCL to recognize that it is a bot
>>
>> 2) If it is a bot request, varnish will then use a different backend.
>
> Sure. You don't need multiple varnishes for this.
>
> Define your backend, create a rule that matches on the headers you're
> matching on, set the backend.
>
> I don't have anything that matches on user agents (which is what I assume
> you're looking at on the F5), and I'm not going to look at the docs to find
> out how to match on them, but an example that does the same thing for URIs:
>
> sub vcl_recv {
>         #send webservices to its own backend
>         if (req.url ~ "^/ws/.*$") {
>           set req.backend = default_81;
>         }
> }
>
> --
> david raistrick        http://www.netmeister.org/news/learn2quote.html
> drais at icantclick.org             http://www.expita.com/nomime.html
>

From rudi at hyperfocusmedia.com Fri May 6 04:43:51 2011
From: rudi at hyperfocusmedia.com (Rudi)
Date: Fri, 06 May 2011 12:43:51 +0800
Subject: Varnish DAEMON_OPTS Options Errors
Message-ID: <4DC37C87.5070709@hyperfocusmedia.com>

Hi,

When using inline C with Varnish I've not been able to get /etc/varnish/default to be happy at start up. I've posted this on stackoverflow.com but no replies.

I've tested inline C with varnish for two things: GeoIP detection and Anti-Site-Scraping functions. The DAEMON_OPTS always complains even though I'm following what others seem to indicate works fine.

My problem is that this command line start up works:

varnishd -f /etc/varnish/varnish-default.conf -s file,/var/lib/varnish/varnish_storage.bin,512M -T 127.0.0.1:2000 -a 0.0.0.0:8080 -p 'cc_command=exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s'

But it errors out when trying to start up from the default start scripts. /etc/default/varnish has this in it:

DAEMON_OPTS="-a :8080 \
             -T localhost:2000 \
             -f /etc/varnish/varnish-default.conf \
             -s file,/var/lib/varnish/varnish_storage.bin,512M \
             -p 'cc_command=exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s'"

The error is:

# /etc/init.d/varnish start
Starting HTTP accelerator: varnishd failed!
storage_file: filename: /var/lib/varnish/vbox.local/varnish_storage.bin size 512 MB.
Error:
Unknown parameter "'cc_command".

If I try changing the last line to:

-p cc_command='exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s'"

its error is now:

# /etc/init.d/varnish start
Starting HTTP accelerator: varnishd failed!
storage_file: filename: /var/lib/varnish/vbox.local/varnish_storage.bin size 512 MB.
Error: Unknown storage method "hared"

It's trying to interpret the '-shared' as -s hared, and 'hared' is not a storage type.

For both GeoIP and the Anti-Site-Scrape setups I've used the exact recommended daemon options, plus have tried all sorts of variations like adding \' and '', but no joy.

Here is a link to the instructions I've followed, which work fine except for the DAEMON_OPTS part:
http://drcarter.info/2010/04/how-fighting-against-scraping-using-varnish-vcl-inline-c-memcached/

I'm using Debian and the exact DAEMON_OPTS as stated in the instructions. Can anyone help with a pointer on what's going wrong here?

Many thanks!

From checker at d6.com Fri May 6 04:57:20 2011
From: checker at d6.com (Chris Hecker)
Date: Thu, 05 May 2011 21:57:20 -0700
Subject: Varnish DAEMON_OPTS Options Errors
In-Reply-To: <4DC37C87.5070709@hyperfocusmedia.com>
References: <4DC37C87.5070709@hyperfocusmedia.com>
Message-ID: <4DC37FB0.4010405@d6.com>

I have no idea, but: Maybe make a separate script that runs cc with all the parms in it and see if that works?

Chris

On 2011/05/05 21:43, Rudi wrote:
> Hi,
>
> When using inline C with Varnish I've not been able to get
> /etc/varnish/default
> to be happy at start up.
>
> I've posted this on stackoverflow.com but no replies.
>
> I've tested inline C with varnish for two things: GeoIP detection and
> Anti-Site-Scraping functions.
>
> The DAEMON_OPTS always complains even though I'm following what other seem
> to indicate works fine.
> > My problem is that this command line start up works: > > varnishd -f /etc/varnish/varnish-default.conf -s > file,/var/lib/varnish/varnish_storage.bin,512M -T 127.0.0.1:2000 -a > 0.0.0.0:8080 -p 'cc_command=exec cc -fpic -shared -Wl,-x > -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s' > > But it errors out with trying to start up from default start scripts: > > /etc/default/varnish has this in it: > > DAEMON_OPTS="-a :8080 \ > -T localhost:2000 \ > -f /etc/varnish/varnish-default.conf \ > -s file,/var/lib/varnish/varnish_storage.bin,512M \ > -p 'cc_command=exec cc -fpic -shared -Wl,-x > -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s'" > > The error is: > > # /etc/init.d/varnish start > Starting HTTP accelerator: varnishd failed! > storage_file: filename: /var/lib/varnish/vbox.local/varnish_storage.bin > size 512 MB. > Error: > Unknown parameter "'cc_command". > > If I try change the last line to: > > -p cc_command='exec cc -fpic -shared -Wl,-x > -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s'" > > It's error is now: > > # /etc/init.d/varnish start > Starting HTTP accelerator: varnishd failed! > storage_file: filename: /var/lib/varnish/vbox.local/varnish_storage.bin > size 512 MB. > Error: Unknown storage method "hared" > > It's trying to interpret the '-shared' as -s hared and 'hared' is not a > storage type. > > > For both GeoIP and the Anti-Site-Scrape I've used the exact recommended > daemon options > plus have tried all sorts of variations like adding \' and '' but no joy. > > Here is a link to the instruction I've followed that work fine except > the DAEMON_OPTS part. > http://drcarter.info/2010/04/how-fighting-against-scraping-using-varnish-vcl-inline-c-memcached/ > > I'm using Debian and the exact DAEMON_OPTS as stated in the instructions. > > Can anyone help with a pointer on what's going wrong here? > > Many thanks! 
> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From rudi at hyperfocusmedia.com Fri May 6 05:15:58 2011 From: rudi at hyperfocusmedia.com (Rudi) Date: Fri, 06 May 2011 13:15:58 +0800 Subject: Varnish DAEMON_OPTS Options Errors In-Reply-To: <4DC37FB0.4010405@d6.com> References: <4DC37C87.5070709@hyperfocusmedia.com> <4DC37FB0.4010405@d6.com> Message-ID: <4DC3840E.4020107@hyperfocusmedia.com> Hi, You mean something like this? 1) DAEMON_OPTS="-a :8080 \ -T localhost:2000 \ -f /srv/xshare/conf/varnish-default.conf-example -S /etc/varnish/secret \ -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,512M \ -p cc_command='exec /etc/varnish/docc.sh'" 2) # cat /etc/varnish/docc.sh cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s 3) # /etc/init.d/varnish start Starting HTTP accelerator: varnishd failed! storage_file: filename: /var/lib/varnish/vbox.local/varnish_storage.bin size 512 MB. Too many arguments (/etc/varnish/docc.sh'...) I've tried a few slight variations but all have errors. Thanks for the suggestion, do you think there's anything else I could try along these lines? 
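[Editor's note] The failures in this thread are consistent with how the shell expands DAEMON_OPTS in the Debian init script: quote characters embedded inside a variable are not re-parsed when the variable is expanded unquoted, so the value is split on whitespace and the quotes are passed through literally. A minimal sketch, with a shortened, hypothetical option string, of what varnishd ends up receiving:

```shell
# A variable holding quoted words: the single quotes here are data,
# not syntax, once the assignment is done.
OPTS="-p 'cc_command=exec cc -o %o %s'"

# Unquoted expansion splits on whitespace without re-reading the quotes,
# which is roughly what the init script does with $DAEMON_OPTS:
printf '<%s>\n' $OPTS
# prints <-p> <'cc_command=exec> <cc> <-o> <%o> <%s'> (one word per line)

# Building the argument list explicitly keeps the value as one word:
set -- -p 'cc_command=exec cc -o %o %s'
printf '<%s>\n' "$@"
# prints <-p> <cc_command=exec cc -o %o %s>
```

This matches the errors reported above: varnishd receives 'cc_command (with a literal quote) as a parameter name, and the later words such as -shared or %o arrive as separate command-line arguments.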
From rudi at hyperfocusmedia.com Fri May 6 05:20:25 2011 From: rudi at hyperfocusmedia.com (Rudi) Date: Fri, 06 May 2011 13:20:25 +0800 Subject: Varnish DAEMON_OPTS Options Errors In-Reply-To: <4DC37FB0.4010405@d6.com> References: <4DC37C87.5070709@hyperfocusmedia.com> <4DC37FB0.4010405@d6.com> Message-ID: <4DC38519.7070106@hyperfocusmedia.com> Hi, Getting closer but still no joy: 1) DAEMON_OPTS="-a :8080 \ -T localhost:2000 \ -f /srv/xshare/conf/varnish-default.conf-example -S /etc/varnish/secret \ -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,512M \ -p cc_command=/etc/varnish/docc.sh" 2) # cat /etc/varnish/docc.sh exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s 3) # /etc/init.d/varnish start Starting HTTP accelerator: varnishd failed! storage_file: filename: /var/lib/varnish/vbox.local/varnish_storage.bin size 512 MB. Message from C-compiler: cc: %s: No such file or directory Running C-compiler failed, exit 1 VCL compilation failed From checker at d6.com Fri May 6 05:36:26 2011 From: checker at d6.com (Chris Hecker) Date: Thu, 05 May 2011 22:36:26 -0700 Subject: Varnish DAEMON_OPTS Options Errors In-Reply-To: <4DC38519.7070106@hyperfocusmedia.com> References: <4DC37C87.5070709@hyperfocusmedia.com> <4DC37FB0.4010405@d6.com> <4DC38519.7070106@hyperfocusmedia.com> Message-ID: <4DC388DA.8080503@d6.com> I think you need the %o and %s on the cc_command, so something like: -p cc_command='script %o %s' and then the script uses $1 and $2. 
Chris On 2011/05/05 22:20, Rudi wrote: > Hi, > > Getting closer but still no joy: > > 1) > DAEMON_OPTS="-a :8080 \ > -T localhost:2000 \ > -f /srv/xshare/conf/varnish-default.conf-example > -S /etc/varnish/secret \ > -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,512M \ > -p cc_command=/etc/varnish/docc.sh" > > > 2) > # cat /etc/varnish/docc.sh > exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h > -lmemcached -o %o %s > > 3) > # /etc/init.d/varnish start > Starting HTTP accelerator: varnishd failed! > storage_file: filename: /var/lib/varnish/vbox.local/varnish_storage.bin > size 512 MB. > Message from C-compiler: > cc: %s: No such file or directory > Running C-compiler failed, exit 1 > VCL compilation failed > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From rudi at hyperfocusmedia.com Fri May 6 05:44:01 2011 From: rudi at hyperfocusmedia.com (Rudi) Date: Fri, 06 May 2011 13:44:01 +0800 Subject: Varnish DAEMON_OPTS Options Errors In-Reply-To: <4DC388DA.8080503@d6.com> References: <4DC37C87.5070709@hyperfocusmedia.com> <4DC37FB0.4010405@d6.com> <4DC38519.7070106@hyperfocusmedia.com> <4DC388DA.8080503@d6.com> Message-ID: <4DC38AA1.5020001@hyperfocusmedia.com> Hi, Thanks a tonne for your replies .. still no joy though. 1) -p cc_command='/etc/varnish/docc.sh %o %s'" 2) # cat /etc/varnish/docc.sh exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s 3) # /etc/init.d/varnish start Starting HTTP accelerator: varnishd failed! storage_file: filename: /var/lib/varnish/vbox.local/varnish_storage.bin size 512 MB. Too many arguments (%o...) 
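[Editor's note] Chris's suggestion above - keep %o and %s in the -p value and read them as positional parameters inside the script - can be sketched as follows. The paths are hypothetical and a stub echo stands in for the real cc invocation, so this only illustrates the mechanism: varnishd substitutes %o (the shared object to build) and %s (the generated C source) before the command runs, so the wrapper sees real paths in $1 and $2. The DAEMON_OPTS quoting issue is a separate problem.

```shell
# Hypothetical wrapper, as it would be installed for
#   -p 'cc_command=/etc/varnish/docc.sh %o %s'
mkdir -p /tmp/varnish-demo
cat > /tmp/varnish-demo/docc.sh <<'EOF'
#!/bin/sh
# $1 = %o (shared object to produce), $2 = %s (generated C source)
echo "would run: cc -fpic -shared -Wl,-x -lmemcached -o $1 $2"
EOF
chmod +x /tmp/varnish-demo/docc.sh

# Simulate the substituted command varnishd would run:
out=$(/tmp/varnish-demo/docc.sh /tmp/vcl.so /tmp/vcl.c)
echo "$out"
# prints: would run: cc -fpic -shared -Wl,-x -lmemcached -o /tmp/vcl.so /tmp/vcl.c
```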
From checker at d6.com Fri May 6 05:52:13 2011 From: checker at d6.com (Chris Hecker) Date: Thu, 05 May 2011 22:52:13 -0700 Subject: Varnish DAEMON_OPTS Options Errors In-Reply-To: <4DC38AA1.5020001@hyperfocusmedia.com> References: <4DC37C87.5070709@hyperfocusmedia.com> <4DC37FB0.4010405@d6.com> <4DC38519.7070106@hyperfocusmedia.com> <4DC388DA.8080503@d6.com> <4DC38AA1.5020001@hyperfocusmedia.com> Message-ID: <4DC38C8D.3000800@d6.com> Your script still has %o and %s in it, which are going to need to be $1 and $2 or something like that (not a bash expert). But, that's probably not your problem, since it looks like it's erroring before you get to the command. Maybe it will just implicitly pass %o and %s as $1 and $2 or something. You should replace the body of your script with an echo > file.txt and see how varnish is calling it. Return an error so varnish will bail after calling you. If that doesn't work, then I would browse the source code at the version of varnish you're running and see how it calls the cc_command: http://www.varnish-cache.org/trac/browser The Visit: listbox will let you choose your version. Chris On 2011/05/05 22:44, Rudi wrote: > Hi, > > Thanks a tonne for your replies .. still no joy though. > > > 1) > -p cc_command='/etc/varnish/docc.sh %o %s'" > > 2) > # cat /etc/varnish/docc.sh > exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h > -lmemcached -o %o %s > > 3) > # /etc/init.d/varnish start > Starting HTTP accelerator: varnishd failed! > storage_file: filename: /var/lib/varnish/vbox.local/varnish_storage.bin > size 512 MB. > Too many arguments (%o...) 
> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From rudi at hyperfocusmedia.com Fri May 6 06:07:16 2011 From: rudi at hyperfocusmedia.com (Rudi) Date: Fri, 06 May 2011 14:07:16 +0800 Subject: Varnish DAEMON_OPTS Options Errors In-Reply-To: <4DC38C8D.3000800@d6.com> References: <4DC37C87.5070709@hyperfocusmedia.com> <4DC37FB0.4010405@d6.com> <4DC38519.7070106@hyperfocusmedia.com> <4DC388DA.8080503@d6.com> <4DC38AA1.5020001@hyperfocusmedia.com> <4DC38C8D.3000800@d6.com> Message-ID: <4DC39014.1050400@hyperfocusmedia.com> Hi, On 5/6/2011 1:52 PM, Chris Hecker wrote: > > Your script still has %o and %s in it, which are going to need to be > $1 and $2 or something like that (not a bash expert). But, that's > probably not your problem, since it looks like it's erroring before > you get to the command. Maybe it will just implicitly pass %o and %s > as $1 and $2 or something. Yes will have to dig deeper. It doesn't appear to be implicitly passing any variables: 1) -p cc_command=/etc/varnish/docc.sh" 2a) # cat /etc/varnish/docc.sh exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h -lmemcached -o $1 $2 OR 2b) # cat /etc/varnish/docc.sh #!/bin/bash exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h -lmemcached -o $1 $2 3) # /etc/init.d/varnish start Starting HTTP accelerator: varnishd failed! storage_file: filename: /var/lib/varnish/vbox.local/varnish_storage.bin size 512 MB. Message from C-compiler: cc: argument to '-o' is missing Running C-compiler failed, exit 1 VCL compilation failed It does appear '-p cc_command= ' wants a single input and not a space separated list like -p cc_command='/etc/varnish/docc.sh %o %s'" . I will have a look at the source code from the link you provided. 
I'm not much of a C programmer so if it's a dead end for me perhaps I'll just try re-write these debian start scripts into something simpler. If I just use the command line everything starts fine so maybe I should just work with that. Ex: varnishd -f /srv/xshare/conf/varnish-default.conf-example -s file,/var/lib/varnish/varnish_storage.bin,512M -T 127.0.0.1:2000 -a 0.0.0.0:8080 -p 'cc_command=exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s' storage_file: filename: /var/lib/varnish/varnish_storage.bin size 512 MB. Using old SHMFILE vbox:~# ps ax| grep varn 5607 ? Ss 0:00 varnishd -f /srv/xshare/conf/varnish-default.conf-example -s file,/var/lib/varnish/varnish_storage.bin,512M -T 127.0.0.1:2000 -a 0.0.0.0:8080 -p cc_command exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s 5608 ? Sl 0:00 varnishd -f /srv/xshare/conf/varnish-default.conf-example -s file,/var/lib/varnish/varnish_storage.bin,512M -T 127.0.0.1:2000 -a 0.0.0.0:8080 -p cc_command exec cc -fpic -shared -Wl,-x -L/usr/include/libmemcached/memcached.h -lmemcached -o %o %s Thanks again. From mhettwer at team.mobile.de Fri May 6 07:54:16 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Fri, 6 May 2011 08:54:16 +0100 Subject: Multiple instances of varnish In-Reply-To: Message-ID: On 05.05.11 19:00, "david raistrick" wrote: >On Thu, 5 May 2011, Mike Franon wrote: > >> I am trying to see if there is a way with using a single instance >> >> 1) Varnish using VCL to recognize that it is a bot >> >> 2) If it is a bot request, varnish will then use a different backend. > >Sure. You don't need multiple varnishes for this. > >Define your backend, create a rule that matches on the headers you're >matching on, set the backend. 
> >I don't have anything that matches on user agents (which is what I assume >you're looking at on the F5), and I'm not going to look at the docs to >find out how to match on them, but an example that does the same thing >for >URIs: If the F5 is able to recognize a bot, let it just insert a custom http header (X-F5-Found-Bot or something like that) and later match in vcl_recv on this header. Pretty much like this: sub vcl_recv { if (req.http.X-F5-Found-Bot) { set req.backend = backendforbots; } } HTH, Marian From thelogix at gmail.com Fri May 6 12:15:35 2011 From: thelogix at gmail.com (Dan) Date: Fri, 6 May 2011 14:15:35 +0200 Subject: Controlling connection pooling Message-ID: Hi all. Is there any way to control how connections to backends are reused? I would very much like to have (at least) one TCP connection per vhost and have clients only reusing a connection that fits the Host header they are trying to reach (and create a new one, if no one fits) Im thinking, somthing like: sub vcl_recv { set backend.connection_pool = req.http.Host; } It might seem like a strange question, but its a question of "misbehaving" servers. 9 months ago i tried to fix it with the "retry patch" here: http://www.varnish-cache.org/trac/ticket/749 Not really well thought trough, since a race-condition occurs with 2+ clients on the same connection at the same time trying to get 2 different vhosts. They will, in effect "kill" each other's apache process by trying to go to another vhost. Then they will both retry.. The one that retries first will loose and get the 503 and the other will get its page. So is it possible to control the pools? Or disable reuse/pooling completely? regards. - Dan -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From kongfranon at gmail.com Fri May 6 16:27:42 2011
From: kongfranon at gmail.com (Mike Franon)
Date: Fri, 6 May 2011 12:27:42 -0400
Subject: Multiple instances of varnish
In-Reply-To:
References:
Message-ID:

Thanks, that is even easier than what I was trying to do, which was to bypass the F5 like this:

if (req.http.user-agent ~ "(.*bingbot.*|.*control.*|.*crawler.*)") {
    set req.backend = bots_81;
}

But that was going to be a really long list. This will be easier to implement.

On Fri, May 6, 2011 at 3:54 AM, Hettwer, Marian wrote:
>
> On 05.05.11 19:00, "david raistrick" wrote:
>
>> On Thu, 5 May 2011, Mike Franon wrote:
>>
>>> I am trying to see if there is a way with using a single instance
>>>
>>> 1) Varnish using VCL to recognize that it is a bot
>>>
>>> 2) If it is a bot request, varnish will then use a different backend.
>>
>> Sure. You don't need multiple varnishes for this.
>>
>> Define your backend, create a rule that matches on the headers you're
>> matching on, set the backend.
>>
>> I don't have anything that matches on user agents (which is what I assume
>> you're looking at on the F5), and I'm not going to look at the docs to
>> find out how to match on them, but an example that does the same thing
>> for URIs:
>
> If the F5 is able to recognize a bot, let it just insert a custom http
> header (X-F5-Found-Bot or something like that) and later match in vcl_recv
> on this header.
>
> Pretty much like this:
>
> sub vcl_recv {
>   if (req.http.X-F5-Found-Bot) {
>     set req.backend = backendforbots;
>   }
> }
>
> HTH,
> Marian
>

From richard.chiswell at mangahigh.com Fri May 6 16:39:11 2011
From: richard.chiswell at mangahigh.com (Richard Chiswell)
Date: Fri, 06 May 2011 17:39:11 +0100
Subject: Multiple instances of varnish
In-Reply-To:
References:
Message-ID: <4DC4242F.9060308@mangahigh.com>

On 06/05/2011 17:27, Mike Franon wrote:
> Thanks that is even easier then what I was trying to do was bypass F5
>
> like this
>
> if (req.http.user-agent ~ "(.*bingbot.*|.*control.*|.*crawler.*)") {
>     set req.backend = bots_81;
> }
>
> But was going to be a really long list. This will be easier to implement.

I've only just seen this, but we use:

set req.http.X-Varnish-Robots = "No";
if (req.http.user-agent ~ "(msnbot|Teoma|Googlebot|Gigabot|ia_archiver|Yahoo! Slurp|ScoutJet|wabot)") {
    set req.http.X-Varnish-Robots = "Yes : " regsub(req.http.user-agent,"(msnbot|Teoma|Googlebot|Gigabot|ia_archiver|Yahoo! Slurp|ScoutJet|wabot)","\2");
}
if (req.http.X-Varnish-Robots == "No") {
    /**
     * Do stuff we don't need to do for robots
     */
}

To identify robots to bypass Geolocation.

Hope it helps,
Richard Chiswell

>
> On Fri, May 6, 2011 at 3:54 AM, Hettwer, Marian wrote:
>> On 05.05.11 19:00, "david raistrick" wrote:
>>
>>> On Thu, 5 May 2011, Mike Franon wrote:
>>>
>>>> I am trying to see if there is a way with using a single instance
>>>>
>>>> 1) Varnish using VCL to recognize that it is a bot
>>>>
>>>> 2) If it is a bot request, varnish will then use a different backend.
>>> Sure. You don't need multiple varnishes for this.
>>>
>>> Define your backend, create a rule that matches on the headers you're
>>> matching on, set the backend.
>>>
>>> I don't have anything that matches on user agents (which is what I assume
>>> you're looking at on the F5), and I'm not going to look at the docs to
>>> find out how to match on them, but an example that does the same thing
>>> for URIs:
>
>> If the F5 is able to recognize a bot, let it just insert a custom http
>> header (X-F5-Found-Bot or something like that) and later match in vcl_recv
>> on this header.
>>
>> Pretty much like this:
>>
>> sub vcl_recv {
>>   if (req.http.X-F5-Found-Bot) {
>>     set req.backend = backendforbots;
>>   }
>> }
>>
>> HTH,
>> Marian
>>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From jonaskraw at gmail.com Mon May 9 12:59:54 2011
From: jonaskraw at gmail.com (Jonas Kraw)
Date: Mon, 9 May 2011 14:59:54 +0200
Subject: backend polling mystery
Message-ID:

Dear All,

we have a strange random backend polling error. There is a varnish box (2.1.4 running on Debian Lenny) for debugging and testing which polls our two production backends:

a) STATICnode - nginx
b) APACHEnode - apache 2.2.9 with Keepalive ON, Timeout 30, KeepAliveTimeout 12, MaxKeepAliveRequests 0, Prefork-MaxRequestsPerChild 20000

VCL:

backend APACHEnode {
    .host = "10.0.80.15";
    .port = "8082";
    .probe = {
        .url = "/robots.txt";
        .timeout = 10ms;
        .interval = 1s;
        .window = 10;
        .threshold = 9;
    }
}

backend STATICnode {
    .host = "10.0.80.11";
    .port = "8080";
    .probe = {
        .url = "/robots.txt";
        .timeout = 300ms;
        .interval = 2s;
        .window = 10;
        .threshold = 8;
    }
}

The test case is for simulating our production varnish server, which acts exactly the same way...
Here is the varnishlog: 0 Backend_health - APACHEnode Still healthy 4--X-RH 9 9 10 0.000914 0.000993 HTTP/1.1 200 OK 0 Backend_health - STATICnode Still healthy 4--X-RH 10 8 10 0.000354 0.000371 HTTP/1.1 200 OK 0 Backend_health - APACHEnode Still healthy 4--X-RH 9 9 10 0.000686 0.000917 HTTP/1.1 200 OK 0 Backend_health - APACHEnode Still healthy 4--X-RH 9 9 10 0.000639 0.000847 HTTP/1.1 200 OK 0 Backend_health - STATICnode Still healthy 4--X-RH 10 8 10 0.000400 0.000379 HTTP/1.1 200 OK 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1304944200 1.0 0 Backend_health - APACHEnode Still healthy 4--X-RH 9 9 10 0.000685 0.000807 HTTP/1.1 200 OK 0 Backend_health - APACHEnode Still healthy 4--X-RH 9 9 10 0.000696 0.000779 HTTP/1.1 200 OK 0 Backend_health - STATICnode Still healthy 4--X-RH 10 8 10 0.000366 0.000376 HTTP/1.1 200 OK 0 Backend_health - APACHEnode Went sick 4--X--- 8 9 10 0.000000 0.000779 HTTP/1.1 200 OK Date: Mon, 09 May 2011 12:30:02 GMT Server: Apache/2.2.9 Last-Modified: Fri, 08 Apr 2011 09:52:29 GMT ETag: 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1304944203 1.0 0 Backend_health - APACHEnode Still sick 4--X-RH 8 9 10 0.002321 0.001165 HTTP/1.1 200 OK 0 Backend_health - STATICnode Still healthy 4--X-RH 10 8 10 0.000367 0.000373 HTTP/1.1 200 OK 0 Backend_health - APACHEnode Back healthy 4--X-RH 9 9 10 0.001611 0.001276 HTTP/1.1 200 OK 0 Backend_health - APACHEnode Still healthy 4--X-RH 9 9 10 0.000676 0.001126 HTTP/1.1 200 OK 0 Backend_health - STATICnode Still healthy 4--X-RH 10 8 10 0.000391 0.000378 HTTP/1.1 200 OK 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1304944206 1.0 0 Backend_health - APACHEnode Still healthy 4--X-RH 9 9 10 0.006641 0.002505 HTTP/1.1 200 OK 0 Backend_health - APACHEnode Still healthy 4--X-RH 9 9 10 0.002885 0.002600 HTTP/1.1 200 OK 0 Backend_health - STATICnode Still healthy 4--X-RH 10 8 10 0.000436 0.000392 HTTP/1.1 200 OK 0 Backend_health - APACHEnode Still healthy 4--X-RH 9 9 10 0.000750 0.002137 HTTP/1.1 200 OK 0 CLI - Rd ping 0 CLI - Wr 200 19 
PONG 1304944209 1.0 0 Backend_health - APACHEnode Still healthy 4--X-RH 9 9 10 0.000787 0.001800 HTTP/1.1 200 OK 0 Backend_health - STATICnode Still healthy 4--X-RH 10 8 10 0.000384 0.000390 HTTP/1.1 200 OK 0 Backend_health - APACHEnode Still healthy 4--X-RH 9 9 10 0.000720 0.001530 HTTP/1.1 200 OK 0 Backend_health - APACHEnode Still healthy 4--X-RH 9 9 10 0.000825 0.001354 HTTP/1.1 200 OK ^C # varnishstat -1 client_conn 0 0.00 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 0 0.00 Client requests received cache_hit 0 0.00 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 0 0.00 Cache misses backend_conn 0 0.00 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused Can anyone give us a hint for tracking down this error? Much Obliged Jonas -------------- next part -------------- An HTML attachment was scrubbed... URL: From kiernanl at mskcc.org Mon May 9 18:23:27 2011 From: kiernanl at mskcc.org (kiernanl at mskcc.org) Date: Mon, 9 May 2011 14:23:27 -0400 Subject: Varnish & Load Message-ID: <5ABAD392227A3145B2685BA1DFF1F78901DB5796BFD4@SMSKPEX7MBX4.MSKCC.ROOT.MSKCC.ORG> Our configuration assumes we will be running Varnish processes on each of two web servers behind an F5 load balancer. We accept that multiple Varnish instances will lead to differences in what's cached. But is it better to have each Varnish instance load-balance both the web process running on the same server AND the web process running on the other server, or just provide caching for the web process running on the local server? 
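One way to sketch the first option Lissa describes — each Varnish instance load-balancing both web processes — is a round-robin director. This is a minimal sketch in 2.1-era VCL; the hostnames are hypothetical:

```vcl
backend web_local  { .host = "127.0.0.1";        .port = "80"; }
backend web_remote { .host = "web2.example.com"; .port = "80"; }

director web round-robin {
    { .backend = web_local; }
    { .backend = web_remote; }
}

sub vcl_recv {
    set req.backend = web;
}
```

The alternative — caching only the local web process — is the same VCL with a single backend and no director; the trade-off is that a local backend failure takes the whole node out of service unless the F5 health-checks Varnish itself.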
Thanks, Lissa ===================================================================== Please note that this e-mail and any files transmitted with it may be privileged, confidential, and protected from disclosure under applicable law. If the reader of this message is not the intended recipient, or an employee or agent responsible for delivering this message to the intended recipient, you are hereby notified that any reading, dissemination, distribution, copying, or other use of this communication or any of its attachments is strictly prohibited. If you have received this communication in error, please notify the sender immediately by replying to this message and deleting this message, any attachments, and all copies and backups from your computer.
From kbrownfield at google.com Tue May 10 11:18:32 2011 From: kbrownfield at google.com (Ken Brownfield) Date: Tue, 10 May 2011 04:18:32 -0700 Subject: response time in varnishncsa In-Reply-To: <6A7ABA19243F1E4EADD8BB1563CDDCCB0D614C@TIL-EXCH-05.netmatch.local> References: <717687862.6179.1280994948003.JavaMail.root@imap1b> <1970065075.273993.1286537149850.JavaMail.root@imap1b> <6A7ABA19243F1E4EADD8BB1563CDDCCB083920@TIL-EXCH-05.netmatch.local> <6A7ABA19243F1E4EADD8BB1563CDDCCB0BC4C0@TIL-EXCH-05.netmatch.local> <6A7ABA19243F1E4EADD8BB1563CDDCCB0D614C@TIL-EXCH-05.netmatch.local> Message-ID: Angelo's email address isn't valid anymore, so here's a patch against 2.1.5 for anyone interested in some extra fields. The patch adds three fields to the end of the log line if you provide the -e switch to varnishncsa: - long: Age in seconds - A 0 value doesn't necessarily mean it was a miss. - long: Hits; requires an X-Hits header to be added in vcl_deliver(): - set resp.http.X-Hits = obj.hits; - This doesn't seem to be available in the log otherwise. - A 0 value means it was a miss. - float: Response time in seconds (resp-req) Not heavily tested, v1.0, YMMV, etc., but it seems fine under load and doesn't seem to leak. I think adding LogFormat functionality (per the TODO) is the better implementation, but that's outside the scope of my free time. ;) -- kb On Wed, Nov 3, 2010 at 07:10, Angelo Höngens wrote: > >> -----Original Message----- > >> From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc- > >> bounces at varnish-cache.org] On Behalf Of Angelo Höngens > >> Sent: vrijdag 8 oktober 2010 13:35 > >> To: 'Varnish-misc' > >> Subject: RE: response time in varnishncsa > >> > >> Haven't seen anything on the list about this. I would also really like > >> to see the time-taken field in the varnishncsa log. And what I would > >> want even more is a field indicating hit/miss! I might even convince > >> my boss to pay for the hours taken.
> > > > -----Original Message----- > > From: Angelo Höngens > > Sent: zaterdag 23 oktober 2010 17:41 > > To: 'Poul-Henning Kamp (phk at phk.freebsd.dk)' > > Subject: RE: response time in varnishncsa > > > > Poul-Henning, > > > > Just a quick question off-list: I really want the extra feature below > > (2 extra fields in varnishncsa output). > > > > Would you be willing to do this for pay? I'm pretty sure I can get my > > employer to pay for this feature. Don't know if you have any time, or > > how many hours it would take you, but I was thinking about some fixed > > price based on an initial estimate of your hours? > > > > Please let me know.. > > > Since PHK is not responding, is there anyone else that's interested in > making a buck on this? > > > -- > > > With kind regards, > > > Angelo Höngens > > Systems Administrator > > ------------------------------------------ > NetMatch > tourism internet software solutions > > Ringbaan Oost 2b > 5013 CA Tilburg > T: +31 (0)13 5811088 > F: +31 (0)13 5821239 > > mailto:A.Hongens at netmatch.nl > http://www.netmatch.nl > ------------------------------------------ > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: varnishncsa_ext.patch Type: application/octet-stream Size: 2521 bytes Desc: not available URL: From david.kitchen at yellgroup.com Tue May 10 15:35:52 2011 From: david.kitchen at yellgroup.com (David Kitchen) Date: Tue, 10 May 2011 16:35:52 +0100 Subject: ESI limits and worker thread deaths Message-ID: Hi all, I've got a curious problem with using ESI.
We're building a JSON web service that is composed of smaller bits of JSON, and one of the requirements is that the caller can specify the number of records they get back (it's intranet facing but high load). What we're seeing is that if a document esi:includes more than a certain number of other files, the worker process dies and no data is returned to the browser. As the "certain number" seemed to be variable using our live service, I built a quick test to find out where the limit was and to determine whether the limit had any special meaning (powers of 2, the obvious signs). The test that I have that demonstrates the ESI processing failing requires: 1) Many unique files 2) Some files to esi:include the files in #1 Steps to reproduce: 1) Create the many unique files, bash helps here (run this in your web root): for i in $(seq 1 1000); do echo $i > $i.htm; done 2) Create index files that esi:include the many unique files: I had 4 test files: index125.htm index250.htm index500.htm index1000.htm They each contained lines like these, the number of lines corresponding to the number in the index*.htm file name: 1 = <esi:include src="http://localhost/test/1.htm"/>, 2 = <esi:include src="http://localhost/test/2.htm"/>, 3 = <esi:include src="http://localhost/test/3.htm"/>, 4 = <esi:include src="http://localhost/test/4.htm"/>, 5 = <esi:include src="http://localhost/test/5.htm"/>, 6 = <esi:include src="http://localhost/test/6.htm"/>, 7 = <esi:include src="http://localhost/test/7.htm"/>, 8 = <esi:include src="http://localhost/test/8.htm"/>, 9 = <esi:include src="http://localhost/test/9.htm"/>, 10 = <esi:include src="http://localhost/test/10.htm"/>, 3) Access each index file, smallest to largest, until it fails, whilst logging using varnishlog. For me, 125 and 250 worked; 500 and 1,000 failed. As soon as one of those returns no data to the browser, the problem is reproduced.
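The reproduction steps above can be scripted end to end. A sketch, assuming a scratch directory stands in for the web root (the paths are illustrative):

```shell
#!/bin/sh
# Generate the test corpus described above in a scratch web root.
root=/tmp/esi-test        # hypothetical web root
mkdir -p "$root" && cd "$root" || exit 1

# 1) Many unique include targets
for i in $(seq 1 1000); do echo "$i" > "$i.htm"; done

# 2) Index files that esi:include the first N targets
for n in 125 250 500 1000; do
    : > "index$n.htm"
    for i in $(seq 1 "$n"); do
        echo "<esi:include src=\"http://localhost/test/$i.htm\"/>" >> "index$n.htm"
    done
done
```

Fetching each index file in turn through Varnish (smallest first) while watching varnishlog should then show where ESI processing gives up.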
The interesting thing I found was in the logs for the failed attempts: 12 Debug c "tag {include src="http://localhost/test/207.htm" } 0 1 0" 12 Debug c "Incl " src="http://localhost/test/207.htm" "" 12 Debug c "tag {include src="http://localhost/test/208.htm" } 0 1 0" 0 WorkThread - 0x7f9d890afbf0 start 0 CLI - Rd vcl.load "boot" ./vcl.1P9zoqAU.so 0 CLI - Wr 200 36 Loaded "./vcl.1P9zoqAU.so" as "boot" 0 CLI - Rd vcl.use "boot" 0 CLI - Wr 200 0 0 CLI - Rd start 0 Debug - "Acceptor is epoll" 0 CLI - Wr 200 0 0 WorkThread - 0x7f9d04bfebf0 start It successfully processed the 250 entries file each time it was called, but when processing the 1,000 entries file it consistently and repeatedly showed that the worker process died at tag 208. Presumably this disconnected the request resulting in the symptom of "no data received" by the browser. The question boils down to: 1) Is this behaviour a bug? 2) Is this behaviour the result of a by-design limitation? And if yes to this, what is the limitation and is it something we can change (i.e. ulimits)? Varnish version is 2.1.4 Thanks in advance for any insight offered, and apologies in advance if I've jumped in and broken any mailing list etiquette by just pushing the question out there. David [Information] -- PostMaster: This transmission is intended solely for the addressee(s) and may be confidential. If you are not the named addressee, or if the message has been addressed to you in error, you must not read, disclose, reproduce, distribute or use this transmission. Delivery of this message to any person other than the named addressee is not intended in any way to waive confidentiality. If you have received this transmission in error please contact the sender or delete the message. Thank you. Yell Limited, One Reading Central, Forbury Road, Reading, Berkshire, RG1 3YL. Registered in England No. 4205228 Yellow Pages Sales Limited, One Reading Central, Forbury Road, Reading, Berkshire, RG1 3YL. Registered in England No. 
1403041 -------------- next part -------------- An HTML attachment was scrubbed... URL: From frank at huddler-inc.com Tue May 10 17:08:33 2011 From: frank at huddler-inc.com (Frank Farmer) Date: Tue, 10 May 2011 10:08:33 -0700 Subject: response time in varnishncsa In-Reply-To: <4CAF5E9F.2070403@netmatch.nl> References: <717687862.6179.1280994948003.JavaMail.root@imap1b> <1970065075.273993.1286537149850.JavaMail.root@imap1b> <6A7ABA19243F1E4EADD8BB1563CDDCCB083920@TIL-EXCH-05.netmatch.local> <7F0AA702B8A85A4A967C4C8EBAD6902C5D1761@TMG-EVS02.torstar.net> <4CAF5E9F.2070403@netmatch.nl> Message-ID: This is a bit of a kludge, but I've been tracking these in my nginx access log. I have nginx in front of varnish, primarily to gzip (at one point, ESI and gzip were incompatible). Nginx captures extra headers output by varnish and writes them to its own log. Example nginx log config line: log_format main '$host $remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for" $upstream_response_time $upstream_http_x_cache'; On Fri, Oct 8, 2010 at 11:10 AM, Angelo Höngens wrote: > On 8-10-2010 18:59, Caunter, Stefan wrote: > > Hit/miss is a vcl addition to vcl_deliver: > > > > sub vcl_deliver { > > .. > > Stefan, > > Thanks for your response. I know I can output the hit/miss in the > response headers (I already do), but I want the extra fields (time taken > and 'action') in the varnishncsa output. I write all varnishncsa output > of all nodes to a central logging server, and we want to do analysis on > those log files later. > > Angelo. > > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://lists.varnish-cache.org/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tfheen at varnish-software.com Wed May 11 06:31:07 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Wed, 11 May 2011 08:31:07 +0200 Subject: response time in varnishncsa In-Reply-To: (Ken Brownfield's message of "Tue, 10 May 2011 04:18:32 -0700") References: <717687862.6179.1280994948003.JavaMail.root@imap1b> <1970065075.273993.1286537149850.JavaMail.root@imap1b> <6A7ABA19243F1E4EADD8BB1563CDDCCB083920@TIL-EXCH-05.netmatch.local> <6A7ABA19243F1E4EADD8BB1563CDDCCB0BC4C0@TIL-EXCH-05.netmatch.local> <6A7ABA19243F1E4EADD8BB1563CDDCCB0D614C@TIL-EXCH-05.netmatch.local> Message-ID: <871v05695g.fsf@qurzaw.varnish-software.com> ]] Ken Brownfield | I think adding LogFormat functionality (per the TODO) is the better | implementation, but that's outside the scope of my free time. ;) LogFormat is already in git master. -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From imanandshah at gmail.com Wed May 11 08:39:37 2011 From: imanandshah at gmail.com (Anand Shah) Date: Wed, 11 May 2011 14:09:37 +0530 Subject: Varnish Redirect Origin Message-ID: Hi All, How can I redirect my requests to the origin servers in case any of my backends are not responding? Say backend A does not respond and the restart loop used in the VCL sends the request to backend B, which also does not respond; then I need to redirect the URL to the origin URL (redirect 302). a.anand.com (edge domain), a CDN domain, should be redirected to abc.anand.com (origin domain) if the backends do not respond. Is there any way I can achieve this? Regards, Anand From armdan20 at gmail.com Wed May 11 15:45:29 2011 From: armdan20 at gmail.com (andan andan) Date: Wed, 11 May 2011 17:45:29 +0200 Subject: 3.0 Beta1 RPM dependencies on RHEL5 Message-ID: Hi all. I'm not sure if this is the correct list (varnish-test seems inactive); my apologies if I'm wrong.
I'm testing Varnish 3.0 using the RPMS provided at: http://repo.varnish-cache.org/redhat/varnish-3.0/el5/ These RPMs depend on an external jemalloc library. AFAIK Varnish comes with its own jemalloc implementation, so the external jemalloc could be omitted. Will the final RPMs be compiled without this dependency, or is it a new, required dependency for the 3.x branch? By the way, the first tests with the beta, after fixing the config file for the new syntax, are fine; we're playing with gzip/ESI and so on. Thanks in advance. From roberto.fernandezcrisial at gmail.com Thu May 12 13:52:50 2011 From: roberto.fernandezcrisial at gmail.com (Roberto O. Fernández Crisial) Date: Thu, 12 May 2011 10:52:50 -0300 Subject: ESI Referrer Message-ID: Hi guys, I need to know if it is possible to show the referrer URL when an ESI block is requested from the backend server. Thank you, Roberto. -------------- next part -------------- An HTML attachment was scrubbed... URL: From imanandshah at gmail.com Thu May 12 18:41:09 2011 From: imanandshah at gmail.com (Anand Shah) Date: Fri, 13 May 2011 00:11:09 +0530 Subject: Varnish Redirect Origin In-Reply-To: References: Message-ID: No updates on the mailing list... isn't there anyone who has faced this issue earlier? Regards, Anand On 5/11/11, Anand Shah wrote: > Hi All, > > How can I redirect my requests to the origin servers in case any of my > backends are not responding? Say backend A does not respond and the > restart loop used in the VCL sends the request to backend B, which also > does not respond; then I need to redirect the URL to the origin URL > (redirect 302). > > a.anand.com (edge domain), a CDN domain, should be redirected to > abc.anand.com (origin domain) if the backends do not respond. > > > Is there any way I can achieve this?
> > Regards, > Anand > From TFigueiro at au.westfield.com Thu May 12 22:55:27 2011 From: TFigueiro at au.westfield.com (Thiago Figueiro) Date: Fri, 13 May 2011 08:55:27 +1000 Subject: Varnish Redirect Origin In-Reply-To: References: Message-ID: Anand Shah wrote: > No updates on mailing list.... isn't there any one who has faced > this issue earlier .... Have you read the doco? I had a quick look and it seems to have what you need. http://www.varnish-cache.org/docs/2.1/reference/vcl.html Count the number of redirects. If you reach your limit do the 302. I believe vcl_recv would be the place for that. From the doco: sub vcl_recv { if (req.restarts > X ) { # do your redirect here } ______________________________________________________ CONFIDENTIALITY NOTICE This electronic mail message, including any and/or all attachments, is for the sole use of the intended recipient(s), and may contain confidential and/or privileged information, pertaining to business conducted under the direction and supervision of the sending organization. All electronic mail messages, which may have been established as expressed views and/or opinions (stated either within the electronic mail message or any of its attachments), are left to the sole responsibility of that of the sender, and are not necessarily attributed to the sending organization. Unauthorized interception, review, use, disclosure or distribution of any such information contained within this electronic mail message and/or its attachment(s), is (are) strictly prohibited. If you are not the intended recipient, please contact the sender by replying to this electronic mail message, along with the destruction all copies of the original electronic mail message (along with any attachments). 
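Combining the restart counter Thiago points at with a synthetic error is the usual way to turn this into a 302. A sketch in 2.1-era VCL, using Anand's hostnames and an illustrative restart limit; the 750 status is an arbitrary internal marker:

```vcl
sub vcl_recv {
    # After the backends have been retried, bail out to a redirect.
    if (req.restarts > 2) {
        error 750 "Backends down, redirecting to origin";
    }
}

sub vcl_error {
    if (obj.status == 750) {
        set obj.status = 302;
        set obj.http.Location =
            "http://" regsub(req.http.host, "^a\.anand\.com", "abc.anand.com") req.url;
        return (deliver);
    }
}
```

vcl_recv trips the synthetic 750 once the restart limit is exceeded, and vcl_error rewrites it into the 302 pointing at the origin domain.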
______________________________________________________ From ericlin at tamama.org Fri May 13 06:48:52 2011 From: ericlin at tamama.org (Lin Jui-Nan Eric) Date: Fri, 13 May 2011 14:48:52 +0800 Subject: Varnish 3.0 beta1 with persistent cache crashes every 10 minutes Message-ID: Hello All, I have tried varnish recently but found it crashes every 10 minutes: tmp-1-117 [/big] -jnlin- varnishd -V varnishd (varnish-3.0.0-beta1 revision varnish-3.0.0-beta1) Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS tmp-1-117 [/big] -jnlin- uname -a FreeBSD tmp-1-117 8.2-RELEASE FreeBSD 8.2-RELEASE #0: Thu Feb 17 02:41:51 UTC 2011 root at mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64 tmp-1-117 [/big] -jnlin- dmesg -a | less -RELEASE,amd64,-spersistent,-smalloc,-hcritbit,kqueue sp = 0x280692a008 { fd = 17, id = 17, xid = 617873085, client = 10.1.1.5 1799, step = STP_ERROR, handling = deliver, err_code = 400, err_reason = (null), restarts = 0, esi_level = 0 ws = 0x280692a080 { id = "sess", {s,f,r,e} = {0x280692ace0,+2520,0x0,+65536}, }, http[req] = { ws = 0x280692a080[sess] "GET", "/f.pixnet.net/js/all.js?v=545d6dbf1980b02c6ff9e5ea863d979f", "HTTP/1.1", "Host: s.pixfs.net", "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-TW; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6", "Accept: */*", "Accept-Language: zh-tw,en-us;q=0.7,en;q=0.3", "Accept-Encoding: gzip,deflate", "Accept-Charset: UTF-8,*", "Keep-Alive: 115", "Connection: k pid 95058 (varnishd), uid 65534: exited on signal 6 May 13 02:04:14 tmp-1-117 /big[35707]: Child (95058) Panic message: Assert error in smp_allocobj(), storage_persistent.c line 497: Condition((sp->objcore) != 0) not true. 
thread = (cache-worker) ident = FreeBSD,8.2-RELEASE,amd64,-spersistent,-smalloc,-hcritbit,kqueue sp = 0x28071dd008 { fd = 33, id = 33, xid = 637471130, client = 10.1.1.5 49373, step = STP_ERROR, handling = deliver, err_code = 400, err_reason = (null), restarts = 0, esi_level = 0 ws = 0x28071dd080 { id = "sess", {s,f,r,e} = {0x28071ddce0,+2536,0x0,+65536}, }, http[req] = { ws = 0x28071dd080[sess] "GET", "/panel/images/blog/common/pixmore/trans.gif", "HTTP/1.1", "Host: s.pixfs.net", "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-TW; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6", "Accept: image/png,image/*;q=0.8,*/*;q=0.5", "Accept-Language: zh-tw,en-us;q=0.7,en;q=0.3", "Accept-Encoding: gzip,deflate", "Accept-Charset: UTF-8,*", "Keep-Alive: 115", pid 49696 (varnishd), uid 65534: exited on signal 6 May 13 02:04:17 tmp-1-117 /big[35707]: Child (49696) Panic message: Assert error in smp_allocobj(), storage_persistent.c line 497: Condition((sp->objcore) != 0) not true. thread = (cache-worker) ident = FreeBSD,8.2-RELEASE,amd64,-spersistent,-smalloc,-hcritbit,kqueue sp = 0x2807136008 { fd = 20, id = 20, xid = 89186144, client = 10.1.1.6 44867, step = STP_ERROR, handling = deliver, err_code = 400, err_reason = (null), restarts = 0, esi_level = 0 ws = 0x2807136080 { id = "sess", {s,f,r,e} = {0x2807136ce0,+2536,0x0,+65536}, }, http[req] = { ws = 0x2807136080[sess] "GET", "/common/toolbar/userinfo.css?v=545d6dbf1980b02c6ff9e5ea863d979f", "HTTP/1.1", "Host: s.pixfs.net", "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-TW; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6", "Accept: text/css,*/*;q=0.1", "Accept-Language: zh-tw,en-us;q=0.7,en;q=0.3", "Accept-Encoding: gzip,deflate", "Accept-Charset: UTF-8,*", "Keep-Alive: 115", pid 94341 (varnishd), uid 65534: exited on signal 6 [...sniped...] 
My varnish config is here: https://gist.github.com/970095 And my parameters of running varnish is: tmp-1-117 [/big] -jnlin- sudo varnishd -a :3128 -T 127.0.0.1:6082 -f /big/test.vcl -n /big -s persistent,/big/cache/varnish.cache,96G -w 100,2000 -u nobody Any suggestion is welcomed :) From thomas.woinke at gmail.com Fri May 13 08:18:56 2011 From: thomas.woinke at gmail.com (Thomas Woinke) Date: Fri, 13 May 2011 10:18:56 +0200 Subject: Varnish 3.0beta1 build on Solaris 10 Message-ID: Hi, I just tried to build Varnish 3.0beta1 on Solaris 10 using gcc 4.3.3. The build failed due to a missing parenthesis in lib/libvarnish/time.c. Here's a small patch that fixed it for me. Regards, Thomas Woinke *** time.c Fri May 13 10:05:03 2011 --- time.c.orig Fri May 13 10:04:42 2011 *************** *** 166,172 **** (void)nanosleep(&ts, NULL); #else if (t >= 1.) { ! (void)sleep(floor(t)); t -= floor(t); } /* XXX: usleep() is not mandated to be thread safe */ --- 166,172 ---- (void)nanosleep(&ts, NULL); #else if (t >= 1.) { ! (void)sleep(floor(t); t -= floor(t); } /* XXX: usleep() is not mandated to be thread safe */ From tfheen at varnish-software.com Fri May 13 08:25:22 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Fri, 13 May 2011 10:25:22 +0200 Subject: Varnish 3.0beta1 build on Solaris 10 In-Reply-To: (Thomas Woinke's message of "Fri, 13 May 2011 10:18:56 +0200") References: Message-ID: <87tyczj9cd.fsf@qurzaw.varnish-software.com> ]] Thomas Woinke | I just tried to build Varnish 3.0beta1 on Solaris 10 using gcc 4.3.3. | The build failed due to a missing parenthesis in | lib/libvarnish/time.c. | Here's a small patch that fixed it for me. Thanks, applied. 
-- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From contact at jpluscplusm.com Fri May 13 19:11:18 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 13 May 2011 20:11:18 +0100 Subject: Cross-verb cache invalidation (POST/PUT invalidates GET) Message-ID: Hi all - I'm mucking around this weekend designing a "My-First-JSON-API" service to learn about the process of API design, and I'd obviously like to make it as cache- and varnish-friendly as possible. Using HTTP verbs "correctly" (something like the ideas expressed at http://jcalcote.wordpress.com/2008/10/16/put-or-post-the-rest-of-the-story/) I'll be POSTing and PUTting incremental and complete objects into the service and GETting their state - all to and from the same URI per object. How can I (and *should* I?) instruct Varnish to obey the TTL that the backend sets for content received by GETting a specific URI (say http://example.com/api/object-1/) but to invalidate that content when a PUT/POST to the same URI is observed? I've not started poking at it through Varnish yet and, given the definition of the HTTP verbs in the RFCs, I kind of hope it might Just Work - that a POST/PUT to a URI will invalidate (and possibly overwrite with the response body?) the cached content from the last GET request to that URI. If not - I throw the question open to the floor ... is this a job that should be done in VCL? How? Will I just find it done already? Thanks in advance for any advice! Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From kongfranon at gmail.com Fri May 13 21:35:51 2011 From: kongfranon at gmail.com (Mike Franon) Date: Fri, 13 May 2011 17:35:51 -0400 Subject: Multiple instances of varnish In-Reply-To: References: Message-ID: So I got the F5 to insert a custom header, and that works. I also want to send everything that is a bot to a different backend, except for two URLs, and have it cache everything.
That works, except I realized I am missing one thing: unsetting cookies from bot requests to the different backend. Currently I do not want to unset all cookies for all URLs unless it is a bot. If it is a bot then yes, unset all cookies; but the custom header that the F5 inserts is not there anymore when it comes back. Anyone have any ideas? Just curious. Thanks sub vcl_recv { if (req.http.x-f5-bot-found){ if (req.url ~ "^/some" || req.url ~ "^/sample"){ set req.backend = bots_81; return(pass); } else { unset req.http.cookie; set req.backend = bots_81; } } if (req.url ~ "^/$" || req.url ~ "^/sale"){ unset req.http.cookie; return(lookup); } else { return(pass); } } sub vcl_fetch { if (req.url ~ "^/$" || req.url ~ "^/sale"){ set beresp.ttl = 300s; set beresp.http.cache-control = "public, max-age = 300"; set beresp.http.X-CacheReason = "varnishcache"; unset beresp.http.set-cookie; return(deliver); } } On Fri, May 6, 2011 at 3:54 AM, Hettwer, Marian wrote: > > On 05.05.11 19:00, "david raistrick" wrote: > >>On Thu, 5 May 2011, Mike Franon wrote: > >> > >>> I am trying to see if there is a way with using a single instance > >>> > >>> 1) Varnish using VCL to recognize that it is a bot > >>> > >>> 2) If it is a bot request, varnish will then use a different backend. > >> > >>Sure. You don't need multiple varnishes for this. > >> > >>Define your backend, create a rule that matches on the headers you're > >>matching on, set the backend. > >> > >>I don't have anything that matches on user agents (which is what I assume > >>you're looking at on the F5), and I'm not going to look at the docs to > >>find out how to match on them, but an example that does the same thing > >>for > >>URIs: > > If the F5 is able to recognize a bot, let it just insert a custom http > header (X-F5-Found-Bot or something like that) and later match in vcl_recv > on this header. > > Pretty much like this: > > sub vcl_recv { > if (req.http.X-F5-Found-Bot) { > set req.backend = backendforbots; >
} > } > > > > HTH, > Marian > > From imanandshah at gmail.com Sat May 14 03:37:11 2011 From: imanandshah at gmail.com (Anand Shah) Date: Sat, 14 May 2011 09:07:11 +0530 Subject: Varnish Redirect Origin In-Reply-To: References: Message-ID: Hi Thiago, I have made this working. Just a regsub in vcl_error: set obj.http.Location = "http://" regsub(req.http.host, "^a.com", "abc.com") req.url; Thanks to Tollef Regards, Anand On Fri, May 13, 2011 at 4:25 AM, Thiago Figueiro wrote: > Anand Shah wrote: > > > No updates on mailing list.... isn't there any one who has faced > > this issue earlier .... > > Have you read the doco? I had a quick look and it seems to have what you > need. > > http://www.varnish-cache.org/docs/2.1/reference/vcl.html > > Count the number of restarts. If you reach your limit do the 302. > > I believe vcl_recv would be the place for that. From the doco: > > sub vcl_recv { > if (req.restarts > X ) { > # do your redirect here > } > > > ______________________________________________________ > CONFIDENTIALITY NOTICE > This electronic mail message, including any and/or all attachments, is for > the sole use of the intended recipient(s), and may contain confidential > and/or privileged information, pertaining to business conducted under the > direction and supervision of the sending organization. All electronic mail > messages, which may have been established as expressed views and/or opinions > (stated either within the electronic mail message or any of its > attachments), are left to the sole responsibility of that of the sender, and > are not necessarily attributed to the sending organization. Unauthorized > interception, review, use, disclosure or distribution of any such > information contained within this electronic mail message and/or its > attachment(s), is (are) strictly prohibited.
If you are not the intended > recipient, please contact the sender by replying to this electronic mail > message, along with the destruction all copies of the original electronic > mail message (along with any attachments). > ______________________________________________________ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Sun May 15 21:37:31 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 15 May 2011 22:37:31 +0100 Subject: Cross-verb cache invalidation (POST/PUT invalidates GET) In-Reply-To: References: Message-ID: On 13 May 2011 20:11, Jonathan Matthews wrote: > How can I (and *should* I?) instruct Varnish to obey the TTL that the > backed sets for content received by GETting a specific URI (say > http://example.com/api/object-1/) but to invalidate that content when > a PUT/POST to the same URI is observed? > > I've not started poking at it through Varnish yet and, given the > definition of the HTTP verbs in the RFCs, I kind of hope it might Just > Work OK, having read the RFCs a bit more I note that this behaviour is mandated: http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.10. Indeed, it's item #325 in the table at http://www.varnish-cache.org/trac/wiki/HTTPFeatures. The default VCL doesn't implement this, but a naive and possibly incomplete implementation is obviously trivial: sub vcl_recv { if (req.request == "POST") { purge("req.url == " req.url " && req.http.host == " req.http.host); } } How can this go wrong, however? In what circumstances might we wish not to take this rather draconian action? Well, if we go by the RFC, the response code to the POST request is of no consequence in determining if the content should be invalidated. So my reading of this suggests we're technically safe in taking action in the vcl_recv stage, and not after a lookup/fetch/deliver. Which is lucky, since the default VCL, I believe, (someone please do correct me on this!) 
doesn't put us in a situation where we *could* take any action in those places, due to us being in pass mode by this point. So what other gotchas are there? One that comes to mind is authentication. We don't want to allow an unauthenticated or wrongly-authenticated POST request (malicious or otherwise) to drop authenticated content from the cache, but that's about it. Both POST requests and the presence of Authentication headers cause the default VCL to enter pass mode, so I *think* that the only situation that needs thought is when content exists in cache (hence resulted from an unauthenticated request) and an authenticated POST request arrives. As before, the RFC seems unforgiving, not mentioning any mitigating circumstances, so I suppose we might as well just invalidate in that circumstance. So, coming back to the initial naive implementation, it looks like it could be correct. But what have I missed? There's got to be some complication to explain why this (or something like it) isn't in the default VCL - was a decision taken not to adhere to section 13.10 of RFC2616 at some point in the past? Jonathan -- Jonathan Matthews London, UK http://www.jpluscplusm.com/contact.html From rtshilston at gmail.com Mon May 16 07:31:26 2011 From: rtshilston at gmail.com (Robert Shilston) Date: Mon, 16 May 2011 08:31:26 +0100 Subject: Cross-verb cache invalidation (POST/PUT invalidates GET) In-Reply-To: References: Message-ID: <92431B28-6E3D-4496-BD79-291D99E20C74@gmail.com> On 15 May 2011, at 22:37, Jonathan Matthews wrote: > On 13 May 2011 20:11, Jonathan Matthews wrote: >> How can I (and *should* I?) instruct Varnish to obey the TTL that the >> backed sets for content received by GETting a specific URI (say >> http://example.com/api/object-1/) but to invalidate that content when >> a PUT/POST to the same URI is observed? 
>> >> I've not started poking at it through Varnish yet and, given the >> definition of the HTTP verbs in the RFCs, I kind of hope it might Just >> Work > > OK, having read the RFCs a bit more I note that this behaviour is > mandated: http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.10. > Indeed, it's item #325 in the table at > http://www.varnish-cache.org/trac/wiki/HTTPFeatures. > ... > So, coming back to the initial naive implementation, it looks like it > could be correct. But what have I missed? There's got to be some > complication to explain why this (or something like it) isn't in the > default VCL - was a decision taken not to adhere to section 13.10 of > RFC2616 at some point in the past? > > Jonathan Jonathan, As a person managing Varnish config, I'd suggest that the answer might simply be that Varnish isn't really an HTTP proxy/cache, but an accelerator. You can't (sensibly) use Varnish as a naive HTTP cache, but instead tune it to the exact needs and behaviour of the application sitting behind Varnish. So, whilst the RFC describes fail-safe behaviour if you're using a backend whose behaviour is unknown to the cache, I don't think it's fair to compare this to Varnish, which is typically closely coupled with the backend. Rob From perbu at varnish-software.com Mon May 16 07:58:39 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 16 May 2011 09:58:39 +0200 Subject: Cross-verb cache invalidation (POST/PUT invalidates GET) In-Reply-To: References: Message-ID: On Sun, May 15, 2011 at 11:37 PM, Jonathan Matthews wrote: > So, coming back to the initial naive implementation, it looks like it > could be correct. But what have I missed? There's got to be some > complication to explain why this (or something like it) isn't in the > default VCL - was a decision taken not to adhere to section 13.10 of > RFC2616 at some point in the past? > We won't do this because it would totally break if you have more than one Varnish server.
It's the backend's job to notify the caches when the content actually changes. -- Per Buer, CEO Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer *Varnish makes websites fly!* Whitepapers | Video | Twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From moseleymark at gmail.com Tue May 17 00:04:34 2011 From: moseleymark at gmail.com (Mark Moseley) Date: Mon, 16 May 2011 17:04:34 -0700 Subject: Avoiding big objects In-Reply-To: References: Message-ID: On Tue, Apr 26, 2011 at 5:25 PM, Mark Moseley wrote: > I was working on something in my quest to keep big (eventually > uncacheable) objects from wreaking havoc on my cache. Even if I employ > a scheme to call "restart" from vcl_fetch, after adding a header that > tells vcl_recv to call 'pipe', the object still gets fetched from the > origin server. And if it's 1.5 gig, it can be pretty painful. > > So I was hoping to throw this by you guys, esp the Varnish devs. > Mainly I wanted to hear if anyone thought this was a tremendously bad > idea. I wrote this about 45 minutes ago, so it's not particularly > well-tested out, but if you guys said this was the worst idea ever, > then I might reconsider putting a lot more time into perfecting it. > Thus there are likely to be big corner cases here. There was another > recent thread about this subject, so I know there are some other > people looking for a similar solution, so I thought I'd throw this out > there too. This doesn't protect me from 1.5 gig JPEG files but it does > most of the job. And a further comment is that, yes, I'm ok with all > the extra backend reqs, providing their HEADs. > > Mainly what it's doing is this: > > 1. Huge files won't ever be HITs in my environment, since I'll have piped them. > 2. If a MISS (as it should be), rewrite backend method from GET (I > don't do POSTs on varnish) to HEAD in vcl_miss if it's a file > extension likely to be a biggish file and matches other conditions. > 3.
In vcl_fetch, if it's a rewritten HEAD, do size check. If it's too > big, add the header that indicates to vcl_fetch to drop immediately to > 'pipe' > 4. In either case, in vcl_fetch, rewrite the method back to GET and > call 'restart'. > > > Here's the essence of the VCL (imagine regularly-working VCL alongside > it). I typed this out so ignore dumb typos: >
> sub vcl_fetch {
>     ....
>     # If we've got the header that says to pipe this request, pipe it (thanks Tollef)
>     if ( req.http.X-PIPEME && req.restarts > 0 ) {
>         return( pipe );
>     }
>     ....
> }
>
> # The URLs in this regex are some sample ones that are often huge in size; the eventual
> # list would be bigger and have others like 'mpg' etc. Note that I don't send POSTs over
> # varnish, so ignore lack of POST
> sub vcl_miss {
>     # If no headcheck header and GET and type is on big list, rewrite to HEAD
>     if ( ! req.http.X-HEADCHECK && bereq.request == "GET" &&
>          req.url ~ "\.(gz|wmv|zip|flv|avi)$" && req.restarts == 0 ) {
>         set req.http.X-HEADCHECK = "1";
>         set bereq.request = "HEAD";
>         set bereq.http.User-Agent = "HEAD Check";
>         log "DEBUG: Rewriting to HEAD";
>     }
> }
>
> sub vcl_fetch {
>     # If this used to be a GET request that we changed to HEAD, do length check.
>     # But try to avoid restart loops.
>     if ( req.http.X-HEADCHECK && req.request == "GET" &&
>          bereq.request == "HEAD" && req.url ~ "\.(gz|wmv|zip|flv|avi)$" &&
>          req.restarts < 1) {
>         unset req.http.X-HEADCHECK;
>         set bereq.request = "GET";
>         log "DEBUG: [fetch] Rewriting back to GET";
>
>         # If content is over 10 meg, pipe it
>         if ( beresp.http.Content-Length ~ "[0-9]{8,}" ) {
>             set req.http.X-PIPEME = "1";
>         }
>
>         restart;
>     }
>     ....
> } > > > > Mainly I'm just looking for whether the Varnish devs think that this > would cause something to completely explode and/or melt down or this > is the worst security hole ever. It seems to work ok so far. For reqs > that match 'beresp.http.Content-Length ~ "[0-9]{8,}"', the "SMA bytes > allocated" counter never budges, where it normally does for anything > fetched (memory backend). > > Thanks! Hope someone else can benefit from this too. If someone else > uses this (after thorough testing), be sure to remove the 'log' calls > in production. > Just to update: Works great so far. Prior to this, I was hitting that stevedore.c error on lots of my boxes after a few days of uptime (thanks to customers with gigantic files). Since I rolled this out, most of my boxes' varnishd's now have uptimes from when I deployed this solution across the board about 2 weeks ago. If you try it yourself, watch for loops. From d1+varnish at postinbox.com Tue May 17 17:09:45 2011 From: d1+varnish at postinbox.com (d1+varnish at postinbox.com) Date: Tue, 17 May 2011 10:09:45 -0700 Subject: does Varnish (between nginx frontend and apache backend) need separate instances/listeners for IPv4/6 dual-stack use? Message-ID: <1305652185.12414.1452940769@webmail.messagingengine.com> my current varnish-using web stack, running on 64-bit linux, is, nginx multiple listeners on IPv4:80, IPv4:443 proxypass to varnish-cache on 127.0.0.1:9000 | | varnish-cache listener on 127.0.0.1:9000 filter/pass to 'faux-CDN' on Apache2 'img' -> 127.0.0.1:12003 'css' -> 127.0.0.1:12002 'js' -> 127.0.0.1:12001 '...' -> 127.0.0.1:12000 | | apache2/mod_php,mod_deflate + Pressflow6/memcached(cache_inc/session_inc/lock_inc) listeners/vhosts on 127.0.0.1:1200{0,1,2,3} all works as planned. 
i'm now adding IPv6 listeners on assigned AAAA records @ each nginx server for hybrid, dual-stack IPv4+IPv6 operation,

nginx - multiple listeners on IPv4:80, IPv4:443
     +- multiple listeners on IPv4/IPv6:80, IPv4/IPv6:443

i.e., at nginx.conf

server {
    ...
    listen [::]:80;
    server_name myserver.domain.com;
    (...)
}

my question is -- what to do with Varnish config, and, ultimately, *its* apache backend(s)? is it sufficient to keep one varnish instance listening only on IPv4 to both of nginx's ingress AddressFamilies, passed via a single/common proxypass? or, do i need separate Varnish instances/listeners, one for each AddressFamily -- effectively setting up a parallel path for IPv4 or IPv6 traffic? and, in any case, should Varnish-config then get Apache backends config'd for each IPv4/6 AddressFamily? thanks. From kristian at varnish-software.com Wed May 18 10:08:11 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Wed, 18 May 2011 12:08:11 +0200 Subject: does Varnish (between nginx frontend and apache backend) need separate instances/listeners for IPv4/6 dual-stack use? In-Reply-To: <1305652185.12414.1452940769@webmail.messagingengine.com> References: <1305652185.12414.1452940769@webmail.messagingengine.com> Message-ID: <20110518100811.GC3055@freud> Hi, On Tue, May 17, 2011 at 10:09:45AM -0700, d1+varnish at postinbox.com wrote: > my question is -- what to do with Varnish config, and, ultimately, *its* > apache backend(s)? > > is it sufficient to keep one varnish instance listening only on IPv4 to > both of nginx's ingress AddressFamilies, passed via a single/common > proxypass? > or, do i need separate Varnish instances/listeners, one for each > AddressFamily -- effectively setting up a parallel path for IPv4 or > IPv6 traffic? Varnish will listen to both IPv4 and IPv6 if available. It will also talk to backends using both ipv4 and ipv6.
It currently prefers ipv4 for backend communication if the supplied hostname resolves to both ipv4 and ipv6, which is configurable via the prefer_ipv6 param. Varnish more or less Just Works with IPv6. - Kristian From d1+varnish at postinbox.com Wed May 18 15:13:05 2011 From: d1+varnish at postinbox.com (d1+varnish at postinbox.com) Date: Wed, 18 May 2011 08:13:05 -0700 Subject: does Varnish (between nginx frontend and apache backend) need separate instances/listeners for IPv4/6 dual-stack use? In-Reply-To: <20110518100811.GC3055@freud> References: <1305652185.12414.1452940769@webmail.messagingengine.com> <20110518100811.GC3055@freud> Message-ID: <1305731585.12059.1453333877@webmail.messagingengine.com> Hi On Wed, 18 May 2011 12:08 +0200, "Kristian Lyngstol" wrote: > Varnish will listen to both IPv4 and IPv6 if available. It will also > talk to backends using both ipv4 and ipv6. It currently prefers ipv4 for > backend communication if the supplied hostname resolves to both ipv4 and > ipv6, which is configurable via the prefer_ipv6 param. > > Varnish more or less Just Works with IPv6. I understand that Varnish CAN listen at ipv4/ipv6. I'm still unclear as to what SHOULD be done in this stack. Are you suggesting that this is sufficient? NGINX server { listen @ both IPv4 & IPv6 ... location / { proxypass http://varnish_IPv4_listener ... | | varnish listening ONLY on IPv4 (launched as ... -a :port1 -T :port2 ...) | | drupal/apache backends listening ONLY on IPv4 ? From kristian at varnish-software.com Fri May 20 05:26:46 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Fri, 20 May 2011 07:26:46 +0200 Subject: does Varnish (between nginx frontend and apache backend) need separate instances/listeners for IPv4/6 dual-stack use? 
In-Reply-To: <1305731585.12059.1453333877@webmail.messagingengine.com> References: <1305652185.12414.1452940769@webmail.messagingengine.com> <20110518100811.GC3055@freud> <1305731585.12059.1453333877@webmail.messagingengine.com> Message-ID: <20110520052646.GA27668@luke.kly.no> On Wed, May 18, 2011 at 08:13:05AM -0700, d1+varnish at postinbox.com wrote: > Hi > > On Wed, 18 May 2011 12:08 +0200, "Kristian Lyngstol" > wrote: > > Varnish will listen to both IPv4 and IPv6 if available. It will also > > talk to backends using both ipv4 and ipv6. It currently prefers ipv4 for > > backend communication if the supplied hostname resolves to both ipv4 and > > ipv6, which is configurable via the prefer_ipv6 param. > > > > Varnish more or less Just Works with IPv6. > > I understand that Varnish CAN listen at ipv4/ipv6. I'm still unclear as > to what SHOULD be done in this stack. > > Are you suggesting that this is sufficient? (...) > varnish listening ONLY on IPv4 (launched as ... -a :port1 -T > :port2 ...) No. What you set up is up to you. I don't know why you'd ever want Varnish to only speak IPv4, but I don't know your architecture. What Varnish should and shouldn't do in this regards is up to you, not Varnish, since Varnish is happy either way. - Kristian From webbbr at ohsu.edu Fri May 20 16:47:45 2011 From: webbbr at ohsu.edu (Brendan Webb) Date: Fri, 20 May 2011 09:47:45 -0700 Subject: Consulting on HTTP caching Message-ID: Hey everyone, We're looking for someone to do a bit of phone consulting with us on some high level topics regarding the implementation of an HTTP cache. Currently we're looking at Varnish as well as Squid and Apache Traffic Server. We know very little right now, so we're just looking for some insight from someone who's been through this before. Let me know if you (or someone you know) might be interested. 
Thanks -- Brendan Webb Web Strategies Oregon Health & Science University (503) 418-3346 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruben at varnish-software.com Fri May 20 17:31:43 2011 From: ruben at varnish-software.com (=?ISO-8859-1?Q?Rub=E9n_Romero?=) Date: Fri, 20 May 2011 19:31:43 +0200 Subject: Your own local Varnish Cache 3.0 release party? Message-ID: Hi everyone, As planning of the launch of Varnish 3.0 is on, I thought I would ask you all on the list, and at the same time sneak the thought into your mind, if it is an idea to hold your own <3 (Varnish 3 heart ;) release party/gathering in the middle of June. Chances are that if you are not in Santa Clara, London or Oslo then you and your Varnish loving friends still want to get your hands down and dirty on the code while it's still hot, right? Why not gather then and do exactly that and at the same time hang with other like-minded people on IRC. Maybe even develop your first Varnish Module? ***DISCLAIMER*** This is by no means an official invitation from Varnish Software, but rather an invitation from me, personally, to organize something :-) ***DISCLAIMER*** If there are people out there wanting to do something like this I promise that I'll do my best and send you at least one (1) Varnish 3.0 t-shirt per event with two or more people. Drop me a line if you are interested and we'll coordinate it from there. And remember: No pictures? It didn't happen! Have a great weekend everyone! -- Best regards, -- Rubén Romero Self-appointed <3 Non-Official Release Party Coordinator, Cheerleader and T-shirt giver. e-mail: ruben at varnish-software.com / skype: ruben_varnish P: +47 21 98 92 62 / M: +47 95 96 40 88
In-Reply-To: References: Message-ID: <20110521131229.GM32689@shivaya.guly.org> i'd love to attend such a party but i think i won't find enough people around me to actually call it a party. if anyone in italy wants to share something..drop me a line. and well, for unlucky people like me, you can still open a store and sell some tshirts. sz -- /"\ taste your favourite IT consultant \ / gpg public key http://www.guly.org/guly.asc X / \ From mls at pooteeweet.org Sat May 21 13:18:55 2011 From: mls at pooteeweet.org (Lukas Kahwe Smith) Date: Sat, 21 May 2011 15:18:55 +0200 Subject: Your own local Varnish Cache 3.0 release party? In-Reply-To: References: Message-ID: <3DA5B226-6E7F-4468-89EB-EF8341DACCC1@pooteeweet.org> On 20.05.2011, at 19:31, Rubén Romero wrote: > If there is people out there wanting to do something like this I > promise that I'll do my best and send you at least one (1) Varnish 3.0 > t-shirt per event with two or more people. Drop me a line if you are > interested and we'll coordinate it from there. And remember: No > pictures? It didn't happen! I might be able to organize something like this here in Zurich. But the bigger question is when will Varnish 3.0 stable be released? regards, Lukas Kahwe Smith mls at pooteeweet.org
> >sz >-- > /"\ taste your favourite IT consultant > \ / gpg public key http://www.guly.org/guly.asc > X > / \ > > >_______________________________________________ >varnish-misc mailing list >varnish-misc at varnish-cache.org >http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From ruben at varnish-software.com Sat May 21 14:07:46 2011 From: ruben at varnish-software.com (=?ISO-8859-1?Q?Rub=E9n_Romero?=) Date: Sat, 21 May 2011 16:07:46 +0200 Subject: Your own local Varnish Cache 3.0 release party? In-Reply-To: References: <20110521131229.GM32689@shivaya.guly.org> Message-ID: Hi Sandro, Somebody has to get the ball rolling :-) And I can tell you that I know at least 3 people in Italy that would be down with this. I'll note you down and will make a spreadsheet (Google Docs) available the next few days where we can organize things further. Even probably set up a registration form (Docs as well). And keep you posted. So we have Argentina, Italy and counting... Best wishes, - Rub?n Romero Varnish Software On May 21, 2011 3:13 PM, "Sandro guly Zaccarini" wrote: i'd love to attend to such party but i think i won't find enough people around me to actualy can call it party. if anyone in italy wants to share something..drop me a line. and well, for unlucky people like me, you can still open a store and sell some tshirt. sz -- /"\ taste your favourite IT consultant \ / gpg public key http://www.guly.org/guly.asc X / \ _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cac... -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruben at varnish-software.com Sat May 21 14:10:00 2011 From: ruben at varnish-software.com (=?ISO-8859-1?Q?Rub=E9n_Romero?=) Date: Sat, 21 May 2011 16:10:00 +0200 Subject: Your own local Varnish Cache 3.0 release party? 
In-Reply-To: <3DA5B226-6E7F-4468-89EB-EF8341DACCC1@pooteeweet.org> References: <3DA5B226-6E7F-4468-89EB-EF8341DACCC1@pooteeweet.org> Message-ID: Hi Lukas, Now we have Switzerland as well :) The answer to that question (release) would be the middle of June. Best regards, - Rubén Romero Varnish Software On May 21, 2011 3:18 PM, "Lukas Kahwe Smith" wrote: On 20.05.2011, at 19:31, Rubén Romero wrote: > f there is people out there wanting to do something... I might be able to organize something like this here in Zurich. But the bigger question is when will Varnish 3.0 stable be released? regards, Lukas Kahwe Smith mls at pooteeweet.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruben at varnish-software.com Sat May 21 14:35:38 2011 From: ruben at varnish-software.com (=?ISO-8859-1?Q?Rub=E9n_Romero?=) Date: Sat, 21 May 2011 16:35:38 +0200 Subject: Your own local Varnish Cache 3.0 release party? In-Reply-To: References: <20110521131229.GM32689@shivaya.guly.org> Message-ID: Bon journo again, There is some Varnish gear on CafePress if you want to keep your tea warm, your coffee in a cup or a t-shirt. See it on the 'Varnish Shop' at http://www.cafepress.com/varnish Check also the Community menu on http://www.varnish-cache.org But, you cannot buy the <3 t-shirt there ;-) Where in Italy are you? It is a vast and extensive country after all... Have a nice day! Regards, - Rubén Romero Varnish Software On May 21, 2011 3:13 PM, "Sandro guly Zaccarini" wrote: i'd love to attend to such party but i think i won't find enough people around me to actualy can call it party. if anyone in italy wants to share something..drop me a line. and well, for unlucky people like me, you can still open a store and sell some tshirt. sz -- /"\ taste your favourite IT consultant \ / gpg public key http://www.guly.org/guly.asc X / \ _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cac...
-------------- next part -------------- An HTML attachment was scrubbed... URL: From d1+varnish at postinbox.com Sat May 21 14:39:16 2011 From: d1+varnish at postinbox.com (d1+varnish at postinbox.com) Date: Sat, 21 May 2011 07:39:16 -0700 Subject: testing v3b, getting error @ launch "Expected ';' ...". syntax? Message-ID: <1305988756.25726.1454444245@webmail.messagingengine.com> i've installed to test,

varnishd -V
varnishd (varnish-3.0.0-beta1 revision 5e2c77b)
Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS

at varnish launch,

Message from VCC-compiler:
Expected ';' got '", "'
(program line 174), at
('/etc/varnish/vcl.svr1.conf' Line 90 Pos 42)
        req.http.X-forwarded-For ", "
-----------------------------------------####
Running VCC-compiler failed, exit 1
VCL compilation failed

where, in /etc/varnish/vcl.svr1.conf (around line 90),

if (req.http.X-Forwarded-For) {
    set req.http.X-forwarded-For =
        req.http.X-forwarded-For ", "
        regsub(client.ip, ":.*", "");
}

as the stanza's fine in v2.1.5x, reading v3's doc/changes.rst, i suspect i'm missing/misunderstanding a syntax change. is this the relevant stmt, - Change ``req.hash += value`` to ``hash_data(value)`` &/or what's the issue with the stanza above? thx. From d1+varnish at postinbox.com Sat May 21 15:32:39 2011 From: d1+varnish at postinbox.com (d1+varnish at postinbox.com) Date: Sat, 21 May 2011 08:32:39 -0700 Subject: testing v3b, getting error @ launch "Expected ';' ...". syntax? In-Reply-To: <1305988756.25726.1454444245@webmail.messagingengine.com> References: <1305988756.25726.1454444245@webmail.messagingengine.com> Message-ID: <1305991959.7830.1454455485@webmail.messagingengine.com> looking in the wrong place :-/ found the changes required in the new default.vcl. nm, thx! From guly at luv.guly.org Sat May 21 18:26:10 2011 From: guly at luv.guly.org (Sandro guly Zaccarini) Date: Sat, 21 May 2011 20:26:10 +0200 Subject: Your own local Varnish Cache 3.0 release party?
In-Reply-To: References: <20110521131229.GM32689@shivaya.guly.org> Message-ID: <20110521182610.GB11613@shivaya.guly.org> On Sat, May 21, 2011 at 04:35:38PM +0200, Rubén Romero wrote: > Bon journo again, > > Where in Italy are you? It is a vast and extensive country after all... > i'm from modena, where people used to make cars fly ;) so we're at least three from italy, cool. maybe we can manage to have something done. sz -- /"\ taste your favourite IT consultant \ / gpg public key http://www.guly.org/guly.asc X / \ From sime at sime.net.au Mon May 23 04:19:34 2011 From: sime at sime.net.au (Simon Males) Date: Mon, 23 May 2011 14:19:34 +1000 Subject: Understanding the caching of Vary Message-ID: Hello, I've discovered mid migration to gzip (from no gzip) that assets were not being cached across different clients. Following the friendly Vary[1] tutorial I managed to resolve that. But does this only cache gzip requesting clients. e.g.
If non gzip > requesting client such as curl made a request, that object would be > cached separately? > Those objects would be cached separately in Varnish 2.1. Varnish 3.0, however, will only store one compressed version and decompress whenever needed. -- Per Buer, CEO Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer *Varnish makes websites fly!* Whitepapers | Video | Twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Mon May 23 06:57:37 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 23 May 2011 06:57:37 +0000 Subject: Understanding the caching of Vary In-Reply-To: Your message of "Mon, 23 May 2011 14:19:34 +1000." Message-ID: <11315.1306133857@critter.freebsd.dk> In message , Simon Males writes: >Hello, > >I've discovered mid migration to gzip (from no gzip) that assets where >not being cached across different clients. > >Following the friendly Vary[1] tutorial I managed to resolve that. > >But does this only cache gzip requesting clients. e.g. If non gzip >requesting client such as curl made a request, that object would be >cached separately? No, curl should get the cached gzipped copy, but varnish will gunzip it during delivery. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From sime at sime.net.au Mon May 23 08:04:23 2011 From: sime at sime.net.au (Simon Males) Date: Mon, 23 May 2011 18:04:23 +1000 Subject: Understanding the caching of Vary In-Reply-To: <11315.1306133857@critter.freebsd.dk> References: <11315.1306133857@critter.freebsd.dk> Message-ID: >>But does this only cache gzip requesting clients. e.g. If non gzip >>requesting client such as curl made a request, that object would be >>cached separately? > > No, curl should get the cached gzipped copy, but varnish will gunzip > it during delivery.
Not in this example. Load the following page in Firefox (e.g.): http://www.varnish-cache.org/themes/bluemarine/style.css I get the X-Varnish identifier as 120088317 Hitting the same URL with curl I get : 120088247 I guess this example is invalid? -- Simon Males From phk at phk.freebsd.dk Mon May 23 08:15:24 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 23 May 2011 08:15:24 +0000 Subject: Understanding the caching of Vary In-Reply-To: Your message of "Mon, 23 May 2011 18:04:23 +1000." Message-ID: <94164.1306138524@critter.freebsd.dk> In message , Simon Males writes: >Not in this example. > >Load the following page in Firefox (e.g.): > >http://www.varnish-cache.org/themes/bluemarine/style.css > >I get the X-Varnish identifier as 120088317 > >Hitting the same URL with curl I get : 120088247 > >I guess this example is invalid? Use varnishlog to see what goes on. Very likely, your browser sends other headers (cookies ?) which makes the two requests not hit the same object. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From alteriks at gmail.com Mon May 23 15:18:54 2011 From: alteriks at gmail.com (Krzysztof Dajka) Date: Mon, 23 May 2011 17:18:54 +0200 Subject: Cache'ing request despite PASS Message-ID: Hi, I'd like to know whether somebody tried to write a vcl that would serve always fresh object from backend (for example PASSed) but despite this request would be stripped from cookies and cached. This cached object would be only served if whole director went sick. Is this possible to write such vcl or it would require changes in varnish code? If this idea is insane, I'd be happy to hear somebody shouting 'it's totally crazy!' 
;) From l at lrowe.co.uk Mon May 23 19:52:46 2011 From: l at lrowe.co.uk (Laurence Rowe) Date: Mon, 23 May 2011 20:52:46 +0100 Subject: Cache'ing request despite PASS In-Reply-To: References: Message-ID: On 23 May 2011 16:18, Krzysztof Dajka wrote: > Hi, > I'd like to know whether somebody tried to write a vcl that would > serve always fresh object from backend (for example PASSed) but > despite this request would be stripped from cookies and cached. This > cached object would be only served if whole director went sick. Is > this possible to write such vcl or it would require changes in varnish > code? > > If this idea is insane, I'd be happy to hear somebody shouting 'it's > totally crazy!' ;) I believe this is one of the problems that req.grace solves. See http://www.varnish-cache.org/trac/wiki/VCLExampleGrace Laurence From alteriks at gmail.com Mon May 23 21:16:33 2011 From: alteriks at gmail.com (Krzysztof Dajka) Date: Mon, 23 May 2011 23:16:33 +0200 Subject: Cache'ing request despite PASS In-Reply-To: References: Message-ID: From my observations, grace only works when the object is already cached in varnish; it doesn't work unless the object has been HIT. In case when all backends are sick, grace just keeps the object in memory even if expires/ttl has run out. The object is STALE for the amount of time set in grace. I'm already using grace for static objects on my websites, but I'd like to cache htmls which currently are passed to cms backends. I'd like to serve stale objects only in case of disaster in my cms. Has anyone created an acl based, for example, on googlebot IP addresses and cache htmls which are crawled by bots and serve them only in case the whole cms director went down? I think it sounds nice but I haven't tried that yet.
2011/5/23 Laurence Rowe : > On 23 May 2011 16:18, Krzysztof Dajka wrote: >> Hi, >> I'd like to know whether somebody tried to write a vcl that would >> serve always fresh object from backend (for example PASSed) but >> despite this request would be stripped from cookies and cached. This >> cached object would be only served if whole director went sick. Is >> this possible to write such vcl or it would require changes in varnish >> code? >> >> If this idea is insane, I'd be happy to hear somebody shouting 'it's >> totally crazy!' ;) > > I believe this is one of the problems that req.grace solves. See > http://www.varnish-cache.org/trac/wiki/VCLExampleGrace > > Laurence > From mls at pooteeweet.org Mon May 23 21:21:00 2011 From: mls at pooteeweet.org (Lukas Kahwe Smith) Date: Mon, 23 May 2011 23:21:00 +0200 Subject: Cache'ing request despite PASS In-Reply-To: References: Message-ID: On 23.05.2011, at 23:16, Krzysztof Dajka wrote: > From my observations, grace only works when object is already cached > in varnish, it doesn't work unless object has been HIT. In case when > all backends are sick grace just keeps object in memory even if > expires/ttl has ran out. Object is STALE for amount of time set in > grace. > > I'm already using grace for static objects on my websites, but I'd > like to cache htmls which currently are passed to cms backends. I'd > like to serve stale objects only in case of disaster in my cms. Has > anyone created acl based for example on googlebot IP addresses and > cache htmls which are crawled by bots and serve them only in case > whole cms director went down? I think it sounds nice but I haven't > tried that yet. so what you want is saint mode? 
regards, Lukas Kahwe Smith mls at pooteeweet.org From twiztar at gmail.com Mon May 23 21:56:05 2011 From: twiztar at gmail.com (Erik Weber) Date: Mon, 23 May 2011 23:56:05 +0200 Subject: Cache'ing request despite PASS In-Reply-To: References: Message-ID: On Mon, May 23, 2011 at 5:18 PM, Krzysztof Dajka wrote: > Hi, > I'd like to know whether somebody tried to write a vcl that would > serve always fresh object from backend (for example PASSed) but > despite this request would be stripped from cookies and cached. This > cached object would be only served if whole director went sick. Is > this possible to write such vcl or it would require changes in varnish > code? > > If this idea is insane, I'd be happy to hear somebody shouting 'it's > totally crazy!' ;) I might be overlooking something or be mistaken, but can't you just set the ttl to zero and do normal lookups? Thus if the backend is down you should manage to hit your grace/saint mode and deliver cached content. -- Erik From alteriks at gmail.com Mon May 23 22:04:04 2011 From: alteriks at gmail.com (Krzysztof Dajka) Date: Tue, 24 May 2011 00:04:04 +0200 Subject: Cache'ing request despite PASS In-Reply-To: References: Message-ID: Well, saintmode is another way to tell if your director is sick. If you use it like this (taken from wiki):

sub vcl_fetch {
    if (beresp.status == 500) {
        set beresp.saintmode = 20s;
        restart;
    }
    set beresp.grace = 1h;
}

Varnish will try getting the object from a healthy backend; if it fails - i.e. all backends returned a 500 status code - it won't try getting a fresh object for the next 20 seconds and will respond with the stale one. We also have a varnishd parameter - saintmode_threshold - by default set to 10. If I remember correctly, after trying to fetch 10 different objects, varnish will mark the whole director as sick even before the probes would say so.
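[Editor's note: putting the suggestions in this thread together - Erik's near-zero TTL so every request revalidates, plus grace served only while the backend looks sick - a sketch might look like the following. This is untested Varnish 2.1-era VCL; the intervals, the blanket cookie stripping, and the 500-only saint-mode trigger are illustrative assumptions, not something anyone in the thread confirmed.]

```vcl
# Sketch only: near-zero TTL + long grace, so stale copies are
# served only while the backends are sick (intervals are made up).
sub vcl_recv {
    if (req.backend.healthy) {
        # Healthy backend: accept only very slightly stale objects.
        set req.grace = 5s;
    } else {
        # All backends down: accept objects up to 6h past their TTL.
        set req.grace = 6h;
    }
    # Strip cookies so the HTML is cacheable at all (far too broad for
    # most real sites; shown only to make the lookup possible).
    remove req.http.Cookie;
}

sub vcl_fetch {
    if (beresp.status == 500) {
        # Stop asking this backend for this object for a while.
        set beresp.saintmode = 20s;
        restart;
    }
    # Expire almost immediately, but keep the object around for grace.
    set beresp.ttl = 1s;
    set beresp.grace = 6h;
}
```

With a 1s TTL almost every request goes to the backend, as Erik suggested, while the 6h grace window gives Varnish something stale to serve if the whole director goes sick.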
2011/5/23 Lukas Kahwe Smith :
>
> On 23.05.2011, at 23:16, Krzysztof Dajka wrote:
>
>> From my observations, grace only works when the object is already cached
>> in varnish; it doesn't work unless the object has been HIT. In case
>> all backends are sick, grace just keeps the object in memory even if
>> expires/ttl has run out. The object is STALE for the amount of time set
>> in grace.
>>
>> I'm already using grace for static objects on my websites, but I'd
>> like to cache htmls which are currently passed to cms backends. I'd
>> like to serve stale objects only in case of disaster in my cms. Has
>> anyone created an acl, based for example on googlebot IP addresses, to
>> cache htmls which are crawled by bots and serve them only in case
>> the whole cms director went down? I think it sounds nice but I haven't
>> tried that yet.
>
>
> so what you want is saint mode?
>
> regards,
> Lukas Kahwe Smith
> mls at pooteeweet.org
>

From alteriks at gmail.com Mon May 23 22:06:22 2011
From: alteriks at gmail.com (Krzysztof Dajka)
Date: Tue, 24 May 2011 00:06:22 +0200
Subject: Cache'ing request despite PASS
In-Reply-To:
References:
Message-ID:

2011/5/23 Erik Weber :
> I might be overlooking something or be mistaken, but can't you just
> set the ttl to zero and do normal lookups? Thus if the backend is down
> you should manage to hit your grace/saint mode and deliver cached
> content.
>
> --
> Erik

I haven't tried that; maybe that's the cure. I'll try it tomorrow.

From mhettwer at team.mobile.de Tue May 24 09:12:54 2011
From: mhettwer at team.mobile.de (Hettwer, Marian)
Date: Tue, 24 May 2011 10:12:54 +0100
Subject: varnish 2.1.5 memory and swap
Message-ID:

Hi List,

I'm running varnish successfully in front of a high-traffic website.
However, once in a while (every few weeks), my Nagios notifies me that one
of my varnish machines is about to run out of swap.

I can't get my head around Linux and swap usage.
So here is what I have:

A varnishd with 6GB malloc.
root at kvarnish46-1:~ # ps ax | grep varnish
21692 ?      Ss     1:07 /usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -f /etc/varnish/kvarnish.vcl -T 127.0.0.1:6082 -t 120 -h critbit -p thread_pools 4 -p thread_pool_min 100 -p thread_pool_max 5000 -p thread_pool_add_delay 2 -p session_linger 120 -p connect_timeout 4 -S /etc/varnish/secret -s malloc,6G
21693 ?      Sl  1709:15 /usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -f /etc/varnish/kvarnish.vcl -T 127.0.0.1:6082 -t 120 -h critbit -p thread_pools 4 -p thread_pool_min 100 -p thread_pool_max 5000 -p thread_pool_add_delay 2 -p session_linger 120 -p connect_timeout 4 -S /etc/varnish/secret -s malloc,6G
27134 pts/0  S+     0:00 grep --color=auto varnish

On a machine with 8GB RAM and 1GB swap.

root at kvarnish46-1:~ # free
             total       used       free     shared    buffers     cached
Mem:       8194928    8007964     186964          0      99988     144468
-/+ buffers/cache:    7763508     431420
Swap:      1052636     950848     101788

Nothing else is running, apart from system cronjobs and sshd.
The machine is dedicated to running varnish.

uname -a:
Linux kvarnish46-1 2.6.35-mobile.de.lenny #1 SMP Tue Aug 17 17:57:04 CEST 2010 x86_64 GNU/Linux

It's a Debian 5.0.8.

Any hints, or even better an explanation of what is going on here?
I thought about changing the vm.swappiness parameter, but I'm not sure
whether this would do the trick. Or better: what it actually would change.

My best theory is that varnish is using virtual memory and that the
memory management of Linux is kinda stubbornly putting the pages into
swap, because varnish hasn't asked for them in quite a while.
I really don't like my own theory... Yak!
Any help appreciated and thanks in advance,
Marian

From mhettwer at team.mobile.de Tue May 24 09:39:54 2011
From: mhettwer at team.mobile.de (Hettwer, Marian)
Date: Tue, 24 May 2011 10:39:54 +0100
Subject: varnish 2.1.5 memory and swap
In-Reply-To:
Message-ID:

Reply to myself

Although varnishd runs with malloc 6G, it seems that it's using much more
memory:

  PID USER     PR  NI  VIRT  RES  SHR S %CPU COMMAND
21693 nobody   20   0 12.5g 7.2g  80m S    0 varnishd

Huh? Resident at 7.2GB and Virtual at 12.5GB. Why?

./Marian

On 24.05.11 11:12, "Hettwer, Marian" wrote:

>Hi List,
>
>I'm running varnish successfully in front of a high traffic website.
>However, once in a while (every few weeks), my Nagios notifies me that one
>of my varnish machines is about to run out of swap.
>
>
>I can't get around Linux and swap usage.
>So here is what I have:
>
>A varnishd with 6GB malloc.
>root at kvarnish46-1:~ # ps ax | grep varnish
>21692 ? Ss 1:07 /usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -f
>/etc/varnish/kvarnish.vcl -T 127.0.0.1:6082 -t 120 -h critbit -p
>thread_pools 4 -p thread_pool_min 100 -p thread_pool_max 5000 -p
>thread_pool_add_delay 2 -p session_linger 120 -p connect_timeout 4 -S
>/etc/varnish/secret -s malloc,6G
>21693 ? Sl 1709:15 /usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -f
>/etc/varnish/kvarnish.vcl -T 127.0.0.1:6082 -t 120 -h critbit -p
>thread_pools 4 -p thread_pool_min 100 -p thread_pool_max 5000 -p
>thread_pool_add_delay 2 -p session_linger 120 -p connect_timeout 4 -S
>/etc/varnish/secret -s malloc,6G
>27134 pts/0 S+ 0:00 grep --color=auto varnish
>
>
>
>On a machine with 8GB RAM and 1 GB swap.
>
>root at kvarnish46-1:~ # free
> total used free shared buffers cached
>Mem: 8194928 8007964 186964 0 99988 144468
>-/+ buffers/cache: 7763508 431420
>Swap: 1052636 950848 101788
>
>There is not running anything else. Apart from system cronjobs and sshd.
>
>The machine is exclusive for running varnish.
> >Uname -a >Linux kvarnish46-1 2.6.35-mobile.de.lenny #1 SMP Tue Aug 17 17:57:04 CEST >2010 x86_64 GNU/Linux > >It's a Debian 5.0.8. > >Any hints or even better explanation what is going on here? >I thought about chaning the vm.swapiness parameter. But I'm not sure >whether this would do the trick. Or better: What it actually would change. > >My best theory is, that varnish is using virtual memory and that the >memory management of linux is kinda stubbornly putting the pages into >swap, because varnish hasn't ask for them quite a while. >I really don't like my own theory... Yak! > >Any help appreciated and thanks in advance, >Marian > > >_______________________________________________ >varnish-misc mailing list >varnish-misc at varnish-cache.org >http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From stig at zedge.net Tue May 24 10:03:14 2011 From: stig at zedge.net (Stig Bakken) Date: Tue, 24 May 2011 12:03:14 +0200 Subject: varnish 2.1.5 memory and swap In-Reply-To: References: Message-ID: Running with "malloc,6G" only means that you are giving Varnish a 6GB budget for caching. It needs memory for other things as well, such as thread stack space, temporary space for buffering objects (Varnish will buffer stuff as long as you do not pipe), and so on. - Stig On Tue, May 24, 2011 at 11:39 AM, Hettwer, Marian wrote: > Reply to myself > > Although varnishd runs with malloc 6G, it seems that it's using much more > memory: > PID USER PR NI VIRT RES SHR S %CPU COMMAND > > > > 21693 nobody 20 0 12.5g 7.2g 80m S 0 varnishd > > > > > > Hu? Resistent at 7,2GB and Virtual at 12,5GB. Why? > > ./Marian > > > > On 24.05.11 11:12, "Hettwer, Marian" wrote: > > >Hi List, > > > >I'm running varnish successfully in front of a high traffic website. > >However, once in a while (every few weeks), my Nagios notifies me that one > >of my varnish machines is about to run out of swap. > > > > > >I can't get around Linux and swap usage. 
> >So here is what I have: > > > >A varnishd with 6GB malloc. > >root at kvarnish46-1:~ # ps ax | grep varnish > >21692 ? Ss 1:07 /usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -f > >/etc/varnish/kvarnish.vcl -T 127.0.0.1:6082 -t 120 -h critbit -p > >thread_pools 4 -p thread_pool_min 100 -p thread_pool_max 5000 -p > >thread_pool_add_delay 2 -p session_linger 120 -p connect_timeout 4 -S > >/etc/varnish/secret -s malloc,6G > >21693 ? Sl 1709:15 /usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -f > >/etc/varnish/kvarnish.vcl -T 127.0.0.1:6082 -t 120 -h critbit -p > >thread_pools 4 -p thread_pool_min 100 -p thread_pool_max 5000 -p > >thread_pool_add_delay 2 -p session_linger 120 -p connect_timeout 4 -S > >/etc/varnish/secret -s malloc,6G > >27134 pts/0 S+ 0:00 grep --color=auto varnish > > > > > > > >On a machine with 8GB RAM and 1 GB swap. > > > >root at kvarnish46-1:~ # free > > total used free shared buffers cached > >Mem: 8194928 8007964 186964 0 99988 144468 > >-/+ buffers/cache: 7763508 431420 > >Swap: 1052636 950848 101788 > > > >There is not running anything else. Apart from system cronjobs and sshd. > > > >The machine is exclusive for running varnish. > > > >Uname -a > >Linux kvarnish46-1 2.6.35-mobile.de.lenny #1 SMP Tue Aug 17 17:57:04 CEST > >2010 x86_64 GNU/Linux > > > >It's a Debian 5.0.8. > > > >Any hints or even better explanation what is going on here? > >I thought about chaning the vm.swapiness parameter. But I'm not sure > >whether this would do the trick. Or better: What it actually would change. > > > >My best theory is, that varnish is using virtual memory and that the > >memory management of linux is kinda stubbornly putting the pages into > >swap, because varnish hasn't ask for them quite a while. > >I really don't like my own theory... Yak! 
> > > >Any help appreciated and thanks in advance, > >Marian > > > > > >_______________________________________________ > >varnish-misc mailing list > >varnish-misc at varnish-cache.org > >http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Stig Bakken CTO, Zedge.net - free your phone! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ppyy at juyide.com Tue May 24 10:22:16 2011 From: ppyy at juyide.com (=?UTF-8?B?5b2t5YuH?=) Date: Tue, 24 May 2011 18:22:16 +0800 Subject: dns director Message-ID: thanks for your great varnish and dns director. i test dns director and plan to use it when i have many backend. it's ok when i have backend server which IP is in ".list". but i have more than 200 backends which are not in same address block, should i add them to ".list" ? then if i add more backends, i should to modify vcl. -- Peng Yong From guly at luv.guly.org Tue May 24 16:04:10 2011 From: guly at luv.guly.org (Sandro guly Zaccarini) Date: Tue, 24 May 2011 18:04:10 +0200 Subject: varnish 3.0betatesting Message-ID: <20110524160410.GU4221@shivaya.guly.org> paragraph "Removing all BUT some cookies" from http://www.varnish-cache.org/trac/wiki/VCLExampleRemovingSomeCookies gives an example invalid for 3.0, expliciting concatenation works. 
The wiki text

  Change req.hash += value to hash_data(value) to be more clear

should be:

  Change set req.hash += value to hash_data(value)

Actually, 3.0beta seems slower than 2.1.5 -- much slower. I'll run more
tests at low-traffic time.

sz
--
/"\ taste your favourite IT consultant
\ / gpg public key http://www.guly.org/guly.asc
X
/ \

From andrea.campi at zephirworks.com Tue May 24 17:02:54 2011
From: andrea.campi at zephirworks.com (Andrea Campi)
Date: Tue, 24 May 2011 19:02:54 +0200
Subject: Your own local Varnish Cache 3.0 release party?
In-Reply-To: <20110521182610.GB11613@shivaya.guly.org>
References: <20110521131229.GM32689@shivaya.guly.org> <20110521182610.GB11613@shivaya.guly.org>
Message-ID: <35BCDED1-60C7-4602-AA5F-8D01BC9BA807@zephirworks.com>

We are going to host a Varnish release party in Milano, Italy; beers
will be on us, we'll see what we can do about swag.
People from neighboring (or distant) countries are welcome too, of course.

Please drop me an email if you're interested and I'll keep you posted on
the details. Logistics are not finalized yet, but the location will be
close to the central station for those who'll be traveling.

On May 21, 2011, at 8:26 PM, Sandro guly Zaccarini wrote:

> On Sat, May 21, 2011 at 04:35:38PM +0200, Rubén Romero wrote:
>> Bon journo again,
>>
>> Where in Italy are you? It is a vast and extensive country after all...
>>
>
> i'm from modena, where people used to make cars fly ;)
>
> so we're at least three from italy, cool. maybe we can manage to have
> something done.
> > sz > -- > /"\ taste your favourite IT consultant > \ / gpg public key http://www.guly.org/guly.asc > X > / \ > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From perbu at varnish-software.com Wed May 25 07:13:00 2011 From: perbu at varnish-software.com (Per Buer) Date: Wed, 25 May 2011 09:13:00 +0200 Subject: varnish 2.1.5 memory and swap In-Reply-To: References: Message-ID: On Tue, May 24, 2011 at 11:39 AM, Hettwer, Marian wrote: > Reply to myself > > Although varnishd runs with malloc 6G, it seems that it's using much more > memory: > PID USER PR NI VIRT RES SHR S %CPU COMMAND > > > > 21693 nobody 20 0 12.5g 7.2g 80m S 0 varnishd > > > > > > Hu? Resistent at 7,2GB and Virtual at 12,5GB. Why? > Virtual memory is, uhm, virtual. There is nothing tangible about it, so you shouldn't really pay any attention to it. Just spawning a thread will take up a lot of virtual memory without hardly using any physical memory. Stig has already answered your other question, I see. -- Per Buer, CEO Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer *Varnish makes websites fly!* Whitepapers | Video | Twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhettwer at team.mobile.de Wed May 25 09:05:34 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Wed, 25 May 2011 10:05:34 +0100 Subject: varnish 2.1.5 memory and swap In-Reply-To: Message-ID: Hi Per and Stig, On 25.05.11 09:13, "Per Buer" wrote: >On Tue, May 24, 2011 at 11:39 AM, Hettwer, Marian > wrote: > > >Reply to myself > >Although varnishd runs with malloc 6G, it seems that it's using much more >memory: > PID USER PR NI VIRT RES SHR S %CPU COMMAND > > > >21693 nobody 20 0 12.5g 7.2g 80m S 0 varnishd > > > > > >Hu? Resistent at 7,2GB and Virtual at 12,5GB. Why? > > > >Virtual memory is, uhm, virtual. 
There is nothing tangible about it, so
>you shouldn't really pay any attention to it. Just spawning a thread
>will take up a lot of virtual memory while hardly using any physical
>memory. Stig has already answered your other question, I see.
>

Okay, I'll just ignore Virtual and have a look at Resident.
If I understood Stig correctly, this behaviour of varnish is expected:
using 7.2GB RES, although malloc 6G was configured.
I can live with that :)

Still, it's kinda odd that Linux starts swapping out stuff although RAM
would have been sufficient. The machine has 8 gigs of RAM and varnishd is
using 7.2 gigs.
Linux decided to use nearly 1GB of swap.
I hope that I can stop the kernel doing this by adding vm.swappiness=10...
Let's see. If that doesn't do the trick, it seems that I have to lower the
malloc 6G. Which I actually could happily do. I don't see any LRU_nuked
objects.

My vcl is configured to cache only specific urls. Every other url gets
PASSed. Come to think of it, it looks like pipe would be better here?
sub vcl_recv {
    # always use this backend
    set req.backend = febayk46;

    # normalize accept-encoding header
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
            # No point in compressing these
            remove req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate" && req.http.user-agent !~ "MSIE") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # unknown algorithm
            remove req.http.Accept-Encoding;
        }
    }

    if (req.url ~ "^/anzeigen/s-beliebte-angebote.html.*$" ||
        req.url ~ "^/anzeigen/s-suchbegriff-empfehlungen.html.*$" ||
        req.url ~ "^/anzeigen/sitemap_.*$") {
        # always cache urls above
        unset req.http.Accept-Encoding;
        return(lookup);
    }

    # static content caching
    if (req.url ~ "^/static/.*$") {
        return(lookup);
    }
    if (req.url ~ "^/REL-.*\.[\d]+/") {
        return(lookup);
    }

    # don't cache the rest
    return(pass);
}

sub vcl_fetch {
    # backend behaves stupidly and sets a cookie on sitemap*.xml;
    # we remove this here
    if (req.url ~ "^/anzeigen/s-beliebte-angebote.html.*$" ||
        req.url ~ "^/anzeigen/s-suchbegriff-empfehlungen.html.*$" ||
        req.url ~ "^/anzeigen/sitemap_.*$") {
        unset beresp.http.Set-Cookie;
    }
}

Thanks for your answers, Stig and Per :)
./Marian

From guly at luv.guly.org Wed May 25 10:09:53 2011
From: guly at luv.guly.org (Sandro guly Zaccarini)
Date: Wed, 25 May 2011 12:09:53 +0200
Subject: Your own local Varnish Cache 3.0 release party?
In-Reply-To: <35BCDED1-60C7-4602-AA5F-8D01BC9BA807@zephirworks.com>
References: <20110521131229.GM32689@shivaya.guly.org> <20110521182610.GB11613@shivaya.guly.org> <35BCDED1-60C7-4602-AA5F-8D01BC9BA807@zephirworks.com>
Message-ID: <20110525100953.GB18@shivaya.guly.org>

On Tue, May 24, 2011 at 07:02:54PM +0200, Andrea Campi wrote:
> We are going to host a Varnish release party in Milano, Italy; beers will be on us, we'll see what we can do about swag.
> People from neighboring (or distant) countries are welcome too of course. > > Please drop me an email if you're interested and I'll keep you posted on the details. Logistics are not finalized yet but the location will be close to the central station for those who'll be traveling. count me in sz -- /"\ taste your favourite IT consultant \ / gpg public key http://www.guly.org/guly.asc X / \ From ruben at varnish-software.com Wed May 25 12:19:21 2011 From: ruben at varnish-software.com (=?ISO-8859-1?Q?Rub=E9n_Romero?=) Date: Wed, 25 May 2011 14:19:21 +0200 Subject: Your own local Varnish Cache 3.0 release party? In-Reply-To: References: Message-ID: Hello again, 2011/5/20 Rub?n Romero : > Hi everyone, > > As planning of the launch of Varnish 3.0 is on I thought I would ask > you all in the list, and at the same sneak the thought in your mind, > if it is an idea to hold your own <3 (Varnish 3 heart ;) release > party/gathering in the middle of June. It seems that there is a lot of interest on runing these parties :) I have added a couple of pages in the wiki: * Current Party Overview: http://www.varnish-cache.org/trac/wiki/Varnish3_Release_Parties * Organizing a release party guide: http://www.varnish-cache.org/trac/wiki/Running_Release_Party Feel free to add suggestions or your own party to these pages. You can also post here them to this thread and I will keep on adding them to the overview/guide as they tick in! :) What do you think about using a GoogleDocs form and document for coordination and registration parallel to the wiki pages? -- Best wishes, -- Rub?n Romero Self-appointed <3 Non-Official Release Party Coordinator, Cheerleader and T-shirt giver. e-mail ruben at varnish-software.com?/?skype: ruben_varnish P: +47 21 98 92?62 /?M: +47 95 96 40 88 From geoff at uplex.de Wed May 25 12:59:30 2011 From: geoff at uplex.de (Geoff Simmons) Date: Wed, 25 May 2011 14:59:30 +0200 Subject: Your own local Varnish Cache 3.0 release party? 
In-Reply-To: References: Message-ID: <4DDCFD32.7050203@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 05/20/11 07:31 PM, Rub?n Romero wrote: > > As planning of the launch of Varnish 3.0 is on I thought I would ask > you all in the list, and at the same sneak the thought in your mind, > if it is an idea to hold your own <3 (Varnish 3 heart ;) release > party/gathering in the middle of June. Our entire company in Hamburg (meaning: the two of us) would be happy to join in the festivities. Since we meet the criteria for participation, I guess that makes it offical. %^) Best, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Schwanenwik 24 22087 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (SunOS) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJN3P0yAAoJEOUwvh9pJNURub4QAILdv7qyq+BwclU0iqa+4Vqz gB8fC3p5ULQebULqqW5qBOKLDyMgTXeQu2afZrglCBjp1PivfAKgh9bo4GRCsKKe PDSYBBLhXsmLUSAr+U51PqmQ5efvC0Wv06Y4RZ6gR0Nox8IcQLhQmtTvSZvQEj65 MGolM3NzegJ4Rd3y9pJsVWbu/Pl7xBCGSeus7l38Dtnk6nxVe3V2r8MXzVW7eSuC yfUvtUHsU4h/bG4cXkFFy1EGy6yFnKivSpZpbKaQPZrhwqlilCGTp8zd5Qu/jKTk cJo6d0/Sorq/8sNAvPr0HK7EmJT1P7XTDdwR1nyyX2aLw2Vf8VNMPFGDQ0AstaXq F1TKoWf+eWkAed9llDEHDyoeC764zuukIVWDofD907yXk1qAiGnCXhUawZNOXtOS U+48XtBb5o7g85kTN1eqTrM8fFPoZawo4kMc7tPiCSMNUtlpnVks03y0xzoa2B26 SfzuGN4BfkzfE+rKXJzBilS4+ZQWM7xny/OHmfdMWvoGyPatPCyDHkychZOnXE5D 9DBuHz0sLkniaddq468tapijKtkfy6gRmSszLAnq6AxQdq3V/YEsMoQcxL/r2yVY yCvVuLTlQQjb99xGvGdUn7NHZMjqc1V3mf3SpUcSrSyR1SGw/wQwMv921EmqWLXL qIrhcuhuKYMqawPrgIYz =D+nA -----END PGP SIGNATURE----- From david.birdsong at gmail.com Wed May 25 20:04:11 2011 From: david.birdsong at gmail.com (David Birdsong) Date: Wed, 25 May 2011 13:04:11 -0700 Subject: varnish 2.1.5 memory and swap In-Reply-To: References: Message-ID: On Wed, May 25, 2011 at 2:05 AM, Hettwer, Marian wrote: > Hi Per and Stig, > > > > On 25.05.11 09:13, "Per Buer" wrote: > 
>>On Tue, May 24, 2011 at 11:39 AM, Hettwer, Marian >> wrote: >> >> >>Reply to myself >> >>Although varnishd runs with malloc 6G, it seems that it's using much more >>memory: >> PID USER ? ? ?PR ?NI ?VIRT ?RES ?SHR S %CPU COMMAND >> >> >> >>21693 nobody ? ?20 ? 0 12.5g 7.2g ?80m S ? ?0 varnishd >> >> >> >> >> >>Hu? Resistent at 7,2GB and Virtual at 12,5GB. Why? >> >> >> >>Virtual memory is, uhm, virtual. There is nothing tangible about it, so >>you shouldn't really pay any attention to it. ?Just spawning a thread >>will take up a lot of virtual memory without hardly using any physical >>memory. Stig has already answered your other question, I see. >> > > Okay, I just ignore Virtual and have a look at Resistent. > If I understood Stig correct, the behaviour of varnish is expected. Using > 7,2GB RES mem, although malloc 6G was configured. > I can live with that :) > > Still it's kinda odd, that Linux starts swapping out stuff, although RAM > would have been sufficient. The machine has 8 gig ram and varnishd is > using 7,2 gig. > Linux decided to use nearly 1GB of swap. > I hope that I can stop the kernel doing this by adding vm.swappiness=10... > Let's see. If that doesn't do the trick, it seems that I have to lower the > malloc 6G. Check out cat /proc/sys/vm/min_free_kbytes; it's probably trying to maintain that level. Also, kswapd will run through malloc'd address spaces looking for inactive pages to flush to the swap partition/file. So you may have loaded a bunch of objects into the address space by pulling them through varnish, but kswapd may decide that the object storage portion of the address space is swap'able since they're not actively written to (or read from?). > Which I actually could happily do. I don't see any LRU_nuked objects. > > My vcl is configured to just cache specific urls. Every other url gets > PASSed. Come to think of it. Looks like pipe would be better here? > sub vcl_recv { > > ? ? ? ?# > ? ? ? ?# always use this backend > ? ? ? 
?set req.backend = febayk46; > > ? ? ? ?# normalize accept-encoding header > ? ?if (req.http.Accept-Encoding) { > ? ? ? ?if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") { > ? # No point in compressing these > ? ? ? ? ? ?remove req.http.Accept-Encoding; > ? ? ? ?} elsif (req.http.Accept-Encoding ~ "gzip") { > ? ? ? ? ? ?set req.http.Accept-Encoding = "gzip"; ? ? ? ?} elsif > (req.http.Accept-Encoding ~ "deflate" && req.http.user-agent !~ "MSIE") { > ? ? ? ? ? ?set req.http.Accept-Encoding = "deflate"; > ? ? ? ?} else { ? ? ? ? ? ?# unkown algorithm > ? ? ? ? ? ?remove req.http.Accept-Encoding; > ? ? ? ?} ? ?} > > ? ? ? ?if ( ? ?req.url ~ "^/anzeigen/s-beliebte-angebote.html.*$" || > ? ? ? ? ? ? ? ?req.url ~ "^/anzeigen/s-suchbegriff-empfehlungen.html.*$" > || > ? ? ? ? ? ? ? ?req.url ~ "^/anzeigen/sitemap_.*$" ) { > ? ? ? ? ? ? ? ?# always cache urls above > ? ? ? ? ? ? ? ?unset req.http.Accept-Encoding; > ? ? ? ? ? ? ? ?return(lookup); > ? ? ? ?} > > ? ? ? ?# static content caching > ? ? ? ?if ( ? ?req.url ~ "^/static/.*$" ) { > ? ? ? ? ? ? ? ?return(lookup); > ? ? ? ?} > ? ? ? ?if ( ? ?req.url ~ "^/REL-.*\.[\d]+/" ) { > ? ? ? ? ? ? ? ?return(lookup); > ? ? ? ?} > > > > ? ? ? ? ? ? ? ?# don't cache the rest > ? ? ? ? ? ? ? ?return(pass); > > > } > > sub vcl_fetch { > ? ? ? ?# backend behaves stupid and set-cookie on sitemap*.xml > ? ? ? ?# we remove this here > ? ? ? ?if ( ? ?req.url ~ "^/anzeigen/s-beliebte-angebote.html.*$" || > ? ? ? ? ? ? ? ?req.url ~ "^/anzeigen/s-suchbegriff-empfehlungen.html.*$" > || > ? ? ? ? ? ? ? ?req.url ~ "^/anzeigen/sitemap_.*$" ) { > ? ? ? ? ? ? ? ?unset beresp.http.Set-Cookie; > ? ? ? 
?} > > } > > > > Thanks for your answers, Stig and Per :) > ./Marian > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From andrea.campi at zephirworks.com Thu May 26 08:10:18 2011 From: andrea.campi at zephirworks.com (Andrea Campi) Date: Thu, 26 May 2011 10:10:18 +0200 Subject: vmod-redis Message-ID: Hi, I have release a simple module to let the VCL access a Redis server: https://github.com/zephirworks/libvmod-redis At this stage it is mostly a proof-of-concept; it has only received minimal testing and we have never used it in production. At the very minimum, it will slow down Varnish a fair amount (at least a few milliseconds per request, depending on how fast your network and your redis server are). Also, I only built it and used it on FreeBSD--on other platforms, you are on your own (pull requests welcome). If you do try running it, I would like hearing your experience. My goal was mostly to get a feeling for the current support for building vmods as a non-core developer, and provide some feedback. So I will be :) I think I saw it on the wiki somewhere, but I can't find it anymore--can you guys add it to the list of current vmods (and link to that page somewhere). Andrea -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhettwer at team.mobile.de Thu May 26 09:29:09 2011 From: mhettwer at team.mobile.de (Hettwer, Marian) Date: Thu, 26 May 2011 10:29:09 +0100 Subject: varnish 2.1.5 memory and swap In-Reply-To: Message-ID: On 25.05.11 22:04, "David Birdsong" wrote: >On Wed, May 25, 2011 at 2:05 AM, Hettwer, Marian > wrote: >>Okay, I just ignore Virtual and have a look at Resistent. >> If I understood Stig correct, the behaviour of varnish is expected. >>Using >> 7,2GB RES mem, although malloc 6G was configured. 
>> I can live with that :) >> >> Still it's kinda odd, that Linux starts swapping out stuff, although RAM >> would have been sufficient. The machine has 8 gig ram and varnishd is >> using 7,2 gig. >> Linux decided to use nearly 1GB of swap. >> I hope that I can stop the kernel doing this by adding >>vm.swappiness=10... >> Let's see. If that doesn't do the trick, it seems that I have to lower >>the >> malloc 6G. > >Check out cat /proc/sys/vm/min_free_kbytes; it's probably trying to >maintain that level. That's valuable information. Thanks! > >Also, kswapd will run through malloc'd address spaces looking for >inactive pages to flush to the swap partition/file. So you may have >loaded a bunch of objects into the address space by pulling them >through varnish, but kswapd may decide that the object storage portion >of the address space is swap'able since they're not actively written >to (or read from?). Quite possible. The question to me would be, when does kswapd decide whether some pages are swapable? After which time of inactivity with regards to reads/writes. I suspected a behaviour like that. Thanks for pointing out that kswapd would make the decision. I believe from here I can go with educating myself. (reads: "Enjoying" the fine "documentation" of Linux internals *SCNR*). Thanks again to all! :) ./Marian From howachen at gmail.com Thu May 26 14:05:44 2011 From: howachen at gmail.com (howard chen) Date: Thu, 26 May 2011 22:05:44 +0800 Subject: Restrict file cache to be cached in varnish Message-ID: Hello, Is it possible to specify that varnish only cache a file if the size is in the range A to B? From pom at dmsp.de Thu May 26 16:00:29 2011 From: pom at dmsp.de (Stefan Pommerening) Date: Thu, 26 May 2011 18:00:29 +0200 Subject: Backend connection failure Message-ID: <4DDE791D.80104@dmsp.de> Hi all, for a longer time I notice a few backend connection failures ('backend_fail' in varnishstat). 
It occurs about 1-2 times per hour (on varnish 2.1.5 serving about 600 req/sec).

I wrote a small perl script (attached) in order to hunt down this error.
My plan was to check the 'backend_fail' parameter cyclically using
'varnishstat', and to dump the shared log memory using 'varnishlog' if
the number increases.

The only interesting message (grepping for 'fail' and 'error') I got is
the following:

456 Debug c "Write error, retval = 39424, len = 140216, errno = Connection reset by peer"

Does anyone have an idea what the issue could be?
Might this be solved by changing some varnish parameters? If yes, which ones?

Any hint is very much appreciated. Thanks a lot in advance.

Stefan

-*-*-*-

Here is my short (q&d) perl script (maybe it also helps others to hunt
down other issues)...

#!/usr/bin/perl

my $last_be_fail = &Fetch_Backend_Fail();
my $curr_be_fail;
my $dumps = 10;

while (1) {
    print "\n";
    sleep 10;
    $curr_be_fail = &Fetch_Backend_Fail();
    print "cmp[$curr_be_fail][$last_be_fail] ";
    if ($curr_be_fail > $last_be_fail) {
        if ($dumps > 0) {
            &Dump_Varnishlog();
            $dumps--;
        }
        $last_be_fail = $curr_be_fail;
    }
}

sub Fetch_Backend_Fail {
    my $VARNISHSTAT = "/usr/bin/varnishstat -n port80 -1 | /usr/bin/grep backend_fail | /usr/bin/awk '{ print \$2 }'";
    my $be_fail = `$VARNISHSTAT`;
    chomp $be_fail;
    my $date = `/bin/date +"%Y-%m-%d %H:%M:%S"`;
    chomp $date;
    print "$date: backend_fail:[$be_fail] ";
    return $be_fail;
}

sub Dump_Varnishlog {
    my $date = `/bin/date +"%Y%m%d%H%M%S"`;
    chomp $date;
    my $VARNISHLOG = "/usr/bin/varnishlog -d -n port80 > /var/tmp/varnishlog.$date.log";
    print "Schreibe Logdatei...\n$VARNISHLOG\n";
    `$VARNISHLOG`;
}

1;

--
*Dipl.-Inform. Stefan Pommerening
Informatik-Büro: IT-Dienste & Projekte, Consulting & Coaching*
http://www.dmsp.de

From ghstridr at gmail.com Thu May 26 23:26:31 2011
From: ghstridr at gmail.com (Mike Gracy)
Date: Thu, 26 May 2011 16:26:31 -0700
Subject: not understanding the errors I'm getting and why.
Message-ID:

Using Varnish 2.1.
I'm rather new to varnish......

root at ip-10-170-69-235:/etc/varnish# varnishd -a localhost:9010 -a localhost:8080 -f /etc/varnish/default.vcl -d
Message from VCC-compiler:
Unused backend ffpool, defined:
(input Line 37 Pos 10)
director ffpool round-robin {
---------######--------------
Unused backend sppool, defined:
(input Line 74 Pos 10)
director sppool round-robin {
---------######--------------
Running VCC-compiler failed, exit 1
VCL compilation failed

My config:

backend ff1 {
    .host = "10.160.223.159";
    .port = "9010";
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

backend ff2 {
    .host = "10.166.234.193";
    .port = "9010";
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

director ffpool round-robin {
    { .backend = ff1; }
    { .backend = ff2; }
}

backend sp1 {
    .host = "184.72.7.220";
    .port = "8080";
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

backend sp2 {
    .host = "184.72.27.138";
    .port = "8080";
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

director sppool round-robin {
    { .backend = sp1; }
    { .backend = sp2; }
}

From simon at darkmere.gen.nz Fri May 27 01:27:53 2011
From: simon at darkmere.gen.nz (Simon Lyall)
Date: Fri, 27 May 2011 13:27:53 +1200 (NZST)
Subject: Little stats script - tophits.sh
Message-ID:

In case anyone finds this useful. It is a little script that outputs the
URLs that are doing the most hits and using the most bandwidth. It's a bit
of a hack (I see bits I could tidy just now) but works okay for me. The
main bug is that URLs with different sizes (gzipped/non-gzipped mainly)
are totalled separately.
#!/bin/bash
varnishncsa -d > /tmp/vlog
#
START1=`head -1 /tmp/vlog | cut -f4 -d" " | cut -f2 -d"[" | sed "s/\/[0-9]*\:/\//" | awk -F/ ' { print $2" "$1" "$3 } ' `
START=`date +%s --date="$START1"`
FIN1=`tail -1 /tmp/vlog | cut -f4 -d" " | cut -f2 -d"[" | sed "s/\/[0-9]*\:/\//" | awk -F/ ' { print $2" "$1" "$3 } ' `
FIN=`date +%s --date="$FIN1"`
DIFF=` echo " $FIN - $START " | bc `

echo "Data for the last $DIFF seconds "

cat /tmp/vlog | sed "s/\%5F/_/g" | sed "s/\%2E/\./g" > /tmp/tophits.tmp
echo ""
echo "Top Hits per second URLs"
echo ""
cat /tmp/tophits.tmp | awk -v interval=$DIFF ' { COUNT += 1 } END { OFMT = "%f" ; printf "Total Hits/second: %i\n" , COUNT/interval }'
echo ""
cat /tmp/tophits.tmp | awk ' { print $7 }' | sort | uniq -c | sort -rn | head -20 | awk -v interval=$DIFF ' { printf "%4.1f Hits/s %s\n" , $1/interval , $2 } '
echo ""
echo ""
echo "URLs using the most bandwidth"
echo ""
cat /tmp/tophits.tmp | awk -v interval=$DIFF ' { SUM += $10} END { OFMT = "%f" ; printf "Total Bits/second: %6.1f Kb/s \n", SUM*8/interval/1000 }'
echo ""
cat /tmp/tophits.tmp | awk ' { print $10 " " $7 }' | sort | uniq -c | awk -v interval=$DIFF ' { printf "%6.1f Kb/s %i h/min %i KB %s\n" , $1*$2/interval*8/1000,$1*60/interval,$2/1000,$3}' | sort -rn | head -20
echo ""
echo ""

--
Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/
"To stay awake all night adds a day to your life" - Stilgar | eMT.

From varnish at mm.quex.org Fri May 27 04:12:31 2011
From: varnish at mm.quex.org (Michael Alger)
Date: Fri, 27 May 2011 12:12:31 +0800
Subject: not understanding the errors I'm getting and why.
In-Reply-To:
References:
Message-ID: <20110527041231.GA16609@grum.quex.org>

On Thu, May 26, 2011 at 04:26:31PM -0700, Mike Gracy wrote:
> Using Varnish 2.1. I'm rather new to varnish......
> Message from VCC-compiler:
> Unused backend ffpool, defined:
> (input Line 37 Pos 10)
> director ffpool round-robin {
> ---------######--------------
> Unused backend sppool, defined:
> (input Line 74 Pos 10)
> director sppool round-robin {
> ---------######--------------
> Running VCC-compiler failed, exit 1
> VCL compilation failed

This message is issued whenever you have defined backends or directors which you don't actually reference in your VCL.

Somewhere in vcl_recv you'll want to use

  set req.backend = ffpool;

and

  set req.backend = sppool;

Normally you'd do this as part of a conditional construct where you check the request hostname and/or requested path and select which backend should be used to serve that content.

Have a look at this page for examples and more explanation:

  http://www.varnish-cache.org/docs/2.1/tutorial/advanced_backend_servers.html

From tfheen at varnish-software.com Fri May 27 07:19:12 2011
From: tfheen at varnish-software.com (Tollef Fog Heen)
Date: Fri, 27 May 2011 09:19:12 +0200
Subject: Restrict file cache to be cached in varnish
In-Reply-To: (howard chen's message of "Thu, 26 May 2011 22:05:44 +0800")
References:
Message-ID: <87vcwwobkf.fsf@qurzaw.varnish-software.com>

]] howard chen

| Is it possible to specify that varnish only cache a file if the size
| is in the range A to B?

Sure, you check on that in vcl_fetch and pass if it's outside the wanted range.
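[Editorial note: a minimal sketch of such a vcl_fetch check in 2.1-era VCL. This is an illustration, not from the thread; the 8-digit threshold (roughly 10 MB) and the digit-count regex trick are assumptions, used because the header value is a string here and cannot be compared as a number directly:]

```vcl
sub vcl_fetch {
    # Don't cache responses without a known length, or whose
    # Content-Length has 8 or more digits (i.e. >= 10,000,000 bytes).
    if (!beresp.http.Content-Length ||
        beresp.http.Content-Length ~ "^[0-9]{8,}$") {
        return (pass);
    }
    return (deliver);
}
```

A lower bound works the same way, e.g. also passing when Content-Length matches "^[0-9]{1,3}$" to avoid caching tiny objects.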
Regards,

--
Tollef Fog Heen
Varnish Software
t: +47 21 98 92 64

From tfheen at varnish-software.com Fri May 27 07:20:22 2011
From: tfheen at varnish-software.com (Tollef Fog Heen)
Date: Fri, 27 May 2011 09:20:22 +0200
Subject: Backend connection failure
In-Reply-To: <4DDE791D.80104@dmsp.de> (Stefan Pommerening's message of "Thu, 26 May 2011 18:00:29 +0200")
References: <4DDE791D.80104@dmsp.de>
Message-ID: <87r57kobih.fsf@qurzaw.varnish-software.com>

]] Stefan Pommerening

| The only interesting message (grepping for 'fail' and 'error') I got
| is the following:
| 456 Debug c "Write error, retval = 39424, len = 140216, errno
| = Connection reset by peer"
|
| Does anyone have an idea what the issue could be? Might this be solved
| by changing some varnish parameters? If yes, which ones?

This looks like the client closing the connection on us, which shouldn't be a connection failure.

--
Tollef Fog Heen
Varnish Software
t: +47 21 98 92 64

From thomas.woinke at gmail.com Fri May 27 07:22:34 2011
From: thomas.woinke at gmail.com (Thomas Woinke)
Date: Fri, 27 May 2011 09:22:34 +0200
Subject: not understanding the errors I'm getting and why.
In-Reply-To:
References:
Message-ID:

Hi,

On Fri, May 27, 2011 at 1:26 AM, Mike Gracy wrote:
> Using Varnish 2.1. I'm rather new to varnish......
> root at ip-10-170-69-235:/etc/varnish# varnishd -a localhost:9010 -a
> localhost:8080 -f /etc/varnish/default.vcl -d
> Message from VCC-compiler:
> Unused backend ffpool, defined:
> (input Line 37 Pos 10)
> director ffpool round-robin {
> ---------######--------------
> Unused backend sppool, defined:
> (input Line 74 Pos 10)
> director sppool round-robin {

Looks to me like you declared directors ffpool and sppool but didn't use them in your VCL. Varnish doesn't seem to like orphan backends in its VCL, so comment out the backend declarations for ffpool and sppool as long as you don't use them and you should be okay.
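[Editorial note: to make the "use them or comment them out" point concrete, a sketch of the conditional selection both replies allude to, in 2.1 VCL. The hostnames are placeholders, not taken from the original config:]

```vcl
sub vcl_recv {
    # Route by Host header; each director must be referenced
    # somewhere, or the VCC compiler rejects the VCL as unused.
    if (req.http.Host ~ "^ff\.") {
        set req.backend = ffpool;
    } else {
        set req.backend = sppool;
    }
}
```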
/thomas

From pavionove at gmail.com Fri May 27 07:33:30 2011
From: pavionove at gmail.com (Jean-Francois Laurens)
Date: Fri, 27 May 2011 09:33:30 +0200
Subject: Little stats script - tophits.sh
In-Reply-To:
References:
Message-ID: <919EE33A-04D1-4F82-AEE0-7B44EBB3FABA@gmail.com>

Thanks for sharing this !! I'll try it asap !

Jef

Jean-Francois Laurens
pavionove at gmail.com

On 27 May 2011, at 03:27, Simon Lyall wrote:

>
> In case anyone finds this useful. It is a little script that outputs the URLs that are doing the most hits and most bandwidth.
>
> It's a bit of a hack (I see bits I could tidy just now) but works okay for me. Main bug is that URLs with different sizes (gzipped/non-gzipped mainly) are totalled separately.
>
>
> #!/bin/bash
> varnishncsa -d > /tmp/vlog
> #
> START1=`head -1 /tmp/vlog | cut -f4 -d" " | cut -f2 -d"[" | sed "s/\/[0-9]*\:/\//" | awk -F/ ' { print $2" "$1" "$3 } ' `
> START=`date +%s --date="$START1"`
> FIN1=`tail -1 /tmp/vlog | cut -f4 -d" " | cut -f2 -d"[" | sed "s/\/[0-9]*\:/\//" | awk -F/ ' { print $2" "$1" "$3 } ' `
> FIN=`date +%s --date="$FIN1"`
> DIFF=` echo " $FIN - $START " | bc `
>
> echo "Data for the last $DIFF seconds "
>
> cat /tmp/vlog | sed "s/\%5F/_/g" | sed "s/\%2E/\./g" > /tmp/tophits.tmp
> echo ""
> echo "Top Hits per second URLs"
> echo ""
> cat /tmp/tophits.tmp | awk -v interval=$DIFF ' { COUNT += 1 } END { OFMT = "%f" ; printf "Total Hits/second: %i\n" , COUNT/interval }'
> echo ""
> cat /tmp/tophits.tmp | awk ' { print $7 }' | sort | uniq -c | sort -rn | head -20 | awk -v interval=$DIFF ' { printf "%4.1f Hits/s %s\n" , $1/interval , $2 } '
> echo ""
> echo ""
> echo "URLs using the most bandwidth"
> echo ""
> cat /tmp/tophits.tmp | awk -v interval=$DIFF ' { SUM += $10} END { OFMT = "%f" ; printf "Total Bits/second: %6.1f Kb/s \n", SUM*8/interval/1000 }'
> echo ""
> cat /tmp/tophits.tmp | awk ' { print $10 " " $7 }' | sort | uniq -c | awk -v interval=$DIFF ' { printf "%6.1f Kb/s %i h/min %i KB %s\n" ,
$1*$2/interval*8/1000,$1*60/interval,$2/1000,$3}' | sort -rn | head -20
> echo ""
> echo ""
>
>
> --
> Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/
> "To stay awake all night adds a day to your life" - Stilgar | eMT.
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From pom at dmsp.de Fri May 27 10:36:53 2011
From: pom at dmsp.de (Stefan Pommerening)
Date: Fri, 27 May 2011 12:36:53 +0200
Subject: Backend connection failure
In-Reply-To: <87r57kobih.fsf@qurzaw.varnish-software.com>
References: <4DDE791D.80104@dmsp.de> <87r57kobih.fsf@qurzaw.varnish-software.com>
Message-ID: <4DDF7EC5.5060308@dmsp.de>

On 27.05.2011 09:20, Tollef Fog Heen wrote:
> ]] Stefan Pommerening
>
> | The only interesting message (grepping for 'fail' and 'error') I got
> | is the following:
> | 456 Debug c "Write error, retval = 39424, len = 140216, errno
> | = Connection reset by peer"
> |
> | Does anyone have an idea what the issue could be? Might this be solved
> | by changing some varnish parameters? If yes, which ones?
>
> This looks like the client closing the connection on us which shouldn't
> be a connection failure.

Hi Tollef,

my main problem is that I don't know the reason for the 'backend_fail' counter being increased every now and then. Therefore I dumped the varnishlog -d (meanwhile also varnishncsa -d) directly after the counter has been increased by the varnish server.

Any idea in which direction I should continue searching? How to analyze the issue or what to search for?

Stefan

( Updated version of my varnishhunt.pl script: http://bit.ly/iTqfdC )

--
*Dipl.-Inform.
Stefan Pommerening
Informatik-Büro: IT-Dienste & Projekte, Consulting & Coaching*
http://www.dmsp.de

From ghstridr at gmail.com Fri May 27 20:36:23 2011
From: ghstridr at gmail.com (Mike Gracy)
Date: Fri, 27 May 2011 13:36:23 -0700
Subject: not understanding the errors I'm getting and why.
In-Reply-To: <20110527041231.GA16609@grum.quex.org>
References: <20110527041231.GA16609@grum.quex.org>
Message-ID:

Thanks!

On Thu, May 26, 2011 at 9:12 PM, Michael Alger wrote:
> On Thu, May 26, 2011 at 04:26:31PM -0700, Mike Gracy wrote:
>> Using Varnish 2.1. I'm rather new to varnish......
>> Message from VCC-compiler:
>> Unused backend ffpool, defined:
>> (input Line 37 Pos 10)
>> director ffpool round-robin {
>> ---------######--------------
>> Unused backend sppool, defined:
>> (input Line 74 Pos 10)
>> director sppool round-robin {
>> ---------######--------------
>> Running VCC-compiler failed, exit 1
>> VCL compilation failed
>
> This message is issued whenever you have defined backends or directors
> which you don't actually reference in your VCL.
>
> Somewhere in vcl_recv you'll want to use
>
>   set req.backend = ffpool;
>
> and
>
>   set req.backend = sppool;
>
> Normally you'd do this as part of a conditional construct where you
> check the request hostname and/or requested path and select which
> backend should be used to serve that content.
>
> Have a look at this page for examples and more explanation:
>
>   http://www.varnish-cache.org/docs/2.1/tutorial/advanced_backend_servers.html
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>

From jonathan.hursey at adrevolution.com Fri May 27 23:26:49 2011
From: jonathan.hursey at adrevolution.com (Jonathan Hursey)
Date: Fri, 27 May 2011 18:26:49 -0500
Subject: IRC
Message-ID:

is there an active IRC channel for varnish?

--
*Jonathan M.
Hursey*
*Linux Systems Administrator*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From roberto.fernandezcrisial at gmail.com Sat May 28 00:00:24 2011
From: roberto.fernandezcrisial at gmail.com (Roberto O. Fernández Crisial)
Date: Sat, 28 May 2011 00:00:24 +0000
Subject: IRC
In-Reply-To:
References:
Message-ID: <1556131980-1306540826-cardhu_decombobulator_blackberry.rim.net-1070104210-@b5.c27.bise6.blackberry>

irc.linpro.no #varnish ;)

@rofc

-----Original Message-----
From: Jonathan Hursey
Sender: varnish-misc-bounces at varnish-cache.org
Date: Fri, 27 May 2011 18:26:49
To:
Subject: IRC

_______________________________________________
varnish-misc mailing list
varnish-misc at varnish-cache.org
http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From jonathan.hursey at adrevolution.com Sat May 28 00:02:07 2011
From: jonathan.hursey at adrevolution.com (Jonathan Hursey)
Date: Fri, 27 May 2011 19:02:07 -0500
Subject: IRC
In-Reply-To: <1556131980-1306540826-cardhu_decombobulator_blackberry.rim.net-1070104210-@b5.c27.bise6.blackberry>
References: <1556131980-1306540826-cardhu_decombobulator_blackberry.rim.net-1070104210-@b5.c27.bise6.blackberry>
Message-ID:

thanks!

2011/5/27 Roberto O. Fernández Crisial
> irc.linpro.no #varnish ;)
>
> @rofc
> -----Original Message-----
> From: Jonathan Hursey
> Sender: varnish-misc-bounces at varnish-cache.org
> Date: Fri, 27 May 2011 18:26:49
> To:
> Subject: IRC
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>

--
*Jonathan M. Hursey*
*Linux Systems Administrator*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From phk at phk.freebsd.dk Sat May 28 07:09:15 2011
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Sat, 28 May 2011 07:09:15 +0000
Subject: IRC
In-Reply-To: Your message of "Fri, 27 May 2011 18:26:49 EST."
Message-ID: <29971.1306566555@critter.freebsd.dk>

In message , Jonathan Hursey writes:

>is there an active IRC channel for varnish?

Yes:

http://www.varnish-cache.org/docs/2.1/installation/help.html

--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From contact at jpluscplusm.com Sat May 28 18:54:07 2011
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Sat, 28 May 2011 19:54:07 +0100
Subject: VCL documentation doesn't seem to mention vcl_timeout
Message-ID:

http://www.varnish-cache.org/docs/2.1/reference/vcl.html doesn't detail vcl_timeout, or indeed mention it at all. Nothing useful comes up via http://www.varnish-cache.org/docs/2.1/search.html?q=vcl_timeout&check_keywords=yes&area=default either.

Could this be fixed (or have I missed something)?

TIA, Jonathan

--
Jonathan Matthews
London, UK
http://www.jpluscplusm.com/contact.html

From perbu at varnish-software.com Sat May 28 19:07:42 2011
From: perbu at varnish-software.com (Per Buer)
Date: Sat, 28 May 2011 21:07:42 +0200
Subject: VCL documentation doesn't seem to mention vcl_timeout
In-Reply-To:
References:
Message-ID:

On Sat, May 28, 2011 at 8:54 PM, Jonathan Matthews wrote:
>
> http://www.varnish-cache.org/docs/2.1/reference/vcl.html doesn't
> detail vcl_timeout, or indeed mention it at all.

vcl_timeout was removed in Varnish 2.1.0. It never worked particularly well.

--
Per Buer, CEO
Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer
*Varnish makes websites fly!*
Whitepapers | Video | Twitter

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From contact at jpluscplusm.com Sat May 28 20:33:05 2011
From: contact at jpluscplusm.com (Jonathan Matthews)
Date: Sat, 28 May 2011 21:33:05 +0100
Subject: VCL documentation doesn't seem to mention vcl_timeout
In-Reply-To:
References:
Message-ID:

On 28 May 2011 20:07, Per Buer wrote:
>
> On Sat, May 28, 2011 at 8:54 PM, Jonathan Matthews wrote:
>>
>> http://www.varnish-cache.org/docs/2.1/reference/vcl.html doesn't
>> detail vcl_timeout, or indeed mention it at all.
>
> vcl_timeout was removed in Varnish 2.1.0. It never worked particularly well.

Thanks for the clarification. So what happens when any of the backend's timeout values are exceeded? Do we get any control over the resulting behaviour programmatically, via command line settings, or ... ?

Jonathan

--
Jonathan Matthews
London, UK
http://www.jpluscplusm.com/contact.html

From ruben at varnish-software.com Sun May 29 02:23:51 2011
From: ruben at varnish-software.com (Rubén Romero)
Date: Sun, 29 May 2011 04:23:51 +0200
Subject: Your own local Varnish Cache 3.0 release party?
In-Reply-To:
References:
Message-ID:

Hello,

2011/5/25 Rubén Romero:
> Hello again,
>
> 2011/5/20 Rubén Romero:
>> Hi everyone,
>>
>> As planning of the launch of Varnish 3.0 is on, I thought I would ask
>> you all in the list, and at the same time sneak the thought into your mind,
>> whether it is an idea to hold your own <3 (Varnish 3 heart ;) release
>> party/gathering in the middle of June.
>
> It seems that there is a lot of interest in running these parties :) I
> have added a couple of pages in the wiki:
> * Current Party Overview:
> http://www.varnish-cache.org/trac/wiki/Varnish3_Release_Parties
> * Organizing a release party guide:
> http://www.varnish-cache.org/trac/wiki/Running_Release_Party
>
> Feel free to add suggestions or your own party to these pages. You can
> also post them to this thread and I will keep on adding them to
> the overview/guide as they tick in!
:)
>

A quick update:
* Updated party overview (7): http://www.varnish-cache.org/trac/wiki/Varnish3_Release_Parties
* Map with all <3 parties: http://maps.google.com/maps/ms?ie=UTF8&msa=0&msid=208289899159557573833.0004a45e28026a1198533&ll=22.268764,4.921875&spn=132.376875,304.101563&z=2

Want to see your own city in the map? It is not too late! To make your own party, just do it :-) It is not more difficult than meeting in a bar/coffee shop/company/home and cheering for the release with other Varnish Cache users. Just drop me a line and we'll make it happen :-)

> What do you think about using a GoogleDocs form and document for
> coordination and registration parallel to the wiki pages?

In order to have an idea of who is coming where, I got the suggestion of making a webform that you can use to tell others about your attendance at the scheduled parties. What do you think? I can easily do it if you agree (and will use it). Please let me know.

Also, I have been asked to share the files so you can make your own <3 t-shirts. We will be uploading these to the varnish-cache.org website in the coming week.

Have a nice weekend everyone!

Cheers,

--
Rubén Romero
Varnish Software

From perbu at varnish-software.com Sun May 29 17:29:25 2011
From: perbu at varnish-software.com (Per Buer)
Date: Sun, 29 May 2011 19:29:25 +0200
Subject: VCL documentation doesn't seem to mention vcl_timeout
In-Reply-To:
References:
Message-ID:

On Sat, May 28, 2011 at 10:33 PM, Jonathan Matthews wrote:
>
> Thanks for the clarification. So what happens when any of the
> backend's timeout values are exceeded? Do we get any control over the
> resulting behaviour programmatically, via command line settings, or ...?

I'm pretty sure you'll end up in vcl_error.

--
Per Buer, CEO
Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer
*Varnish makes websites fly!*
Whitepapers | Video | Twitter

-------------- next part --------------
An HTML attachment was scrubbed...
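[Editorial note: a sketch of what "ending up in vcl_error" lets you do in 2.1 VCL. The 503 check and the retry limit are illustrative assumptions, not from the thread:]

```vcl
sub vcl_error {
    # A failed or timed-out backend fetch arrives here as a
    # synthetic 503; retry a couple of times before giving up.
    if (obj.status == 503 && req.restarts < 2) {
        return (restart);
    }
}
```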
URL:

From Audun.Ytterdal at vg.no Mon May 30 11:03:12 2011
From: Audun.Ytterdal at vg.no (Audun Ytterdal)
Date: Mon, 30 May 2011 13:03:12 +0200
Subject: Varnish 3.0: Please help test.
In-Reply-To: <6054.1305749948@critter.freebsd.dk>
References: <6054.1305749948@critter.freebsd.dk>
Message-ID: <4DE37970.6040808@vg.no>

Ok. I've been running 3.0-beta1 over the weekend on one of our 4 main varnishes.

The following (ugly and very long) url compares a 2.1 against the new 3.0; they seem to be behaving almost identically on moderate traffic:

http://munin.vgnett.no/naveed/#::int.vgnett.no::batista.int.vgnett.no:quinn.int.vgnett.no::acpi:bonding_err_bond0:cpu:df:df_inode:diskstats_iops:diskstats_latency:diskstats_throughput:diskstats_utilization:entropy:forks:fw_packets:http_loadtime:if_bond0:if_err_bond0:if_err_eth0:if_err_eth1:if_eth0:if_eth1:interrupts:iostat:iostat_ios:irqstats:load:memory:netstat:ntp_kernel_err:ntp_kernel_pll_freq:ntp_kernel_pll_off:ntp_offset:open_files:open_inodes:postfix_mailqueue:postfix_mailvolume:proc_pri:processes:sendmail_mailqueue:sendmail_mailstats:sendmail_mailtraffic:swap:threads:uptime:users:varnish_backend_traffic:varnish_expunge:varnish_hit_rate:varnish_memory_usage:varnish_objects:varnish_request_rate:varnish_threads:varnish_transfer_rates:varnish_uptime:vmstat:yum

Quinn has 3.0, batista has 2.1.5.

Did the following changes to the vcl:

in vcl_fetch:

esi; -> set beresp.do_esi = true;
return(pass); -> return(hit_for_pass);

in vcl_hash:

set req.hash += req.http.hash-input; -> hash_data(req.http.hash-input);

in vcl_error:

explicitly add + to concatenate strings

I find it a bit confusing that esi goes from a functional way of calling it to a "setting a variable/parameter" way, while req.hash goes in the opposite direction, from variable to a functional way of calling it. Any clarifying thoughts about that?

On 2011-05-18 22:19, Poul-Henning Kamp wrote:
>
> Hi,
>
> It's me, your Varnish software developer, got a minute ?
> > Cool, I'll make it really brief:
> >
> > As you may, or may not, have noticed, we have pushed out a Varnish
> > 3.0 Beta1 release:
> >
> > http://www.varnish-cache.org/releases/varnish-cache-3.0-beta1
> >
> > The major news are two features:
> >
> > GZIP/GUNZIP support, with or without ESI.
> >
> > Streaming PASS and FETCH support.
> >
> > I have also added, undoubtedly, some bugs, and this is where you
> > come into the picture:
> >
> > My website gets 75 hits an hour, but I am pretty sure you have a
> > website that takes more traffic than that, why else would you be
> > on the Varnish announce mailing list ?
> >
> > So if you could find a couple of hours to test out Varnish 3.0 and
> > report back to me how it goes, I would really appreciate it.
> >
> > Thanks in advance,
> >
> > Poul-Henning
> >
> > --
> > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
> > phk at FreeBSD.ORG | TCP/IP since RFC 956
> > FreeBSD committer | BSD since 4.3-tahoe
> > Never attribute to malice what can adequately be explained by
> > incompetence.
> >
> > _______________________________________________
> > varnish-announce mailing list
> > varnish-announce at varnish-cache.org
> > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-announce
> >

--
Audun Ytterdal
Driftsjef
VG Multimedia
tlf 92402277

From audun at ytterdal.net Mon May 30 14:07:40 2011
From: audun at ytterdal.net (Audun Ytterdal)
Date: Mon, 30 May 2011 16:07:40 +0200
Subject: Varnish 3.0: Please help test.
In-Reply-To: <4DE37970.6040808@vg.no>
References: <6054.1305749948@critter.freebsd.dk> <4DE37970.6040808@vg.no>
Message-ID: <1D6D083A-899A-468F-A10A-D99BCC33CE4E@ytterdal.net>

And a screenshot of varnishstat and varnishhist:

http://dl.dropbox.com/u/866639/varnish-3.0-comp.png
http://dl.dropbox.com/u/866639/varnishhist-3.0-comp.png

Varnishhist seems to indicate that it's a bit slower (the violet terminal is the 3.0-beta1).

It also seems like it does not write its fqdn in the top right corner anymore ;-)

On 30 May 2011, at 13:03, Audun Ytterdal wrote:

> Ok.
I've been running 3.0-beta1 over the weekend on one of our 4 main > varnishes. > > The following (ugly and very long) url compares a 2.1 against the new > 3.0 , seems to be behaving almost identical on moderate traffic > > http://munin.vgnett.no/naveed/#::int.vgnett.no::batista.int.vgnett.no:quinn.int.vgnett.no::acpi:bonding_err_bond0:cpu:df:df_inode:diskstats_iops:diskstats_latency:diskstats_throughput:diskstats_utilization:entropy:forks:fw_packets:http_loadtime:if_bond0:if_err_bond0:if_err_eth0:if_err_eth1:if_eth0:if_eth1:interrupts:iostat:iostat_ios:irqstats:load:memory:netstat:ntp_kernel_err:ntp_kernel_pll_freq:ntp_kernel_pll_off:ntp_offset:open_files:open_inodes:postfix_mailqueue:postfix_mailvolume:proc_pri:processes:sendmail_mailqueue:sendmail_mailstats:sendmail_mailtraffic:swap:threads:uptime:users:varnish_backend_traffic:varnish_expunge:varnish_hit_rate:varnish_memory_usage:varnish_objects:varnish_request_rate:varnish_threads:varnish_transfer_rates:varnish_uptime:vmstat:yum > > > Quinn has 3.0 batista has 2.1.5 > > Did the following changes to the vcl: > > in vcl_fetch: > > esi; -> set beresp.do_esi = true; > > return(pass); -> return(hit_for_pass); > > in vcl_hash > > set req.hash += req.http.hash-input; -> hash_data(req.http.hash-input); > > in vcl_error > > explicit add + to concatinate strings > > I find it a bit confusing that esi goes from a functional way of calling > it to a "setting variable/paramter"-way while req.hash goes in the > opposite direction from variable to functional way of calling it. Any > clearifying thoughts about that? > > On 2011-05-18 22:19, Poul-Henning Kamp wrote: >> >> Hi, >> >> It's me, your Varnish software developer, got a minute ? >> >> Cool, I'll make it really brief: >> >> As you may, or may not, have noticed, we have pushed out a Varnish >> 3.0 Beta1 release: >> >> http://www.varnish-cache.org/releases/varnish-cache-3.0-beta1 >> >> The major news are two features: >> >> GZIP/GUNZIP support, with or without ESI. 
>>
>> Streaming PASS and FETCH support.
>>
>> I have also added, undoubtedly, some bugs, and this is where you
>> come into the picture:
>>
>> My website gets 75 hits an hour, but I am pretty sure you have a
>> website that takes more traffic than that, why else would you be
>> on the Varnish announce mailing list ?
>>
>> So if you could find a couple of hours to test out Varnish 3.0 and
>> report back to me how it goes, I would really appreciate it.
>>
>> Thanks in advance,
>>
>> Poul-Henning
>>
>> --
>> Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
>> phk at FreeBSD.ORG | TCP/IP since RFC 956
>> FreeBSD committer | BSD since 4.3-tahoe
>> Never attribute to malice what can adequately be explained by
>> incompetence.
>>
>> _______________________________________________
>> varnish-announce mailing list
>> varnish-announce at varnish-cache.org
>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-announce
>>
>
>
> --
> Audun Ytterdal
> Driftsjef
> VG Multimedia
> tlf 92402277
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From patrick.cao_huu_thien at upmc.fr Mon May 30 15:39:17 2011
From: patrick.cao_huu_thien at upmc.fr (Patrick CAO HUU THIEN)
Date: Mon, 30 May 2011 17:39:17 +0200
Subject: time out on big object ??
Message-ID:

hello.

I made a basic "out of the box" varnish server to access a "personal" web server ... in fact, just define a backend. But I have a problem when I test the download of a big object (iso with ~630Mb), with this message:

wget http://xxxxx/iso/ubuntu-zz-10.04.2-desktop-i386.iso
--2011-05-30 09:28:19-- http://xxxx/iso/ubuntu-zz-10.04.2-desktop-i386.iso
Resolving xxxxx... yy.yy.yy.yy
Connecting to xxxxx|yy.yy.yy.yy|:80... connected.
HTTP request sent, awaiting response...
200 OK
Length: 854108160 (815M) [application/x-iso9660-image]
Saving to: `ubuntu-zz-10.04.2-desktop-i386.iso'

53% [===================> ] 458,763,695 453K/s in 10m 0s

2011-05-30 09:38:26 (746 KB/s) - Connection closed at byte 458763695. Retrying

The time-out always appears after 10 mins.

Can I have some advice to help me resolve this .... like don't cache big objects or specific URLs?

--
Patrick

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From geoff at uplex.de Mon May 30 16:13:57 2011
From: geoff at uplex.de (Geoff Simmons)
Date: Mon, 30 May 2011 18:13:57 +0200
Subject: time out on big object ??
In-Reply-To:
References:
Message-ID: <4DE3C245.9030406@uplex.de>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 05/30/11 05:39 PM, Patrick CAO HUU THIEN wrote:
> hello.
>
> I made a basic "out of the box" varnish server to access a "personal" web
> server ... in fact, just define a backend.
> But I have a problem when I test the download of a big object (iso with
> ~630Mb) with this message:
>
> wget http://xxxxx/iso/ubuntu-zz-10.04.2-desktop-i386.iso
> --2011-05-30 09:28:19-- http://xxxx/iso/ubuntu-zz-10.04.2-desktop-i386.iso
> Resolving xxxxx... yy.yy.yy.yy
> Connecting to xxxxx|yy.yy.yy.yy|:80... connected.
> HTTP request sent, awaiting response... 200 OK
> Length: 854108160 (815M) [application/x-iso9660-image]
> Saving to: `ubuntu-zz-10.04.2-desktop-i386.iso'
>
> 53% [===================> ] 458,763,695 453K/s in 10m 0s
>
> 2011-05-30 09:38:26 (746 KB/s) - Connection closed at byte 458763695. Retrying
>
> The time-out always appears after 10 mins.
>
> Can I have some advice to help me resolve this .... like don't cache big
> objects or specific URLs?

The default value of the send_timeout parameter is 600 seconds:

"Send timeout for client connections. If no data has been sent to the client in this many seconds, the session is closed. See setsockopt(2) under SO_SNDTIMEO for more information."
Probably what's happening is that Varnish needs more than 10 minutes to read the 630 MB monster from your backend, during which the client connection sits idle, and the timeout elapses. You could set a higher timeout with -p send_timeout=, but you're probably much better off having VCL return pass on that particular URL. Unless you really need Varnish to take up 630 MB of space just to cache your Ubuntu ISO. Best, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Schwanenwik 24 22087 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (SunOS) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQIcBAEBCAAGBQJN48JFAAoJEOUwvh9pJNURvJ4P/Rgtas2aTzKrqWsUkJPhAgbe WSr3HCouWukARJRzHL6Hz5HB5XME2Di0Qj7QmplEUF35ftQLP0pE/iu3EPvXJ2R9 b2LmXrpK7R1GVEIMo5sBE43hJsftl1B5GWTZKl6w+z5ESSPgltYQrBax888yZK4d tIphnGlu/T63db8IGvrFQ212QrNtv+tt1z8QEfuhJLUkVCFa2ajDpWJV34rYTOaC IQd6qepDmMKipEApJdL8mZY6c7Y7wUlmXTnrXLBEQiaIFTONyCv4a72PoLyYa3Xl fA9cRQ11V059Ws2T7S1/ZN0CEY6T2ms1dHKLTh8B68pGgLGCMufro6KkTMvBXhMZ uLHMEmOx9ejRL9w6iOhxtYnG1PlNXOpF/BGj2q+8kGQthptgeiBTBnWMfW1jI7y1 mtl9LeQ+5NhVmgpIOgj+j4xKzMHED2f0DjvtN+VmJAZpr4hwY5pxadB+bFlKGq5B v7bOWQm5imQ91CM7E0PoYa85nR+5Z0JyZMJNRq7FwBhNrsFRFTz8z46t2Ezd/pAe 5H3BT+EDkHhsejMOE3cNZoe0k9TYX+nEtymJUJg6tCGrLynvux3m3XA7mU29prfm q3Xl+Vkbv9c9XxGJSlAqWF09hSJP8shtpcQ0qGa/eFT6qwNI419NkypVSnD1Yl6y Yp1+SxymNgnju54t4Frb =9vzr -----END PGP SIGNATURE----- From contact at jpluscplusm.com Mon May 30 22:10:01 2011 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 30 May 2011 23:10:01 +0100 Subject: VCL documentation doesn't seem to mention vcl_timeout In-Reply-To: References: Message-ID: On 29 May 2011 18:29, Per Buer wrote: > > On Sat, May 28, 2011 at 10:33 PM, Jonathan Matthews wrote: >> >> Thanks for the clarification. So what happens when any of the >> backend's timeout values are exceeded? 
Do we get any control over the
>> resulting behaviour programmatically, via command line settings, or ...?
>
> I'm pretty sure you'll end up in vcl_error.

Brill, thank you. Appreciated.

Jonathan

--
Jonathan Matthews
London, UK
http://www.jpluscplusm.com/contact.html

From mls at pooteeweet.org Mon May 30 22:23:33 2011
From: mls at pooteeweet.org (Lukas Kahwe Smith)
Date: Tue, 31 May 2011 00:23:33 +0200
Subject: Is LCI on the radar?
Message-ID: <4E0D7A8A-2219-4B7E-BBD5-5BF5DBC54047@pooteeweet.org>

Hi,

I assume some of you have stumbled across LCI by now:
http://www.ietf.org/id/draft-nottingham-linked-cache-inv-00.txt

This is actually quite interesting.
For an application we are building we
> are looking to create an invalidation service to which the various
> independent frontend server applications can register and which gets
> notified by the backend. Of course the frontends then have to figure out
> which pages all need to be invalidated. The original article will be easy.
> Some of the category overviews will also be easy to delete. What will
> already get harder is invalidating all articles that reference the given
> article and worse yet would be if we start caching search results.
>
> So I am wondering if you guys are looking at LCI for a future varnish
> improvement and if someone has built something like this on top of varnish
> today already that could maybe help us here.
>

I'm pretty sure this can be implemented in VCL. No need to place it on the radar. I have an upcoming blog-post describing something similar. It might get a bit hairy with all the regular expressions, so it might be cleaner in a module.

--
Per Buer, CEO
Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer
*Varnish makes websites fly!*
Whitepapers | Video | Twitter

From audun at ytterdal.net Tue May 31 09:40:51 2011
From: audun at ytterdal.net (Audun Ytterdal)
Date: Tue, 31 May 2011 11:40:51 +0200
Subject: Varnish 3.0: Please help test.
In-Reply-To: <1D6D083A-899A-468F-A10A-D99BCC33CE4E@ytterdal.net>
References: <6054.1305749948@critter.freebsd.dk> <4DE37970.6040808@vg.no> <1D6D083A-899A-468F-A10A-D99BCC33CE4E@ytterdal.net>
Message-ID:

Ok, I think we have some sort of bug here. On the day I deployed varnish 3.0, our BIG-IP started to complain about:

http_process_state_prepend - Invalid action EV_INGRESS_DATA during ST_HTTP_PREPEND_HEADERS (Server side: vip=vg.no_m323 profile=http pool=varnish_m323

all the time.

May 31 09:53:53 local/tmm1 info tmm1[32346]: 011f0007:6: Resuming log processing at this invocation; held 23810 messages.
https://support.f5.com/kb/en-us/solutions/public/5000/900/sol5922.html (login required)

This page explains that this error occurs when the backend does not answer in an RFC 2616-compliant way.

On Mon, May 30, 2011 at 4:07 PM, Audun Ytterdal wrote:
> And a screenshot of varnishstat and varnishhist
>
> http://dl.dropbox.com/u/866639/varnish-3.0-comp.png
> http://dl.dropbox.com/u/866639/varnishhist-3.0-comp.png
>
> Varnishhist seems to indicate that it's a bit slower (The violet
> terminal is the 3.0-beta1)
>
> It also seems like it does not write its fqdn in the top right corner
> anymore ;-)
>
> On 30 May 2011 at 13:03, Audun Ytterdal wrote:
>
> Ok. I've been running 3.0-beta1 over the weekend on one of our 4 main
> varnishes.
>
> The following (ugly and very long) url compares a 2.1 against the new
> 3.0; it seems to be behaving almost identically on moderate traffic
>
> http://munin.vgnett.no/naveed/#::int.vgnett.no::batista.int.vgnett.no:quinn.int.vgnett.no::acpi:bonding_err_bond0:cpu:df:df_inode:diskstats_iops:diskstats_latency:diskstats_throughput:diskstats_utilization:entropy:forks:fw_packets:http_loadtime:if_bond0:if_err_bond0:if_err_eth0:if_err_eth1:if_eth0:if_eth1:interrupts:iostat:iostat_ios:irqstats:load:memory:netstat:ntp_kernel_err:ntp_kernel_pll_freq:ntp_kernel_pll_off:ntp_offset:open_files:open_inodes:postfix_mailqueue:postfix_mailvolume:proc_pri:processes:sendmail_mailqueue:sendmail_mailstats:sendmail_mailtraffic:swap:threads:uptime:users:varnish_backend_traffic:varnish_expunge:varnish_hit_rate:varnish_memory_usage:varnish_objects:varnish_request_rate:varnish_threads:varnish_transfer_rates:varnish_uptime:vmstat:yum
>
> Quinn has 3.0, batista has 2.1.5
>
> Did the following changes to the vcl:
>
> in vcl_fetch:
>
> esi; -> set beresp.do_esi = true;
>
> return(pass); -> return(hit_for_pass);
>
> in vcl_hash
>
> set req.hash += req.http.hash-input; -> hash_data(req.http.hash-input);
>
> in vcl_error
>
> explicitly add + to concatenate strings
>
> I find it a
bit confusing that esi goes from a functional way of calling
> it to a "setting variable/parameter" way while req.hash goes in the
> opposite direction from variable to functional way of calling it. Any
> clarifying thoughts about that?
>
> On 2011-05-18 22:19, Poul-Henning Kamp wrote:
> > Hi,
> > It's me, your Varnish software developer, got a minute?
> > Cool, I'll make it really brief:
> > As you may, or may not, have noticed, we have pushed out a Varnish
> > 3.0 Beta1 release:
> >        http://www.varnish-cache.org/releases/varnish-cache-3.0-beta1
> > The major news are two features:
> >        GZIP/GUNZIP support, with or without ESI.
> >        Streaming PASS and FETCH support.
> > I have also added, undoubtedly, some bugs, and this is where you
> > come into the picture:
> > My website gets 75 hits an hour, but I am pretty sure you have a
> > website that takes more traffic than that, why else would you be
> > on the Varnish announce mailing list?
> > So if you could find a couple of hours to test out Varnish 3.0 and
> > report back to me how it goes, I would really appreciate it.
> > Thanks in advance,
> > Poul-Henning
> > --
> > Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> > phk at FreeBSD.ORG        | TCP/IP since RFC 956
> > FreeBSD committer       | BSD since 4.3-tahoe
> > Never attribute to malice what can adequately be explained by
> > incompetence.
> > _______________________________________________
> > varnish-announce mailing list
> > varnish-announce at varnish-cache.org
> > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-announce
>
> --
> Audun Ytterdal
> Operations Manager (Driftsjef)
> VG Multimedia
> tlf 92402277
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From andrea.campi at zephirworks.com Tue May 31 10:22:28 2011
From: andrea.campi at zephirworks.com (Andrea Campi)
Date: Tue, 31 May 2011 12:22:28 +0200
Subject: Varnish <3 party in Milano
Message-ID:

Hey guys, the details on the party have been finalized, the free T-shirts are being printed, the beer has been ordered :)

All that is missing is for you to register:
http://varnish-release-party-milano.eventbrite.com/

We have 3 talks scheduled so far; if you want to do a talk on Varnish, just drop me an email.

Andrea

From l at lrowe.co.uk Tue May 31 10:58:33 2011
From: l at lrowe.co.uk (Laurence Rowe)
Date: Tue, 31 May 2011 11:58:33 +0100
Subject: Is LCI on the radar?
In-Reply-To:
References: <4E0D7A8A-2219-4B7E-BBD5-5BF5DBC54047@pooteeweet.org>
Message-ID:

On 31 May 2011 10:34, Per Buer wrote:
> Hi
>
> On Tue, May 31, 2011 at 12:23 AM, Lukas Kahwe Smith wrote:
>
>> Hi,
>>
>> I assume some of you have stumbled over LCI by now:
>> http://www.ietf.org/id/draft-nottingham-linked-cache-inv-00.txt
>>
>> This is actually quite interesting. For an application we are building we
>> are looking to create an invalidation service to which the various
>> independent frontend server applications can register and which gets
>> notified by the backend. Of course the frontends then have to figure out
>> which pages all need to be invalidated. The original article will be easy.
>> Some of the category overviews will also be easy to delete.
>> What will already get harder is invalidating all articles that reference
>> the given article and worse yet would be if we start caching search
>> results.
>>
>> So I am wondering if you guys are looking at LCI for a future varnish
>> improvement and if someone has built something like this on top of varnish
>> today already that could maybe help us here.
>
> I'm pretty sure this can be implemented in VCL. No need to place it on the
> radar. I have an upcoming blog-post describing something similar. It might
> get a bit hairy with all the regular expressions so it might be cleaner in a
> module.

I experimented with something that sounds similar. Each page set a header recording the content item ids that were used in rendering the page. They could then be purged with a regex including any dependent's id.

http://dev.plone.org/collective/browser/experimental.depends/trunk/varnish.vcl

It works when you update or delete a content item, but it can't help the case where you add a new content item and want it to appear in a listing.

Laurence

From phk at phk.freebsd.dk Tue May 31 13:17:04 2011
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Tue, 31 May 2011 13:17:04 +0000
Subject: Is LCI on the radar?
In-Reply-To: Your message of "Tue, 31 May 2011 00:23:33 +0200." <4E0D7A8A-2219-4B7E-BBD5-5BF5DBC54047@pooteeweet.org>
Message-ID: <36899.1306847824@critter.freebsd.dk>

In message <4E0D7A8A-2219-4B7E-BBD5-5BF5DBC54047 at pooteeweet.org>, Lukas Kahwe Smith writes:

>Hi,
>
>I assume some of you have stumbled over LCI by now:
>http://www.ietf.org/id/draft-nottingham-linked-cache-inv-00.txt
>
>This is actually quite interesting.

And unfortunately very very very troublesome. I'm still composing my reply to it.
--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From cd at sentia.nl Tue May 31 17:27:22 2011
From: cd at sentia.nl (Camiel Dobbelaar)
Date: Tue, 31 May 2011 19:27:22 +0200
Subject: percent sign in vcl strings
Message-ID: <4DE524FA.5070100@sentia.nl>

It turns out to be pretty painful to try to match a URL that looks like this:

http://hostname/%2fmedia%2f76525%2fcoverthumbnail.jpg

I spent quite some time in pcre manpages before stumbling on this thread:
http://comments.gmane.org/gmane.comp.web.varnish.misc/3604

The answer there is to use "%25" to match a literal "%", which means the regex ends up looking like this:

req.url ~ "^/%252fmedia%252f"

What about allowing "%%" for a literal "%" like printf() ?

--
Cam
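[A minimal VCL sketch of the workaround described above, assuming the behaviour reported in the linked thread: the VCL compiler interprets "%xx" sequences in string constants as escapes, so each literal "%" must be written as "%25" in the VCL source. The pass action is only an illustrative placeholder, not part of the original message.]

```vcl
sub vcl_recv {
    # Client requests: /%2fmedia%2f76525%2fcoverthumbnail.jpg
    # Each "%25" below compiles to a literal "%", so at runtime the
    # pattern matched against req.url is "^/%2fmedia%2f".
    if (req.url ~ "^/%252fmedia%252f") {
        # Placeholder action for illustration; substitute whatever
        # handling these URLs actually need.
        return (pass);
    }
}
```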