From rtshilston at gmail.com Sun Jan 2 18:16:46 2011 From: rtshilston at gmail.com (Robert Shilston) Date: Sun, 2 Jan 2011 18:16:46 +0000 Subject: varnishreplay and varnishlog Message-ID: <0DA235A5-CE0D-47FA-8C19-76E027F4C478@gmail.com> I recall a question recently about the use of varnishreplay. I've got a bug in a web application, and I know the varnish transaction ID for an offending request. It would be handy to use varnishreplay to send this to varnish and through to the backend for testing. However, I don't appear to be able to extract a portion from the varnishlog file into another varnishlog file. This is the sort of command I'd like to run: varnishlog -r /var/log/varnish/varnish.log -c -o TxHeader 1234567890 -w /tmp/1234567890.varnishlog then varnishreplay -a localhost:80 -r /tmp/1234567890.varnishlog I admit that I'm not running the latest code (I'm using ), but I've reviewed the changes in trac, and don't see anything similar to the above. Has anyone got a cunning suggestion as to how I can achieve this? Thanks, and best wishes for 2011 Rob From rtshilston at gmail.com Sun Jan 2 18:26:09 2011 From: rtshilston at gmail.com (Robert Shilston) Date: Sun, 2 Jan 2011 18:26:09 +0000 Subject: varnishreplay and varnishlog In-Reply-To: <0DA235A5-CE0D-47FA-8C19-76E027F4C478@gmail.com> References: <0DA235A5-CE0D-47FA-8C19-76E027F4C478@gmail.com> Message-ID: <5F4ABD91-B1C1-4A99-A113-9948B04CF7E7@gmail.com> My version: varnish-2.0.4-1.el5, on 64 bit Centos54 Rob On 2 Jan 2011, at 18:16, Robert Shilston wrote: > I recall a question recently about the use of varnishreplay. I've got a bug in a web application, and I know the varnish transaction ID for an offending request. It would be handy to use varnishreplay to send this to varnish and through to the backend for testing. However, I don't appear to be able to extract a portion from the varnishlog file into another varnishlog file. 
> > This is the sort of command I'd like to run: > > varnishlog -r /var/log/varnish/varnish.log -c -o TxHeader 1234567890 -w /tmp/1234567890.varnishlog > > then > > varnishreplay -a localhost:80 -r /tmp/1234567890.varnishlog > > I admit that I'm not running the latest code (I'm using ), but I've reviewed the changes in trac, and don't see anything similar to the above. Has anyone got a cunning suggestion as to how I can achieve this? > > Thanks, and best wishes for 2011 > > > Rob > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From steamboatid at gmail.com Sun Jan 2 22:36:52 2011 From: steamboatid at gmail.com (dwi kristianto) Date: Mon, 3 Jan 2011 05:36:52 +0700 Subject: truncated VRT_r_req_url Message-ID: Hello, because Apache cannot handle a decoded URL correctly, I am trying to do URL decoding in vcl_recv. But when I try to use VRT_r_req_url(sp), it returns a truncated URL when the input URL contains white space. For example: input request: http://adomain/q?word1 word2 output of VRT_r_req_url: http://adomain/q?word1 How can I get the full URL from VRT_r_req_url? Many thanks in advance. dwi. From phk at phk.freebsd.dk Mon Jan 3 07:57:58 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 03 Jan 2011 07:57:58 +0000 Subject: truncated VRT_r_req_url In-Reply-To: Your message of "Mon, 03 Jan 2011 05:36:52 +0700." Message-ID: <15465.1294041478@critter.freebsd.dk> In message , dwi kristianto writes: >input request: http://adomain/q?word1 word2 The HTTP client has to encode the space as %20 before sending it; the URL field cannot contain white-space in the HTTP protocol. See RFC2616 section 5.1 -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
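A quick illustration of the rule phk cites (Python here purely as a stand-in for whatever HTTP client the misbehaving bots should be using; this snippet is not from the thread):

```python
from urllib.parse import quote

# RFC 2616 section 5.1: the Request-URI may not contain raw white space,
# so the client must percent-encode the space before the request line is sent.
path = quote("/q?word1 word2", safe="/?")
print(path)  # -> /q?word1%20word2
```

A request line built from this encoded path is what Varnish expects; the raw form with a literal space is simply not valid HTTP.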
From amedeo at oscert.net Mon Jan 3 09:35:01 2011 From: amedeo at oscert.net (Amedeo Salvati) Date: Mon, 3 Jan 2011 10:35:01 +0100 Subject: Ubuntu installation guide page installs outdated Varnish cache In-Reply-To: References: Message-ID: hi Aaron, some days ago i wrote this howto for ubuntu, it's in italian but if you want you can use google translator or something like that... Original link: http://lab.oscert.net/varnish/installazione-di-varnish-dai-sorgenti google translator: http://translate.google.it/translate?js=n&prev=_t&hl=it&ie=UTF-8&layout=2&eotf=1&sl=it&tl=en&u=http%3A%2F%2Flab.oscert.net%2Fvarnish%2Finstallazione-di-varnish-dai-sorgenti regards amedeo 2010/12/22 Aaron . : > Hello, tried contacting varnish contact person, but couldn't find any ways > to do so - thus the mailing list. Sorry if I'm causing others any > inconvenience ;-) > > Just a heads up for those using the Ubuntu installation guide: it installs > an outdated varnish: > > root at xen7:/etc/apache2# varnishd -V > > varnishd (varnish-1.0.3) > > Copyright (c) 2006 Linpro AS / Verdens Gang AS > > root at xen7:/etc/apache2# > > Here's what I did (rather than re-compliing from source to get 2.1.4 (am > getting 2.1.2 on Ubuntu Hardy) > > 1. > > From > http://repo.varnish-cache.org/debian/dists/hardy/varnish-2.1/binary-amd64/: > > > wget > http://repo.varnish-cache.org/debian/dists/hardy/varnish-2.1/binary-amd64/varnish_2.1.2-1~hardy1_amd64.deb > > 2. > > To resolve dependency issues: > > apt-get install libvarnish1 > 3. > dpkg -i varnish_2.1.2-1~hardy1_amd64.deb > > After installation: > > root at xen7:/home/bsg/download# varnishd -V > > varnishd (varnish-2.1.2 SVN ) > > Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS > > root at xen7:/home/bsg/download# > > > Hope this helps :-) > > Keep up the great work guys! 
> > Cheers, > Aaron > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From perbu at varnish-software.com Mon Jan 3 09:36:24 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 3 Jan 2011 10:36:24 +0100 Subject: I've let the hordes loose on the wiki Message-ID: as a test we're now not requiring the "wiki" bit to be set on a user to edit. Only that the user needs to be authenticated. I'll clean it up if spammers break it. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From phk at phk.freebsd.dk Mon Jan 3 09:38:07 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 03 Jan 2011 09:38:07 +0000 Subject: I've let the hordes loose on the wiki In-Reply-To: Your message of "Mon, 03 Jan 2011 10:36:24 +0100." Message-ID: <8626.1294047487@critter.freebsd.dk> In message , Per Buer writes: >as a test we're now not requiring the "wiki" bit to be set on a user >to edit. Only that the user needs to be authenticated. > >I'll clean it up if spammers break it. Why ? What makes you think spammers will leave us alone now ? -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
From perbu at varnish-software.com Mon Jan 3 09:51:46 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 3 Jan 2011 10:51:46 +0100 Subject: I've let the hordes loose on the wiki In-Reply-To: <8626.1294047487@critter.freebsd.dk> References: <8626.1294047487@critter.freebsd.dk> Message-ID: On Mon, Jan 3, 2011 at 10:38 AM, Poul-Henning Kamp wrote: > In message , Per > Buer writes: > >>as a test we're now not requiring the "wiki" bit to be set on a user >>to edit. Only that the user needs to be authenticated. >> >>I'll clean it up if spammers break it. > > Why ? > > What makes you think spammers will leave us alone now ? The naming of the vcl function seems to keep them at bay. Either that or you've been deleting accounts faster than I noticed them lately. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From phk at phk.freebsd.dk Mon Jan 3 09:54:16 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 03 Jan 2011 09:54:16 +0000 Subject: I've let the hordes loose on the wiki In-Reply-To: Your message of "Mon, 03 Jan 2011 10:51:46 +0100." Message-ID: <24057.1294048456@critter.freebsd.dk> In message , Per Buer writes: >The naming of the vcl function seems to keep them at bay. Either that >or you've been deleting accounts faster than I noticed them lately. That helps a lot, but notice that probe accounts still get through. Once the probers find out that they can deface the wiki, the "howto" lists will be updated with the answer to write in the VCL challenge and we're back to square one. The challenge prevents the scripted attempts, it will not prevent the spammers who pay "call-centers" to do the job. 
-- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From kristian at varnish-software.com Mon Jan 3 09:56:06 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Mon, 3 Jan 2011 10:56:06 +0100 Subject: Memory usage In-Reply-To: References: Message-ID: <20110103095606.GA2454@freud> Hi, On Thu, Dec 30, 2010 at 12:47:36PM -0300, Roberto O. Fern?ndez Crisial wrote: > I have two servers running varnish (varnish-2.1.3 SVN), and both started > with "-s malloc,28G" option. > > I've tried with varnishstat, looking for sma's values (I think "sma_balloc + > sma_bfree = 28G"), but only in one server shows "correct" information: > > sma_balloc 1641487715428 . SMA bytes allocated > sma_bfree 1611596371850 . SMA bytes free > > The other server shows: > > sma_balloc 239065444117 . SMA bytes allocated > sma_bfree 226776048576 . SMA bytes free > > What should I do to see the memory used and memory free from varnish cache? sma_nbytes is probably easier to read - though you'll get the same result. This only indicates that the second server hasn't filled the cache. To verify this, check if the first server has n_lru_nuked objects (it should) and compare it to the second server (which should have 0 nuked - unless the cache size shrunk dramatically). The sma_nbytes and related counters refer to memory used for current objects, not what is available. To allocate space for a new object, Varnish checks that: (new object size) + (sma_nbytes) < (total size, specified by -s) Hope this helps. - Kristian -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From amedeo at oscert.net Mon Jan 3 10:21:01 2011 From: amedeo at oscert.net (Amedeo Salvati) Date: Mon, 3 Jan 2011 11:21:01 +0100 Subject: Ubuntu installation guide page installs outdated Varnish cache In-Reply-To: References: Message-ID: Hi Aaron, generally I prefer not to build applications from source on a production environment, because of the security issue of having a C compiler installed on the same host; but for Varnish I think this doesn't matter, because Varnish requires gcc or another C compiler to compile the VCL script at run time. For this reason I have configured Varnish in a chroot jail, and so I wrote (yesterday, 02-01-2011) another howto: http://lab.oscert.net/varnish/varnish-in-chroot-jail google translator: http://translate.google.it/translate?js=n&prev=_t&hl=it&ie=UTF-8&layout=2&eotf=1&sl=it&tl=en&u=http%3A%2F%2Flab.oscert.net%2Fvarnish%2Fvarnish-in-chroot-jail Another approach you can take is to build Varnish on a dev host, pack a chroot env with a script (it's a very simple script, but it works for me): https://github.com/amedeos/UVarnishChroot (if you find a bug or an enhancement, please let me know) and finally deploy it to the production host. Regards, as 2011/1/3 Aaron . : > Hello Amedeo, > > Thank you for your kind response and link to the how-to. > > I was trying to avoid building Varnish from source as I'm using Puppet to > implement changes to all my servers (rather than doing repetitive work on > each machine) and multiple EC2 instances. Appreciate the write-up though, > it's in my bookmark now ;-) > > Cheers, and Happy 2011! 
> > Best regards, > Aaron > > >> Date: Mon, 3 Jan 2011 10:35:01 +0100 >> Subject: Re: Ubuntu installation guide page installs outdated Varnish >> cache >> From: amedeo at oscert.net >> To: ratz at hotmail.com >> CC: varnish-misc at varnish-cache.org >> >> hi Aaron, >> >> some days ago i wrote this howto for ubuntu, it's in italian but if >> you want you can use google translator or something like that... >> >> Original link: >> http://lab.oscert.net/varnish/installazione-di-varnish-dai-sorgenti >> >> google translator: >> >> http://translate.google.it/translate?js=n&prev=_t&hl=it&ie=UTF-8&layout=2&eotf=1&sl=it&tl=en&u=http%3A%2F%2Flab.oscert.net%2Fvarnish%2Finstallazione-di-varnish-dai-sorgenti >> >> regards >> amedeo >> >> 2010/12/22 Aaron . : >> > Hello, tried contacting varnish contact person, but couldn't find any >> > ways >> > to do so - thus the mailing list. Sorry if I'm causing others any >> > inconvenience ;-) >> > >> > Just a heads up for those using the Ubuntu installation guide: it >> > installs >> > an outdated varnish: >> > >> > root at xen7:/etc/apache2# varnishd -V >> > >> > varnishd (varnish-1.0.3) >> > >> > Copyright (c) 2006 Linpro AS / Verdens Gang AS >> > >> > root at xen7:/etc/apache2# >> > >> > Here's what I did (rather than re-compliing from source to get 2.1.4 (am >> > getting 2.1.2 on Ubuntu Hardy) >> > >> > 1. >> > >> > From >> > >> > http://repo.varnish-cache.org/debian/dists/hardy/varnish-2.1/binary-amd64/: >> > >> > >> > wget >> > >> > http://repo.varnish-cache.org/debian/dists/hardy/varnish-2.1/binary-amd64/varnish_2.1.2-1~hardy1_amd64.deb >> > >> > 2. >> > >> > To resolve dependency issues: >> > >> > apt-get install libvarnish1 >> > 3. 
>> > dpkg -i varnish_2.1.2-1~hardy1_amd64.deb >> > >> > After installation: >> > >> > root at xen7:/home/bsg/download# varnishd -V >> > >> > varnishd (varnish-2.1.2 SVN ) >> > >> > Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS >> > >> > root at xen7:/home/bsg/download# >> > >> > >> > Hope this helps :-) >> > >> > Keep up the great work guys! >> > >> > Cheers, >> > Aaron >> > >> > _______________________________________________ >> > varnish-misc mailing list >> > varnish-misc at varnish-cache.org >> > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > From perbu at varnish-software.com Mon Jan 3 10:47:27 2011 From: perbu at varnish-software.com (Per Buer) Date: Mon, 3 Jan 2011 11:47:27 +0100 Subject: I've let the hordes loose on the wiki In-Reply-To: <24057.1294048456@critter.freebsd.dk> References: <24057.1294048456@critter.freebsd.dk> Message-ID: On Mon, Jan 3, 2011 at 10:54 AM, Poul-Henning Kamp wrote: > > That helps a lot, but notice that probe accounts still get through. Ok. I've put the account creation process under monitoring so I'll have a better feeling on how the spammers are doing. The wiki bit is on a bit longer. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From mail at danielbruessler.de Mon Jan 3 11:04:41 2011 From: mail at danielbruessler.de (=?ISO-8859-1?Q?Daniel_Br=FC=DFler?=) Date: Mon, 03 Jan 2011 12:04:41 +0100 Subject: I've let the hordes loose on the wiki In-Reply-To: References: <24057.1294048456@critter.freebsd.dk> Message-ID: <4D21AD49.5020704@danielbruessler.de> Hello Per, additionally to watching the user-registrations you can look at this project: http://www.*bot*-*trap*.de They're very active in maintaining a list of spammers. You can register in the forum, and add the script to the wiki. 
This redirects to a special page when one of the spammers tries to post to the wiki, so most spam-bots won't have access to the wiki. Greets! Daniel :: Daniel Brüßler - Emilienstr. 10 - 90489 Nürnberg >> That helps a lot, but notice that probe accounts still get through. > Ok. I've put the account creation process under monitoring so I'll > have a better feeling on how the spammers are doing. The wiki bit is > on a bit longer. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steamboatid at gmail.com Mon Jan 3 14:21:17 2011 From: steamboatid at gmail.com (dwi kristianto) Date: Mon, 3 Jan 2011 21:21:17 +0700 Subject: truncated VRT_r_req_url In-Reply-To: <15465.1294041478@critter.freebsd.dk> References: <15465.1294041478@critter.freebsd.dk> Message-ID: Hello Poul, thanks for the reply. I agree with you that the standard says so, but some bad bots and browsers may ignore it. That's the problem I'm facing now, and I'm creating something in Varnish to handle it correctly for the backend. On Mon, Jan 3, 2011 at 2:57 PM, Poul-Henning Kamp wrote: > In message , dwi > kristianto writes: > >>input request: http://adomain/q?word1 word2 > > The HTTP client has to encode the space as %20 before sending it, > the URL field cannot contain white-space in the HTTP protocol. > > See RFC2616 section 5.1 > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > -- http://design.ebali.web.id/ ym: steamboatid at yahoo.com Add your sites for FREE at: http://freeaddlinks.info/ http://freeseolinks.info/ From phk at phk.freebsd.dk Mon Jan 3 14:53:14 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 03 Jan 2011 14:53:14 +0000 Subject: truncated VRT_r_req_url In-Reply-To: Your message of "Mon, 03 Jan 2011 21:21:17 +0700." 
Message-ID: <51130.1294066394@critter.freebsd.dk> In message , dwi kristianto writes: >Hello Poul, > >thanks for the reply. >I agree with you that the standard says so, but some bad bots and >browsers may ignore it. Then they don't get a response from Varnish... There is no sane way to work around such brokenness; contact the owners of those bots and tell them to fix it, or block access for them if they are not important. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From roberto.fernandezcrisial at gmail.com Mon Jan 3 15:14:39 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Mon, 3 Jan 2011 12:14:39 -0300 Subject: Memory usage In-Reply-To: <20110103095606.GA2454@freud> References: <20110103095606.GA2454@freud> Message-ID: Kristian, Thank you for your reply. I was looking for a command that shows me the memory usage (used and free) so I can monitor the servers from Nagios/Cacti software. Do you know where I can start? Do you know about any test script to find out how much memory is being used? Thank you, Roberto. 2011/1/3 Kristian Lyngstol > Hi, > > On Thu, Dec 30, 2010 at 12:47:36PM -0300, Roberto O. Fernández Crisial > wrote: > > I have two servers running varnish (varnish-2.1.3 SVN), and both started > > with "-s malloc,28G" option. > > > > I've tried with varnishstat, looking for sma's values (I think > "sma_balloc + > > sma_bfree = 28G"), but only in one server shows "correct" information: > > > > sma_balloc 1641487715428 . SMA bytes allocated > > sma_bfree 1611596371850 . SMA bytes free > > > > The other server shows: > > > > sma_balloc 239065444117 . SMA bytes allocated > > sma_bfree 226776048576 . SMA bytes free > > > > What should I do to see the memory used and memory free from varnish cache?
> > sma_nbytes is probably easier to read - though you'll get the same result. > > This only indicates that the second server hasn't filled the cache. To > verify this, check if the first server has n_lru_nuked objects (it should) > and compare it to the second server (which should have 0 nuked - unless the > cache size shrunk dramatically). > > The sma_nbytes and related counters refer to memory used for current > objects, not what is available. To allocate space for a new object, Varnish > checks that: > > (new object size) + (sma_nbytes) < (total size, specified by -s) > > Hope this helps. > > - Kristian > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > > iQEcBAEBAgAGBQJNIZ01AAoJEIC1IG5rXEfqjWAIAKSfq5l0cMdzpBBjSEQmq8el > QHyt779tk67y6sjtnclraOYopuqQW33atO92ycgIxHSucXqkMJW340a2u6Pax4hj > Mq7jTRS0BPk+SKhkUirjTBZf8tg+ZNdo7sOyD6GDRXhe6xLD6l3SsujaHj1bpklj > nepWoZa5DIWy/uaZ+O/RIpp+iL8tdY9tJL9Xh9VrjjP1VC6FGGl7T4ieie2SSrgb > eyFvGxX6H8l3Q/gZ+4gzjdyMKfDvkeIsVkBFZ3njZFawTjNblQELXBJh9tsqs54L > Vg6GxagRLDFHYiPcO/q+uDWUAD5EzKFa9q1wP3BFn0X9GMhvBHsF/jKxhaEg7VM= > =9c3K > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fla_torres at yahoo.com.br Mon Jan 3 16:10:19 2011 From: fla_torres at yahoo.com.br (Flavio Torres) Date: Mon, 03 Jan 2011 14:10:19 -0200 Subject: Trouble understanding Varnishlog In-Reply-To: References: Message-ID: <4D21F4EB.3010403@yahoo.com.br> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 12/28/2010 09:27 PM, . wrote: > 15 TxHeader c Cache-Control: private, max-age=0, > must-revalidate Hello, Varnish must use HTTP/1.1 in its communications with the content servers. Hope this helps. 
-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk0h9OkACgkQNRQApncg294cNwCfZSMNUZj2uNiUhyeoemLKAuml +DYAoOG0p1k8rVRglxqQrIxIEAANFe4w =pd7p -----END PGP SIGNATURE----- From gmoniey at gmail.com Tue Jan 4 04:31:04 2011 From: gmoniey at gmail.com (.) Date: Mon, 3 Jan 2011 20:31:04 -0800 Subject: Trouble understanding Varnishlog In-Reply-To: <4D21F4EB.3010403@yahoo.com.br> References: <4D21F4EB.3010403@yahoo.com.br> Message-ID: I'm using HTTP/1.1, so I don't believe that is the issue. I updated my VCL to handle cookies correctly, and it seems like it is caching things correctly, except for the fact that the X-Varnish header is still only returning one field. Here is an excerpt from varnishlog:

   15 VCL_call     c recv
   15 VCL_return   c lookup
   15 VCL_call     c hash
   15 VCL_return   c hash
   15 Hit          c 1571591022
   15 VCL_call     c hit
   15 VCL_return   c deliver
   15 VCL_call     c deliver
   15 VCL_return   c deliver
   15 TxProtocol   c HTTP/1.1
   15 TxStatus     c 304
   15 TxResponse   c Not Modified
   15 TxHeader     c Date: Tue, 04 Jan 2011 04:26:20 GMT
   15 TxHeader     c Via: 1.1 varnish
   15 TxHeader     c X-Varnish: 1571591046
   15 TxHeader     c Cache-Control: private, max-age=0, must-revalidate
   15 TxHeader     c ETag: "ded8a984e5d814ae0b2113e91757adbc"
   15 TxHeader     c Connection: keep-alive
   15 TxHeader     c X-Cache: HIT

From what I can gather, there was a cache hit, and the object was delivered. I even added the X-Cache header as shown here: http://www.varnish-cache.org/trac/wiki/VCLExampleHitMissHeader Any idea why X-Varnish would indicate a cache miss by not specifying 2 numbers? Thanks. On Mon, Jan 3, 2011 at 8:10 AM, Flavio Torres wrote: > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 12/28/2010 09:27 PM, . wrote: > > 15 TxHeader c Cache-Control: private, max-age=0, > > must-revalidate > > Hello, > > Varnish must use HTTP/1.1 in its communications with the content servers. > > Hope this helps. 
> -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iEYEARECAAYFAk0h9OkACgkQNRQApncg294cNwCfZSMNUZj2uNiUhyeoemLKAuml > +DYAoOG0p1k8rVRglxqQrIxIEAANFe4w > =pd7p > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fla_torres at yahoo.com.br Tue Jan 4 12:22:26 2011 From: fla_torres at yahoo.com.br (Flavio Torres) Date: Tue, 04 Jan 2011 10:22:26 -0200 Subject: Trouble understanding Varnishlog In-Reply-To: References: <4D21F4EB.3010403@yahoo.com.br> Message-ID: <4D231102.4060407@yahoo.com.br> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 01/04/2011 02:31 AM, . wrote: > I'm using HTTP/1.1, so I don't believe that is the issue. > Hello! I'm sorry I miscommunicated to you; I told you about HTTP/1.1 because your header says: 'Cache-Control: private, max-age=0, must-revalidate' and Varnish should respect cache requests with private or max-age=0. > Any idea why X-Varnish would indicate a cache miss by not > specifying 2 numbers? 
I suggest you the following vcl: # for security reasons :) acl header { "localhost"; } # vcl_deliver sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } if (client.ip ~ header) { set resp.http.X-Served-By = server.hostname; set resp.http.X-Client-IP = client.ip; set resp.http.X-Cache-Hits = obj.hits; } else { unset resp.http.X-Server-ID; } # set resp.http.X-Cache-Hits = obj.hits; set resp.http.X-Age = resp.http.Age; unset resp.http.Age; remove resp.http.X-Varnish; remove resp.http.Via; } And try: $ curl -I -H "Host: www.yourdomain.com" http://localhost/upload/canal/22/topo.jpg HTTP/1.1 200 OK Last-Modified: Fri, 22 Oct 2010 15:02:47 GMT Cache-Control: max-age=86400, public Expires: Tue, 04 Jan 2011 12:15:32 GMT X-SID: 01 Content-Type: image/jpeg VID: 01 Content-Length: 5291 Date: Tue, 04 Jan 2011 12:11:59 GMT Connection: keep-alive X-Cache: HIT # HIT lol X-Served-By: cache-01.oi.com.br X-Client-IP: 127.0.0.1 X-Cache-Hits: 1355 # hits X-Age: 86187 hope this helps -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk0jEP8ACgkQNRQApncg296FcgCgsYOzbKHtR76n+GEHltsGt+RG i40An3RBL5/rOOkumISEKFE1q8v24YcP =nTOZ -----END PGP SIGNATURE----- From gmoniey at gmail.com Tue Jan 4 20:14:35 2011 From: gmoniey at gmail.com (.) Date: Tue, 4 Jan 2011 12:14:35 -0800 Subject: Trouble understanding Varnishlog In-Reply-To: <4D231102.4060407@yahoo.com.br> References: <4D21F4EB.3010403@yahoo.com.br> <4D231102.4060407@yahoo.com.br> Message-ID: Hi Flavio, Thanks for your reply. I'm curious as to why you suggested I remove the X-Varnish header? I guess my confusion is why the header doesn't include 2 numbers, even though it is a cache HIT, and the HIT counter is being incremented. Thanks. On Tue, Jan 4, 2011 at 4:22 AM, Flavio Torres wrote: > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 01/04/2011 02:31 AM, . 
wrote: > > I'm using HTTP/1.1, so I don't believe that is the issue. > > > > Hello! > > I?m sorry I miscommunicated to you, I told u about HTTP/1.1 because > your header says: 'Cache-Control: private, max-age=0, must-revalidate' > and varnish should respect cache requests with private or max-age=0. > > > Any idea why X-Varnish would indicate a cache miss by not > > specifying > 2 numbers? > > I suggest you the following vcl: > > # for security reasons :) > acl header { > "localhost"; > } > > # vcl_deliver > > sub vcl_deliver { > if (obj.hits > 0) { > set resp.http.X-Cache = "HIT"; > } else { > set resp.http.X-Cache = "MISS"; > } > > if (client.ip ~ header) { > set resp.http.X-Served-By = server.hostname; > set resp.http.X-Client-IP = client.ip; > set resp.http.X-Cache-Hits = obj.hits; > } else { > unset resp.http.X-Server-ID; > } > > # set resp.http.X-Cache-Hits = obj.hits; > set resp.http.X-Age = resp.http.Age; > unset resp.http.Age; > > remove resp.http.X-Varnish; > remove resp.http.Via; > > } > > > And try: > > $ curl -I -H "Host: www.yourdomain.com" > http://localhost/upload/canal/22/topo.jpg > HTTP/1.1 200 OK > Last-Modified: Fri, 22 Oct 2010 15:02:47 GMT > Cache-Control: max-age=86400, public > Expires: Tue, 04 Jan 2011 12:15:32 GMT > X-SID: 01 > Content-Type: image/jpeg > VID: 01 > Content-Length: 5291 > Date: Tue, 04 Jan 2011 12:11:59 GMT > Connection: keep-alive > X-Cache: HIT # HIT lol > X-Served-By: cache-01.oi.com.br > X-Client-IP: 127.0.0.1 > X-Cache-Hits: 1355 # hits > X-Age: 86187 > > > hope this helps > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iEYEARECAAYFAk0jEP8ACgkQNRQApncg296FcgCgsYOzbKHtR76n+GEHltsGt+RG > i40An3RBL5/rOOkumISEKFE1q8v24YcP > =nTOZ > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cdgraff at gmail.com Wed Jan 5 00:11:49 2011 From: cdgraff at gmail.com (Alejandro) Date: Tue, 4 Jan 2011 21:11:49 -0300 Subject: Load balancing streaming (rtsp) servers In-Reply-To: References: Message-ID: Nicholas, Any luck with this? works? I need the same feature.... you will need to use this for balancing Wowza Media Server right? Thanks, Alejandro 2010/12/29 Nicholas Tang > Question: is it possible to load balance rtsp servers using Varnish? > They'd need to "stick" based on client ip. My thought was to try something > like this: > > > backend mobile-1 { > .host = ""; > include "/usr/local/varnish/etc/varnish/backend.vcl"; > } > > backend mobile-2 { > .host = ""; > include "/usr/local/varnish/etc/varnish/backend.vcl"; > } > > backend mobile-3 { > .host = ""; > include "/usr/local/varnish/etc/varnish/backend.vcl"; > } > > backend mobile-4 { > .host = ""; > include "/usr/local/varnish/etc/varnish/backend.vcl"; > } > > director mobile_rtsp client { > { .backend = mobile-1; } > { .backend = mobile-2; } > { .backend = mobile-3; } > { .backend = mobile-4; } > } > > sub vcl_recv { > set req.backend = mobile_rtsp; > set client.identity = client.ip; > return (pipe); > } > > sub vcl_pipe { > # close backend connection after each pipe - this prevents requests from > stepping on each other > # http://www.varnish-cache.org/trac/wiki/VCLExamplePipe > set bereq.http.connection = "close"; > } > > > Thanks, > Nicholas > > > *Nicholas Tang**:* > VP, Dev Ops > > nicholas.tang at livestream.com > | > t: +1 (646) 495 9707 > | > m: +1 (347) 410 6066 > | > 111 8th Avenue, Floor 15, New York, NY 10011 > [image: www.livestream.com] > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicholas.tang at livestream.com Wed Jan 5 01:13:39 2011 From: nicholas.tang at livestream.com (Nicholas Tang) Date: Tue, 4 Jan 2011 20:13:39 -0500 Subject: Load balancing streaming (rtsp) servers In-Reply-To: References: Message-ID: Haven't had a chance to test it yet, and no one has responded. And yeah, Wowza. :) Nicholas On Jan 4, 2011 7:11 PM, "Alejandro" wrote: > Nicholas, > > Any luck with this? works? > > I need the same feature.... you will need to use this for balancing Wowza > Media Server right? > > Thanks, > Alejandro > > 2010/12/29 Nicholas Tang > >> Question: is it possible to load balance rtsp servers using Varnish? >> They'd need to "stick" based on client ip. My thought was to try something >> like this: >> >> >> backend mobile-1 { >> .host = ""; >> include "/usr/local/varnish/etc/varnish/backend.vcl"; >> } >> >> backend mobile-2 { >> .host = ""; >> include "/usr/local/varnish/etc/varnish/backend.vcl"; >> } >> >> backend mobile-3 { >> .host = ""; >> include "/usr/local/varnish/etc/varnish/backend.vcl"; >> } >> >> backend mobile-4 { >> .host = ""; >> include "/usr/local/varnish/etc/varnish/backend.vcl"; >> } >> >> director mobile_rtsp client { >> { .backend = mobile-1; } >> { .backend = mobile-2; } >> { .backend = mobile-3; } >> { .backend = mobile-4; } >> } >> >> sub vcl_recv { >> set req.backend = mobile_rtsp; >> set client.identity = client.ip; >> return (pipe); >> } >> >> sub vcl_pipe { >> # close backend connection after each pipe - this prevents requests from >> stepping on each other >> # http://www.varnish-cache.org/trac/wiki/VCLExamplePipe >> set bereq.http.connection = "close"; >> } >> >> >> Thanks, >> Nicholas >> >> >> *Nicholas Tang**:* >> VP, Dev Ops >> >> nicholas.tang at livestream.com >> | >> t: +1 (646) 495 9707 >> | >> m: +1 (347) 410 6066 >> | >> 111 8th Avenue, Floor 15, New York, NY 10011 >> [image: www.livestream.com] >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc 
at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From steamboatid at gmail.com Wed Jan 5 14:19:48 2011
From: steamboatid at gmail.com (dwi kristianto)
Date: Wed, 5 Jan 2011 21:19:48 +0700
Subject: md5 in inline C
Message-ID: 

hello all,

is there any easy way to use md5 with varnish VCL inline C?

i've tried this code with no luck. at sub vcl_recv:

#include <openssl/evp.h>
#include <string.h>

void md5_hashing(char *msg, char *md5hash) {
    EVP_MD_CTX mdctx;
    const EVP_MD *md;
    unsigned char md_value[EVP_MAX_MD_SIZE];
    int md_len, i;

    EVP_MD_CTX_init(&mdctx);
    EVP_DigestInit_ex(&mdctx, md, NULL);
    EVP_DigestUpdate(&mdctx, msg, strlen(msg));
    EVP_DigestFinal_ex(&mdctx, md5hash, &md_len);
    EVP_MD_CTX_cleanup(&mdctx);
}

taken from /etc/init.d/varnish:

start_varnishd() {
    log_daemon_msg "Starting $DESC" "$NAME"
    output=$(/bin/tempfile -s.varnish)
    if start-stop-daemon \
        --start --quiet --pidfile ${PIDFILE} --exec ${DAEMON} -- \
        -P ${PIDFILE} ${DAEMON_OPTS} \
        -p cc_command='cc -fpic -shared -Wl,-x --verbose \
            -L/usr/local/include/libmemcached/memcached.h -lmemcached \
            -L/usr/local/include/openssl/ssl3.h -lssl \
            -o %o %s' > ${output} 2>&1; then
        log_end_msg 0
    else
        log_end_msg 1
        cat $output
        exit 1
    fi
    rm $output
}

many thanks in advance for any responses.

regards,
dwi

From phk at phk.freebsd.dk Wed Jan 5 14:29:56 2011
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 05 Jan 2011 14:29:56 +0000
Subject: md5 in inline C
In-Reply-To: Your message of "Wed, 05 Jan 2011 21:19:48 +0700."
Message-ID: <26657.1294237796@critter.freebsd.dk>

In message , dwi kristianto writes:
>hello all,
>
>is there any easy way to use md5 with varnish VCL inline C?
>
>i've tried this code with no luck.
>at sub vcl_recv:
>#include <openssl/evp.h>
>#include <string.h>

You need to put the includes outside sub vcl_recv{}

Varnish 3.0 will have support for loadable modules ("VMOD") and one
of the ones I hope to see is one containing common hash functions
like MD5 etc.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From steamboatid at gmail.com Wed Jan 5 14:38:46 2011
From: steamboatid at gmail.com (dwi kristianto)
Date: Wed, 5 Jan 2011 21:38:46 +0700
Subject: md5 in inline C
In-Reply-To: <26657.1294237796@critter.freebsd.dk>
References: <26657.1294237796@critter.freebsd.dk>
Message-ID: 

hi poul,

i did as suggested, but still varnish replies nothing. below is taken from /var/log/syslog:

Jan 5 21:36:16 deblo varnishd[19722]: Child (19783) Panic message: Assert error in VRT_r_obj_cacheable(), cache_vrt.c line 626:#012 Condition((sp)->magic == 0x2c2f9c5a) not true.#012thread = (cache-worker)#012ident = Linux,2.6.26-2-686,i686,-smalloc,-hcritbit,epoll#012Backtrace:#012 0x8070232: /usr/sbin/varnishd [0x8070232]#012 0x8077572: /usr/sbin/varnishd(VRT_r_obj_cacheable+0x52) [0x8077572]#012 0xafb37a86: /usr/lib/i686/cmov/libcrypto.so.0.9.8(EVP_DigestInit_ex+0xc6) [0xafb37a86]#012 0xafbd2ba3: ./vcl.1P9zoqAU.so [0xafbd2ba3]#012 0xafbd25fa: ./vcl.1P9zoqAU.so [0xafbd25fa]#012 0xafbd310d: ./vcl.1P9zoqAU.so [0xafbd310d]#012 0x8076284: /usr/sbin/varnishd(VCL_pass_method+0x54) [0x8076284]#012 0x805a5bf: /usr/sbin/varnishd [0x805a5bf]#012 0x805d392: /usr/sbin/varnishd(CNT_Session+0x4a2) [0x805d392]#012 0x8072cd7: /usr/sbin/varnishd [0x8072cd7]#012sp = 0xb74c9004 {#012 fd = 11, id = 11, xid = 1999954527,#012 client = 192.168.1.25 45083,#012 step = STP_PASS,#012 handling = deliver,#012 restarts = 0, esis = 0#012 ws = 0xb74c9054 { #012 id = "sess",#012 {s,f,r,e} = {0xb74c97e4,+360,(nil),+16384},#012 },#012 http[req] = {#012 ws = 0xb74c9054[sess]#012 "HEAD",#012 "/",#012 "HTTP/1.1",#012 "User-Agent: curl/7.18.2 (i486-pc-linux-gnu) libcurl/7.18.2 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.8 libssh2/0.18",#012 "Host: www.findpdf.local",#012 "Accept: */*",#012 "X-Agen: curl/7.18.2 (i486-pc-linux-gnu) libcurl/7.18.2 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.8 libssh2/0.18",#012 "X-Durl: /",#012 "X-Host: www.findpdf.local",#012 },#012 worker = 0xae203194 {#012 ws = 0xae2032ac { #012 id = "wrk",#012 {s,f,r,e} = {0xae1fd150,+24,(nil),+16384},#012 },#012 http[bereq] = {#012 ws = 0xae2032ac[wrk]#012 "HEAD",#012 "/",#012 "HTTP/1.1",#012 "User-Agent: curl/7.18.2 (i486-pc-linux-gnu) libcurl/7.18.2 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.8 libssh2/0.18",#012 "Host: www.findpdf.local",#012 "Accept: */*",

sorry, but i'm dumb on C. i see that "EVP_DigestInit_ex" is reported at syslog - does that mean the bug starts from there?

thank you for your time and patience. :)

On Wed, Jan 5, 2011 at 9:29 PM, Poul-Henning Kamp wrote:
> In message , dwi kristianto writes:
>>hello all,
>>
>>is there any easy way to use md5 with varnish VCL inline C?
>>
>>i've tried this code with no luck.
>>at sub vcl_recv:
>>#include <openssl/evp.h>
>>#include <string.h>
>
> You need to put the includes outside sub vcl_recv{}
>
> Varnish 3.0 will have support for loadable modules ("VMOD") and one
> of the ones I hope to see is one containing common hash functions
> like MD5 etc.
>
> -- 
> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> phk at FreeBSD.ORG         | TCP/IP since RFC 956
> FreeBSD committer       | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.

From phk at phk.freebsd.dk Wed Jan 5 14:45:46 2011
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 05 Jan 2011 14:45:46 +0000
Subject: Please help testing gzip code in Varnish
Message-ID: <48090.1294238746@critter.freebsd.dk>

I have added the first part of gzip support to varnish-trunk.
This is new code with semi-twisted logic under the hood, so I am very
dependent on you guys helping to test it out.

If you set the parameter http_gzip_support to true, varnish will always
send "Accept-encoding: gzip" to the backend. If the client does not
understand gzip, varnish will gunzip the object during delivery. This
means that you will only cache the gzip'ed version of objects.

The responsibility for gzip'ing the object is with your backend; Varnish
doesn't know which objects you want to gzip and which not (ie: images:
no, html: yes, but what about .cgi?)

ESI is not supported with gzip mode yet, that is the next and even more
involved step.

When you file tickets, please use "version = trunk" in trac.

Thanks in advance,

Poul-Henning

PS: Also be aware that "purge" is now called "ban" in -trunk.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From g.georgovassilis at gmail.com Wed Jan 5 15:20:31 2011
From: g.georgovassilis at gmail.com (George Georgovassilis)
Date: Wed, 05 Jan 2011 16:20:31 +0100
Subject: Connections dropped under load
Message-ID: <4D248C3F.2010401@gmail.com>

Hello all,

I'm having trouble with dropped connections under a loadtest.

The setup: Java app with Tomcat 6 under an Ubuntu 8.04 VPS (virtual
linux server) proxied with a varnish 2.1.2 (16M malloc)

The test: The load test itself is aimed at generating many HTTP
connections, which after a brief setup phase are served entirely by
varnish. Hence the system load remains low (2% CPU consumed by varnish
at 300 requests / second).

The problem: As a measure for response, I am requesting an image from
the webapp running in Tomcat while the loadtest is underway. However
that either times out or is delivered after several seconds.
Varnishlog will often either not show the request (RxURL) at all, or show it several seconds after the browser dispatched it. I don't believe it to be a network/OS issue, as the server will happily accept SSH connections while the loadtest is ongoing and remains responsive throughout the test. How do I debug this? Thanks in advance, G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cosimo at streppone.it Wed Jan 5 15:30:20 2011 From: cosimo at streppone.it (Cosimo Streppone) Date: Wed, 05 Jan 2011 16:30:20 +0100 Subject: Connections dropped under load In-Reply-To: <4D248C3F.2010401@gmail.com> References: <4D248C3F.2010401@gmail.com> Message-ID: On Wed, 05 Jan 2011 16:20:31 +0100, George Georgovassilis wrote: > I'm having trouble with dropped connections under a loadtest. > > The problem: As a measure for response, I am requesting an image from > the webapp running in Tomcat while the loadtest is underway. However > that either times out or is delivered after several seconds. Varnishlog > will often either not show the request (RxURL) at all, or show it > several seconds after the browser dispatched it. Hi George, if you measure the time you mention as "several seconds" and it's either 3 or 9 seconds, I think what you're seeing is a client-side TCP retransmit timeout. I experienced that, both under load testing, and in real production setups. -- Cosimo From mail at danielbruessler.de Wed Jan 5 15:32:01 2011 From: mail at danielbruessler.de (=?ISO-8859-1?Q?Daniel_Br=FC=DFler?=) Date: Wed, 05 Jan 2011 16:32:01 +0100 Subject: Please help testing gzip code in Varnish In-Reply-To: <48090.1294238746@critter.freebsd.dk> References: <48090.1294238746@critter.freebsd.dk> Message-ID: <4D248EF1.7050705@danielbruessler.de> Hello Poul-Henning, > I have added the first part of gzip support to varnish-trunk. > > This is new code with semi-twisted logic under the hood, so > I am very dependent on you guys helping to test it out. 
that's a very good feature, as gzip gives a good performance-boost!

> PS: Also be aware that "purge" is now called "ban" in -trunk.

Question about purge/ban: I'm confused about this renaming. Why "ban"?
Please explain it. I didn't read about it in the mailing list, and I'm
reading most posts.

Greets!
Daniel

...................................................
:: Daniel Brüßler - Emilienstr. 10 - 90489 Nürnberg

From phk at phk.freebsd.dk Wed Jan 5 15:42:41 2011
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 05 Jan 2011 15:42:41 +0000
Subject: Please help testing gzip code in Varnish
In-Reply-To: Your message of "Wed, 05 Jan 2011 16:32:01 +0100." <4D248EF1.7050705@danielbruessler.de>
Message-ID: <48260.1294242161@critter.freebsd.dk>

In message <4D248EF1.7050705 at danielbruessler.de>, =?ISO-8859-1?Q?Daniel_Br=FC=DFler?= writes:

>Question about purge/ban: I'm confused about this renaming.

The new 3.x terminology is:

A ban will prevent any currently stored object that matches the
condition from being served ever again.

The test is only made when an object is hit as the result of a cache
lookup, or when the "ban_lurker" can see a way to check without a
request being present.

Bans can test both the URL and HTTP headers, with exact matches or
regular expressions, so you can for instance ban all images with one
command, or ban all content tagged with a special-purpose HTTP header.

A purge removes objects from storage immediately in response to a
cache lookup. Therefore the only criterion you can use, implicitly,
is the lookup hash value, normally URL+Host:, but you can change that
in vcl_hash{}. Usually you would use this for PURGE-like processing
in vcl_hit{}.

I am pondering ways to make it possible for PUT/POST to invalidate
any cached copies of that object, but this is tricky and some way
down my todo list.
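In VCL terms, the 3.x distinction described above looks roughly like this (a sketch only; the "purgers" ACL and the use of an HTTP PURGE request method are common conventions, not anything built into Varnish):

```
# Allow purging only from localhost (illustrative ACL).
acl purgers { "127.0.0.1"; }

sub vcl_recv {
    if (req.request == "PURGE" && !(client.ip ~ purgers)) {
        error 405 "Not allowed";
    }
}

# purge drops exactly the object(s) the current lookup hashed to.
sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged";
    }
}

# A ban, by contrast, is a condition matched lazily against stored
# objects, e.g. from the management CLI -- this one covers every
# image under /img/ with a single command:
#
#   ban req.url ~ "^/img/.*\.(png|gif|jpg)$"
```

A ban never touches the lookup hash, which is why one ban can cover many URLs at once, while a purge only ever removes what the current lookup resolves to.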
-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From g.georgovassilis at gmail.com Wed Jan 5 16:17:57 2011
From: g.georgovassilis at gmail.com (George Georgovassilis)
Date: Wed, 05 Jan 2011 17:17:57 +0100
Subject: Connections dropped under load
In-Reply-To: 
References: <4D248C3F.2010401@gmail.com>
Message-ID: <4D2499B5.8010601@gmail.com>

Hello Cosimo,

Thank you for the quick reply. After your hint I had the tests run
again but couldn't detect that pattern. What surprised me though after
looking through the logs is that almost all requests by the load
generator complete in a timely manner (< 1 sec), but all requests
generated by a real browser (IE, FF, Opera) will be served much later
or even run into a timeout.

On 05.01.2011 16:30, Cosimo Streppone wrote:
> On Wed, 05 Jan 2011 16:20:31 +0100, George Georgovassilis
> wrote:
>
>> I'm having trouble with dropped connections under a loadtest.
>>
>> The problem: As a measure for response, I am requesting an image from
>> the webapp running in Tomcat while the loadtest is underway. However
>> that either times out or is delivered after several seconds. Varnishlog
>> will often either not show the request (RxURL) at all, or show it
>> several seconds after the browser dispatched it.
>
> Hi George,
>
> if you measure the time you mention as "several seconds"
> and it's either 3 or 9 seconds, I think what you're seeing
> is a client-side TCP retransmit timeout.
>
> I experienced that, both under load testing,
> and in real production setups.
> From lists at rtty.us Wed Jan 5 16:59:32 2011 From: lists at rtty.us (Bob Camp) Date: Wed, 5 Jan 2011 11:59:32 -0500 Subject: Connections dropped under load In-Reply-To: <4D2499B5.8010601@gmail.com> References: <4D248C3F.2010401@gmail.com> <4D2499B5.8010601@gmail.com> Message-ID: <929452EAFEDD48DDB009A6695DD931C9@vectron.com> Hi Running simple load tests both on Apache directly, and on Varnish - both seem to experience "long delays" on a small percentage of the requests. The problem does not appear to happen with low loads. It does come up as CPU usage becomes an issue. It also is hard to make happen with a single stream of requests. It seems to come up much quicker with many requests done in parallel. I've always *assumed* that the poor little TCP/IP hamster simply ran out of breath and started dropping connections. Bob -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of George Georgovassilis Sent: Wednesday, January 05, 2011 11:18 AM To: varnish-misc at projects.linpro.no Subject: Re: Connections dropped under load Hello Cosimo, Thank you for the quick reply. After your hint I had the tests run again but couldn't detect that pattern. What susprised me though after looking through the logs is that almost all requests by the load generator complete in a timely manner ( < 1 sec), but all requests generated by a real browser (IE, FF, Opera) will be served much later or even run into a timeout. On 05.01.2011 16:30, Cosimo Streppone wrote: > On Wed, 05 Jan 2011 16:20:31 +0100, George Georgovassilis > wrote: > >> I'm having trouble with dropped connections under a loadtest. >> >> The problem: As a measure for response, I am requesting an image from >> the webapp running in Tomcat while the loadtest is underway. However >> that either times out or is delivered after several seconds. 
Varnishlog >> will often either not show the request (RxURL) at all, or show it >> several seconds after the browser dispatched it. > > Hi George, > > if you measure the time you mention as "several seconds" > and it's either 3 or 9 seconds, I think what you're seeing > is a client-side TCP retransmit timeout. > > I experienced that, both under load testing, > and in real production setups. > _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From bedis9 at gmail.com Wed Jan 5 21:43:03 2011 From: bedis9 at gmail.com (Bedis 9) Date: Wed, 5 Jan 2011 22:43:03 +0100 Subject: Please help testing gzip code in Varnish In-Reply-To: <48260.1294242161@critter.freebsd.dk> References: <4D248EF1.7050705@danielbruessler.de> <48260.1294242161@critter.freebsd.dk> Message-ID: On Wed, Jan 5, 2011 at 4:42 PM, Poul-Henning Kamp wrote: > In message <4D248EF1.7050705 at danielbruessler.de>, =?ISO-8859-1?Q?Daniel_Br=FC=D > Fler?= writes: > >>Question about purge/ban: I'm confused about this renaming. > > The new 3.x terminology is: > > A ban will prevent any objects currently stored, which matches the > condition, from being served ever again. > > The test is only made when objects which are hit as result of a > cache lookup, or if the "ban_lurker" can see a way to check > without a request being present. > > Bans can test both the URL and HTTP headers, with exact matches or > regular expressions, so you can for instance ban all images with > one command, or ban all content tagged with a special purpose > HTTP header. > > > A purge removes objects from the storage immediately in response > to a cache lookup. ?Therefore the only criteria you can use, > implicitly, is the lookup hash value, normally URL+Host:, but > you can change that in vcl_hash{}. > > Usually you would use this in PURGE like processing in vcl_hit{}. 
> > I am pondering ways to make it possible for PUT/POST to invalidate > any cached copies of that object, but this is tricky and someway > down my todo list. > > -- > Poul-Henning Kamp ? ? ? | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG ? ? ? ? | TCP/IP since RFC 956 > FreeBSD committer ? ? ? | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > Hey, Purging on Headers is a brilliant idea :) I'm looking forward to test it and to come back to you. cheers From phk at phk.freebsd.dk Wed Jan 5 21:48:43 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 05 Jan 2011 21:48:43 +0000 Subject: Please help testing gzip code in Varnish In-Reply-To: Your message of "Wed, 05 Jan 2011 22:43:03 +0100." Message-ID: <64402.1294264123@critter.freebsd.dk> In message , Bedi s 9 writes: >Purging on Headers is a brilliant idea :) >I'm looking forward to test it and to come back to you. That is also possible in 2.1.x, but the "ban" command is mistakenly named "purge" there. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From g.georgovassilis at gmail.com Wed Jan 5 22:18:34 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Wed, 05 Jan 2011 23:18:34 +0100 Subject: Connections dropped under load In-Reply-To: <929452EAFEDD48DDB009A6695DD931C9@vectron.com> References: <4D248C3F.2010401@gmail.com> <4D2499B5.8010601@gmail.com> <929452EAFEDD48DDB009A6695DD931C9@vectron.com> Message-ID: <4D24EE3A.4070304@gmail.com> I removed the varnish instance so that the load generator is directly hitting Tomcat. 
Naturally, the request rate drops to 70 requests/sec with a CPU load of 100%... however connections don't drop anymore, no timeouts occur and the application remains pretty responsive. To recap, these are the possible scenarios: 1. The networking layer is overtaxed with the original 300 reqs/sec. I don't believe that, because the load generator doesn't record any dropped connections while a simple browser can't connect. 2. Tomcat is overtaxed. That also seems not plausible, since it is not servicing any requests under the load test - all is done by varnish. Even if, as I said when removing varnish from in between, it serves the requests just fine. 3. Varnish is overtaxed. Somehow that also doesn't make sense, since it is servicing the load generator just fine... but will refuse to serve browser requests. 4. Varnish, when under load, is picky about what connections to serve. I'm stuck :-) On 05.01.2011 17:59, Bob Camp wrote: > Hi > > Running simple load tests both on Apache directly, and on Varnish - both > seem to experience "long delays" on a small percentage of the requests. The > problem does not appear to happen with low loads. It does come up as CPU > usage becomes an issue. It also is hard to make happen with a single stream > of requests. It seems to come up much quicker with many requests done in > parallel. > > I've always *assumed* that the poor little TCP/IP hamster simply ran out of > breath and started dropping connections. > > Bob > > -----Original Message----- > From: varnish-misc-bounces at varnish-cache.org > [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of George > Georgovassilis > Sent: Wednesday, January 05, 2011 11:18 AM > To: varnish-misc at projects.linpro.no > Subject: Re: Connections dropped under load > > Hello Cosimo, > > Thank you for the quick reply. After your hint I had the tests run again > but couldn't detect that pattern. 
What susprised me though after looking > through the logs is that almost all requests by the load generator > complete in a timely manner (< 1 sec), but all requests generated by a > real browser (IE, FF, Opera) will be served much later or even run into > a timeout. > > On 05.01.2011 16:30, Cosimo Streppone wrote: >> On Wed, 05 Jan 2011 16:20:31 +0100, George Georgovassilis >> wrote: >> >>> I'm having trouble with dropped connections under a loadtest. >>> >>> The problem: As a measure for response, I am requesting an image from >>> the webapp running in Tomcat while the loadtest is underway. However >>> that either times out or is delivered after several seconds. Varnishlog >>> will often either not show the request (RxURL) at all, or show it >>> several seconds after the browser dispatched it. >> Hi George, >> >> if you measure the time you mention as "several seconds" >> and it's either 3 or 9 seconds, I think what you're seeing >> is a client-side TCP retransmit timeout. >> >> I experienced that, both under load testing, >> and in real production setups. >> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From stig at zedge.net Wed Jan 5 22:41:54 2011 From: stig at zedge.net (Stig Bakken) Date: Wed, 5 Jan 2011 23:41:54 +0100 Subject: Connections dropped under load In-Reply-To: <4D24EE3A.4070304@gmail.com> References: <4D248C3F.2010401@gmail.com> <4D2499B5.8010601@gmail.com> <929452EAFEDD48DDB009A6695DD931C9@vectron.com> <4D24EE3A.4070304@gmail.com> Message-ID: This seems similar to what I've been seeing, described in an earlier thread from before christmas. In my case it was not during benchmarking, but when serving production load of around 300 req/s per server. Modern tcpip stacks on modern hardware should handle this without blinking. 
Did you have the chance to capture the problem with varnishlog so you can replay/analyze it? - Stig On Wed, Jan 5, 2011 at 11:18 PM, George Georgovassilis < g.georgovassilis at gmail.com> wrote: > I removed the varnish instance so that the load generator is directly > hitting Tomcat. Naturally, the request rate drops to 70 requests/sec with a > CPU load of 100%... however connections don't drop anymore, no timeouts > occur and the application remains pretty responsive. To recap, these are the > possible scenarios: > > 1. The networking layer is overtaxed with the original 300 reqs/sec. I > don't believe that, because the load generator doesn't record any dropped > connections while a simple browser can't connect. > > 2. Tomcat is overtaxed. That also seems not plausible, since it is not > servicing any requests under the load test - all is done by varnish. Even > if, as I said when removing varnish from in between, it serves the requests > just fine. > > 3. Varnish is overtaxed. Somehow that also doesn't make sense, since it is > servicing the load generator just fine... but will refuse to serve browser > requests. > > 4. Varnish, when under load, is picky about what connections to serve. > > I'm stuck :-) > > > On 05.01.2011 17:59, Bob Camp wrote: > >> Hi >> >> Running simple load tests both on Apache directly, and on Varnish - both >> seem to experience "long delays" on a small percentage of the requests. >> The >> problem does not appear to happen with low loads. It does come up as CPU >> usage becomes an issue. It also is hard to make happen with a single >> stream >> of requests. It seems to come up much quicker with many requests done in >> parallel. >> >> I've always *assumed* that the poor little TCP/IP hamster simply ran out >> of >> breath and started dropping connections. 
>> >> Bob >> >> -----Original Message----- >> From: varnish-misc-bounces at varnish-cache.org >> [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of George >> Georgovassilis >> Sent: Wednesday, January 05, 2011 11:18 AM >> To: varnish-misc at projects.linpro.no >> Subject: Re: Connections dropped under load >> >> Hello Cosimo, >> >> Thank you for the quick reply. After your hint I had the tests run again >> but couldn't detect that pattern. What susprised me though after looking >> through the logs is that almost all requests by the load generator >> complete in a timely manner (< 1 sec), but all requests generated by a >> real browser (IE, FF, Opera) will be served much later or even run into >> a timeout. >> >> On 05.01.2011 16:30, Cosimo Streppone wrote: >> >>> On Wed, 05 Jan 2011 16:20:31 +0100, George Georgovassilis >>> wrote: >>> >>> I'm having trouble with dropped connections under a loadtest. >>>> >>>> The problem: As a measure for response, I am requesting an image from >>>> the webapp running in Tomcat while the loadtest is underway. However >>>> that either times out or is delivered after several seconds. Varnishlog >>>> will often either not show the request (RxURL) at all, or show it >>>> several seconds after the browser dispatched it. >>>> >>> Hi George, >>> >>> if you measure the time you mention as "several seconds" >>> and it's either 3 or 9 seconds, I think what you're seeing >>> is a client-side TCP retransmit timeout. >>> >>> I experienced that, both under load testing, >>> and in real production setups. >>> >>> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Stig Bakken CTO, Zedge.net - free your phone! 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From g.georgovassilis at gmail.com Wed Jan 5 22:44:21 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Wed, 05 Jan 2011 23:44:21 +0100 Subject: Connections dropped under load In-Reply-To: References: <4D248C3F.2010401@gmail.com> <4D2499B5.8010601@gmail.com> <929452EAFEDD48DDB009A6695DD931C9@vectron.com> <4D24EE3A.4070304@gmail.com> Message-ID: <4D24F445.2090800@gmail.com> Hello Stig, Thanks for the insight. I'm still on the logs, though not sure where to start - it's not like that there are any errors in it so I'm not really sure what to look for. Do you have a pointer to that discussion you are referring to? On 05.01.2011 23:41, Stig Bakken wrote: > This seems similar to what I've been seeing, described in an earlier > thread from before christmas. In my case it was not during > benchmarking, but when serving production load of around 300 req/s per > server. Modern tcpip stacks on modern hardware should handle this > without blinking. > > Did you have the chance to capture the problem with varnishlog so you > can replay/analyze it? > > - Stig > > On Wed, Jan 5, 2011 at 11:18 PM, George Georgovassilis > > wrote: > > I removed the varnish instance so that the load generator is > directly hitting Tomcat. Naturally, the request rate drops to 70 > requests/sec with a CPU load of 100%... however connections don't > drop anymore, no timeouts occur and the application remains pretty > responsive. To recap, these are the possible scenarios: > > 1. The networking layer is overtaxed with the original 300 > reqs/sec. I don't believe that, because the load generator doesn't > record any dropped connections while a simple browser can't connect. > > 2. Tomcat is overtaxed. That also seems not plausible, since it is > not servicing any requests under the load test - all is done by > varnish. 
Even if, as I said when removing varnish from in between, > it serves the requests just fine. > > 3. Varnish is overtaxed. Somehow that also doesn't make sense, > since it is servicing the load generator just fine... but will > refuse to serve browser requests. > > 4. Varnish, when under load, is picky about what connections to serve. > > I'm stuck :-) > > > On 05.01.2011 17:59, Bob Camp wrote: > > Hi > > Running simple load tests both on Apache directly, and on > Varnish - both > seem to experience "long delays" on a small percentage of the > requests. The > problem does not appear to happen with low loads. It does come > up as CPU > usage becomes an issue. It also is hard to make happen with a > single stream > of requests. It seems to come up much quicker with many > requests done in > parallel. > > I've always *assumed* that the poor little TCP/IP hamster > simply ran out of > breath and started dropping connections. > > Bob > > -----Original Message----- > From: varnish-misc-bounces at varnish-cache.org > > [mailto:varnish-misc-bounces at varnish-cache.org > ] On Behalf Of > George > Georgovassilis > Sent: Wednesday, January 05, 2011 11:18 AM > To: varnish-misc at projects.linpro.no > > Subject: Re: Connections dropped under load > > Hello Cosimo, > > Thank you for the quick reply. After your hint I had the tests > run again > but couldn't detect that pattern. What susprised me though > after looking > through the logs is that almost all requests by the load generator > complete in a timely manner (< 1 sec), but all requests > generated by a > real browser (IE, FF, Opera) will be served much later or even > run into > a timeout. > > On 05.01.2011 16:30, Cosimo Streppone wrote: > > On Wed, 05 Jan 2011 16:20:31 +0100, George Georgovassilis > > wrote: > > I'm having trouble with dropped connections under a > loadtest. > > The problem: As a measure for response, I am > requesting an image from > the webapp running in Tomcat while the loadtest is > underway. 
However > that either times out or is delivered after several > seconds. Varnishlog > will often either not show the request (RxURL) at all, > or show it > several seconds after the browser dispatched it. > > Hi George, > > if you measure the time you mention as "several seconds" > and it's either 3 or 9 seconds, I think what you're seeing > is a client-side TCP retransmit timeout. > > I experienced that, both under load testing, > and in real production setups. > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > -- > Stig Bakken > CTO, Zedge.net - free your phone! -------------- next part -------------- An HTML attachment was scrubbed... URL: From stig at zedge.net Wed Jan 5 22:56:41 2011 From: stig at zedge.net (Stig Bakken) Date: Wed, 5 Jan 2011 23:56:41 +0100 Subject: Connections dropped under load In-Reply-To: <4D24F445.2090800@gmail.com> References: <4D248C3F.2010401@gmail.com> <4D2499B5.8010601@gmail.com> <929452EAFEDD48DDB009A6695DD931C9@vectron.com> <4D24EE3A.4070304@gmail.com> <4D24F445.2090800@gmail.com> Message-ID: This thread: http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html - Stig On Wed, Jan 5, 2011 at 11:44 PM, George Georgovassilis < g.georgovassilis at gmail.com> wrote: > Hello Stig, > > Thanks for the insight. I'm still on the logs, though not sure where to > start - it's not like that there are any errors in it so I'm not really sure > what to look for. Do you have a pointer to that discussion you are referring > to? > > > On 05.01.2011 23:41, Stig Bakken wrote: > > This seems similar to what I've been seeing, described in an earlier thread > from before christmas. 
In my case it was not during benchmarking, but when > serving production load of around 300 req/s per server. Modern tcpip stacks > on modern hardware should handle this without blinking. > > Did you have the chance to capture the problem with varnishlog so you can > replay/analyze it? > > - Stig > > On Wed, Jan 5, 2011 at 11:18 PM, George Georgovassilis < > g.georgovassilis at gmail.com> wrote: > >> I removed the varnish instance so that the load generator is directly >> hitting Tomcat. Naturally, the request rate drops to 70 requests/sec with a >> CPU load of 100%... however connections don't drop anymore, no timeouts >> occur and the application remains pretty responsive. To recap, these are the >> possible scenarios: >> >> 1. The networking layer is overtaxed with the original 300 reqs/sec. I >> don't believe that, because the load generator doesn't record any dropped >> connections while a simple browser can't connect. >> >> 2. Tomcat is overtaxed. That also seems not plausible, since it is not >> servicing any requests under the load test - all is done by varnish. Even >> if, as I said when removing varnish from in between, it serves the requests >> just fine. >> >> 3. Varnish is overtaxed. Somehow that also doesn't make sense, since it is >> servicing the load generator just fine... but will refuse to serve browser >> requests. >> >> 4. Varnish, when under load, is picky about what connections to serve. >> >> I'm stuck :-) >> >> >> On 05.01.2011 17:59, Bob Camp wrote: >> >>> Hi >>> >>> Running simple load tests both on Apache directly, and on Varnish - both >>> seem to experience "long delays" on a small percentage of the requests. >>> The >>> problem does not appear to happen with low loads. It does come up as CPU >>> usage becomes an issue. It also is hard to make happen with a single >>> stream >>> of requests. It seems to come up much quicker with many requests done in >>> parallel. 
>>> >>> I've always *assumed* that the poor little TCP/IP hamster simply ran out >>> of >>> breath and started dropping connections. >>> >>> Bob >>> >>> -----Original Message----- >>> From: varnish-misc-bounces at varnish-cache.org >>> [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of George >>> Georgovassilis >>> Sent: Wednesday, January 05, 2011 11:18 AM >>> To: varnish-misc at projects.linpro.no >>> Subject: Re: Connections dropped under load >>> >>> Hello Cosimo, >>> >>> Thank you for the quick reply. After your hint I had the tests run again >>> but couldn't detect that pattern. What susprised me though after looking >>> through the logs is that almost all requests by the load generator >>> complete in a timely manner (< 1 sec), but all requests generated by a >>> real browser (IE, FF, Opera) will be served much later or even run into >>> a timeout. >>> >>> On 05.01.2011 16:30, Cosimo Streppone wrote: >>> >>>> On Wed, 05 Jan 2011 16:20:31 +0100, George Georgovassilis >>>> wrote: >>>> >>>> I'm having trouble with dropped connections under a loadtest. >>>>> >>>>> The problem: As a measure for response, I am requesting an image from >>>>> the webapp running in Tomcat while the loadtest is underway. However >>>>> that either times out or is delivered after several seconds. Varnishlog >>>>> will often either not show the request (RxURL) at all, or show it >>>>> several seconds after the browser dispatched it. >>>>> >>>> Hi George, >>>> >>>> if you measure the time you mention as "several seconds" >>>> and it's either 3 or 9 seconds, I think what you're seeing >>>> is a client-side TCP retransmit timeout. >>>> >>>> I experienced that, both under load testing, >>>> and in real production setups. 
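Cosimo's 3- and 9-second figures line up with the classic BSD-style TCP SYN retransmission schedule, where the initial retransmit timeout is about 3 seconds and doubles on every retry. A small sketch of the cumulative delays a client sees when its SYNs are being dropped (the 3 s initial timeout and the doubling are assumptions; exact values vary by operating system):

```python
def syn_retry_delays(initial_rto=3.0, retries=3):
    """Cumulative time before each SYN retransmit, with exponential backoff."""
    delays, rto, total = [], initial_rto, 0.0
    for _ in range(retries):
        total += rto          # wait one full RTO before retransmitting
        delays.append(total)
        rto *= 2              # classic doubling on each retry
    return delays

print(syn_retry_delays())  # [3.0, 9.0, 21.0]
```

A connection that succeeds on the first or second retransmit therefore completes after almost exactly 3 or 9 seconds, which is why those particular values are a strong hint that SYNs are being dropped (e.g. a full listen queue) rather than the server being merely slow.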
>>>> >>>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > > -- > Stig Bakken > CTO, Zedge.net - free your phone! > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Stig Bakken CTO, Zedge.net - free your phone! -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.georgovassilis at gmail.com Wed Jan 5 23:24:07 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Thu, 06 Jan 2011 00:24:07 +0100 Subject: Connections dropped under load In-Reply-To: References: <4D248C3F.2010401@gmail.com> <4D2499B5.8010601@gmail.com> <929452EAFEDD48DDB009A6695DD931C9@vectron.com> <4D24EE3A.4070304@gmail.com> <4D24F445.2090800@gmail.com> Message-ID: <4D24FD97.1070502@gmail.com> Thanks a lot Stig... your analysis in that discussion goes way beyond mine. Did you ever sort it out? In the meantime I found an unlikely setting that solved my problem: session_linger. I got it from here [1] and thought it wouldn't hurt, and it nearly killed me. My initial tests were conducted with a value of 150; I had to lower it to 20 to get my test through. Thanks + best regards [1] https://kristianlyng.wordpress.com/2009/10/19/high-end-varnish-tuning/ On 05.01.2011 23:56, Stig Bakken wrote: > This thread: > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-December/005258.html > > > - Stig > > On Wed, Jan 5, 2011 at 11:44 PM, George Georgovassilis > > wrote: > > Hello Stig, > > Thanks for the insight.
I'm still on the logs, though not sure > where to start - it's not like that there are any errors in it so > I'm not really sure what to look for. Do you have a pointer to > that discussion you are referring to? > > > On 05.01.2011 23:41, Stig Bakken wrote: >> This seems similar to what I've been seeing, described in an >> earlier thread from before christmas. In my case it was not >> during benchmarking, but when serving production load of around >> 300 req/s per server. Modern tcpip stacks on modern hardware >> should handle this without blinking. >> >> Did you have the chance to capture the problem with varnishlog so >> you can replay/analyze it? >> >> - Stig >> >> On Wed, Jan 5, 2011 at 11:18 PM, George Georgovassilis >> > >> wrote: >> >> I removed the varnish instance so that the load generator is >> directly hitting Tomcat. Naturally, the request rate drops to >> 70 requests/sec with a CPU load of 100%... however >> connections don't drop anymore, no timeouts occur and the >> application remains pretty responsive. To recap, these are >> the possible scenarios: >> >> 1. The networking layer is overtaxed with the original 300 >> reqs/sec. I don't believe that, because the load generator >> doesn't record any dropped connections while a simple browser >> can't connect. >> >> 2. Tomcat is overtaxed. That also seems not plausible, since >> it is not servicing any requests under the load test - all is >> done by varnish. Even if, as I said when removing varnish >> from in between, it serves the requests just fine. >> >> 3. Varnish is overtaxed. Somehow that also doesn't make >> sense, since it is servicing the load generator just fine... >> but will refuse to serve browser requests. >> >> 4. Varnish, when under load, is picky about what connections >> to serve. 
>> >> I'm stuck :-) >> >> >> On 05.01.2011 17:59, Bob Camp wrote: >> >> Hi >> >> Running simple load tests both on Apache directly, and on >> Varnish - both >> seem to experience "long delays" on a small percentage of >> the requests. The >> problem does not appear to happen with low loads. It does >> come up as CPU >> usage becomes an issue. It also is hard to make happen >> with a single stream >> of requests. It seems to come up much quicker with many >> requests done in >> parallel. >> >> I've always *assumed* that the poor little TCP/IP hamster >> simply ran out of >> breath and started dropping connections. >> >> Bob >> >> -----Original Message----- >> From: varnish-misc-bounces at varnish-cache.org >> >> [mailto:varnish-misc-bounces at varnish-cache.org >> ] On >> Behalf Of George >> Georgovassilis >> Sent: Wednesday, January 05, 2011 11:18 AM >> To: varnish-misc at projects.linpro.no >> >> Subject: Re: Connections dropped under load >> >> Hello Cosimo, >> >> Thank you for the quick reply. After your hint I had the >> tests run again >> but couldn't detect that pattern. What susprised me >> though after looking >> through the logs is that almost all requests by the load >> generator >> complete in a timely manner (< 1 sec), but all requests >> generated by a >> real browser (IE, FF, Opera) will be served much later or >> even run into >> a timeout. >> >> On 05.01.2011 16:30, Cosimo Streppone wrote: >> >> On Wed, 05 Jan 2011 16:20:31 +0100, George Georgovassilis >> > > wrote: >> >> I'm having trouble with dropped connections under >> a loadtest. >> >> The problem: As a measure for response, I am >> requesting an image from >> the webapp running in Tomcat while the loadtest >> is underway. However >> that either times out or is delivered after >> several seconds. Varnishlog >> will often either not show the request (RxURL) at >> all, or show it >> several seconds after the browser dispatched it. 
>> >> Hi George, >> >> if you measure the time you mention as "several seconds" >> and it's either 3 or 9 seconds, I think what you're >> seeing >> is a client-side TCP retransmit timeout. >> >> I experienced that, both under load testing, >> and in real production setups. >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> >> >> -- >> Stig Bakken >> CTO, Zedge.net - free your phone! > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > -- > Stig Bakken > CTO, Zedge.net - free your phone! -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Wed Jan 5 23:36:17 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 05 Jan 2011 23:36:17 +0000 Subject: Connections dropped under load In-Reply-To: Your message of "Thu, 06 Jan 2011 00:24:07 +0100." <4D24FD97.1070502@gmail.com> Message-ID: <65440.1294270577@critter.freebsd.dk> In message <4D24FD97.1070502 at gmail.com>, George Georgovassilis writes: >This is a multi-part message in MIME format. >In the meantime I found an unlikely setting that solved my problem: the >session_linger. I got it from here [1] and thought it wouldn't hurt and >it nearly killed me. My initial tests were conducted with a value of >150, I had to lower it to 20 to get my test through. session_linger is marked EXPERIMENTAL for a reason: I have no idea what a good default value is. 
It is one of those that has the potential to make synthetic benchmarks totally unrelated to real life traffic, so you should never tune it based on synthetic benchmarks. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From g.georgovassilis at gmail.com Wed Jan 5 23:45:26 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Thu, 06 Jan 2011 00:45:26 +0100 Subject: Connections dropped under load In-Reply-To: <65440.1294270577@critter.freebsd.dk> References: <65440.1294270577@critter.freebsd.dk> Message-ID: <4D250296.2000601@gmail.com> Hello Poul-Henning, I need to correct you there: 1. The benchmark replicates an expected real-life sequence of requests and workload from a single IP (namely a corporate web-proxy), thus labelling it "synthetic" does it no justice :-) 2. If you leave "session_linger" out of the configuration (so not mentioning it at all) the benchmark still hangs. Whatever the default value is, it doesn't work and I explicitly need to reduce it to 20. On 06.01.2011 00:36, Poul-Henning Kamp wrote: > In message<4D24FD97.1070502 at gmail.com>, George Georgovassilis writes: >> This is a multi-part message in MIME format. >> In the meantime I found an unlikely setting that solved my problem: the >> session_linger. I got it from here [1] and thought it wouldn't hurt and >> it nearly killed me. My initial tests were conducted with a value of >> 150, I had to lower it to 20 to get my test through. > session_linger is marked EXPERIMENTAL for a reason: I have no idea > what a good default value is. > > It is one of those that has the potential to make synthetic benchmarks > totally unrelated to real life traffic, so you should never tune it > based on synthetic benchmarks. 
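For anyone repeating George's experiment, session_linger is a runtime parameter, so it can be inspected and changed through the management CLI without restarting varnishd. A sketch (the -T address below follows the examples elsewhere in this thread; adjust it to your setup):

```shell
varnishadm -T localhost:6082 param.show session_linger
varnishadm -T localhost:6082 param.set session_linger 20
```

Changes made this way take effect immediately but are lost on restart; to persist a value, pass it on the varnishd command line with -p session_linger=20.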
> From slackmoehrle.lists at gmail.com Wed Jan 5 23:56:56 2011 From: slackmoehrle.lists at gmail.com (Jason S-M) Date: Wed, 5 Jan 2011 15:56:56 -0800 Subject: a few intro to Varnish questions Message-ID: Hello All, I was told about Varnish today. I have a growing Apple fan website that as more and more videos get added my thought is to keep the most popular videos in cache. The machine this site is running on is CentOS 5.5 64 bit, Apache, PHP, MySQL 5. It is a dual core machine with 12gb of RAM, max is 16gb and I will max it out over the next month or so probably as I find good deals on 4gb DDR3 sticks. The site's size will be about 300gb (about 60gb now) I have LVM running with 300gb allotted to /var/www/html. I have a MySQL backend that stores paths and data about the video's, the videos themselves are housed on the filesystem. Can anyone provide insight on setup and optimization of Varnish? I have some confusion. 1. Looking at: http://www.varnish-cache.org/docs/2.1/tutorial/putting_varnish_on_port_80.html So I have to run varnish on 80 and my site on an alternate port (8080 as example)? Or do they both run on port 80? 2. Apache listens on my public IP and Varnish should too, correct? or do I use 127.0.0.1? 3. I must make additions to vcl_recv I assume to cache what I want? 4. Do I have to make changes to my web pages to add meta-tags to trigger Varnish? Best, -Jason From phk at phk.freebsd.dk Thu Jan 6 00:00:25 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 06 Jan 2011 00:00:25 +0000 Subject: Connections dropped under load In-Reply-To: Your message of "Thu, 06 Jan 2011 00:45:26 +0100." <4D250296.2000601@gmail.com> Message-ID: <66480.1294272025@critter.freebsd.dk> In message <4D250296.2000601 at gmail.com>, George Georgovassilis writes: >1. 
The benchmark replicates an expected real-life sequence of requests >and workload from a single IP (namely a corporate web-proxy), thus >labelling it "synthetic" does it no justice :-) Well, that depends on your proxy more than anything else, but I'll take your word for it. >2. If you leave "session_linger" out of the configuration (so not >mentioning it at all) the benchmark still hangs. Whatever the default >value is, it doesn't work and I explicitly need to reduce it to 20. The default is 50msec. You can always get the full spiel, inclusive the default value from the CLI: param.show session_linger 200 1031 session_linger 50 [ms] Default is 50 How long time the workerthread lingers on the session to see if a new request appears right away. If sessions are reused, as much as half of all reuses happen within the first 100 msec of the previous request completing. Setting this too high results in worker threads not doing anything for their keep, setting it too low just means that more sessions take a detour around the waiter. NB: We do not know yet if it is a good idea to change this parameter, or if the default value is even sensible. Caution is advised, and feedback is most welcome. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From g.georgovassilis at gmail.com Thu Jan 6 00:09:19 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Thu, 06 Jan 2011 01:09:19 +0100 Subject: Connections dropped under load In-Reply-To: <66480.1294272025@critter.freebsd.dk> References: <66480.1294272025@critter.freebsd.dk> Message-ID: <4D25082F.9040108@gmail.com> Many thanks for the pointer! If I understand this correctly, there are some implications following session_linger: 1. High values require also large thread pools to make up for the lingering sessions ? 2. 
Low values are safer but may result in increased CPU usage ? 3. The effectiveness of session_linger depends on the network latency: if requests are piped in at a slow rate more sessions are locked up waiting? If 3 is correct then session_linger sounds like a dangerous toy, because I can't really control the network latency. Regards, G. On 06.01.2011 01:00, Poul-Henning Kamp wrote: > In message<4D250296.2000601 at gmail.com>, George Georgovassilis writes: > >> 1. The benchmark replicates an expected real-life sequence of requests >> and workload from a single IP (namely a corporate web-proxy), thus >> labelling it "synthetic" does it no justice :-) > Well, that depends on your proxy more than anything else, but I'll > take your word for it. > >> 2. If you leave "session_linger" out of the configuration (so not >> mentioning it at all) the benchmark still hangs. Whatever the default >> value is, it doesn't work and I explicitly need to reduce it to 20. > The default is 50msec. > > You can always get the full spiel, inclusive the default value > from the CLI: > > param.show session_linger > 200 1031 > session_linger 50 [ms] > Default is 50 > How long time the workerthread lingers on the > session to see if a new request appears right > away. > If sessions are reused, as much as half of all > reuses happen within the first 100 msec of the > previous request completing. > Setting this too high results in worker threads > not doing anything for their keep, setting it too > low just means that more sessions take a detour > around the waiter. > > NB: We do not know yet if it is a good idea to > change this parameter, or if the default value is > even sensible. Caution is advised, and feedback > is most welcome. > > From slackmoehrle.lists at gmail.com Thu Jan 6 00:50:56 2011 From: slackmoehrle.lists at gmail.com (Jason S-M) Date: Wed, 5 Jan 2011 16:50:56 -0800 Subject: a few Varnish intro questions Message-ID: Hello All, I was told about Varnish today. 
I have a growing Apple fan website, and as more and more videos get added my thought is to keep the most popular videos in cache. The machine this site is running on is CentOS 5.5 64 bit, Apache, PHP, MySQL 5. It is a dual core machine with 12gb of RAM, max is 16gb and I will max it out over the next month or so probably as I find good deals on 4gb DDR3 sticks. The site's size will be about 300gb (about 60gb now); I have LVM running with 300gb allotted to /var/www/html. I have a MySQL backend that stores paths and data about the videos, the videos themselves are housed on the filesystem. Can anyone provide insight on setup and optimization of Varnish? I have some confusion. 1. Looking at: http://www.varnish-cache.org/docs/2.1/tutorial/putting_varnish_on_port_80.html So I have to run varnish on 80 and my site on an alternate port (8080 as example)? Or do they both run on port 80? 2. Apache listens on my public IP and Varnish should too, correct? or do I use 127.0.0.1? 3. I must make additions to vcl_recv I assume to cache what I want? 4. Do I have to make changes to my web pages to add meta-tags to trigger Varnish? Best, -Jason PS - I see 2 addresses for this list: varnish-misc at projects.linpro.no and varnish-misc at varnish-cache.org a ping shows different IPs although I suppose that would not be a definitive answer. From steamboatid at gmail.com Thu Jan 6 01:07:23 2011 From: steamboatid at gmail.com (dwi kristianto) Date: Thu, 6 Jan 2011 08:07:23 +0700 Subject: a few intro to Varnish questions In-Reply-To: References: Message-ID: Hi Jason, I'm not a varnish guru nor an expert, but let me give some hints :) On Thu, Jan 6, 2011 at 6:56 AM, Jason S-M wrote: > Can anyone provide insight on setup and optimization of Varnish? Optimization depends on your system and your website's structure, so you need to do it yourself, I guess. :) > I have some confusion. > 1.
Looking at: http://www.varnish-cache.org/docs/2.1/tutorial/putting_varnish_on_port_80.html > So I have to run varnish on 80 and my site on an alternate port (8080 as example)? Or do they both run on port 80? Most OSes won't allow two apps to open the same port. Run varnish on port 80, and the backend web server on another port. > 2. Apache listens on my public IP and Varnish should too, correct? or do I use 127.0.0.1? You don't have to, as long as varnish can contact the backend server. > 3. I must make additions to vcl_recv I assume to cache what I want? > 4. Do I have to make changes to my web pages to add meta-tags to trigger Varnish? > Best, > -Jason Varnish reacts to the HTTP headers sent by the backend webserver, especially Pragma, Cache-control, and Expires. But gladly varnish is equipped with VCL, which enables us to force and tweak the caching mechanism. For example, you can "force" to cache or not based on url, user-agent, ip address of visitor, and many more. Please CMIIW. regards, dwi From kristian at varnish-software.com Thu Jan 6 09:00:03 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Thu, 6 Jan 2011 10:00:03 +0100 Subject: Connections dropped under load In-Reply-To: <4D248C3F.2010401@gmail.com> References: <4D248C3F.2010401@gmail.com> Message-ID: <20110106090003.GA2106@freud> Hi, On Wed, Jan 05, 2011 at 04:20:31PM +0100, George Georgovassilis wrote: > I'm having trouble with dropped connections under a loadtest. We need: varnishstat -1 Any further discussion without varnishstat -1 output is wild guesswork and superstition. - Kristian From perbu at varnish-software.com Thu Jan 6 09:01:38 2011 From: perbu at varnish-software.com (Per Buer) Date: Thu, 6 Jan 2011 10:01:38 +0100 Subject: a few Varnish intro questions In-Reply-To: References: Message-ID: Hi, > I have some confusion. > 1. Looking at: http://www.varnish-cache.org/docs/2.1/tutorial/putting_varnish_on_port_80.html > > So I have to run varnish on 80 and my site on an alternate port (8080 as example)?
Or do they both run on port 80? Yes. Put your web server on port 8080 (or any other port, except 80) and Varnish on 80. The idea in the docs is that you can test varnish while running it on a high port and then do a switcharoo. > 2. Apache listens on my public IP and Varnish should too, correct? or do I use 127.0.0.1? 127.0.0.1 is fine. > 3. I must make additions to vcl_recv I assume to cache what I want? Yes. > 4. Do I have to make changes to my web pages to add meta-tags to trigger Varnish? Varnish doesn't care about meta-tags. Varnish only cares about the HTTP headers. It should be explained decently in the "using varnish" tutorial. If anything there is unclear, please ask here. > > Best, > -Jason > > PS - I see 2 addresses for this list: varnish-misc at projects.linpro.no and varnish-misc at varnish-cache.org a ping shows different IP's although I suppose that would not be a definitive answer. varnish-misc at varnish-cache.org is the right one. We need to remove the others. Where did you find it? -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From perbu at varnish-software.com Thu Jan 6 10:07:36 2011 From: perbu at varnish-software.com (Per Buer) Date: Thu, 6 Jan 2011 11:07:36 +0100 Subject: Please help testing gzip code in Varnish In-Reply-To: <48090.1294238746@critter.freebsd.dk> References: <48090.1294238746@critter.freebsd.dk> Message-ID: Hi, On Wed, Jan 5, 2011 at 3:45 PM, Poul-Henning Kamp wrote: > > I have added the first part of gzip support to varnish-trunk. It is temporarily running on varnish-cache.org at the moment. I don't know if I'll let it run all night, we'll see. So far it seems ok. I noticed that string concatenation now has to be explicit with a "+" instead of just a space.
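Per's observation about explicit concatenation refers to VCL string expressions. A minimal before/after sketch (the X-Note header here is made up for illustration):

```vcl
sub vcl_deliver {
    # earlier VCL accepted implicit concatenation by juxtaposition:
    #   set resp.http.X-Note = "via " req.http.host;
    # on trunk with the gzip work, the operator must be explicit:
    set resp.http.X-Note = "via " + req.http.host;
}
```

Existing VCL files using space-separated string parts would need this mechanical change when testing against trunk.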
-- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From g.georgovassilis at gmail.com Thu Jan 6 10:49:11 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Thu, 06 Jan 2011 11:49:11 +0100 Subject: Connections dropped under load In-Reply-To: <20110106090003.GA2106@freud> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> Message-ID: <4D259E27.5080402@gmail.com> Hello Kristian, While my problem has been resolved, I'm of course curious about the causes. So I'm attaching my configuration and three stat outputs for the following cases: session_linger 20 no session_linger (yesterday this wouldn't pass the loadtest, today it does!) session_linger 150 (this still doesn't pass the loadtest) On 06.01.2011 10:00, Kristian Lyngstol wrote: Hi, On Wed, Jan 05, 2011 at 04:20:31PM +0100, George Georgovassilis wrote: > > I'm having trouble with dropped connections under a loadtest. We need: varnishstat -1 Any further discussion without varnishstat -1 output is wild guesswork and superstition. - Kristian -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: logs.txt URL: From kristoffer at brabrand.no Thu Jan 6 14:14:05 2011 From: kristoffer at brabrand.no (Kristoffer Brabrand) Date: Thu, 6 Jan 2011 15:14:05 +0100 Subject: Problems with users getting old cache versions Message-ID: <7B82B2BB-D645-4159-88B5-E7C32D75E693@brabrand.no> We're having a problem with users getting an old cache version every now and then. Most of the time it happens in the morning; it might be something with the browser not being used overnight? What I do know for a fact is that the page looks right behind varnish, so it must be either Varnish, headers or the browser,
or a combination of the ones mentioned. Headers sent from backend to Varnish: > Status: HTTP/1.1 200 OK > Date: Thu, 06 Jan 2011 14:01:29 GMT > Server: Apache/2.2.12 (Ubuntu) > X-Powered-By: PHP/5.2.10-2ubuntu6.5 > Cache-control: max-age=600 > Connection: close > Content-Type: text/html; charset=utf-8 Headers sent from Varnish to browser: > Status: HTTP/1.1 200 OK > Server: Apache/2.2.12 (Ubuntu) > X-Powered-By: PHP/5.2.10-2ubuntu6.5 > Cache-control: max-age=600 > Content-Type: text/html; charset=utf-8 > Content-Length: 92664 > X-hash: /#www.XXXX.no# > Date: Thu, 06 Jan 2011 13:59:45 GMT > X-Varnish: 393208439 393133417 > Age: 528 > Via: 1.1 varnish > Connection: close > X-Cache: HIT > X-Cache-hits: 158 > X-Server: Percy I've uploaded the VCL file here: http://pastebin.com/4SyVDy70 Could it be that the browser for some reason decides it doesn't want to check for/load a new version from the host? I'm no Varnish expert, and even though I've been able to solve what's come up through the year this one is driving me mad. Some help would be mostly appreciated! Kind regards, K. Brabrand -------------- next part -------------- An HTML attachment was scrubbed... URL: From slackmoehrle.lists at gmail.com Thu Jan 6 14:16:05 2011 From: slackmoehrle.lists at gmail.com (Jason S-M) Date: Thu, 6 Jan 2011 06:16:05 -0800 Subject: a few Varnish intro questions In-Reply-To: References: Message-ID: <6B9E06F5-22DB-4745-A71A-3D11A5D3481F@gmail.com> Hi Per, Thanks for your reply. >> 2. Apache listens on my public IP and Varnish should too, correct? or do I use 127.0.0.1? > > 127.0.0.1 is fine. I am still a little confused about the setup in that what is the "call chain"? a user hits my website normally by URL, www.6colors.net, varnish intercepts as it listens on port 80, does its magic and passes to port 8080 where apache will take it from there? Currently: 1. Apache listens on my public 75.149.56.27 address on port 80. To implement Varnish: 1. 
Change Varnish to Port 80 and listen on my public IP? so: varnishd -f /etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000 -a 75.149.56.27:80 2. change apache to 8080, but keep it listening on my public IP of 75.149.56.27. So essentially someone could hit the site, bypassing varnish at: www.6colors.net:8080 in a browser. But I don't need to change apache from listening on my .27 public ip to 127.0.0.1? I think that I am having trouble visualizing! >> 3. I must make additions to vcl_recv I assume to cache what I want? > Yes, Got it, I will review this in detail and continue reading the documents. >> PS - I see 2 addresses for this list: varnish-misc at projects.linpro.no and varnish-misc at varnish-cache.org a ping shows different IP's although I suppose that would not be a definitive answer. > varnish-misc at varnish-cache.org is the right one.We need to remove the > others. Where did you find it? I noticed that some of the messages I was getting did not hit my rule in my e-mail client to put in a specified folder. When I looked at the e-mails they were being sent to varnish-misc at projects.linpro.no. messages from yesterday (1/5) with subject 'Re: Connections dropped under load ' triggered it. Probably long existing users and the old address forwards or still works. Thank you for your help thus far. -Jason From fenton at american.edu Thu Jan 6 15:04:16 2011 From: fenton at american.edu (Jacob Fenton) Date: Thu, 6 Jan 2011 10:04:16 -0500 Subject: 503 errors on POST In-Reply-To: <201012210848.07720.alex@arcalpin.com> References: <201012201058.10015.alex@arcalpin.com> <4D0FF658.60805@yahoo.com.br> <201012210848.07720.alex@arcalpin.com> Message-ID: Hi, I've found that certain POST requests fail when passed to a healthy backend, though I'm not convinced this is a problem with varnish. I found that while a given post request failed--repeatably--using one internet service provider, the same POST worked from another ISP.
It does seem that this problem only happens on larger POSTs. I'm too much of a dunce on networking issues to really debug this, but am happy to speculate wildly that it might have something to do with multipart encoding? Subtle differences in how multipart is handled in HTTP/1.0 and 1.1? Skeezy ISPs and/or packet lengths? If someone who's resolved this problem has any thoughts, or if this is already documented in the wiki somewhere that I didn't look, I'd love to hear it. FWIW, I'm running varnish-2.1.2 on Ubuntu Hardy Heron. --jacob fenton On Tue, Dec 21, 2010 at 2:48 AM, Modesto Alexandre wrote: > Le mardi 21 décembre 2010, Flavio Torres a écrit : > > Hello, > > > > On 12/20/2010 07:58 AM, Modesto Alexandre wrote: > > > Here are the errors I have (I have masked the private information): > > > > > > http://demo.ovh.net/view/6ac24fbc5400039bc86fd3444556ef76/0.colored > > > > Your backend seems not be good. How is your backend_health [1] ? > > > > [1] - http://www.varnish-cache.org/trac/wiki/BackendPolling > > Backends look good : > > varnishadm -T localhost:6082 debug.health > Backend backend1 is Healthy > Current states good: 10 threshold: 8 window: 10 > Average responsetime of good probes: 0.144168 > Oldest Newest > ================================================================ > 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 > XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit > RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR- Good Recv > HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH- Happy > Backend backend3 is Healthy > Current states good: 9 threshold: 8 window: 10 > Average responsetime of good probes: 0.106183 > Oldest Newest > ================================================================ > 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 > XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit >
RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR-R Good Recv > HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH-H Happy > Backend backend2 is Healthy > Current states good: 9 threshold: 8 window: 10 > Average responsetime of good probes: 0.092774 > Oldest Newest > ================================================================ > 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 > XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit > RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR- Good Recv > HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH- Happy > > Configuration of backend pooling : > > backend backend1 { > .host = "ip.ip.ip.ip"; > .port = "80"; > .connect_timeout = 300s; > .first_byte_timeout = 300s; > .between_bytes_timeout = 300s; > .probe = { > .url = "/url.gif"; > .timeout = 1500 ms; > .interval = 5s; > .window = 10; > .threshold = 8; > } > > } > backend2 & backend3 have the same configuration > > any idea ? > > Alex. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kristian at varnish-software.com Thu Jan 6 15:54:00 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Thu, 6 Jan 2011 16:54:00 +0100 Subject: 503 errors on POST In-Reply-To: <201012210848.07720.alex@arcalpin.com> References: <201012201058.10015.alex@arcalpin.com> <4D0FF658.60805@yahoo.com.br> <201012210848.07720.alex@arcalpin.com> Message-ID: <20110106155400.GC7833@freud> On Tue, Dec 21, 2010 at 08:48:07AM +0100, Modesto Alexandre wrote: > Le mardi 21 décembre 2010, Flavio Torres a écrit : > > On 12/20/2010 07:58 AM, Modesto Alexandre wrote: > > > Here are the errors I have (I have masked the private information): > > > > > > http://demo.ovh.net/view/6ac24fbc5400039bc86fd3444556ef76/0.colored This was a 404 for me? > Backends look good : > > varnishadm -T localhost:6082 debug.health > Backend backend1 is Healthy > Current states good: 10 threshold: 8 window: 10 > Average responsetime of good probes: 0.144168 > Oldest Newest > ================================================================ > 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 > XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit > RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR- Good Recv > HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH- Happy It just had a failed request - so not really good. > Configuration of backend pooling : > > backend backend1 { > .host = "ip.ip.ip.ip"; > .port = "80"; > .connect_timeout = 300s; Connection timeout is pretty much only needed to allow geographically distributed servers. Keep in mind that the application doesn't have to respond for the connection to be established: this is usually done by the operating system and is usually VERY fast. I did some quick math [1]: In 300 seconds, a packet can travel around the earth roughly 2000 times, assuming it's using mostly fiber and going around the equator. 
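That estimate can be reproduced with plain shell arithmetic. The constants below are the ones from footnote [1] at the end of the mail (signal speed of roughly 3*10^8 m/s, equatorial circumference of 40,075,160 m); this is just a sanity-check sketch of the same back-of-envelope math:

```shell
# How many times can a signal lap the equator inside a 300 s connect_timeout?
# Constants taken from footnote [1]; integer arithmetic is precise enough here.
timeout_s=300
speed_m_s=300000000     # ~3*10^8 m/s
equator_m=40075160      # equatorial circumference in metres
laps=$(( timeout_s * speed_m_s / equator_m ))
echo "$laps"            # prints 2245
```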
Unless your web server is on a different planet (Venus is possible, but Mars is out of range I'm afraid) - your connection timeout is dangerous. Rule of thumb: If you are increasing default values by 10 000% or more: Think twice. Then don't do it. > .first_byte_timeout = 300s; > .between_bytes_timeout = 300s; Those are semi-fine - but still rather long (how slow is the application?). > .probe = { > .url = "/url.gif"; I recommend polling something that actually tests more than basic HTTP functionality. Typically I set up a poll against the application that needs to be tested and make sure the health check URL tests/probes any relevant resources (ie: do some simple database query, for example). > any idea ? Can you post varnishlog and VCL? Unfortunately, health checks only catch reasonably consistent errors. In your case, it would take about 10 seconds of consistent errors before the health checks would kick in and Varnish stop using a back end. For sporadic errors, that doesn't help you much. In this case, we already saw a sporadic error in the health checks. You may also want to take a look at the timer-values of ReqEnd to debug this. It will indicate the average response time. Looking at the Debug header might be useful too. But it will be much easier to analyse this with VCL and varnishlog. [1] 300 s * (3*10^8 m/s) / 40075160 m ≈ 2245 - Kristian -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From hyeh at rupaz.com Fri Jan 7 05:28:24 2011 From: hyeh at rupaz.com (Harry Yeh) Date: Thu, 6 Jan 2011 21:28:24 -0800 Subject: Rewriting URL's or Content inside a req object using regsub or regsuball In-Reply-To: References: Message-ID: I am currently having some success with the Reverse Proxying features of Varnish, and the only thing left that I need to be able to do is essentially rewrite some of the URL's in the body. 
For example, we have a url internally that might be wp1.rupaz.com and we need the url's in the HTML page to be rewritten to http://www.rupaz.com/blogs Right now I am kind of stuck but I am assuming I should be doing something similar to the following? I have no idea which beresp object I should use for the body of the content since there is no documentation. sub vcl_fetch { if (req.http.host == "www.rupaz.com" && req.url ~ "^/blogs"){ set beresp = regsuball(beresp, "^wp1.rupaz.com", "www.rupaz.com/forums"); return(deliver); } if (!beresp.cacheable) { return (pass); } if (beresp.http.Set-Cookie) { return (pass); } return (deliver); } ______________________________ Harry Yeh CEO / CTO Rupaz Twitter Facebook When you think thong , think Rupaz ! Web: http://www.rupaz.com Me: http://www.linkedin.com/in/harryyeh Twitter: http://twitter.com/harryyeh Confidentiality Notice: This electronic mail transmission and any accompanying attachments contain confidential information intended only for the use of the individual or entity named above. 
Any dissemination, distribution, copying or action taken in reliance on the contents of this communication by anyone other than the intended recipient is strictly prohibited. If you have received this communication in error please immediately delete the E-mail and notify the sender at the above E-mail address. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristian at varnish-software.com Fri Jan 7 10:18:48 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Fri, 7 Jan 2011 11:18:48 +0100 Subject: Load balancing streaming (rtsp) servers In-Reply-To: References: Message-ID: <20110107101848.GA2153@freud> On Wed, Dec 29, 2010 at 05:59:15PM -0500, Nicholas Tang wrote: > Question: is it possible to load balance rtsp servers using Varnish? They'd > need to "stick" based on client ip. My thought was to try something like > this: Well, RTSP is two-way and keeps state. HTTP only allows clients to send requests and doesn't keep state.... I wouldn't rule it out - RTSP is specced to "support the same sort of caching as HTTP" - but it's probably going to be a hack. Ask me in a few days - though - by a WILD coincidence, I'm hacking on rtsp anyway. - Kristian From kristian at varnish-software.com Fri Jan 7 10:21:56 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Fri, 7 Jan 2011 11:21:56 +0100 Subject: Memory usage In-Reply-To: References: <20110103095606.GA2454@freud> Message-ID: <20110107102156.GB2153@freud> On Mon, Jan 03, 2011 at 12:14:39PM -0300, Roberto O. Fernández Crisial wrote: > Thank you for your reply. I was looking for a command that shows me the > memory usage (used and free) to monitor servers from Nagios/Cacti > software. Do you know where I can start? Do you know about any test > script to find out how much memory is being used? I hear the munin plugin for Varnish is AMAZINGLY well written and demonstrates relationships between different varnishstat variables. 
- Kristian From g.georgovassilis at gmail.com Fri Jan 7 11:31:09 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Fri, 07 Jan 2011 12:31:09 +0100 Subject: Timestamp in logfile? Message-ID: <4D26F97D.1010905@gmail.com> Hello all, I wonder what the timestamp format in varnishlog is, i.e.: 18 ReqStart c 93.233.118.174 49556 988741896 I suspected that the last part (here: 988741896) is some kind of multiplier to UTC [1] - though it doesn't seem to be any power of 10. BR, G. [1] https://secure.wikimedia.org/wikipedia/en/wiki/Coordinated_Universal_Time -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristian at varnish-software.com Fri Jan 7 12:15:29 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Fri, 7 Jan 2011 13:15:29 +0100 Subject: Timestamp in logfile? In-Reply-To: <4D26F97D.1010905@gmail.com> References: <4D26F97D.1010905@gmail.com> Message-ID: <20110107121529.GC2153@freud> On Fri, Jan 07, 2011 at 12:31:09PM +0100, George Georgovassilis wrote: > I wonder what the timestamp format in varnishlog is, i.e.: > > 18 ReqStart c 93.233.118.174 49556 988741896 > > I suspected that the last part (here: 988741896) is some kind of > multiplier to UTC [1] - though it doesn't seem to be any power of > 10. You want to look at ReqEnd - which has normal epoch-time stamps. That ReqStart isn't all that useful in this regard. That's an XID, not a timestamp. http://www.varnish-cache.org/trac/wiki/Varnishlog has more information on ReqEnd. - Kristian From g.georgovassilis at gmail.com Fri Jan 7 12:17:34 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Fri, 07 Jan 2011 13:17:34 +0100 Subject: Timestamp in logfile? In-Reply-To: <20110107121529.GC2153@freud> References: <4D26F97D.1010905@gmail.com> <20110107121529.GC2153@freud> Message-ID: <4D27045E.5010701@gmail.com> Hello Kristian, Thanks for the hint! Regards, G. 
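To illustrate Kristian's hint, the epoch timestamps in a ReqEnd record can be pulled out and formatted directly. The sample line below is invented, and the exact field layout is an assumption to be checked against the Varnishlog wiki page he mentions:

```shell
# Extract an epoch timestamp from a (made-up) ReqEnd record and format it.
# Field positions here are an assumption, not authoritative: tag fields
# first, then the XID, then the start/end epoch timestamps.
line="18 ReqEnd c 988741896 1294401449.321 1294401449.546 0.000 0.200 0.025"
start=$(echo "$line" | awk '{ print $5 }')   # epoch seconds with a fraction
echo "$start"
# With GNU date, the integer part converts to a human-readable UTC time:
date -u -d "@${start%.*}" '+%Y-%m-%dT%H:%M:%SZ'
```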
On 07.01.2011 13:15, Kristian Lyngstol wrote: > On Fri, Jan 07, 2011 at 12:31:09PM +0100, George Georgovassilis wrote: >> I wonder what the timestamp format in varnishlog is, i.e.: >> >> 18 ReqStart c 93.233.118.174 49556 988741896 >> >> I suspected that le last part (here: 988741896) is some kind of >> multiplier to UTC [1] - though it doesn't seem to be any power of >> 10. > You want to look at ReqEnd - which has normal epoch-time stamps. > > That ReqStart isn't all that useful in this regard. That's an XID, not a > timestamp. > > http://www.varnish-cache.org/trac/wiki/Varnishlog has more information on > ReqEnd. > > - Kristian From stewsnooze at gmail.com Fri Jan 7 12:36:13 2011 From: stewsnooze at gmail.com (Stewart Robinson) Date: Fri, 7 Jan 2011 12:36:13 +0000 Subject: Planet Varnish Message-ID: Hi, We need to get Planet Varnish having more content flowing through it. Please add my varnish specific RSS feed to it. I guess if other people are writing Varnish posts they should ask here to be added to Planet Varnish. I'm a member of Drupal Planet and it really has great stuff on it that keeps the main Drupal site full of fresh links and experiences that will help as we grow. My Varnish RSS URL is thttp://stewsnooze.com/varnish/feed I only have one article at the moment but it is a start. Cheers Stew From perbu at varnish-software.com Fri Jan 7 14:01:12 2011 From: perbu at varnish-software.com (Per Buer) Date: Fri, 7 Jan 2011 15:01:12 +0100 Subject: Planet Varnish In-Reply-To: References: Message-ID: Hi Stewart, On Fri, Jan 7, 2011 at 1:36 PM, Stewart Robinson wrote: > Hi, > > We need to get Planet Varnish having more content flowing through it. > Please add my varnish specific RSS feed to it. I guess if other people > are writing Varnish posts they should ask here to be added to Planet > Varnish. ?I'm a member of Drupal Planet and it really has great stuff > on it that keeps the main Drupal site full of fresh links and > experiences that will help as we grow. 
Agreed. > My Varnish RSS URL is thttp://stewsnooze.com/varnish/feed > > I only have one article at the moment but it is a start. It is added and should be visible within a few minutes. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From jnerin+varnish at gmail.com Fri Jan 7 19:52:36 2011 From: jnerin+varnish at gmail.com (=?UTF-8?B?Sm9yZ2UgTmVyw61u?=) Date: Fri, 7 Jan 2011 20:52:36 +0100 Subject: Planet Varnish In-Reply-To: References: Message-ID: Hi, I can't find the feed of the planet varnish, I would like to add the planet to my feed reader, instead of having to add the blogs one by one. Greetings. On Fri, Jan 7, 2011 at 15:01, Per Buer wrote: > Hi Stewart, > > On Fri, Jan 7, 2011 at 1:36 PM, Stewart Robinson > wrote: > > Hi, > > > > We need to get Planet Varnish having more content flowing through it. > > Please add my varnish specific RSS feed to it. I guess if other people > > are writing Varnish posts they should ask here to be added to Planet > > Varnish. I'm a member of Drupal Planet and it really has great stuff > > on it that keeps the main Drupal site full of fresh links and > > experiences that will help as we grow. > > Agreed. > > > My Varnish RSS URL is thttp://stewsnooze.com/varnish/feed > > > > I only have one article at the moment but it is a start. > > It is added and should be visible within a few minutes. > > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Jorge Ner?n -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From perbu at varnish-software.com Fri Jan 7 22:20:47 2011 From: perbu at varnish-software.com (Per Buer) Date: Fri, 7 Jan 2011 23:20:47 +0100 Subject: Planet Varnish In-Reply-To: References: Message-ID: Jorge, When I click the RSS icon in firefox the following link shows up: http://planet.varnish-cache.org/atom.xml If you look at the meta headers there are probably other formats as well. Per. On Fri, Jan 7, 2011 at 8:52 PM, Jorge Ner?n wrote: > Hi, I can't find the feed of the planet varnish, I would like to add the > planet to my feed reader, instead of having to add the blogs one by one. > > Greetings. > On Fri, Jan 7, 2011 at 15:01, Per Buer wrote: >> >> Hi Stewart, >> >> On Fri, Jan 7, 2011 at 1:36 PM, Stewart Robinson >> wrote: >> > Hi, >> > >> > We need to get Planet Varnish having more content flowing through it. >> > Please add my varnish specific RSS feed to it. I guess if other people >> > are writing Varnish posts they should ask here to be added to Planet >> > Varnish. ?I'm a member of Drupal Planet and it really has great stuff >> > on it that keeps the main Drupal site full of fresh links and >> > experiences that will help as we grow. >> >> Agreed. >> >> > My Varnish RSS URL is thttp://stewsnooze.com/varnish/feed >> > >> > I only have one article at the moment but it is a start. >> >> It is added and should be visible within a few minutes. >> >> -- >> Per Buer,?Varnish Software >> Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer >> Varnish makes websites fly! >> Want to learn more about Varnish? 
>> http://www.varnish-software.com/whitepapers >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > -- > Jorge Ner?n > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From jnerin+varnish at gmail.com Fri Jan 7 22:41:22 2011 From: jnerin+varnish at gmail.com (=?UTF-8?B?Sm9yZ2UgTmVyw61u?=) Date: Fri, 7 Jan 2011 23:41:22 +0100 Subject: Planet Varnish In-Reply-To: References: Message-ID: Ok, thanks, I was looking at the wrong thing, I was in the front page of varnish-cache.org and only managed to get stuck at http://www.varnish-cache.org/aggregator/sources/1 Thank you. On Fri, Jan 7, 2011 at 23:20, Per Buer wrote: > Jorge, > > When I click the RSS icon in firefox the following link shows up: > http://planet.varnish-cache.org/atom.xml > > If you look at the meta headers there are probably other formats as well. > > Per. > > > On Fri, Jan 7, 2011 at 8:52 PM, Jorge Ner?n > > wrote: > > Hi, I can't find the feed of the planet varnish, I would like to add the > > planet to my feed reader, instead of having to add the blogs one by one. > > > > Greetings. > > On Fri, Jan 7, 2011 at 15:01, Per Buer > wrote: > >> > >> Hi Stewart, > >> > >> On Fri, Jan 7, 2011 at 1:36 PM, Stewart Robinson > >> wrote: > >> > Hi, > >> > > >> > We need to get Planet Varnish having more content flowing through it. > >> > Please add my varnish specific RSS feed to it. I guess if other people > >> > are writing Varnish posts they should ask here to be added to Planet > >> > Varnish. 
I'm a member of Drupal Planet and it really has great stuff > >> > on it that keeps the main Drupal site full of fresh links and > >> > experiences that will help as we grow. > >> > >> Agreed. > >> > >> > My Varnish RSS URL is thttp://stewsnooze.com/varnish/feed > >> > > >> > I only have one article at the moment but it is a start. > >> > >> It is added and should be visible within a few minutes. > >> > >> -- > >> Per Buer, Varnish Software > >> Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > >> Varnish makes websites fly! > >> Want to learn more about Varnish? > >> http://www.varnish-software.com/whitepapers > >> > >> _______________________________________________ > >> varnish-misc mailing list > >> varnish-misc at varnish-cache.org > >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > > > > -- > > Jorge Ner?n > > > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers > -- Jorge Ner?n -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim at gomiso.com Fri Jan 7 23:32:18 2011 From: tim at gomiso.com (Tim Lee) Date: Fri, 7 Jan 2011 15:32:18 -0800 Subject: Planet Varnish Message-ID: Per, We've been using Varnish pretty successfully in our app for a few months, and we recently added monitoring via Munin. The Varnish plugin for Munin has gotten pretty decent; until we set up Munin, we hadn't realized that the Varnish plugin is one of the default plugins for Munin now. 
We wrote up a guide here: http://engineering.gomiso.com/2011/01/04/easy-monitoring-of-varnish-with-munin/ We blog about Varnish here: http://engineering.gomiso.com/tag/varnish/feed/. Please add to Planet Varnish. Thanks! Tim Message: 5 > Date: Fri, 7 Jan 2011 15:01:12 +0100 > From: Per Buer > To: Stewart Robinson > Cc: varnish-misc at varnish-cache.org > Subject: Re: Planet Varnish > Message-ID: > > Content-Type: text/plain; charset=ISO-8859-1 > > Hi Stewart, > > On Fri, Jan 7, 2011 at 1:36 PM, Stewart Robinson > wrote: > > Hi, > > > > We need to get Planet Varnish having more content flowing through it. > > Please add my varnish specific RSS feed to it. I guess if other people > > are writing Varnish posts they should ask here to be added to Planet > > Varnish. ?I'm a member of Drupal Planet and it really has great stuff > > on it that keeps the main Drupal site full of fresh links and > > experiences that will help as we grow. > > Agreed. > > > My Varnish RSS URL is thttp://stewsnooze.com/varnish/feed > > > > I only have one article at the moment but it is a start. > > It is added and should be visible within a few minutes. > > -- > Per Buer,?Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From scaunter at topscms.com Mon Jan 10 16:54:42 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Mon, 10 Jan 2011 11:54:42 -0500 Subject: Planet Varnish In-Reply-To: References: Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902CB661BC@TMG-EVS02.torstar.net> Didn't know about Planet Varnish, but I too, blog about varnish: http://caunter.ca/?cat=7 Stefan Caunter e: scaunter at topscms.com :: m: (416) 561-4871 www.thestar.com From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Tim Lee Sent: January-07-11 6:32 PM To: varnish-misc at varnish-cache.org Subject: Re: Planet Varnish Per, We've been using Varnish pretty successfully in our app for a few months, and we recently added monitoring via Munin. The Varnish plugin for Munin has gotten pretty decent; until we set up Munin, we hadn't realized that the Varnish plugin is one of the default plugins for Munin now. We wrote up a guide here: http://engineering.gomiso.com/2011/01/04/easy-monitoring-of-varnish-with -munin/ We blog about Varnish here: http://engineering.gomiso.com/tag/varnish/feed/ . Please add to Planet Varnish. Thanks! Tim Message: 5 Date: Fri, 7 Jan 2011 15:01:12 +0100 From: Per Buer To: Stewart Robinson Cc: varnish-misc at varnish-cache.org Subject: Re: Planet Varnish Message-ID: Content-Type: text/plain; charset=ISO-8859-1 Hi Stewart, On Fri, Jan 7, 2011 at 1:36 PM, Stewart Robinson wrote: > Hi, > > We need to get Planet Varnish having more content flowing through it. > Please add my varnish specific RSS feed to it. I guess if other people > are writing Varnish posts they should ask here to be added to Planet > Varnish. ?I'm a member of Drupal Planet and it really has great stuff > on it that keeps the main Drupal site full of fresh links and > experiences that will help as we grow. Agreed. > My Varnish RSS URL is thttp://stewsnooze.com/varnish/feed > > I only have one article at the moment but it is a start. 
It is added and should be visible within a few minutes. -- Per Buer,?Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers -------------- next part -------------- An HTML attachment was scrubbed... URL: From scaunter at topscms.com Mon Jan 10 18:50:39 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Mon, 10 Jan 2011 13:50:39 -0500 Subject: Connections dropped under load In-Reply-To: <20110106090003.GA2106@freud> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> Yes, and I also don't understand why there is no discussion of threads here. If we can see varnishadm -T 6082 param.show thread_pools varnishadm -T 6082 param.show thread_pool_min varnishadm -T 6082 param.show thread_pool_max varnishadm -T 6082 param.show thread_pool_add_delay it would be helpful. The only time I've seen varnish drop connections is when it cannot create threads quickly enough, or has insufficient resources to do so. Stefan Caunter e: scaunter at topscms.com :: m: (416) 561-4871 www.thestar.com -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Kristian Lyngstol Sent: January-06-11 4:00 AM To: George Georgovassilis Cc: varnish-misc at varnish-cache.org Subject: Re: Connections dropped under load Hi, On Wed, Jan 05, 2011 at 04:20:31PM +0100, George Georgovassilis wrote: > I'm having trouble with dropped connections under a loadtest. We need: varnishstat -1 Any further discussion without varnishstat -1 output is wild guesswork and superstition. 
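Stefan's four queries can be run in one loop. Note that -T takes a host:port pair — localhost:6082 is assumed here, matching the debug.health example earlier in this digest, and a bare port number is one likely cause of a connect() failure. The loop prints the commands as a dry run; drop the echo to execute them against a live varnishd:

```shell
# Dry-run the four thread-parameter queries against the management port.
# localhost:6082 is an assumed management address; adjust to your varnishd.
mgmt="localhost:6082"
for p in thread_pools thread_pool_min thread_pool_max thread_pool_add_delay; do
    echo varnishadm -T "$mgmt" param.show "$p"
done
```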
- Kristian _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From g.georgovassilis at gmail.com Mon Jan 10 20:53:58 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Mon, 10 Jan 2011 21:53:58 +0100 Subject: Connections dropped under load In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> Message-ID: <4D2B71E6.4050805@gmail.com> Hello Stefan, For every of the commands you quoted I'm getting an connect(): Invalid argument Connection failed Why are threads relevant? As I wrote earlier, everything is answered from within the varnish cache - I thought the entire epolling parade was about avoiding caches. I published the threadsettings earlier in this discussion. Regards, G. On 10.01.2011 19:50, Caunter, Stefan wrote: > Yes, and I also don't understand why there is no discussion of threads > here. If we can see > > varnishadm -T 6082 param.show thread_pools > varnishadm -T 6082 param.show thread_pool_min > varnishadm -T 6082 param.show thread_pool_max > varnishadm -T 6082 param.show thread_pool_add_delay > > it would be helpful. > > The only time I've seen varnish drop connections is when it cannot > create threads quickly enough, or has insufficient resources to do so. 
> > Stefan Caunter > e: scaunter at topscms.com :: m: (416) 561-4871 > www.thestar.com > > > -----Original Message----- > From: varnish-misc-bounces at varnish-cache.org > [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Kristian > Lyngstol > Sent: January-06-11 4:00 AM > To: George Georgovassilis > Cc: varnish-misc at varnish-cache.org > Subject: Re: Connections dropped under load > > Hi, > > On Wed, Jan 05, 2011 at 04:20:31PM +0100, George Georgovassilis wrote: >> I'm having trouble with dropped connections under a loadtest. > We need: > > varnishstat -1 > > Any further discussion without varnishstat -1 output is wild guesswork > and > superstition. > > - Kristian > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From cosimo at streppone.it Mon Jan 10 21:01:46 2011 From: cosimo at streppone.it (Cosimo Streppone) Date: Mon, 10 Jan 2011 22:01:46 +0100 Subject: Connections dropped under load In-Reply-To: <4D2B71E6.4050805@gmail.com> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> Message-ID: In data 10 gennaio 2011 alle ore 21:53:58, George Georgovassilis ha scritto: > Hello Stefan, > > For every of the commands you quoted I'm getting an > > connect(): Invalid argument > Connection failed > > Why are threads relevant? As I wrote earlier, everything is answered > from within the varnish cache - I thought the entire epolling parade was > about avoiding caches. > I published the threadsettings earlier in this discussion. Even more specifically, you could check your threads limited, overflowed requests and dropped requests counters in varnishstat. If under test load you see them increase steadily, that means you probably have too low "thread settings". 
-- Cosimo From g.georgovassilis at gmail.com Mon Jan 10 21:38:01 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Mon, 10 Jan 2011 22:38:01 +0100 Subject: Connections dropped under load In-Reply-To: References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> Message-ID: <4D2B7C39.3070708@gmail.com> Hello Cosimo, I'd like to understand better how these values, such as thread pools, session pools and linger are connected. As I wrote earlier, since in my test everything is answered from the varnish cache I'd expect threads to not play any role. On 10.01.2011 22:01, Cosimo Streppone wrote: > In data 10 gennaio 2011 alle ore 21:53:58, George Georgovassilis > ha scritto: > >> Hello Stefan, >> >> For every of the commands you quoted I'm getting an >> >> connect(): Invalid argument >> Connection failed >> >> Why are threads relevant? As I wrote earlier, everything is answered >> from within the varnish cache - I thought the entire epolling parade >> was about avoiding caches. >> I published the threadsettings earlier in this discussion. > > Even more specifically, you could check your > threads limited, overflowed requests and dropped requests counters > in varnishstat. > > If under test load you see them increase steadily, > that means you probably have too low "thread settings". > From scaunter at topscms.com Mon Jan 10 23:48:55 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Mon, 10 Jan 2011 18:48:55 -0500 Subject: Connections dropped under load In-Reply-To: <4D2B71E6.4050805@gmail.com> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> Message-ID: <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> Hi George, I assumed your management access is on port 6082. Adjust to your configured varnishd please. 
Thread creation and pool size monitoring is essential to handling traffic spikes. If load test exceeded the configured available maximums varnish will drop connections, cache hit or no. A thread is required to answer a network connection. You don't get something for nothing. Unless varnish has resources to get that cached object, it cannot do anything for your requesting clients, real or test. Sent from my iPhone On 2011-01-10, at 15:55, "George Georgovassilis" wrote: > Hello Stefan, > > For every of the commands you quoted I'm getting an > > connect(): Invalid argument > Connection failed > > > Why are threads relevant? As I wrote earlier, everything is answered from within the varnish cache - I thought the entire epolling parade was about avoiding caches. > I published the threadsettings earlier in this discussion. > > Regards, > G. > > On 10.01.2011 19:50, Caunter, Stefan wrote: >> Yes, and I also don't understand why there is no discussion of threads >> here. If we can see >> >> varnishadm -T 6082 param.show thread_pools >> varnishadm -T 6082 param.show thread_pool_min >> varnishadm -T 6082 param.show thread_pool_max >> varnishadm -T 6082 param.show thread_pool_add_delay >> >> it would be helpful. >> >> The only time I've seen varnish drop connections is when it cannot >> create threads quickly enough, or has insufficient resources to do so. >> >> Stefan Caunter >> e: scaunter at topscms.com :: m: (416) 561-4871 >> www.thestar.com >> >> >> -----Original Message----- >> From: varnish-misc-bounces at varnish-cache.org >> [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Kristian >> Lyngstol >> Sent: January-06-11 4:00 AM >> To: George Georgovassilis >> Cc: varnish-misc at varnish-cache.org >> Subject: Re: Connections dropped under load >> >> Hi, >> >> On Wed, Jan 05, 2011 at 04:20:31PM +0100, George Georgovassilis wrote: >>> I'm having trouble with dropped connections under a loadtest. 
>> We need: >> >> varnishstat -1 >> >> Any further discussion without varnishstat -1 output is wild guesswork >> and >> superstition. >> >> - Kristian >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From g.georgovassilis at gmail.com Mon Jan 10 23:59:58 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Tue, 11 Jan 2011 00:59:58 +0100 Subject: Connections dropped under load In-Reply-To: <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> Message-ID: <4D2B9D7E.3060209@gmail.com> Hello Stefan, Thank you for the hint. Here are the values: thread_pools = 2 thread_pool_min = 2 thread_pool_max = 200 (was 2 at the time of my initial tests) thread_pool_add_delay = 2 Regards, G. On 11.01.2011 00:48, Caunter, Stefan wrote: > Hi George, > > I assumed your management access is on port 6082. Adjust to your configured varnishd please. > > Thread creation and pool size monitoring is essential to handling traffic spikes. If load test exceeded the configured available maximums varnish will drop connections, cache hit or no. > > A thread is required to answer a network connection. You don't get something for nothing. Unless varnish has resources to get that cached object, it cannot do anything for your requesting clients, real or test. 
> > Sent from my iPhone > > On 2011-01-10, at 15:55, "George Georgovassilis" wrote: > >> Hello Stefan, >> >> For every of the commands you quoted I'm getting an >> >> connect(): Invalid argument >> Connection failed >> >> >> Why are threads relevant? As I wrote earlier, everything is answered from within the varnish cache - I thought the entire epolling parade was about avoiding caches. >> I published the threadsettings earlier in this discussion. >> >> Regards, >> G. >> >> On 10.01.2011 19:50, Caunter, Stefan wrote: >>> Yes, and I also don't understand why there is no discussion of threads >>> here. If we can see >>> >>> varnishadm -T 6082 param.show thread_pools >>> varnishadm -T 6082 param.show thread_pool_min >>> varnishadm -T 6082 param.show thread_pool_max >>> varnishadm -T 6082 param.show thread_pool_add_delay >>> >>> it would be helpful. >>> >>> The only time I've seen varnish drop connections is when it cannot >>> create threads quickly enough, or has insufficient resources to do so. >>> >>> Stefan Caunter >>> e: scaunter at topscms.com :: m: (416) 561-4871 >>> www.thestar.com >>> >>> >>> -----Original Message----- >>> From: varnish-misc-bounces at varnish-cache.org >>> [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Kristian >>> Lyngstol >>> Sent: January-06-11 4:00 AM >>> To: George Georgovassilis >>> Cc: varnish-misc at varnish-cache.org >>> Subject: Re: Connections dropped under load >>> >>> Hi, >>> >>> On Wed, Jan 05, 2011 at 04:20:31PM +0100, George Georgovassilis wrote: >>>> I'm having trouble with dropped connections under a loadtest. >>> We need: >>> >>> varnishstat -1 >>> >>> Any further discussion without varnishstat -1 output is wild guesswork >>> and >>> superstition. 
>>> >>> - Kristian >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From scaunter at topscms.com Tue Jan 11 00:07:27 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Mon, 10 Jan 2011 19:07:27 -0500 Subject: Connections dropped under load In-Reply-To: <4D2B9D7E.3060209@gmail.com> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> Message-ID: Even 200 is low if you regularly see a lot of traffic, but that initial setting would likely have dropped most connections. Sent from my iPhone On 2011-01-10, at 19:00, "George Georgovassilis" wrote: > Hello Stefan, > > Thank you for the hint. Here are the values: > > thread_pools = 2 > thread_pool_min = 2 > thread_pool_max = 200 (was 2 at the time of my initial tests) > thread_pool_add_delay = 2 > > Regards, > G. > > On 11.01.2011 00:48, Caunter, Stefan wrote: >> Hi George, >> >> I assumed your management access is on port 6082. Adjust to your configured varnishd please. >> >> Thread creation and pool size monitoring is essential to handling traffic spikes. If load test exceeded the configured available maximums varnish will drop connections, cache hit or no. >> >> A thread is required to answer a network connection. You don't get something for nothing. Unless varnish has resources to get that cached object, it cannot do anything for your requesting clients, real or test. 
>> >> Sent from my iPhone >> >> On 2011-01-10, at 15:55, "George Georgovassilis" wrote: >> >>> Hello Stefan, >>> >>> For every of the commands you quoted I'm getting an >>> >>> connect(): Invalid argument >>> Connection failed >>> >>> >>> Why are threads relevant? As I wrote earlier, everything is answered from within the varnish cache - I thought the entire epolling parade was about avoiding caches. >>> I published the threadsettings earlier in this discussion. >>> >>> Regards, >>> G. >>> >>> On 10.01.2011 19:50, Caunter, Stefan wrote: >>>> Yes, and I also don't understand why there is no discussion of threads >>>> here. If we can see >>>> >>>> varnishadm -T 6082 param.show thread_pools >>>> varnishadm -T 6082 param.show thread_pool_min >>>> varnishadm -T 6082 param.show thread_pool_max >>>> varnishadm -T 6082 param.show thread_pool_add_delay >>>> >>>> it would be helpful. >>>> >>>> The only time I've seen varnish drop connections is when it cannot >>>> create threads quickly enough, or has insufficient resources to do so. >>>> >>>> Stefan Caunter >>>> e: scaunter at topscms.com :: m: (416) 561-4871 >>>> www.thestar.com >>>> >>>> >>>> -----Original Message----- >>>> From: varnish-misc-bounces at varnish-cache.org >>>> [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Kristian >>>> Lyngstol >>>> Sent: January-06-11 4:00 AM >>>> To: George Georgovassilis >>>> Cc: varnish-misc at varnish-cache.org >>>> Subject: Re: Connections dropped under load >>>> >>>> Hi, >>>> >>>> On Wed, Jan 05, 2011 at 04:20:31PM +0100, George Georgovassilis wrote: >>>>> I'm having trouble with dropped connections under a loadtest. >>>> We need: >>>> >>>> varnishstat -1 >>>> >>>> Any further discussion without varnishstat -1 output is wild guesswork >>>> and >>>> superstition. 
>>>> >>>> - Kristian >>>> >>>> _______________________________________________ >>>> varnish-misc mailing list >>>> varnish-misc at varnish-cache.org >>>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From g.georgovassilis at gmail.com Tue Jan 11 00:13:45 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Tue, 11 Jan 2011 01:13:45 +0100 Subject: Connections dropped under load In-Reply-To: References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> Message-ID: <4D2BA0B9.7000705@gmail.com> You'd be surprised - it was dropping only a few of the total 300 requests / sec... On 11.01.2011 01:07, Caunter, Stefan wrote: > Even 200 is low if you regularly see a lot of traffic, but that initial setting would likely have dropped most connections. > > Sent from my iPhone > > On 2011-01-10, at 19:00, "George Georgovassilis" wrote: > >> Hello Stefan, >> >> Thank you for the hint. Here are the values: >> >> thread_pools = 2 >> thread_pool_min = 2 >> thread_pool_max = 200 (was 2 at the time of my initial tests) >> thread_pool_add_delay = 2 >> >> Regards, >> G. >> >> On 11.01.2011 00:48, Caunter, Stefan wrote: >>> Hi George, >>> >>> I assumed your management access is on port 6082. Adjust to your configured varnishd please. >>> >>> Thread creation and pool size monitoring is essential to handling traffic spikes. If load test exceeded the configured available maximums varnish will drop connections, cache hit or no. >>> >>> A thread is required to answer a network connection. You don't get something for nothing. 
Unless varnish has resources to get that cached object, it cannot do anything for your requesting clients, real or test. >>> >>> Sent from my iPhone >>> >>> On 2011-01-10, at 15:55, "George Georgovassilis" wrote: >>> >>>> Hello Stefan, >>>> >>>> For every of the commands you quoted I'm getting an >>>> >>>> connect(): Invalid argument >>>> Connection failed >>>> >>>> >>>> Why are threads relevant? As I wrote earlier, everything is answered from within the varnish cache - I thought the entire epolling parade was about avoiding caches. >>>> I published the threadsettings earlier in this discussion. >>>> >>>> Regards, >>>> G. >>>> >>>> On 10.01.2011 19:50, Caunter, Stefan wrote: >>>>> Yes, and I also don't understand why there is no discussion of threads >>>>> here. If we can see >>>>> >>>>> varnishadm -T 6082 param.show thread_pools >>>>> varnishadm -T 6082 param.show thread_pool_min >>>>> varnishadm -T 6082 param.show thread_pool_max >>>>> varnishadm -T 6082 param.show thread_pool_add_delay >>>>> >>>>> it would be helpful. >>>>> >>>>> The only time I've seen varnish drop connections is when it cannot >>>>> create threads quickly enough, or has insufficient resources to do so. >>>>> >>>>> Stefan Caunter >>>>> e: scaunter at topscms.com :: m: (416) 561-4871 >>>>> www.thestar.com >>>>> >>>>> >>>>> -----Original Message----- >>>>> From: varnish-misc-bounces at varnish-cache.org >>>>> [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Kristian >>>>> Lyngstol >>>>> Sent: January-06-11 4:00 AM >>>>> To: George Georgovassilis >>>>> Cc: varnish-misc at varnish-cache.org >>>>> Subject: Re: Connections dropped under load >>>>> >>>>> Hi, >>>>> >>>>> On Wed, Jan 05, 2011 at 04:20:31PM +0100, George Georgovassilis wrote: >>>>>> I'm having trouble with dropped connections under a loadtest. >>>>> We need: >>>>> >>>>> varnishstat -1 >>>>> >>>>> Any further discussion without varnishstat -1 output is wild guesswork >>>>> and >>>>> superstition. 
>>>>> >>>>> - Kristian >>>>> >>>>> _______________________________________________ >>>>> varnish-misc mailing list >>>>> varnish-misc at varnish-cache.org >>>>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>>> _______________________________________________ >>>> varnish-misc mailing list >>>> varnish-misc at varnish-cache.org >>>> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From steamboatid at gmail.com Tue Jan 11 01:04:52 2011 From: steamboatid at gmail.com (dwi kristianto) Date: Tue, 11 Jan 2011 08:04:52 +0700 Subject: Connections dropped under load In-Reply-To: <4D2BA0B9.7000705@gmail.com> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> <4D2BA0B9.7000705@gmail.com> Message-ID: what about setting threads to something huge, say: thread_pool_min = 200 thread_pool_max = 2000 On Tue, Jan 11, 2011 at 7:13 AM, George Georgovassilis wrote: > You'd be surprised - it was dropping only a few of the total 300 requests / > sec... > > On 11.01.2011 01:07, Caunter, Stefan wrote: >> >> Even 200 is low if you regularly see a lot of traffic, but that initial >> setting would likely have dropped most connections. >> >> Sent from my iPhone >> >> On 2011-01-10, at 19:00, "George >> Georgovassilis" ?wrote: >> >>> Hello Stefan, >>> >>> Thank you for the hint. Here are the values: >>> >>> thread_pools = 2 >>> thread_pool_min = 2 >>> thread_pool_max = 200 (was 2 at the time of my initial tests) >>> thread_pool_add_delay = 2 >>> >>> Regards, >>> G. 
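The capacity implied by parameter values like these is simple arithmetic. A sketch of that arithmetic, assuming the 2.x semantics where thread_pool_min applies per pool while thread_pool_max caps the total (verify against the varnishd man page for your version):

```shell
# Capacity implied by the thread parameters quoted above, assuming the
# 2.x semantics: thread_pool_min is per pool, thread_pool_max is an
# overall cap on worker threads.
thread_pools=2
thread_pool_min=2
thread_pool_max=200

startup_threads=$((thread_pools * thread_pool_min))  # threads ready before any load arrives
max_concurrent=$thread_pool_max                      # hard ceiling on busy worker threads

echo "at startup: $startup_threads threads, ceiling: $max_concurrent"
```

Plugging in the value in effect during the initial tests (thread_pool_max = 2) gives a ceiling of two concurrent requests, which is more than enough to explain dropped connections under a 300 req/s load test.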
From kristian at varnish-software.com Tue Jan 11 07:48:14 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Tue, 11 Jan 2011 08:48:14 +0100 Subject: Connections dropped under load In-Reply-To: <4D2B9D7E.3060209@gmail.com> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> Message-ID: <20110111074814.GA2266@freud> thanks. posting, top stop Please On Tue, Jan 11, 2011 at 12:59:58AM +0100, George Georgovassilis wrote: > Thank you for the hint. Here are the values: > > thread_pools = 2 > thread_pool_min = 2 > thread_pool_max = 200 (was 2 at the time of my initial tests) > thread_pool_add_delay = 2 As has already been pointed out, this is a low value. This also explains why session_linger is an issue for you. Unless you are on 32-bit (which you shouldn't ever ever ever be), there's no reason not to always have a thousand threads lying around. Your settings also mean that you have FOUR threads available when you start your tests.
Specially about threads. - Kristian From g.georgovassilis at gmail.com Tue Jan 11 09:09:00 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Tue, 11 Jan 2011 10:09:00 +0100 Subject: Connections dropped under load In-Reply-To: <20110111074814.GA2266@freud> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> <20110111074814.GA2266@freud> Message-ID: <4D2C1E2C.9010305@gmail.com> Hello Kristian, Thank you for summarizing - the relation between threads, session linger and connection handling has been explored indeed sufficiently in this thread. The advice of increasing the thread pool size is one that is not always easy to follow though. I'm running my app on a virtual machine (think Open VMS or EC2) and there is a low thread limit, so naturally I'm exploring ways of keeping that low especially since the application server behind varnish is also competing for them. nginx can as far as I know serve thousand of connections with just two worker threads, I erroneously assumed when first evaluating varnish that it was using a similar technique. Regs, G. On 11.01.2011 08:48, Kristian Lyngstol wrote: > thanks. > posting, > top > stop > Please > > On Tue, Jan 11, 2011 at 12:59:58AM +0100, George Georgovassilis wrote: >> Thank you for the hint. Here are the values: >> >> thread_pools = 2 >> thread_pool_min = 2 >> thread_pool_max = 200 (was 2 at the time of my initial tests) >> thread_pool_add_delay = 2 > As have already been pointed out, this is a low value. This also explains > why session_linger is an issue to you. Unless you are on 32-bit (which you > shouldn't ever ever ever be), there's no reason to not always have a > thousand threads laying around. Your settings also means that you have FOUR > threads available when you start your tests. 
Not exactly a lot of room for > bursts of traffic. > > Your other mail actually had a thread_pool_max of 16. That will give you a > maximum of 16 concurrent requests that can be handled, with an other 32 > that can be queued. With session_linger, these threads will remain > allocated to the connection for a longer duration, thus it's obvious that > in this case, your thread starvation was the real issue and you just > triggered it faster with a higher session_linger. It's a perfectly obvious > and mystery-free explanation. Session lingering is a mechanism to avoid > trashing your system during high load by constantly moving data around > between threads, but it depends on reasonable thread-settings - or rather: > an abundance of threads. > > http://kristianlyng.wordpress.com/2010/01/26/varnish-best-practices/ sounds > like a good place to start reading. Specially about threads. > > - Kristian From david.birdsong at gmail.com Tue Jan 11 09:41:37 2011 From: david.birdsong at gmail.com (David Birdsong) Date: Tue, 11 Jan 2011 04:41:37 -0500 Subject: Connections dropped under load In-Reply-To: <4D2C1E2C.9010305@gmail.com> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> <20110111074814.GA2266@freud> <4D2C1E2C.9010305@gmail.com> Message-ID: On Tue, Jan 11, 2011 at 4:09 AM, George Georgovassilis wrote: > Hello Kristian, > > Thank you for summarizing - the relation between threads, session linger and > connection handling has been explored indeed sufficiently in this thread. > The advice of increasing the thread pool size is one that is not always easy > to follow though. 
I'm running my app on a virtual machine (think Open VMS or > EC2) and there is a low thread limit, so naturally I'm exploring ways of > keeping that low especially since the application server behind varnish is > also competing for them. nginx can as far as I know serve thousand of > connections with just two worker threads, I erroneously assumed when first > evaluating varnish that it was using a similar technique. nginx uses a completely different concurrency model, but it also is not burdened with managing as much address space and content as varnish is under normal configurations. in most nginx setups, it's either proxying content to a backend app or serving files off of disk. try serving lots of content off of disk with just 1-2 worker processes (nginx isn't multi-threaded) and you will start seeing connections piling up until you up the number of workers until your disk becomes the limiting factor. > > Regs, > G. > > > On 11.01.2011 08:48, Kristian Lyngstol wrote: >> >> thanks. >> posting, >> top >> stop >> Please >> >> On Tue, Jan 11, 2011 at 12:59:58AM +0100, George Georgovassilis wrote: >>> >>> Thank you for the hint. Here are the values: >>> >>> thread_pools = 2 >>> thread_pool_min = 2 >>> thread_pool_max = 200 (was 2 at the time of my initial tests) >>> thread_pool_add_delay = 2 >> >> As have already been pointed out, this is a low value. This also explains >> why session_linger is an issue to you. Unless you are on 32-bit (which you >> shouldn't ever ever ever be), there's no reason to not always have a >> thousand threads laying around. Your settings also means that you have >> FOUR >> threads available when you start your tests. Not exactly a lot of room for >> bursts of traffic. >> >> Your other mail actually had a thread_pool_max of 16. That will give you a >> maximum of 16 concurrent requests that can be handled, with an other 32 >> that can be queued. 
With session_linger, these threads will remain >> allocated to the connection for a longer duration, thus it's obvious that >> in this case, your thread starvation was the real issue and you just >> triggered it faster with a higher session_linger. It's a perfectly obvious >> and mystery-free explanation. Session lingering is a mechanism to avoid >> trashing your system during high load by constantly moving data around >> between threads, but it depends on reasonable thread-settings - or rather: >> an abundance of threads. >> >> http://kristianlyng.wordpress.com/2010/01/26/varnish-best-practices/ >> sounds >> like a good place to start reading. Specially about threads. >> >> - Kristian > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From kristian at varnish-software.com Tue Jan 11 09:51:12 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Tue, 11 Jan 2011 10:51:12 +0100 Subject: Connections dropped under load In-Reply-To: <4D2C1E2C.9010305@gmail.com> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> <20110111074814.GA2266@freud> <4D2C1E2C.9010305@gmail.com> Message-ID: <20110111095112.GC2266@freud> If that is inconvenient, I suggest not top-posting. PS: This mail is optimized for bottom-to-top reading. Kristian Regards, performance. expect to use a piece of software written for and optimized for high-end than enough threads. If you can't use a half-decent machine, then you can't we'll try to solve. 
Any half-decent virtual machine will let you use more don't really have any actual limits relevant to Varnish, is not a problem That some systems operate with artificially set limits on resources that operating environments, and we don't really care about number of threads. Varnish is written for modern computers, modern systems and modern Hi George, On Tue, Jan 11, 2011 at 10:09:00AM +0100, George Georgovassilis wrote: > Hello Kristian, > > Thank you for summarizing - the relation between threads, session > linger and connection handling has been explored indeed sufficiently > in this thread. The advice of increasing the thread pool size is one > that is not always easy to follow though. I'm running my app on a > virtual machine (think Open VMS or EC2) and there is a low thread > limit, so naturally I'm exploring ways of keeping that low > especially since the application server behind varnish is also > competing for them. nginx can as far as I know serve thousand of > connections with just two worker threads, I erroneously assumed when > first evaluating varnish that it was using a similar technique. > > Regs, > G. > > > On 11.01.2011 08:48, Kristian Lyngstol wrote: > >thanks. > >posting, > >top > >stop > >Please > > > >On Tue, Jan 11, 2011 at 12:59:58AM +0100, George Georgovassilis wrote: > >>Thank you for the hint. Here are the values: > >> > >>thread_pools = 2 > >>thread_pool_min = 2 > >>thread_pool_max = 200 (was 2 at the time of my initial tests) > >>thread_pool_add_delay = 2 > >As have already been pointed out, this is a low value. This also explains > >why session_linger is an issue to you. Unless you are on 32-bit (which you > >shouldn't ever ever ever be), there's no reason to not always have a > >thousand threads laying around. Your settings also means that you have FOUR > >threads available when you start your tests. Not exactly a lot of room for > >bursts of traffic. > > > >Your other mail actually had a thread_pool_max of 16. 
That will give you a > >maximum of 16 concurrent requests that can be handled, with an other 32 > >that can be queued. With session_linger, these threads will remain > >allocated to the connection for a longer duration, thus it's obvious that > >in this case, your thread starvation was the real issue and you just > >triggered it faster with a higher session_linger. It's a perfectly obvious > >and mystery-free explanation. Session lingering is a mechanism to avoid > >trashing your system during high load by constantly moving data around > >between threads, but it depends on reasonable thread-settings - or rather: > >an abundance of threads. > > > >http://kristianlyng.wordpress.com/2010/01/26/varnish-best-practices/ sounds > >like a good place to start reading. Specially about threads. > > > >- Kristian > From g.georgovassilis at gmail.com Tue Jan 11 10:35:42 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Tue, 11 Jan 2011 11:35:42 +0100 Subject: Connections dropped under load In-Reply-To: <20110111095112.GC2266@freud> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> <20110111074814.GA2266@freud> <4D2C1E2C.9010305@gmail.com> <20110111095112.GC2266@freud> Message-ID: <4D2C327E.7060705@gmail.com> Hello Kristian, Personally I'm cool with either way of posting, a quick scan of the mail archives showed that both were being practised and I couldn't find any posting rules in the maillist desc - so (even at the risk that you do mind) I'll stay with top posting... the modern internet has evolved past this discussion [1] and I really can't be bothered. Sorry. 
I do appreciate immensely your (all of your) precious insights and hints which help me understand (I'm neither particularly familiar with networking or OSes, I outsource these tasks to the corresponding software vendors :-) in which way exactly the resource needs of my application can be met. (Un)fortunately the hype around cloud environments has gotten to the people who pay my checks, and I can't ignore these restrained environments that come at hundreds of instances - even if you have the enviable luxury of doing so. The high latency connections between the cloud instances make a software like varnish excruciatingly necessary in order to avoid as many roundtrips as possible to the nodes behind. Please also note that the Varnish documentation (see chapter on prerequisites [2]) in no way mentions a high-end server for even moderate loads (I reiterate: we are talking here about a lousy 700 req/s), and keep in mind that this discussion has turned to a "virtual" resource: it's not about memory or CPU power but a logical division of such, namely threads. I do take the point however that when it comes to scalability nginx might be a better choice [3]. Many thanks, G. [1] https://secure.wikimedia.org/wikipedia/en/wiki/Posting_style [2] http://www.varnish-cache.org/docs/2.1/installation/prerequisites.html [3] http://highscalability.com/display/Search?searchQuery=nginx&moduleId=4876569 On 11.01.2011 10:51, Kristian Lyngstol wrote: > If that is inconvenient, I suggest not top-posting. > PS: This mail is optimized for bottom-to-top reading. > > Kristian > Regards, > > performance. > expect to use a piece of software written for and optimized for high-end > than enough threads. If you can't use a half-decent machine, then you can't > we'll try to solve.
Any half-decent virtual machine will let you use more > don't really have any actual limits relevant to Varnish, is not a problem > That some systems operate with artificially set limits on resources that > operating environments, and we don't really care about number of threads. > Varnish is written for modern computers, modern systems and modern > > Hi George, > > On Tue, Jan 11, 2011 at 10:09:00AM +0100, George Georgovassilis wrote: >> Hello Kristian, >> >> Thank you for summarizing - the relation between threads, session >> linger and connection handling has been explored indeed sufficiently >> in this thread. The advice of increasing the thread pool size is one >> that is not always easy to follow though. I'm running my app on a >> virtual machine (think Open VMS or EC2) and there is a low thread >> limit, so naturally I'm exploring ways of keeping that low >> especially since the application server behind varnish is also >> competing for them. nginx can as far as I know serve thousand of >> connections with just two worker threads, I erroneously assumed when >> first evaluating varnish that it was using a similar technique. >> >> Regs, >> G. >> >> >> On 11.01.2011 08:48, Kristian Lyngstol wrote: >>> thanks. >>> posting, >>> top >>> stop >>> Please >>> >>> On Tue, Jan 11, 2011 at 12:59:58AM +0100, George Georgovassilis wrote: >>>> Thank you for the hint. Here are the values: >>>> >>>> thread_pools = 2 >>>> thread_pool_min = 2 >>>> thread_pool_max = 200 (was 2 at the time of my initial tests) >>>> thread_pool_add_delay = 2 >>> As have already been pointed out, this is a low value. This also explains >>> why session_linger is an issue to you. Unless you are on 32-bit (which you >>> shouldn't ever ever ever be), there's no reason to not always have a >>> thousand threads laying around. Your settings also means that you have FOUR >>> threads available when you start your tests. Not exactly a lot of room for >>> bursts of traffic. 
>>> >>> Your other mail actually had a thread_pool_max of 16. That will give you a >>> maximum of 16 concurrent requests that can be handled, with an other 32 >>> that can be queued. With session_linger, these threads will remain >>> allocated to the connection for a longer duration, thus it's obvious that >>> in this case, your thread starvation was the real issue and you just >>> triggered it faster with a higher session_linger. It's a perfectly obvious >>> and mystery-free explanation. Session lingering is a mechanism to avoid >>> trashing your system during high load by constantly moving data around >>> between threads, but it depends on reasonable thread-settings - or rather: >>> an abundance of threads. >>> >>> http://kristianlyng.wordpress.com/2010/01/26/varnish-best-practices/ sounds >>> like a good place to start reading. Specially about threads. >>> >>> - Kristian From david.birdsong at gmail.com Tue Jan 11 11:06:40 2011 From: david.birdsong at gmail.com (David Birdsong) Date: Tue, 11 Jan 2011 06:06:40 -0500 Subject: Connections dropped under load In-Reply-To: <4D2C327E.7060705@gmail.com> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> <20110111074814.GA2266@freud> <4D2C1E2C.9010305@gmail.com> <20110111095112.GC2266@freud> <4D2C327E.7060705@gmail.com> Message-ID: On Tue, Jan 11, 2011 at 5:35 AM, George Georgovassilis wrote: > Hello Kristian, > > Personally I'm cool with either way of posting, a quick scan of the mail > archives showed that both were being practised and I couldn't find any > posting rules in the maillist desc - so (even at the risk that you do mind) > I'll stay with top posting... the modern internet has evolved past this > discussion [1] and I really can't be bothered.
Sorry. > > I do appreciate immensely your (all of your) precious insights and hints > which help me understand (I'm neither particularly familiar with networking > or OSes, I? outsource these tasks to the corresponding software vendors :-) > in which way exactly the resource needs of my application can be met. > (Un)fortunately the hype around cloud environments has gotten to the people > who pay my checks, and I can't ignore these restrained environments that > come at hundreds of instances - even if you have the enviable luxury of > doing so. The high latency connections between the cloud instances make a > software like varnish excruciatingly necessary in order to avoid as many as > possible roundtrips to the nodes behind. > > Please also note that in no way the Varnish documentation (see chapter on > prerequisites [2]) mentions a high-end server for even moderate loads (I > iterate: we are talking here about lousy 700 req/s), and keep in mind that > this discussion has turned to a "virtual" resource: it's not about memory or > CPU power but a logical division of such, namely threads. I do take the > point however that when it comes to scalability nginx might be a better > choice [3]. Dont let the door hit you on the way out. > > Many thanks, > G. > > [1] https://secure.wikimedia.org/wikipedia/en/wiki/Posting_style > [2] http://www.varnish-cache.org/docs/2.1/installation/prerequisites.html > [3] > http://highscalability.com/display/Search?searchQuery=nginx&moduleId=4876569 > > On 11.01.2011 10:51, Kristian Lyngstol wrote: > > If that is inconvenient, I suggest not top-posting. > PS: This mail is optimized for bottom-to-top reading. > > Kristian > Regards, > > performance. > expect to use a piece of software written for and optimized for high-end > than enough threads. If you can't use a half-decent machine, then you can't > we'll try to solve. 
Any half-decent virtual machine will let you use more > don't really have any actual limits relevant to Varnish, is not a problem > That some systems operate with artificially set limits on resources that > operating environments, and we don't really care about number of threads. > Varnish is written for modern computers, modern systems and modern > > Hi George, > > On Tue, Jan 11, 2011 at 10:09:00AM +0100, George Georgovassilis wrote: > > Hello Kristian, > > Thank you for summarizing - the relation between threads, session > linger and connection handling has been explored indeed sufficiently > in this thread. The advice of increasing the thread pool size is one > that is not always easy to follow though. I'm running my app on a > virtual machine (think Open VMS or EC2) and there is a low thread > limit, so naturally I'm exploring ways of keeping that low > especially since the application server behind varnish is also > competing for them. nginx can as far as I know serve thousands of > connections with just two worker threads, I erroneously assumed when > first evaluating varnish that it was using a similar technique. > > Regs, > G. > > > On 11.01.2011 08:48, Kristian Lyngstol wrote: > > thanks. > posting, > top > stop > Please > > On Tue, Jan 11, 2011 at 12:59:58AM +0100, George Georgovassilis wrote: > > Thank you for the hint. Here are the values: > > thread_pools = 2 > thread_pool_min = 2 > thread_pool_max = 200 (was 2 at the time of my initial tests) > thread_pool_add_delay = 2 > > As has already been pointed out, this is a low value. This also explains > why session_linger is an issue to you. Unless you are on 32-bit (which you > shouldn't ever ever ever be), there's no reason to not always have a > thousand threads laying around. Your settings also mean that you have FOUR > threads available when you start your tests. Not exactly a lot of room for > bursts of traffic. > > Your other mail actually had a thread_pool_max of 16.
That will give you a > maximum of 16 concurrent requests that can be handled, with another 32 > that can be queued. With session_linger, these threads will remain > allocated to the connection for a longer duration, thus it's obvious that > in this case, your thread starvation was the real issue and you just > triggered it faster with a higher session_linger. It's a perfectly obvious > and mystery-free explanation. Session lingering is a mechanism to avoid > thrashing your system during high load by constantly moving data around > between threads, but it depends on reasonable thread-settings - or rather: > an abundance of threads. > > http://kristianlyng.wordpress.com/2010/01/26/varnish-best-practices/ sounds > like a good place to start reading. Especially about threads. > > - Kristian > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From g.georgovassilis at gmail.com Tue Jan 11 11:24:12 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Tue, 11 Jan 2011 12:24:12 +0100 Subject: Connections dropped under load In-Reply-To: References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> <20110111074814.GA2266@freud> <4D2C1E2C.9010305@gmail.com> <20110111095112.GC2266@freud> <4D2C327E.7060705@gmail.com> Message-ID: <4D2C3DDC.9070306@gmail.com> Hello David, > Dont let the door hit you on the way out. > No, I'm here to stay with varnish and (unless the moderator kicks me out for my posting habits :-) with the maillist. I like varnish - it's much better documented than nginx and, fairly inexperienced as I am, got it running in no time - I think it's worth more than a single try.
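As an aside for readers following the thread-settings advice quoted above: thread_pools, thread_pool_min, thread_pool_max and thread_pool_add_delay are ordinary varnishd run-time parameters, so the fix amounts to a different command line or a live tweak. A minimal sketch with example values only (Varnish 2.x parameter names; addresses, backend and the management port are assumptions, tune per workload):

```
# Start varnishd with an abundance of worker threads rather than the
# thread_pool_max of 16 discussed above (example values only):
varnishd -a :80 -b localhost:8080 \
    -p thread_pools=2 \
    -p thread_pool_min=100 \
    -p thread_pool_max=1000 \
    -p thread_pool_add_delay=2

# Or raise the ceiling on a running instance via the management interface
# (assuming -T localhost:6082 was given at startup):
varnishadm -T localhost:6082 param.set thread_pool_max 1000
```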
Actually it was your earlier post that nearly got me defecting to nginx, but in my app's case I don't plan to serve anything from the disk (it's just small, yet numerous, GWT-RPC payloads originating from a couple of tomcat instances behind). I just need to find a way to serve that stuff quickly while maintaining a manageable thread count. Regards, G. From stig at zedge.net Tue Jan 11 11:43:36 2011 From: stig at zedge.net (Stig Bakken) Date: Tue, 11 Jan 2011 12:43:36 +0100 Subject: Connections dropped under load In-Reply-To: References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> <20110111074814.GA2266@freud> <4D2C1E2C.9010305@gmail.com> <20110111095112.GC2266@freud> <4D2C327E.7060705@gmail.com> Message-ID: On Tue, Jan 11, 2011 at 12:06 PM, David Birdsong wrote: > Dont let the door hit you on the way out. That was a bit harsh, wasn't it? Certainly totally uncalled for. Disagree as Kristian and George may about posting styles, I think George is making a polite and fair argument, which IMHO is something we should all strive towards. - Stig (bottom-posting just for the sake of it) -- Stig Bakken CTO, Zedge.net - free your phone! -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kristian at varnish-software.com Tue Jan 11 11:54:27 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Tue, 11 Jan 2011 12:54:27 +0100 Subject: Connections dropped under load In-Reply-To: <4D2C327E.7060705@gmail.com> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> <20110111074814.GA2266@freud> <4D2C1E2C.9010305@gmail.com> <20110111095112.GC2266@freud> <4D2C327E.7060705@gmail.com> Message-ID: <20110111115427.GA4538@freud> On Tue, Jan 11, 2011 at 11:35:42AM +0100, George Georgovassilis wrote: > Personally I'm cool with either way of posting, a quick scan of the > mail archives showed that both were being practised and I couldn't > find any posting rules in the maillist desc - so (even at the risk > that you do mind) I'll stay with top posting... the modern internet > has evolved past this discussion [1] and I really can't be bothered. It is true that this isn't explicitly documented, yet top-posting is not preferred on the Varnish lists. That it is not documented anywhere doesn't change that - it only means that you get the first offence for free. It wasn't my intention to berate you, but I would like the mail to this list to be of the traditional mail list type. The reason is simple: I often keep up with 5-6 threads of communication on the very same list. To be able to accomplish that, it's a great help if I have some context to deal with when you are writing. It's not much to ask for. If you continue to top post even after being hinted that you should not, you are essentially being rude to the people who would help you. It's true that others top-post too, but that's not an excuse for you to continue doing so. 
The reality is that top-posting leads to less focused replies, missed questions and eventually a departure of some of the most experienced users and developers from the mail lists because we don't want to cope with it. And eventually you end up with the blind leading the blind. It's a difference of ego: Top-posting might be easier to write, but it is by far harder to read on a busy mail list. If you can't be bothered to invest time in writing your mail, why should I be bothered to read it - let alone answer you. As for whatever arguments exist FOR top-posting, I really do not care. In my experience, the people who top-post and continue to do so are the members of the community that are least willing to truly contribute in a positive manner. I usually assume that top-posting is based on ignorance, not malice, but continued top-posting can't be seen as anything but malice. > Please also note that in no way the Varnish documentation (see > chapter on prerequisites [2]) mentions a high-end server for even > moderate loads (I iterate: we are talking here about lousy 700 > req/s), and keep in mind that this discussion has turned to a > "virtual" resource: it's not about memory or > CPU power but a logical > division of such, namely threads. I do take the point however that > when it comes to scalability nginx might be a better choice [3]. Simply put: If your virtual solution limits your usage of threads, then you picked the wrong virtual solution to run Varnish on. If you take a look at the architecture notes[1], you'll see what I'm talking about. Varnish is designed for high-end servers and environments, but works just fine under low-end systems too.
Rest assured that I catch details for better or worse - and I use them in my replies. If I say it's designed for high-end hardware, that does NOT mean that high-end hardware is required. You don't have to run just because you are wearing running shoes. However, limiting the number of threads is not something that is strongly affected by hardware at all. And there are many virtual environments that will have no trouble at all using threads heavily. That puts your particular environment into what I like to call the "Nintendo"-category: It's not a real platform anymore and if it works, then that's fun and nice and all of that, but if it doesn't, it's not something that we should divert resources to. That you did not state up-front that this was a virtual environment which put artificial limits on thread-usage is regrettable, but not a big deal. In that regard, I'm more worried about all the people who tried to help you without querying for those rather important (and easily available) details. This will be the last reply I send to you if you keep top posting. Your choice.
[1] http://www.varnish-cache.org/trac/wiki/ArchitectNotes - Kristian From g.georgovassilis at gmail.com Tue Jan 11 12:16:17 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Tue, 11 Jan 2011 13:16:17 +0100 Subject: Connections dropped under load In-Reply-To: <20110111115427.GA4538@freud> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> <20110111074814.GA2266@freud> <4D2C1E2C.9010305@gmail.com> <20110111095112.GC2266@freud> <4D2C327E.7060705@gmail.com> <20110111115427.GA4538@freud> Message-ID: <4D2C4A11.2080603@gmail.com> On 11.01.2011 12:54, Kristian Lyngstol wrote: > On Tue, Jan 11, 2011 at 11:35:42AM +0100, George Georgovassilis wrote: >> Personally I'm cool with either way of posting, a quick scan of the >> mail archives showed that both were being practised and I couldn't >> find any posting rules in the maillist desc - so (even at the risk >> that you do mind) I'll stay with top posting... the modern internet >> has evolved past this discussion [1] and I really can't be bothered. > It is true that this isn't explicitly documented, yet top-posting is not > preferred on the Varnish lists. That it is not documented anywhere doesn't > change that - it only means that you get the first offence for free. It > wasn't my intention to berate you, but I would like the mail to this list > to be of the traditional mail list type. > > The reason is simple: I often keep up with 5-6 threads of communication on > the very same list. To be able to accomplish that, it's a great help if I > have some context to deal with when you are writing. It's not much to ask > for. > > If you continue to top post even after being hinted that you should not, > you are essentially being rude to the people who would help you. 
It's true > that others top-post too, but that's not an excuse for you to continue > doing so. The reality is that top-posting leads to less focused replies, > missed questions and eventually a departure of some of the most experienced > users and developers from the mail lists because we don't want to cope with > it. And eventually you end up with the blind leading the blind. > > It's a difference of ego: Top-posting might be easier to write, but it is > by far harder to read on a busy mail list. If you can't be bothered to > invest time in writing your mail, why should I be bothered to read it - let > alone answer you. > > As for whatever arguments exist FOR top-posting, I really do not care. In > my experience, the people who top-post and continue to do so are the > members of the community that are least willing to truly contribute in a > positive manner. I usually assume that top-posting is based on ignorance, > not malice, but continued top-posting can't be seen as anything but malice. > >> Please also note that in no way the Varnish documentation (see >> chapter on prerequisites [2]) mentions a high-end server for even >> moderate loads (I iterate: we are talking here about lousy 700 >> req/s), and keep in mind that this discussion has turned to a >> "virtual" resource: it's not about memory or CPU power but a logical >> division of such, namely threads. I do take the point however that >> when it comes to scalability nginx might be a better choice [3]. > Simply put: If your virtual solution limits your usage of threads, then > picked the wrong virtual solution to run Varnish on. > > If you take a look at the architecture notes[1], you'll see what I'm > talking about. Varnish is designed for high-end servers and environments, > but works just fine under low-end systems too. 
I'm fairly sure my mail > didn't say anything about requiring high-end hardware, but I don't really > know, since your quoting style doesn't let me easily check what precisely > you are replying to. I suspect you are just being inaccurate in your > response. Rest assured that I catch details for better or worse - and I use > them in my replies. If I say it's designed for high-end hardware, that does > NOT mean that high-end hardware is required. You don't have to run just > because you are wearing running shoes. > > However, limiting the number of threads is not something that is strongly > affected by hardware at all. And there are many virtual environments > that will have no trouble at all using threads heavily. That puts your > particular environment into what I like to call the "Nintendo"-category: > It's not a real platform anymore and if it works, then that's fun and nice > and all of that, but if it doesn't, it's not something that we should > divert resources to. > > That you did not state up-front that this was a virtual environment which > put artificial limits on thread-usage is regrettable, but not a big deal. > In that regard, I'm more worried about all the people who tried to help you > without querying for those rather important (and easily available) details. > > This will be the last reply I send to you if you keep top posting. Your > choice. > > [1] http://www.varnish-cache.org/trac/wiki/ArchitectNotes > > - Kristian Hello Kristian, It's your home, I'm just a guest passing by - sorry I didn't bring any gifts :-( Yes, Nintendo is a good term describing that environment. Only that there is/might be a whole lot of them which need to be fed with connections. An army of lemmings if you will. So I guess that, even if I get a rather big virtual box with plenty of RAM and CPU, I still won't be able to handle all the requests if the sysadmin limited the thread count.
I take that there is also no way to handle all incoming connections as a single point of entry if threads are limited. An unrelated question springs to my mind, which fortunately doesn't happen to be my use case: how would varnish handle a COMET scenario with tens of thousands of active connections, i.e. as a proxy for Jetty or Tomcat6 ? Regards, G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rtshilston at gmail.com Tue Jan 11 12:41:37 2011 From: rtshilston at gmail.com (Robert Shilston) Date: Tue, 11 Jan 2011 12:41:37 +0000 Subject: Connections dropped under load In-Reply-To: <4D2C4A11.2080603@gmail.com> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> <20110111074814.GA2266@freud> <4D2C1E2C.9010305@gmail.com> <20110111095112.GC2266@freud> <4D2C327E.7060705@gmail.com> <20110111115427.GA4538@freud> <4D2C4A11.2080603@gmail.com> Message-ID: <6394CBEB-8153-4987-824C-59CB402820D2@gmail.com> > An unrelated question springs to my mind, which fortunately doesn't happen to be my use case: how would varnish handle a COMET scenario with tens of thousands of active connections, i.e. as a proxy for Jetty or Tomcat6 ? We're using Varnish to do load balancing across Meteor servers. Clients connect to http://data-lb.example.com, which is Varnish. This then looks up which Meteor servers are active (using conventional varnish backend health polling), and then tells the client which one to bind to. It works well. Rob -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.welschhoff at lvm.de Tue Jan 11 14:57:08 2011 From: s.welschhoff at lvm.de (Stefan Welschhoff) Date: Tue, 11 Jan 2011 15:57:08 +0100 Subject: Varnish-Cache In-Reply-To: References: Message-ID: Hello, we are interested in Varnish-Cache. 
But we have got some points we want to know before making a choice. 1. We want to have an SSL communication for the front-end and back-end!?!? 2. Is it possible to disable back-end servers for maintenance without touching the config? 3. Is it possible to create statistics to get a graph to see its load factor? Mit freundlichen Grüßen Kind Regards Stefan Welschhoff Abteilung DV-Infrastruktur Kolde-Ring 21 48126 Münster Telefon: 0251 / 702 2328 E-Mail: s.welschhoff at lvm.de www.lvm.de In guten Händen. LVM -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.fernandezcrisial at gmail.com Tue Jan 11 15:00:36 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Tue, 11 Jan 2011 12:00:36 -0300 Subject: Object no cached when TTL expires Message-ID: Hi guys, I have a question about varnish vcl, can you help me?
Once the varnish service is started the object (in this case an image) is cached: Via 1.1 varnish, 1.1 V107WPROD Connection Keep-Alive Proxy-Connection Keep-Alive Content-Length 2678 Age 4 Date Tue, 11 Jan 2011 14:20:01 GMT Content-Type image/jpeg Etag "a76-49992a41d1b5d" Server Apache/2.2.15 (Unix) mod_fcgid/2.3.5 Last-Modified Tue, 11 Jan 2011 14:10:04 GMT X-Cacheable 1800.004 X-Varnish 826882404 826879249 X-Varnish-Cache HIT Varnish (9) Some minutes later: Via 1.1 varnish, 1.1 V107WPROD Connection Keep-Alive Proxy-Connection Keep-Alive Content-Length 2678 Age 1299 Date Tue, 11 Jan 2011 14:41:36 GMT Content-Type image/jpeg Etag "a76-49992a41d1b5d" Server Apache/2.2.15 (Unix) mod_fcgid/2.3.5 Last-Modified Tue, 11 Jan 2011 14:10:04 GMT X-Cacheable 1800.004 X-Varnish 832397538 826879249 X-Varnish-Cache HIT Varnish (21056) When TTL expires (1800s) the object is no longer cached: Via 1.1 varnish, 1.1 V107WPROD Connection Keep-Alive Proxy-Connection Keep-Alive Content-Length 2678 Age 0 Date Tue, 11 Jan 2011 14:58:03 GMT Content-Type image/jpeg Etag "a76-499933333a882" Server Apache/2.2.15 (Unix) mod_fcgid/2.3.5 Last-Modified Tue, 11 Jan 2011 14:50:05 GMT X-Cacheable 1800.002 X-Varnish 836621253 X-Varnish-Cache MISS Varnish Unless the service is restarted. Do you have any idea what could be happening? Do you see something abnormal in headers? My Varnish version is "varnishd (varnish-2.1.3 SVN )". Thank you very much, Roberto. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rtshilston at gmail.com Tue Jan 11 15:14:27 2011 From: rtshilston at gmail.com (Robert Shilston) Date: Tue, 11 Jan 2011 15:14:27 +0000 Subject: Varnish-Cache In-Reply-To: References: Message-ID: <82513684-F8C4-4960-9A49-5F1BCF6A11DE@gmail.com> > 1. We want to have an SSL communication for the front-end and back-end!?!? What is the benefit in having SSL between Varnish and back-end?
I'd assume that the entire environment is under your control, so there's no security improvement for this. To have SSL on the front-end, we use nginx in front of Varnish. > 2.Is it possible to disable back-end servers for maintenance without touching the config? We just stop the back-ends. We don't make any config changes. > 3.Is it possible to create statics to get a graph to see its load factor? > > Yes. We graph Varnish with Zabbix, and other people use other monitoring tools. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stewsnooze at gmail.com Tue Jan 11 16:11:55 2011 From: stewsnooze at gmail.com (Stewart Robinson) Date: Tue, 11 Jan 2011 16:11:55 +0000 Subject: Varnish-Cache In-Reply-To: References: Message-ID: 1) Yes, you can use pound on 443. 2) Just have probes in Varnish looking at the backends and then Varnish will remove them when they are unavailable. Or look at switching VCL config. You can load multiple configs without editing files and restarting. 3) The munin varnish plug-in is getting lots of attention on the mailing list lately. On 11 January 2011 14:57, Stefan Welschhoff wrote: > Hello, > > we are interested in Varnish-Cache. But we have got some points we want to > know bevor a choice. > > 1. We want to have an SSL communication for the front-end and back-end!?!? > 2.Is it possible to disable back-end servers for maintenance without > touching the config? > 3.Is it possible to create statics to get a graph to see its load factor? 
> > Mit freundlichen Grüßen > Kind Regards > > Stefan Welschhoff > > > > Abteilung DV-Infrastruktur > Kolde-Ring 21 > 48126 Münster > Telefon: 0251 / 702 2328 > E-Mail: s.welschhoff at lvm.de > www.lvm.de > *In guten Händen.** LVM* > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pablort+varnish at gmail.com Tue Jan 11 16:44:37 2011 From: pablort+varnish at gmail.com (pablort) Date: Tue, 11 Jan 2011 14:44:37 -0200 Subject: Object no cached when TTL expires In-Reply-To: References: Message-ID: Notice how Etag changes: First and second request: Etag "a76-49992a41d1b5d" Last request: Etag "a76-499933333a882" Try (in vcl_fetch): unset beresp.http.Etag; or change Apache to use only Size and MTime to create the Etag: FileETag MTime Size Cheers, 2011/1/11 Roberto O. Fernández Crisial > Hi guys, > > I have a question about varnish vcl, can you help me?
Once the varnish > service is started the object (in this case an image) is cached: > > Via 1.1 varnish, 1.1 V107WPROD > Connection Keep-Alive > Proxy-Connection Keep-Alive > Content-Length 2678 > Age 4 > Date Tue, 11 Jan 2011 14:20:01 GMT > Content-Type image/jpeg > Etag "a76-49992a41d1b5d" > Server Apache/2.2.15 (Unix) mod_fcgid/2.3.5 > Last-Modified Tue, 11 Jan 2011 14:10:04 GMT > X-Cacheable 1800.004 > X-Varnish 826882404 826879249 > X-Varnish-Cache HIT Varnish (9) > > > After some minutes later: > > Via 1.1 varnish, 1.1 V107WPROD > Connection Keep-Alive > Proxy-Connection Keep-Alive > Content-Length 2678 > Age 1299 > Date Tue, 11 Jan 2011 14:41:36 GMT > Content-Type image/jpeg > Etag "a76-49992a41d1b5d" > Server Apache/2.2.15 (Unix) mod_fcgid/2.3.5 > Last-Modified Tue, 11 Jan 2011 14:10:04 GMT > X-Cacheable 1800.004 > X-Varnish 832397538 826879249 > X-Varnish-Cache HIT Varnish (21056) > > > When TTL expires (1800s) the object is no longer been cached: > > Via 1.1 varnish, 1.1 V107WPROD > Connection Keep-Alive > Proxy-Connection Keep-Alive > Content-Length 2678 > Age 0 > Date Tue, 11 Jan 2011 14:58:03 GMT > Content-Type image/jpeg > Etag "a76-499933333a882" > Server Apache/2.2.15 (Unix) mod_fcgid/2.3.5 > Last-Modified Tue, 11 Jan 2011 14:50:05 GMT > X-Cacheable 1800.002 > X-Varnish 836621253 > X-Varnish-Cache MISS Varnish > > > Unless the service is restarted. Do you have any idea what could be happen? > Do you see something abnormal in headers? My Varnish version is "varnishd > (varnish-2.1.3 SVN )". > > Thank you very much, > Roberto. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lucas.brasilino at gmail.com Tue Jan 11 16:48:47 2011 From: lucas.brasilino at gmail.com (Lucas Brasilino) Date: Tue, 11 Jan 2011 13:48:47 -0300 Subject: Trouble understanding Varnishlog In-Reply-To: References: Message-ID: Hi > Can someone explain why the object is not being returned as a cached object? Because your client is performing a conditional request, revalidating the object already cached in your machine via the If-None-Match/Etag headers. Your client is issuing: > 15 RxHeader c If-None-Match: "4ee85970d12afd8992d3dc1651f07b9d" It means: please send me the object if none match with this Etag value. Varnish responds: > 15 TxStatus c 304 > 15 TxResponse c Not Modified [...] > 15 TxHeader c ETag: "4ee85970d12afd8992d3dc1651f07b9d" It means: the object you have is the same one I have. Transferring an object you already have is a waste of resources :) regards Lucas Brasilino > I'm confused by the fact that it says there is a 'hit', but the > X-Varnish header only has 1 field. > I read that if the Set-Cookie header is sent, then varnish does not cache > the object, but I'm pretty sure that's not being sent. Is there another rule > I'm missing? Thanks. > Here is an excerpt of the log: > 15 SessionOpen c 10.0.19.23 59326 :80 > 15 ReqStart c 10.0.19.23 59326 1001168302 > 15 RxRequest c GET > 15 RxURL c /items?xzy=true > 15 RxProtocol c HTTP/1.1 > 15 RxHeader c Host: XYZ > 15 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X > 10.6; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 GTB7.1 > 15 RxHeader c Accept: > text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > 15 RxHeader c Accept-Language: en-us,en;q=0.5 > 15 RxHeader c Accept-Encoding: gzip,deflate > 15 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > 15 RxHeader c Keep-Alive: 115 > 15 RxHeader c Connection: keep-alive > 15 RxHeader
c Cookie: __qca=P0-707762352-1283409801091; > __utma=182775871.333345873.1284063320.1293103561.1293171223.24; > __utmz=182775871.1284063320.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); > km_lv=x; km_ai=T7R0VpZshcfDBJiF4n8Edt1vrrI; km_uq=; __utmv=182775871.p > 15 RxHeader c If-None-Match: "4ee85970d12afd8992d3dc1651f07b9d" > 15 VCL_call c recv > 15 VCL_return c lookup > 15 VCL_call c hash > 15 VCL_return c hash > 15 Hit c 1001168004 > 15 VCL_call c hit > 15 VCL_return c deliver > 15 VCL_call c deliver > 15 VCL_return c deliver > 15 TxProtocol c HTTP/1.1 > 15 TxStatus c 304 > 15 TxResponse c Not Modified > 15 TxHeader c Date: Tue, 28 Dec 2010 23:19:53 GMT > 15 TxHeader c Via: 1.1 varnish > 15 TxHeader c X-Varnish: 1001168302 > 15 TxHeader c Cache-Control: private, max-age=0, must-revalidate > 15 TxHeader c ETag: "4ee85970d12afd8992d3dc1651f07b9d" > 15 TxHeader c Connection: keep-alive > 15 Length c 0 > 15 ReqEnd c 1001168302 1293578393.040635109 1293578393.040782928 > 0.000112057 0.000097990 0.000049829 > 15 Debug
c "herding" > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From ask at develooper.com Tue Jan 11 19:36:36 2011 From: ask at develooper.com (=?iso-8859-1?Q?Ask_Bj=F8rn_Hansen?=) Date: Tue, 11 Jan 2011 11:36:36 -0800 Subject: Connections dropped under load In-Reply-To: <4D2C4A11.2080603@gmail.com> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> <20110111074814.GA2266@freud> <4D2C1E2C.9010305@gmail.com> <20110111095112.GC2266@freud> <4D2C327E.7060705@gmail.com> <20110111115427.GA4538@freud> <4D2C4A11.2080603@gmail.com> Message-ID: <06F5C2E0-E6D5-4B6B-AC69-4060BE228986@develooper.com> On Jan 11, 2011, at 4:16, George Georgovassilis wrote: > So I guess that, even if I get a rather big virtual box with plenty of RAM and CPU, I still won't be able to handle all the requests if the sysadmin limited the thread count. I take that there is also no way to handle all incoming connections as a single point of entry if threads are limited. How can you limit the number of threads? And why would you? I run Varnish under both Xen and KVM (and there's an instance on 32-bit linux!) - none of them have trouble with our small load (<1000 requests per second). (And all very standard configurations). - ask From roberto.fernandezcrisial at gmail.com Tue Jan 11 20:10:31 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Tue, 11 Jan 2011 17:10:31 -0300 Subject: Object no cached when TTL expires In-Reply-To: References: Message-ID: I've unset beresp.http.Etag but still doesn't work.
After TTL expires, the Age header shows "0": Via 1.1 varnish, 1.1 V107WPROD Connection Keep-Alive Proxy-Connection Keep-Alive Content-Length 2678 Age 3 Date Tue, 11 Jan 2011 18:06:04 GMT Content-Type image/jpeg Server Apache/2.2.15 (Unix) mod_fcgid/2.3.5 Last-Modified Tue, 11 Jan 2011 17:51:29 GMT X-Cacheable 1800.002 X-Varnish 1806792167 1806789988 X-Varnish-Cache HIT Varnish (12) Via 1.1 varnish, 1.1 V107WPROD Connection Keep-Alive Proxy-Connection Keep-Alive Content-Length 2678 Age 525 Date Tue, 11 Jan 2011 18:14:46 GMT Content-Type image/jpeg Server Apache/2.2.15 (Unix) mod_fcgid/2.3.5 Last-Modified Tue, 11 Jan 2011 17:51:29 GMT X-Cacheable 1800.002 X-Varnish 1808417729 1806789988 X-Varnish-Cache HIT Varnish (5528) Via 1.1 varnish, 1.1 V107WPROD Connection Keep-Alive Proxy-Connection Keep-Alive Content-Length 2678 Age 1462 Date Tue, 11 Jan 2011 18:30:23 GMT Content-Type image/jpeg Server Apache/2.2.15 (Unix) mod_fcgid/2.3.5 Last-Modified Tue, 11 Jan 2011 17:51:29 GMT X-Cacheable 1800.002 X-Varnish 1811285962 1806789988 X-Varnish-Cache HIT Varnish (15342) after the TTL of 1800s: Via 1.1 varnish, 1.1 V107WPROD Connection Keep-Alive Proxy-Connection Keep-Alive Content-Length 2678 Age 0 Date Tue, 11 Jan 2011 19:22:07 GMT Content-Type image/jpeg Server Apache/2.2.15 (Unix) mod_fcgid/2.3.5 Last-Modified Tue, 11 Jan 2011 19:10:04 GMT X-Cacheable 1800.007 X-Varnish 1821237358 X-Varnish-Cache MISS Varnish Any ideas? Thank you! Roberto. 2011/1/11 pablort > > Notice how Etag changes: > > > First and second request: > Etag "a76-49992a41d1b5d" > > Last request: > Etag "a76-499933333a882" > > > Try (in vcl_fetch): > > unset beresp.http.Etag; > > or change Apache to use only Size and MTime to create the Etag: > > FileETag MTime Size > Cheers, > > 2011/1/11 Roberto O. Fernández Crisial > > >> Hi guys, >> >> I have a question about varnish vcl, can you help me?
Once the varnish >> service is started the object (in this case an image) is cached: >> >> Via 1.1 varnish, 1.1 V107WPROD >> Connection Keep-Alive >> Proxy-Connection Keep-Alive >> Content-Length 2678 >> Age 4 >> Date Tue, 11 Jan 2011 14:20:01 GMT >> Content-Type image/jpeg >> Etag "a76-49992a41d1b5d" >> Server Apache/2.2.15 (Unix) mod_fcgid/2.3.5 >> Last-Modified Tue, 11 Jan 2011 14:10:04 GMT >> X-Cacheable 1800.004 >> X-Varnish 826882404 826879249 >> X-Varnish-Cache HIT Varnish (9) >> >> >> After some minutes later: >> >> Via 1.1 varnish, 1.1 V107WPROD >> Connection Keep-Alive >> Proxy-Connection Keep-Alive >> Content-Length 2678 >> Age 1299 >> Date Tue, 11 Jan 2011 14:41:36 GMT >> Content-Type image/jpeg >> Etag "a76-49992a41d1b5d" >> Server Apache/2.2.15 (Unix) mod_fcgid/2.3.5 >> Last-Modified Tue, 11 Jan 2011 14:10:04 GMT >> X-Cacheable 1800.004 >> X-Varnish 832397538 826879249 >> X-Varnish-Cache HIT Varnish (21056) >> >> >> When TTL expires (1800s) the object is no longer been cached: >> >> Via 1.1 varnish, 1.1 V107WPROD >> Connection Keep-Alive >> Proxy-Connection Keep-Alive >> Content-Length 2678 >> Age 0 >> Date Tue, 11 Jan 2011 14:58:03 GMT >> Content-Type image/jpeg >> Etag "a76-499933333a882" >> Server Apache/2.2.15 (Unix) mod_fcgid/2.3.5 >> Last-Modified Tue, 11 Jan 2011 14:50:05 GMT >> X-Cacheable 1800.002 >> X-Varnish 836621253 >> X-Varnish-Cache MISS Varnish >> >> >> Unless the service is restarted. Do you have any idea what could be >> happen? Do you see something abnormal in headers? My Varnish version is >> "varnishd (varnish-2.1.3 SVN )". >> >> Thank you very much, >> Roberto. >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From perbu at varnish-software.com Tue Jan 11 22:02:12 2011 From: perbu at varnish-software.com (Per Buer) Date: Tue, 11 Jan 2011 23:02:12 +0100 Subject: Object no cached when TTL expires In-Reply-To: References: Message-ID: 2011/1/11 Roberto O. Fernández Crisial : > I've unset beresp.http.Etag but still doesn't work. After TTL expires, the > Age header shows "0": What do you expect to happen after the TTL expires? -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From phk at phk.freebsd.dk Tue Jan 11 22:22:04 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 11 Jan 2011 22:22:04 +0000 Subject: real-life ESI files Message-ID: <72747.1294784524@critter.freebsd.dk> Hi Guys, I'm hacking up the ESI code to support gzip compression now and since this is a pretty extensive rework of the code, I would like to build a collection of real-life ESI files to test the parsing code on. If you can spare a couple from your site, please send them by private mail to me, with a subject of "ESI FILE" so my mail-filter can see them. Thanks in advance, Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
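[For readers who have not used ESI with Varnish: in the Varnish 2.x series, ESI processing of the kind being reworked here is switched on per object from vcl_fetch. A minimal, hypothetical sketch (the URL test is only an example) of how a page containing <esi:include> tags gets marked for parsing:]

```vcl
sub vcl_fetch {
    # Hypothetical condition: parse this response for ESI
    # directives (e.g. <esi:include src="..."/>) before caching.
    if (req.url == "/index.html") {
        esi;
    }
}
```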
From g.georgovassilis at gmail.com Tue Jan 11 23:33:14 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Wed, 12 Jan 2011 00:33:14 +0100 Subject: Connections dropped under load In-Reply-To: <06F5C2E0-E6D5-4B6B-AC69-4060BE228986@develooper.com> References: <4D248C3F.2010401@gmail.com> <20110106090003.GA2106@freud> <7F0AA702B8A85A4A967C4C8EBAD6902CC69AEE@TMG-EVS02.torstar.net> <4D2B71E6.4050805@gmail.com> <790C42AB-E9F1-41F8-AA62-862E801D9FA7@topscms.com> <4D2B9D7E.3060209@gmail.com> <20110111074814.GA2266@freud> <4D2C1E2C.9010305@gmail.com> <20110111095112.GC2266@freud> <4D2C327E.7060705@gmail.com> <20110111115427.GA4538@freud> <4D2C4A11.2080603@gmail.com> <06F5C2E0-E6D5-4B6B-AC69-4060BE228986@develooper.com> Message-ID: <4D2CE8BA.70003@gmail.com> On 11.01.2011 20:36, Ask Bj?rn Hansen wrote: > > How can you limit the number of threads? And why would you? > > I run Varnish under both Xen and KVM (and there's an instance on 32-bit linux!) - none of them have trouble with our small load (<1000 requests per second). (And all very standard configurations). > > > > - ask Hi Ask, Some background: The hosting plans that use paravirtualization (such as Virtuozzo) artificially limit some logical resources such as sockets, pages, virtual memory address space (which you cannot cheat your way out of by just creating more swap space) and number of processes. The latter is called "numproc" and has fairly low values. The dev plan for my project has several such virtual instances, which however don't allow more than 150 threads for more than 5 minutes. 
This is quite annoying, but a real business model and I have to accept that (just as I have to accept that varnish is a thread-fest) - even more annoying because both the memory and CPU reserves would otherwise be more than adequate to serve the application: during the stress test I mentioned earlier Varnish is taking up a negligible 4% CPU load while doing some pretty elaborate pattern/cookie/locale matching and hashing (yeah, I have a big VCL). I thank you all for your valuable input. The earlier posts delineate an accurate picture of where the limitations of using varnish in a constrained environment are. Several people use nginx/varnish cascades (apparently with keep-alive/pipelining) which is the next step I will be investigating. Thus - for my part - my questions have been answered and I'd like to close this topic. Best regards, G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmoniey at gmail.com Wed Jan 12 07:11:08 2011 From: gmoniey at gmail.com (.) Date: Tue, 11 Jan 2011 23:11:08 -0800 Subject: Trouble understanding Varnishlog In-Reply-To: References: Message-ID: Hmm.. So essentially the browser cache is preventing Varnish from returning the result. This definitely makes sense, but I could have sworn that the X-Varnish header does not show up when I attempt the same request from a different browser. It seems regardless of the scenario, even if I clear my browser cache on the current page, I cannot get the X-Varnish header to show 2 values. I will keep digging, and see if I can come up with a test case. On Tue, Jan 11, 2011 at 8:48 AM, Lucas Brasilino wrote: > Hi > > > Can someone explain why the object is not being returned as a cached > object? > > Because your client is performing a conditional request and so > revalidating using > If-None-Match/Etag headers the already cached object in your machine. 
> > Your client is issuing: > > > 15 RxHeader c If-None-Match: "4ee85970d12afd8992d3dc1651f07b9d" > > It means: please send me the object if none match with this Etag value. > > Varnish responds: > > > 15 TxStatus c 304 > > 15 TxResponse c Not Modified > [...] > > 15 TxHeader c ETag: "4ee85970d12afd8992d3dc1651f07b9d" > > It means: the object you have is the same one I have. > > Transfering an object you already have is a waste of resources :) > > regards > Lucas Brasilino > > > > > I'm confused around the fact that it says there is a 'hit', but the > > X-Varnish header only has 1 field. > > I read that if the Set-Cookie header is sent, then varnish does not cache > > the object, but I'm pretty sure thats not being sent. Is there another > rule > > I'm missing? Thanks. > > Here is an excerpt of the log: > > 15 SessionOpen c 10.0.19.23 59326 :80 > > 15 ReqStart c 10.0.19.23 59326 1001168302 > > 15 RxRequest c GET > > 15 RxURL c /items?xzy=true > > 15 RxProtocol c HTTP/1.1 > > 15 RxHeader c Host: XYZ > > 15 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X > > 10.6; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 GTB7.1 > > 15 RxHeader c Accept: > > text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > > 15 RxHeader c Accept-Language: en-us,en;q=0.5 > > 15 RxHeader c Accept-Encoding: gzip,deflate > > 15 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > > 15 RxHeader c Keep-Alive: 115 > > 15 RxHeader c Connection: keep-alive > > 15 RxHeader c Cookie: __qca=P0-707762352-1283409801091; > > __utma=182775871.333345873.1284063320.1293103561.1293171223.24; > > > __utmz=182775871.1284063320.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); > > km_lv=x; km_ai=T7R0VpZshcfDBJiF4n8Edt1vrrI; km_uq=; __utmv=182775871.p > > 15 RxHeader c If-None-Match: "4ee85970d12afd8992d3dc1651f07b9d" > > 15 VCL_call c recv > > 15 VCL_return c lookup > > 15 VCL_call c hash > > 15 VCL_return c hash > > 15 Hit c 1001168004 > > 15 VCL_call c hit > > 15 
VCL_return c deliver > > 15 VCL_call c deliver > > 15 VCL_return c deliver > > 15 TxProtocol c HTTP/1.1 > > 15 TxStatus c 304 > > 15 TxResponse c Not Modified > > 15 TxHeader c Date: Tue, 28 Dec 2010 23:19:53 GMT > > 15 TxHeader c Via: 1.1 varnish > > 15 TxHeader c X-Varnish: 1001168302 > > 15 TxHeader c Cache-Control: private, max-age=0, must-revalidate > > 15 TxHeader c ETag: "4ee85970d12afd8992d3dc1651f07b9d" > > 15 TxHeader c Connection: keep-alive > > 15 Length c 0 > > 15 ReqEnd c 1001168302 1293578393.040635109 1293578393.040782928 > > 0.000112057 0.000097990 0.000049829 > > 15 Debug c "herding" > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fhelmschrott at gmail.com Wed Jan 12 08:59:13 2011 From: fhelmschrott at gmail.com (Frank Helmschrott) Date: Wed, 12 Jan 2011 09:59:13 +0100 Subject: vHosts (name or ip based) with varnish Message-ID: Hi, via google i came to this post on varnish-software.com regarding vhost solutions for varnish: http://www.varnish-software.com/blog/virtual-hosts-varnish I'd like to have a similar solution with an 'as small as possible' master.vcl that does the if/elsif part. I'd like to have mostly different configurations for each host. Does anyone know where to place this if/elsif part? Varnish expects at least acl, sub, backend or director in the vcl and i'm quite unsure how to keep this vcl as simple/small as possible and load most of the staff through the other vcls. Thanks! 
-- Frank From mgervais at agaetis.fr Wed Jan 12 10:34:37 2011 From: mgervais at agaetis.fr (=?UTF-8?Q?Micka=C3=ABl_GERVAIS?=) Date: Wed, 12 Jan 2011 11:34:37 +0100 Subject: Varnish and time out on backend =?UTF-8?Q?=28first=5Fbyte=5Ftimeout=29=2E?= Message-ID: <6e98c6388f7efe33b5a68796dda1408d@localhost> Hi, I've configured my backend as follows: .host = "xxxxxxxxxx"; .port = "80"; .connect_timeout = 1s; .first_byte_timeout = 10s; .between_bytes_timeout = 2s; If a timeout occurs (first_byte_timeout reached) the function vcl_error is called. I'd like to use saint mode to retrieve the response from the cache, but saint mode is only available on beresp. Is there a way to tell varnish to use a dirty object from the cache? Maybe this is not the correct way to handle this kind of error. Thanks. ::::::::::::::::::::::::::::::::::::::::::::::: MICKAËL GERVAIS Agaetis 10 allée Evariste Galois 63 000 Clermont-Ferrand Courriel : mgervais at agaetis.fr Téléphone : 04 73 44 56 51 Portable : 06 82 35 52 82 Site : http://www.agaetis.fr [1] ::::::::::::::::::::::::::::::::::::::::::::::: Links: ------ [1] http://www.agaetis.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From egimenez at vectorsf.com Wed Jan 12 11:16:34 2011 From: egimenez at vectorsf.com (Eduardo Gimenez Ruiz) Date: Wed, 12 Jan 2011 12:16:34 +0100 Subject: Exclude a URL Message-ID: <4D2D8D92.8060909@vectorsf.com> Hi all on the list: First, I would like to say that my English is not very good, sorry for that. I will try to explain my question so that someone can help me. Second, sorry, my question is very simple and basic... I have read all the manuals and how-tos but have not solved my problem. I'm a new user of Varnish and I'm having trouble with an a priori easy thing. I use Varnish version 1.1.2 with Pressflow (a kind of Drupal) and I'm trying to exclude a URL (or a path) from the cache. 
I use this configuration in my default.vcl: sub vcl_recv { if (req.url ~ "^/portal/ajax_user_bar") { //return (pass); //pass; unset req.http.cookie; } [other code] } And in all case I see this result with "varnishtop -b -i TxURL" is: 1.00 TxURL /portal/ajax_user_bar/login?0.9301228949334472 The number after the "login?" are a ramdom number from the code and the "ajax_user_bar" is a module in pressflow not a directory of the OS. I tried to use a "return (pass);", "pass;" and "unset req.http.cookie;" (like the example in a tutorial: http://www.varnish-cache.org/docs/2.1/tutorial/vcl.html) but I can't exclude the URL. What is my problem?,What change I make in my code for exclude this URL? Thank for all help, advice or documentation that you can give me. -- Eduardo Gim?nez Ruiz -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucas.brasilino at gmail.com Wed Jan 12 11:32:53 2011 From: lucas.brasilino at gmail.com (Lucas Brasilino) Date: Wed, 12 Jan 2011 08:32:53 -0300 Subject: Trouble understanding Varnishlog In-Reply-To: References: Message-ID: Hi > It seems regardless of the scenario, even if I clear my browser cache on the > current page, I cannot get the X-Varnish header to show 2 values. I will > keep digging, and see if I can come up with a test case. A test case should be nice. I recommend you using 'wget', since it uses a basic set of headers (Host, User-Agent and Connection, I think) and you can add any other header you'd like. regards Lucas Brasilino From roberto.fernandezcrisial at gmail.com Wed Jan 12 12:44:37 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Wed, 12 Jan 2011 09:44:37 -0300 Subject: Object no cached when TTL expires In-Reply-To: References: Message-ID: I expect Varnish request the object from webserver, show HIT and Age header increase value. 
This is the sequence right now: 1) user ask for image 2) varnish receive request 3) varnish ask the webserver 4) varnish cache the object 5) varnish response user request 6) varnish response users requests for 1800s (TTL) 7) when TTL expires varnish should ask again webserver for the object and cache it (but it doesn't) 8) varnish response MISS for the object (Age header always shows value "0") and ask webserver for every user request :( What should I do? Any experience handle problem like that? Thank you, Roberto. On Tue, Jan 11, 2011 at 7:02 PM, Per Buer wrote: > 2011/1/11 Roberto O. Fern?ndez Crisial >: > > > I've unset beresp.http.Etag but still doesn't work. After TTL expires, > the > > Age header shows "0": > > What do you expect to happend after the TTL expires? > > > > -- > Per Buer, Varnish Software > Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer > Varnish makes websites fly! > Want to learn more about Varnish? > http://www.varnish-software.com/whitepapers > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristian at varnish-software.com Wed Jan 12 12:59:21 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Wed, 12 Jan 2011 13:59:21 +0100 Subject: Object no cached when TTL expires In-Reply-To: References: Message-ID: <20110112125921.GC2952@freud> On Wed, Jan 12, 2011 at 09:44:37AM -0300, Roberto O. Fern?ndez Crisial wrote: > I expect Varnish request the object from webserver, show HIT and Age header > increase value. 
> > This is the sequence right now: > > 1) user ask for image > 2) varnish receive request > 3) varnish ask the webserver > 4) varnish cache the object > 5) varnish response user request > 6) varnish response users requests for 1800s (TTL) > 7) when TTL expires varnish should ask again webserver for the object and > cache it (but it doesn't) > 8) varnish response MISS for the object (Age header always shows value "0") > and ask webserver for every user request :( Aha, it sounded like you were complaining that you got a (single) miss after the object expired. If you are getting multiple misses, it's a different story. If you attach varnishlog -o output of both the first miss that gets cached, a cache hit and the two first miss after it is expired, I'm pretty sure we can figure this out. Oh, and your VCL, and while we're at it, we might as well throw in varnishstat -1. - Kristian From kristian at varnish-software.com Wed Jan 12 13:14:24 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Wed, 12 Jan 2011 14:14:24 +0100 Subject: vHosts (name or ip based) with varnish In-Reply-To: References: Message-ID: <20110112131424.GD2952@freud> Hi, On Wed, Jan 12, 2011 at 09:59:13AM +0100, Frank Helmschrott wrote: > via google i came to this post on varnish-software.com regarding vhost > solutions for varnish: > http://www.varnish-software.com/blog/virtual-hosts-varnish > > I'd like to have a similar solution with an 'as small as possible' > master.vcl that does the if/elsif part. I'd like to have mostly > different configurations for each host. There are a few different approaches. The basic issue is: Varnish has several entry points to VCL (vcl_recv, vcl_fetch, etc), and you have to either have one VCL per function per site, or have the if(req.http.host ~ ...) in each of the VCLs. If at all possible, I'd go for the latter. 
Ie:

# master.vcl
include "site1.vcl";
include "site2.vcl";

# site1.vcl
backend site1backend { .host = "foo1"; }

sub vcl_recv {
    if (req.http.host ~ "site1") {
        set req.backend = site1backend;
        (... more site1 stuff ...)
    }
}

sub vcl_deliver {
    if (req.http.host ~ "site1") {
        set resp.http.X-site = "Served by site 1";
    }
}

# site2.vcl
backend site2backend { .host = "foo2"; }

sub vcl_recv {
    if (req.http.host ~ "site2") {
        set req.backend = site2backend;
        (... more site2 stuff ...)
    }
}

sub vcl_deliver {
    if (req.http.host ~ "site2") {
        set resp.http.X-site = "Served by site 2";
    }
}

It adds a bit of extra indentation in each file, but the benefit is that you get full use of VCL in all the included files - for better or worse. The alternative is:

# master.vcl
backend site1 { ... }
backend site2 { ... }

sub vcl_recv {
    if (req.http.host ~ "site1") {
        set req.backend = site1;
        include "site1recv.vcl";
    } elsif (req.http.host ~ "site2") {
        set req.backend = site2;
        include "site2recv.vcl";
    }
}

sub vcl_deliver {
    if (req.http.host ~ "site1") {
        include "site1deliver.vcl";
    } elsif (req.http.host ~ "site2") {
        include "site2deliver.vcl";
    }
}

(followed by logic for vcl_recv in site1recv.vcl, site2recv.vcl and so forth). The benefit here is that all the "which site is this" control is contained in the master file, but you also get a large amount of _different_ files and less direct control in the included files... 
But for further reading, you really must go through the Varnish tutorial at http://www.varnish-cache.org/docs/2.1/ - Kristian PS: I didn't even pretend to proofread the VCL - consider it pseudo-code From kristian at varnish-software.com Wed Jan 12 13:24:52 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Wed, 12 Jan 2011 14:24:52 +0100 Subject: Exclude a URL In-Reply-To: <4D2D8D92.8060909@vectorsf.com> References: <4D2D8D92.8060909@vectorsf.com> Message-ID: <20110112132452.GE2952@freud> On Wed, Jan 12, 2011 at 12:16:34PM +0100, Eduardo Gimenez Ruiz wrote: > In the first time I will like to say that my English is not very > good, sorry for that. I will try to explain my question for is some > person can help me. Don't worry about your English - if you do your best, we are happy :) > I use a Varnish Version 1.1.2 with pressflow (a kind of drupal) and > i try to exclude a URL (or a path) from the cache. I strongly advise you to upgrade. Version 1.1.2 is very old and all the documentation on the wiki and /docs/ is for Varnish 2.1 (Or some for Varnish 2.0). Varnish 2.1 also fixes many known bugs. > I use this configuration in my default.vcl: > > sub vcl_recv { > if (req.url ~ "^/portal/ajax_user_bar") { > //return (pass); > //pass; > unset req.http.cookie; > } > [other code] > } > > And in all case I see this result with "varnishtop -b -i TxURL" is: > > 1.00 TxURL /portal/ajax_user_bar/login?0.9301228949334472 > > The number after the "login?" are a ramdom number from the code and > the "ajax_user_bar" is a module in pressflow not a directory of the > OS. It looks OK. You are removing a cookie, and the URL will not change because of that. The number 1 also means that only 1 backend request was seen for that URL for quite some time. Look at varnishlog -o -b to see if the Cookie header is removed or not. Varnish can also change the url, if that is what you want. 
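[A URL rewrite of the kind mentioned — dropping the random cache-buster from the query string so that every request hashes to the same cached object — could be sketched like this in Varnish 2.x VCL; the path pattern is hypothetical:]

```vcl
sub vcl_recv {
    if (req.url ~ "^/portal/ajax_user_bar") {
        # Strip the random "?0.93012..." suffix so all such
        # requests look up (and store) a single object.
        set req.url = regsub(req.url, "\?.*$", "");
    }
}
```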
> I tried to use a "return (pass);", "pass;" and "unset > req.http.cookie;" (like the example in a tutorial: > http://www.varnish-cache.org/docs/2.1/tutorial/vcl.html) but I can't > exclude the URL. Much of this will not work on Varnish 1.1.2, because the examples and documentation is written for Varnish 2.0 or newer. It will be much easier to help you if you upgrade. There is a package repository at http://repo.varnish-cache.org for Debian, Ubuntu and CentOS/RHEL packages. Hope this helps :) - Kristian From kristian at varnish-software.com Wed Jan 12 13:31:58 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Wed, 12 Jan 2011 14:31:58 +0100 Subject: Varnish and time out on backend (first_byte_timeout). In-Reply-To: <6e98c6388f7efe33b5a68796dda1408d@localhost> References: <6e98c6388f7efe33b5a68796dda1408d@localhost> Message-ID: <20110112133158.GF2952@freud> On Wed, Jan 12, 2011 at 11:34:37AM +0100, Micka?l GERVAIS wrote: > If a timeout occurs (first_byte_timeout reached) the function vcl_error is > called, I'd like to use the saint mode to retreive the response from the > cache, but saint mode is only avaliable on beresp. > > Is there a way to tell varnish use a dirty object from the cache? Maybe is > not the correct way to handle this kind of error. You are correct - that is a weakness. I have a nasty hack, though. 1. Declare a second, bogus backend which will always be sick. 2. In vcl_error if restarts is 0, set a magic marker and restart. 3. Look for the magic marker in vcl_recv - if it's present, tell Varnish to use the bogus backend. Grace will then kick in because that backend is marked as sick. 4. If the object exists in cache (graced) - it will be used. Otherwise, you will hit vcl_error again. (Thus the check of req.restarts in step 2). It's a nasty, yet brilliant hack, if I might say so myself ;) It adds latency and doesn't utilize saintmode, but it gets the job done in a way that will also make little children cry. 
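[The four steps of the hack above could be sketched in Varnish 2.1 VCL roughly as follows. The bogus backend address, probe settings and marker header name are all hypothetical, and the snippet is an untested illustration, not a tested recipe:]

```vcl
# Step 1: a backend that is always sick - its health probe points
# at a port where nothing listens, so checks always fail.
backend bogus {
    .host = "127.0.0.1";
    .port = "9";             # hypothetical: nothing answers here
    .probe = {
        .url = "/";
        .interval = 1s;
    }
}

sub vcl_recv {
    # Step 3: the magic marker routes the restarted request to the
    # sick backend, so grace kicks in and a stale object is served.
    if (req.http.X-Serve-Stale) {
        set req.backend = bogus;
    }
}

sub vcl_error {
    # Step 2: on the first failure, set the marker and restart.
    if (req.restarts == 0) {
        set req.http.X-Serve-Stale = "1";
        return (restart);
    }
    # Step 4: no graced copy existed, so we are here a second
    # time - fall through and deliver the error page.
}
```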
- Kristian From fla_torres at yahoo.com.br Wed Jan 12 14:29:56 2011 From: fla_torres at yahoo.com.br (Flavio Torres) Date: Wed, 12 Jan 2011 12:29:56 -0200 Subject: Trouble understanding Varnishlog In-Reply-To: References: <4D21F4EB.3010403@yahoo.com.br> <4D231102.4060407@yahoo.com.br> Message-ID: <4D2DBAE4.9010101@yahoo.com.br> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 01/04/2011 06:14 PM, . wrote: > Thanks for your reply. I'm curious as to why you suggested I remove > the X-Varnish header? I guess my confusion is why the header doesn't > include 2 numbers, even though it is a cache HIT, and the HIT > counter is being incremented. > Hello, It should be increased, are you testing with a simple file (without cookies and respecting the Cache-Control header) ? # first request $ curl -I -H "Host: www.flaviotorres.com.br" http://www.flaviotorres.com.br HTTP/1.1 200 OK Last-Modified: Fri, 01 Oct 2010 11:25:07 GMT X-Mod-Pagespeed: 0.9.1.1-171 Cache-Control: max-age=60 Expires: Wed, 12 Jan 2011 12:11:55 GMT Vary: Accept-Encoding Content-Type: text/html; charset=UTF-8 VID: 01 Content-Length: 3 Date: Wed, 12 Jan 2011 12:10:55 GMT X-Varnish: 1268025441 Connection: keep-alive X-Cache: MISS X-Cache-Hits: 0 X-Age: 0 # second request $ curl -I -H "Host: www.flaviotorres.com.br" http://www.flaviotorres.com.br HTTP/1.1 200 OK Last-Modified: Fri, 01 Oct 2010 11:25:07 GMT X-Mod-Pagespeed: 0.9.1.1-171 Cache-Control: max-age=60 Expires: Wed, 12 Jan 2011 12:11:55 GMT Vary: Accept-Encoding Content-Type: text/html; charset=UTF-8 VID: 01 Content-Length: 3 Date: Wed, 12 Jan 2011 12:10:59 GMT X-Varnish: 1268025442 1268025441 Connection: keep-alive X-Cache: HIT X-Cache-Hits: 1 X-Age: 4 # third request $ curl -I -H "Host: www.flaviotorres.com.br" http://www.flaviotorres.com.br HTTP/1.1 200 OK Last-Modified: Fri, 01 Oct 2010 11:25:07 GMT X-Mod-Pagespeed: 0.9.1.1-171 Cache-Control: max-age=60 Expires: Wed, 12 Jan 2011 12:11:55 GMT Vary: Accept-Encoding Content-Type: text/html; 
charset=UTF-8 VID: 01 Content-Length: 3 Date: Wed, 12 Jan 2011 12:11:02 GMT X-Varnish: 1268025443 1268025441 Connection: keep-alive X-Cache: HIT X-Cache-Hits: 2 X-Age: 7 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk0tuuAACgkQNRQApncg297klgCfQuU9I2w/BfBIvFhddz6P9MIe NF4AoK5BA/ovvvsUnxcFA0ZCNLWD3CH5 =hyIE -----END PGP SIGNATURE----- From roberto.fernandezcrisial at gmail.com Wed Jan 12 14:34:54 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Wed, 12 Jan 2011 11:34:54 -0300 Subject: Object no cached when TTL expires In-Reply-To: <20110112125921.GC2952@freud> References: <20110112125921.GC2952@freud> Message-ID: Kristian, Here is the VCL and varnishstats requested. Unfortunatelly the varnishlog runs so fast so I can't catch the HIT/MISS logs just for one request. # VCL backend be1 { .host = "X.X.X.X"; } acl purge { "localhost"; } sub vcl_recv { set req.grace = 60s; if (req.http.host == "domain.com") { set req.backend = be1; } if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return (lookup); } if (req.http.x-forwarded-for) { set req.http.X-Forwarded-For = req.http.X-Forwarded-For ", " client.ip; } else { set req.http.X-Forwarded-For = client.ip; } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { return (pipe); } if (req.http.Cookie ~ "^\s*$") { unset req.http.Cookie; } if (!req.url~ "\.php\?.*$" && !req.url~ "search" ) { set req.url = regsub(req.url, "\?.*$", ""); } if (req.request != "GET" && req.request != "HEAD") { return (pass); } if (req.url ~ "^(files|misc|sites|themes|modules|sfx)/" || req.url ~ "\.(txt|TXT|ico|ICO|css|CSS|png|PNG|jpg|JPG|gif|GIF|swf|SWF|flv|FLV|mp4|MP4|mp3|MP3|js|JS|xml|XML|jpeg|JPEG)$") { unset req.http.Cookie; remove 
req.http.Referer; remove req.http.User-Agent; remove req.http.Accept-Language; remove req.http.Accept-Charset; remove req.http.Accept; remove req.http.Cache-Control; } if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|flv|mp4)$") { remove req.http.Accept-Encoding; } elsif (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } else { remove req.http.Accept-Encoding; } if (req.request == "GET" && req.url ~ "\.(svg|swf|ico|mp3|mp4|m4a|ogg|mov|avi|wmv)$") { return (lookup); } if (req.request == "GET" && req.url ~ "\.(png|gif|jpg|jpeg)$") { return (lookup); } } return (lookup); } sub vcl_pipe { set bereq.http.connection = "close"; return (pipe); } sub vcl_pass { return (pass); } sub vcl_hash { if (req.http.Cookie) { set req.hash += req.http.Cookie; } set req.hash += req.url; if (req.http.host) { set req.hash += req.http.host; } else { set req.hash += server.ip; } return (hash); } sub vcl_hit { return (deliver); } sub vcl_miss { return (fetch); } sub vcl_fetch { unset beresp.http.Etag; if (!beresp.cacheable) { set beresp.http.X-Cacheable = "NO:Not Cacheable"; } elsif(req.http.Cookie ~"(UserID|_session)") { set beresp.http.X-Cacheable = "NO:Got Session"; return (pass); } elsif ( beresp.http.Cache-Control ~ "private") { set beresp.http.X-Cacheable = "NO:Cache-Control=private"; return (pass); } elsif ( beresp.ttl < 1s ) { set beresp.ttl = 5s; set beresp.grace = 5s; set beresp.http.X-Cacheable = "YES:FORCED"; } else { set beresp.http.X-Cacheable = "YES"; } if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|flv|swf|mp4|mp3|js|xml)$") { unset beresp.http.set-cookie; } if (beresp.http.Set-Cookie) { return (pass); } if (beresp.status == 404 || beresp.status == 503) { set beresp.ttl = 1s; } set beresp.http.X-Cacheable = beresp.ttl; return (deliver); } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Varnish-Cache = "HIT Varnish (" obj.hits ")"; } else { set resp.http.X-Varnish-Cache = "MISS Varnish"; } return 
(deliver); } sub vcl_error { set obj.http.Content-Type = "text/html; charset=utf-8"; synthetic {"
<html>
  <head>
    <title>"} obj.status " " obj.response {"</title>
  </head>
  <body>
    <h1>Error "} obj.status " " obj.response {"</h1>
    <p>"} obj.response {"</p>
    <h3>Guru Meditation:</h3>
    <p>XID: "} req.xid {"</p>
    <hr/>
    <p>Varnish cache server</p>
  </body>
</html>
"}; return (deliver); } ------ # varnishstats -1 client_conn 9514820 148.86 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 102670372 1606.26 Client requests received cache_hit 96847567 1515.16 Cache hits cache_hitpass 5716515 89.43 Cache hits for pass cache_miss 106255 1.66 Cache misses backend_conn 321210 5.03 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 4889 0.08 Backend conn. failures backend_reuse 5496709 85.99 Backend conn. reuses backend_toolate 39922 0.62 Backend conn. was closed backend_recycle 5536644 86.62 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused fetch_head 2267 0.04 Fetch head fetch_length 3542759 55.43 Fetch with Length fetch_chunked 6938 0.11 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 104849 1.64 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 2161065 33.81 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 5257 . N struct sess_mem n_sess 4645 . N struct sess n_object 105631 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 106147 . N struct objectcore n_objecthead 57775 . N struct objecthead n_smf 0 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 0 . N large free smf n_vbe_conn 8 . N struct vbe_conn n_wrk 521 . N worker threads n_wrk_create 8114 0.13 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 1323 0.02 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 232093 3.63 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 11 . N backends n_expired 521 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 1257422 . N LRU moved objects n_deathrow 0 . 
N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 24858882 388.91 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 9513431 148.84 Total Sessions s_req 102670372 1606.26 Total Requests s_pipe 27 0.00 Total pipe s_pass 5716519 89.43 Total pass s_fetch 5817878 91.02 Total fetch s_hdrbytes 25797680938 403599.57 Total header bytes s_bodybytes 274844776909 4299891.69 Total body bytes sess_closed 1141952 17.87 Session Closed sess_pipeline 62084 0.97 Session Pipeline sess_readahead 92195 1.44 Session Read Ahead sess_linger 102206982 1599.01 Session Linger sess_herd 84599464 1323.54 Session herd shm_records 3977250830 62223.30 SHM records shm_writes 227215458 3554.74 SHM writes shm_flushes 2 0.00 SHM flushes due to overflow shm_cont 720671 11.27 SHM MTX contention shm_cycles 1800 0.03 SHM cycles through buffer sm_nreq 0 0.00 allocator requests sm_nobj 0 . outstanding allocations sm_balloc 0 . bytes allocated sm_bfree 0 . bytes free sma_nreq 3759248 58.81 SMA allocator requests sma_nobj 209942 . SMA outstanding allocations sma_nbytes 1685800871 . SMA outstanding bytes sma_balloc 30860533837 . SMA bytes allocated sma_bfree 29174732966 . SMA bytes free sms_nreq 4896 0.08 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 2075904 . SMS bytes allocated sms_bfree 2075904 . SMS bytes freed backend_req 5817885 91.02 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 1 . 
N total active purges n_purge_add 1 0.00 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 0 0.00 N objects tested n_purge_re_test 0 0.00 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 102654743 1606.01 HCB Lookups without lock hcb_lock 57436 0.90 HCB Lookups with lock hcb_insert 57436 0.90 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) accept_fail 1261 0.02 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 63919 1.00 Client uptime Thank you, Roberto. 2011/1/12 Kristian Lyngstol > On Wed, Jan 12, 2011 at 09:44:37AM -0300, Roberto O. Fern?ndez Crisial > wrote: > > I expect Varnish request the object from webserver, show HIT and Age > header > > increase value. > > > > This is the sequence right now: > > > > 1) user ask for image > > 2) varnish receive request > > 3) varnish ask the webserver > > 4) varnish cache the object > > 5) varnish response user request > > 6) varnish response users requests for 1800s (TTL) > > 7) when TTL expires varnish should ask again webserver for the object and > > cache it (but it doesn't) > > 8) varnish response MISS for the object (Age header always shows value > "0") > > and ask webserver for every user request :( > > Aha, it sounded like you were complaining that you got a (single) miss > after the object expired. If you are getting multiple misses, it's a > different story. > > If you attach varnishlog -o output of both the first miss that gets cached, > a cache hit and the two first miss after it is expired, I'm pretty sure we > can figure this out. > > Oh, and your VCL, and while we're at it, we might as well throw in > varnishstat -1. > > - Kristian > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
From fla_torres at yahoo.com.br  Wed Jan 12 14:48:20 2011
From: fla_torres at yahoo.com.br (Flavio Torres)
Date: Wed, 12 Jan 2011 12:48:20 -0200
Subject: Exclude a URL
In-Reply-To: <4D2D8D92.8060909@vectorsf.com>
References: <4D2D8D92.8060909@vectorsf.com>
Message-ID: <4D2DBF34.10109@yahoo.com.br>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 01/12/2011 09:16 AM, Eduardo Gimenez Ruiz wrote:
> Hi all in the list:

Hola! ;)

> I use a Varnish Version 1.1.2 with pressflow (a kind of drupal)
> and i try to exclude a URL (or a path) from the cache.

Do you mean 2.1.2 ?

> I use this configuration in my default.vcl:
>
> sub vcl_recv { if (req.url ~ "^/portal/ajax_user_bar") { //return
> (pass); //pass; unset req.http.cookie; } [other code] }

try this [1]:

if (req.url ~ "ajax_user_bar") {
    return (pipe);
}

> And in all case I see this result with "varnishtop -b -i TxURL"
> is:
>
> 1.00 TxURL /portal/ajax_user_bar/login?0.9301228949334472
>
> The number after the "login?" are a random number from the code
> and the "ajax_user_bar" is a module in pressflow not a directory of
> the OS.

Don't worry, varnish will match "ajax_user_bar"

> Thank for all help, advice or documentation that you can give me.

[1] - http://www.varnish-cache.org/trac/wiki/VCL#vcl_pipe

hope this helps

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk0tvzIACgkQNRQApncg296V4ACg2DhT1311CEueyv4WNlosXJuq
wYEAnReacApkqcPTtds2yFD6nLdwOUcL
=EJHg
-----END PGP SIGNATURE-----
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From egimenez at vectorsf.com Wed Jan 12 15:09:13 2011 From: egimenez at vectorsf.com (Eduardo Gimenez Ruiz) Date: Wed, 12 Jan 2011 16:09:13 +0100 Subject: Exclude a URL In-Reply-To: <4D2DBF34.10109@yahoo.com.br> References: <4D2D8D92.8060909@vectorsf.com> <4D2DBF34.10109@yahoo.com.br> Message-ID: <4D2DC419.5090503@vectorsf.com> Hi and Hola for the Spanish speaking in the list :D Thank you so much for your answers..... now I try to update the version of my varnish. Thank to Flavio and Kristian for their answers, when will be finish update correctly my varnish I will try to follow your advice. On 01/12/2011 03:48 PM, Flavio Torres wrote: > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 01/12/2011 09:16 AM, Eduardo Gimenez Ruiz wrote: > > Hi all in the list: > > > > > Hola! ;) > > > > I use a Varnish Version 1.1.2 > with pressflow (a kind of drupal) > > > and i try to exclude a URL (or a path) from the cache. > > > > > Do you mean 2.1.2 ? > > > I use this configuration in > my default.vcl: > > > > > > sub vcl_recv { if (req.url ~ "^/portal/ajax_user_bar") { > //return > > > (pass); //pass; unset req.http.cookie; } [other code] } > > > > > try this [1]: > > if (req.url ~ "ajax_user_bar") { > return (pipe); > } > > > > > And in all case I see this > result with "varnishtop -b -i TxURL" > > > is: > > > > > > 1.00 TxURL /portal/ajax_user_bar/login?0.9301228949334472 > > > > > > The number after the "login?" are a ramdom number from the > code > > > and the "ajax_user_bar" is a module in pressflow not a > directory of > > > the OS. > > > > > Don't worry, varnish will match "ajax_user_bar" > > > > > > Thank for all help, advice or documentation that you can > give me. 
> > > [1] - http://www.varnish-cache.org/trac/wiki/VCL#vcl_pipe
> >
> > hope this helps
> >
> > -----BEGIN PGP SIGNATURE-----
> > Version: GnuPG v1.4.10 (GNU/Linux)
> > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
> >
> > iEYEARECAAYFAk0tvzIACgkQNRQApncg296V4ACg2DhT1311CEueyv4WNlosXJuq
> > wYEAnReacApkqcPTtds2yFD6nLdwOUcL
> > =EJHg
> > -----END PGP SIGNATURE-----

--
Eduardo Giménez Ruiz

Móvil: (+34) 615 90 60 98
Fax: (+34) 91 799 55 30
____________________________

Parque empresarial La Finca
Paseo del Club Deportivo, 1 - Bloque 11
28223 Pozuelo de Alarcón
Madrid
____________________________
http://www.vectorsf.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From mgervais at agaetis.fr  Wed Jan 12 15:08:13 2011
From: mgervais at agaetis.fr (Mickaël GERVAIS)
Date: Wed, 12 Jan 2011 16:08:13 +0100
Subject: Varnish and time out on backend (first_byte_timeout).
In-Reply-To: <20110112133158.GF2952@freud>
References: <6e98c6388f7efe33b5a68796dda1408d@localhost> <20110112133158.GF2952@freud>
Message-ID: 

Thanks a lot!! Apparently it works... (I've taken my earplugs...)

Here is my config if somebody needs it:

backend fake {
    .host = "xxxxxxxxx";
    .port = "80";
    .probe = {
        .url = "/fake.html";
        .interval = 60s;
        .timeout = 0.1s;
        .window = 1;
        .threshold = 1;
        .initial = 1;
    }
}

sub vcl_recv {
    [...]
    if ( req.http.magicmarker && req.http.magicmarker == "fake" ) {
        unset req.http.magicmarker;
        set req.backend = fake;
    } else {
        set req.backend = yyyy;
    }
    [...]
}

sub vcl_error {
    log "[Error ] ( ) " req.url "(Status: " obj.status ", Restarts: " req.restarts ")";
    if (obj.status == 503 && req.restarts < 5) {
        log "--- Restart url: " req.url "(Status: " obj.status ", Restarts: " req.restarts ")";
        set obj.http.X-Restarts = req.restarts;
        if ( req.restarts == 0 ){
            log "--- First restart add fake.";
            set req.http.magicmarker = "fake";
        }
        restart;
    }
}

On Wed, 12 Jan 2011 14:31:58 +0100, Kristian Lyngstol wrote:
> On Wed, Jan 12, 2011 at 11:34:37AM +0100, Mickaël GERVAIS wrote:
>> If a timeout occurs (first_byte_timeout reached) the function vcl_error is
>> called, I'd like to use the saint mode to retrieve the response from the
>> cache, but saint mode is only available on beresp.
>>
>> Is there a way to tell varnish use a dirty object from the cache? Maybe it is
>> not the correct way to handle this kind of error.
>
> You are correct - that is a weakness. I have a nasty hack, though.
>
> 1. Declare a second, bogus backend which will always be sick.
> 2. In vcl_error if restarts is 0, set a magic marker and restart.
> 3. Look for the magic marker in vcl_recv - if it's present, tell Varnish to
>    use the bogus backend. Grace will then kick in because that backend is
>    marked as sick.
> 4. If the object exists in cache (graced) - it will be used. Otherwise, you
>    will hit vcl_error again. (Thus the check of req.restarts in step 2.)
>
> It's a nasty, yet brilliant hack, if I might say so myself ;)
>
> It adds latency and doesn't utilize saintmode, but it gets the job done in
> a way that will also make little children cry.
>
> - Kristian

--
:::::::::::::::::::::::::::::::::::::::::::::::
MICKAËL GERVAIS
Agaetis
10 allée Evariste Galois
63 000 Clermont-Ferrand
Courriel : mgervais at agaetis.fr
Téléphone : 04 73 44 56 51
Portable : 06 82 35 52 82
Site : http://www.agaetis.fr
:::::::::::::::::::::::::::::::::::::::::::::::

From jonathanlopez at blackslot.com  Wed Jan 12 15:08:46 2011
From: jonathanlopez at blackslot.com (Jonathan Lopez)
Date: Wed, 12 Jan 2011 07:08:46 -0800
Subject: Virtual host based on includes
Message-ID: 

Hello,

I'm trying to create a main config file that includes another file with the specific configuration for each virtual host, an example:

sub vcl_recv {
    set req.http.Host = regsub(req.http.Host, "^www\.", "");
    include "/etc/varnish/" req.http.host ".vcl";
}

Then, each domain has a customized VCL in its own file. Is this possible? I have tried everything but with no success. I don't want to make a huge if/else condition for each domain (virtual host).

Thanks a lot for your help.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From rtshilston at gmail.com  Wed Jan 12 15:12:26 2011
From: rtshilston at gmail.com (Robert Shilston)
Date: Wed, 12 Jan 2011 15:12:26 +0000
Subject: Virtual host based on includes
In-Reply-To: 
References: 
Message-ID: <69A1C1F5-6386-4E9B-888E-2F8117D9E04C@gmail.com>

On 12 Jan 2011, at 15:08, Jonathan Lopez wrote:
>
> set req.http.Host = regsub(req.http.Host, "^www\.", "");
> include "/etc/varnish/" req.http.host ".vcl";
>
> Is this possible? I have tried everything but with no success. I don't want to make a huge if/else condition for each domain (virtual host).
>

I don't think this will work, because of the way that the VCL files are compiled when they are loaded. If you do have hundreds of hosts, I think you'd be better off writing a small Perl script to create your master VCL based on doing a directory listing of the individual VCL files, and using a big if/else structure.
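[Editor's note] Rob's generator idea can be sketched as follows — in Python rather than Perl, and with an in-memory hostname-to-backend mapping standing in for the directory listing, to keep it self-contained. The hostnames, addresses, and naming scheme here are made up for illustration; this is a sketch, not an existing tool.

```python
# Sketch: build one master VCL with a big if/else in vcl_recv from a
# mapping of virtual host -> backend address. A real version would
# derive the mapping by listing per-vhost files, as Rob suggests.

def master_vcl(vhosts):
    """vhosts: dict of hostname -> backend address (hypothetical data)."""
    out = []
    for host, addr in sorted(vhosts.items()):
        # VCL identifiers cannot contain dots, so mangle the hostname.
        name = host.replace(".", "_").replace("-", "_")
        out.append('backend %s { .host = "%s"; }' % (name, addr))
    out.append("")
    out.append("sub vcl_recv {")
    branch = "if"
    for host in sorted(vhosts):
        name = host.replace(".", "_").replace("-", "_")
        out.append('    %s (req.http.host == "%s") {' % (branch, host))
        out.append("        set req.backend = %s;" % name)
        out.append("    }")
        branch = "else if"
    out.append("}")
    return "\n".join(out)

print(master_vcl({"site1.com": "1.2.3.4", "site2.com": "11.22.33.44"}))
```

Running this from cron or a deploy hook and then reloading Varnish would regenerate the if/else chain whenever a vhost file is added or removed.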
Rob -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.sansbury at lullabot.com Wed Jan 12 15:31:23 2011 From: james.sansbury at lullabot.com (James Sansbury) Date: Wed, 12 Jan 2011 10:31:23 -0500 Subject: POST requests to ESIs Message-ID: Hello all! Consider this scenario: You have a piece of content that has commenting on it. The comments are highly dynamic, but the content of the page itself is basically static. ESI works great here to load all the comments with a TTL of zero. However, there is a comment form on this page as well. If a comment submission is made, by default Varnish will pass the POST request through to the backend. But on a site that is heavily commented, this means loading up that full page for each POST unnecessarily. The only piece of the page that has changed is the comment ESI, which could very well accept that POST request itself and do the processing, thereby lowering the number of requests to the backend. I see in the ESI code that it is hard coded that all ESI requests are GET requests. Some ESI implementations allow you to specify a method in the tag (e.g., ). I was thinking it would be handy if Varnish could support this, or something similar. Some way of parsing a POST request as it comes in, and possibly routing it to the ESI that can accept it. Or maybe even something more abstract than that; a way to further customize the ESI request in the vcl. Thoughts? Thanks for your time! James -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
From egimenez at vectorsf.com  Wed Jan 12 15:55:41 2011
From: egimenez at vectorsf.com (Eduardo Gimenez Ruiz)
Date: Wed, 12 Jan 2011 16:55:41 +0100
Subject: Exclude a URL
In-Reply-To: <4D2DC419.5090503@vectorsf.com>
References: <4D2D8D92.8060909@vectorsf.com> <4D2DBF34.10109@yahoo.com.br> <4D2DC419.5090503@vectorsf.com>
Message-ID: <4D2DCEFD.40805@vectorsf.com>

Hi again:

Failure. I changed the syntax to include the code that Flavio suggested:

if (req.url ~ "ajax_user_bar") {
    return (pipe);
}

And I tried to use "return (pass);", but varnish still answers the browser's call from the cache.... (it seems so?).... sorry for my english, but I don't know how to say this :(

I see:

* In chromium the response header is: X-Varnish:791424256
* In varnishlog -o -b:

   14 BackendOpen  b default 127.0.0.1 59592 127.0.0.1 8080
   14 TxRequest    b GET
   14 TxURL        b /portal/ajax_user_bar/login?0.786534147337079
   14 TxProtocol   b HTTP/1.0
   14 TxHeader     b Host: maq.domain.com
   14 TxHeader     b Referer: http://maq.domain.com/portal/
   14 TxHeader     b X-Requested-With: XMLHttpRequest
   14 TxHeader     b Accept: text/html, */*
   14 TxHeader     b User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Ubuntu/10.04 Chromium/8.0.552.224 Chrome/8.0.552.224 Safari/534.10

* varnishtop -b -i TxURL:

   0.99 TxURL /portal/ajax_user_bar/login?0.35733756981790066

How can I change this behaviour in varnish?

On 01/12/2011 04:09 PM, Eduardo Gimenez Ruiz wrote:
> Hi and Hola for the Spanish speaking in the list :D
>
> Thank you so much for your answers..... now I try to update the
> version of my varnish.
>
> Thank to Flavio and Kristian for their answers, when I have finished
> updating my varnish correctly I will try to follow your advice.
>
> On 01/12/2011 03:48 PM, Flavio Torres wrote:
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA1
>>
>> On 01/12/2011 09:16 AM, Eduardo Gimenez Ruiz wrote:
>> > Hi all in the list:
>>
>> Hola! ;)
>>
>> > I use a Varnish Version 1.1.2 with pressflow (a kind of drupal)
>> > and i try to exclude a URL (or a path) from the cache.
>>
>> Do you mean 2.1.2 ?
>>
>> > I use this configuration in my default.vcl:
>> >
>> > sub vcl_recv { if (req.url ~ "^/portal/ajax_user_bar") { //return
>> > (pass); //pass; unset req.http.cookie; } [other code] }
>>
>> try this [1]:
>>
>> if (req.url ~ "ajax_user_bar") {
>>     return (pipe);
>> }
>>
>> > And in all case I see this result with "varnishtop -b -i TxURL"
>> > is:
>> >
>> > 1.00 TxURL /portal/ajax_user_bar/login?0.9301228949334472
>> >
>> > The number after the "login?" are a random number from the code
>> > and the "ajax_user_bar" is a module in pressflow not a directory of
>> > the OS.
>>
>> Don't worry, varnish will match "ajax_user_bar"
>>
>> > Thank for all help, advice or documentation that you can give me.
>> >> >> [1] - http://www.varnish-cache.org/trac/wiki/VCL#vcl_pipe >> >> >> hope this helps >> >> -----BEGIN PGP SIGNATURE----- >> Version: GnuPG v1.4.10 (GNU/Linux) >> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ >> >> iEYEARECAAYFAk0tvzIACgkQNRQApncg296V4ACg2DhT1311CEueyv4WNlosXJuq >> wYEAnReacApkqcPTtds2yFD6nLdwOUcL >> =EJHg >> -----END PGP SIGNATURE----- >> > > -- > Eduardo Gim?nez Ruiz > > M?vil: (+34) 615 90 60 98 > Fax: (+34) 91 799 55 30 > ____________________________ > > Parque empresarial La Finca > Paseo del Club Deportivo, 1 - Bloque 11 > 28223 Pozuelo de Alarc?n > Madrid > ____________________________ > http://www.vectorsf.com > -- Eduardo Gim?nez Ruiz M?vil: (+34) 615 90 60 98 Fax: (+34) 91 799 55 30 ____________________________ Parque empresarial La Finca Paseo del Club Deportivo, 1 - Bloque 11 28223 Pozuelo de Alarc?n Madrid ____________________________ http://www.vectorsf.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From scaunter at topscms.com Wed Jan 12 18:39:07 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Wed, 12 Jan 2011 13:39:07 -0500 Subject: POST requests to ESIs In-Reply-To: References: Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902CC69F18@TMG-EVS02.torstar.net> Hello all! Consider this scenario: You have a piece of content that has commenting on it. The comments are highly dynamic, but the content of the page itself is basically static. ESI works great here to load all the comments with a TTL of zero. However, there is a comment form on this page as well. If a comment submission is made, by default Varnish will pass the POST request through to the backend. But on a site that is heavily commented, this means loading up that full page for each POST unnecessarily. The only piece of the page that has changed is the comment ESI, which could very well accept that POST request itself and do the processing, thereby lowering the number of requests to the backend. 
I see in the ESI code that it is hard coded that all ESI requests are GET requests. Some ESI implementations allow you to specify a method in the tag (e.g., ). I was thinking it would be handy if Varnish could support this, or something similar. Some way of parsing a POST request as it comes in, and possibly routing it to the ESI that can accept it. Or maybe even something more abstract than that; a way to further customize the ESI request in the vcl. Thoughts? Thanks for your time! James Hi, Sorry for lack of indents (outlook at work). Even on heavily commented sites, the number of POSTs for new comments will be low per second. Issue always seems to be pulling comments for each page load. Loading comments with a GET, with a ttl > 0 seems to work well for us (no ESI). Is there a reason you are running with TTL at zero? You could attempt to do real time updating, with ttl = 0, but your database and backends won't scale very well, and we find that users will accept a few minutes of delay for comments to update, and performance is better. YMMV. /Stefan -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.fernandezcrisial at gmail.com Wed Jan 12 19:04:15 2011 From: roberto.fernandezcrisial at gmail.com (=?ISO-8859-1?Q?Roberto_O=2E_Fern=E1ndez_Crisial?=) Date: Wed, 12 Jan 2011 16:04:15 -0300 Subject: http_range_support Message-ID: Hi guys, I'm trying to figure out what varnish http_range_support is used for.. I'm trying to play some media before varnish cache the object (I know this will be release on 3.0 version, but I need to know if this param can help me, or only start play the media once the object is cached). Thank you, Roberto. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fla_torres at yahoo.com.br Wed Jan 12 19:04:42 2011 From: fla_torres at yahoo.com.br (Flavio Torres) Date: Wed, 12 Jan 2011 17:04:42 -0200 Subject: Exclude a URL In-Reply-To: <4D2DCEFD.40805@vectorsf.com> References: <4D2D8D92.8060909@vectorsf.com> <4D2DBF34.10109@yahoo.com.br> <4D2DC419.5090503@vectorsf.com> <4D2DCEFD.40805@vectorsf.com> Message-ID: <4D2DFB4A.1050100@yahoo.com.br> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 01/12/2011 01:55 PM, Eduardo Gimenez Ruiz wrote: > I see: * In chromium that header response is: X-Varnish:791424256 if you are using pipe, then you will get something like that (VCL_call): $ varnishlog -o -c | perl -ne 'BEGIN { $/ = "";} print if (/RxURL.*.*$/m and /VCL_call.*pipe*/);' 259 ReqStart c 10.10.71.201 5791 158147924 259 RxRequest c GET 259 RxURL c /html/ultimas/jogorapido/latestNews.json 259 RxProtocol c HTTP/1.1 259 RxHeader c x-requested-with: XMLHttpRequest 259 RxHeader c Accept-Language: pt-br 259 RxHeader c Referer: http://www.host.com.br/gadgets/latestNews/content.html?canal=274&numShowNews=4&corMenu=4A89D9&corTitulos=00437F&no_cache=15604506172 259 RxHeader c Accept: application/json, text/javascript, */* 259 RxHeader c Accept-Encoding: gzip, deflate 259 RxHeader c User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) 259 RxHeader c If-Modified-Since: Wed, 12 Jan 2011 17:59:21 GMT 259 RxHeader c Host: www.host.com.br 259 RxHeader c Connection: Keep-Alive 259 RxHeader c Cookie: nvgpfl=95742716; __utma=223990925.1623791818.1281182520.1290519949.1290605813.225; __utmz=223990925.1281182520.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none) 259 VCL_call c recv pipe 259 VCL_call c hash hash 259 VCL_call c pipe pipe 259 Backend c 165 my_backend my_backend 259 ReqEnd c 158147924 1294859016.082086086 1294859016.163696051 2.148682117 0.000518799 0.081091166 > * In varnishlog -o -b: 14 BackendOpen b default 127.0.0.1 59592 > 127.0.0.1 
8080
>  14 TxRequest    b GET
>  14 TxURL        b /portal/ajax_user_bar/login?0.786534147337079
>  14 TxProtocol   b HTTP/1.0
>  14 TxHeader     b Host: maq.domain.com
>  14 TxHeader     b Referer: http://maq.domain.com/portal/
>  14 TxHeader     b X-Requested-With: XMLHttpRequest
>  14 TxHeader     b Accept: text/html, */*
>  14 TxHeader     b User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Ubuntu/10.04 Chromium/8.0.552.224 Chrome/8.0.552.224 Safari/534.10
>
>  * varnishtop -b -i TxURL:
>  0.99 TxURL /portal/ajax_user_bar/login?0.35733756981790066

varnishtop: will show you the log entry *ranking*;

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAk0t+0cACgkQNRQApncg2961NQCgggnRMsmWprktXVDHQ4aNODIT
fuwAnRPF+TVUjWFGdt8lO7inVS8K4aE4
=m6oV
-----END PGP SIGNATURE-----
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From wido at widodh.nl  Wed Jan 12 20:17:46 2011
From: wido at widodh.nl (Wido den Hollander)
Date: Wed, 12 Jan 2011 21:17:46 +0100
Subject: http_range_support
In-Reply-To: 
References: 
Message-ID: <1294863466.2503.31.camel@wido-laptop.pcextreme.nl>

Hi Roberto,

On Wed, 2011-01-12 at 16:04 -0300, Roberto O. Fernández Crisial wrote:
> Hi guys,
>
> I'm trying to figure out what varnish http_range_support is used for..
> I'm trying to play some media before varnish cache the object (I know
> this will be release on 3.0 version, but I need to know if this param
> can help me, or only start play the media once the object is cached).

No, it can't. http_range_support is for partial content, for example: you have a 500MB video in your cache and the user wants to go to 1:34 of that video; the browser or video player (whatever he is using) will then ask for a range of bytes instead of the whole object.
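[Editor's note] The byte-range exchange Wido describes boils down to a `Range` request header and a `Content-Range` header on the 206 response. A minimal illustration, with no real server involved and hypothetical byte offsets; the header formats follow RFC 2616 sections 14.16 and 14.35:

```python
# Sketch: what a player sends when seeking into a video, and what a
# range-capable server (or Varnish with http_range_support) returns.

def range_header(first_byte, last_byte):
    """Request header asking for one contiguous byte range."""
    return "Range: bytes=%d-%d" % (first_byte, last_byte)

def parse_content_range(value):
    """Parse 'bytes first-last/total' from a 206 Content-Range header."""
    unit, rest = value.split(" ", 1)
    assert unit == "bytes"
    span, total = rest.split("/")
    first, last = span.split("-")
    return int(first), int(last), int(total)

print(range_header(1000000, 1999999))
# -> Range: bytes=1000000-1999999
print(parse_content_range("bytes 1000000-1999999/524288000"))
# -> (1000000, 1999999, 524288000)
```

A server that does not honour the Range header simply answers 200 OK with the full body, which is why a cache without range support cannot start playback mid-file.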
The RFC about this: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.16

Wido

From james.sansbury at lullabot.com  Wed Jan 12 21:01:47 2011
From: james.sansbury at lullabot.com (James Sansbury)
Date: Wed, 12 Jan 2011 16:01:47 -0500
Subject: POST requests to ESIs
In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902CC69F18@TMG-EVS02.torstar.net>
References: <7F0AA702B8A85A4A967C4C8EBAD6902CC69F18@TMG-EVS02.torstar.net>
Message-ID: 

On Wed, Jan 12, 2011 at 1:39 PM, Caunter, Stefan wrote:
> Even on heavily commented sites, the number of POSTs for new comments will
> be low per second. Issue always seems to be pulling comments for each page
> load. Loading comments with a GET, with a ttl > 0 seems to work well for us
> (no ESI). Is there a reason you are running with TTL at zero?

Yeah, sorry, it was not so much about the TTL as the ability to still lookup the cached page and pass the POST through to the ESI. I understand we can increase the TTL of the ESI, but the real goal here is to have a separate ESI only server that handles all authenticated traffic, while keeping the frontend server served completely from the Varnish cache.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From scaunter at topscms.com  Wed Jan 12 21:03:19 2011
From: scaunter at topscms.com (Caunter, Stefan)
Date: Wed, 12 Jan 2011 16:03:19 -0500
Subject: POST requests to ESIs
In-Reply-To: 
References: <7F0AA702B8A85A4A967C4C8EBAD6902CC69F18@TMG-EVS02.torstar.net>
Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902CC69FB2@TMG-EVS02.torstar.net>

From: jsansbury at lullabot.com [mailto:jsansbury at lullabot.com] On Behalf Of James Sansbury
Sent: January-12-11 4:02 PM
To: Caunter, Stefan; varnish-misc at varnish-cache.org
Subject: Re: POST requests to ESIs

On Wed, Jan 12, 2011 at 1:39 PM, Caunter, Stefan wrote:

Even on heavily commented sites, the number of POSTs for new comments will be low per second. Issue always seems to be pulling comments for each page load. Loading comments with a GET, with a ttl > 0 seems to work well for us (no ESI). Is there a reason you are running with TTL at zero?

Yeah, sorry, it was not so much about the TTL as the ability to still lookup the cached page and pass the POST through to the ESI. I understand we can increase the TTL of the ESI, but the real goal here is to have a separate ESI only server that handles all authenticated traffic, while keeping the frontend server served completely from the Varnish cache.

Ah yes. Consider a subdomain. We pipe for cases like this.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From jhalfmoon at milksnot.com  Wed Jan 12 21:02:48 2011
From: jhalfmoon at milksnot.com (Johnny Halfmoon)
Date: Wed, 12 Jan 2011 22:02:48 +0100
Subject: Virtual host based on includes
In-Reply-To: 
References: 
Message-ID: <4D2E16F8.3050704@milksnot.com>

On 01/12/2011 04:08 PM, Jonathan Lopez wrote:
> Hello,
>
> I'm trying to create a main config file that includes another file with the specific configuration for each virtual host, an example:
>
> sub vcl_recv {
>     set req.http.Host = regsub(req.http.Host, "^www\.", "");
>     include "/etc/varnish/" req.http.host ".vcl";
> }
>
> Then, each domain has a customized VCL in its own file.
> Is this possible? I have tried everything but with no success. I don't want to make a huge if/else condition for each domain (virtual host).

Hi Jonathan, what you try to do here is not possible:

> include "/etc/varnish/" req.http.host ".vcl";

Varnish does not include at runtime, but at compile time, that is, when the configs actually get loaded. What you could do is the following:

main.vcl:
    include "/etc/varnish/sites.vcl";
    include "/etc/varnish/catch-all.vcl";

sites.vcl:
    include "/etc/varnish/sites/site1.vcl";
    include "/etc/varnish/sites/site2.vcl";
    include etc...

catch-all.vcl:

site1.vcl:
    backend www_site1_com { .host = "1.2.3.4"; }
    sub vcl_recv { if (req.http.host ~ "^www.site1.com$") { set req.backend = www_site1_com; } }
    sub vcl_deliver { if (req.backend == www_site1_com ) { ; } }

site2.vcl:
    backend www_site2_com { .host = "11.22.33.44"; }
    sub vcl_recv { if (req.http.host ~ "^www.site2.com$") { set req.backend = www_site2_com; } }
    sub vcl_deliver { if (req.backend == www_site2_com ) { ; } }

In short this is what we do in the above code:

- define a main.vcl where you set a bunch of default stuff for all sites, like ttl, grace time etc...
- from main.vcl you include sites.vcl, which in its turn includes the vcls of all your sites. This keeps main.vcl tidy
- main.vcl also includes catch-all.vcl, which may and probably will contain code that is to be executed after all site configs have been handled
- each site config does the stuff it needs to do for that site
- the main idea behind all this is that varnish appends all the different code blocks in the order they are included. So if you were to define a vcl_recv and vcl_deliver in all your vcl files, this is how they would be compiled by varnish:

    vcl_recv {
        vcl_recv of main.vcl
        vcl_recv of sites.vcl
        vcl_recv of site1.vcl
        vcl_recv of site2.vcl
        vcl_recv of catch-all
    }

    vcl_deliver {
        vcl_deliver of main.vcl
        vcl_deliver of sites.vcl
        vcl_deliver of site1.vcl
        vcl_deliver of site2.vcl
        vcl_deliver of catch-all
    }

- NOTE: take care that you always do an "if (req.backend == www_sitename_com )" check in every siteconfig you define, like you can see in the vcl_deliver code of the site1.vcl example. You need to do this to check what site you are handling at that moment. If you do not do that check, the code you define will be run for every site that is in your config.

I hope that helps.

Cheers,

Johnny

From gmoniey at gmail.com  Wed Jan 12 21:27:33 2011
From: gmoniey at gmail.com (.)
Date: Wed, 12 Jan 2011 13:27:33 -0800
Subject: Trouble understanding Varnishlog
In-Reply-To: <4D2DBAE4.9010101@yahoo.com.br>
References: <4D21F4EB.3010403@yahoo.com.br> <4D231102.4060407@yahoo.com.br> <4D2DBAE4.9010101@yahoo.com.br>
Message-ID: 

Thanks for the replies. I will come up with a sample vcl this weekend. As far as how I am testing; I am hitting a page which HAS cookies. I am trying to have Varnish ignore the cookies (but not strip them) from the requests. Thanks.

On Wed, Jan 12, 2011 at 6:29 AM, Flavio Torres wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On 01/04/2011 06:14 PM, . wrote:
> > Thanks for your reply. I'm curious as to why you suggested I remove
> > the X-Varnish header? 
I guess my confusion is why the header doesn't > > include 2 numbers, even though it is a cache HIT, and the HIT > > counter is being incremented. > > > > Hello, > > It should be increased, are you testing with a simple file (without > cookies and respecting the Cache-Control header) ? > > # first request > $ curl -I -H "Host: www.flaviotorres.com.br" > http://www.flaviotorres.com.br > HTTP/1.1 200 OK > Last-Modified: Fri, 01 Oct 2010 11:25:07 GMT > X-Mod-Pagespeed: 0.9.1.1-171 > Cache-Control: max-age=60 > Expires: Wed, 12 Jan 2011 12:11:55 GMT > Vary: Accept-Encoding > Content-Type: text/html; charset=UTF-8 > VID: 01 > Content-Length: 3 > Date: Wed, 12 Jan 2011 12:10:55 GMT > X-Varnish: 1268025441 > Connection: keep-alive > X-Cache: MISS > X-Cache-Hits: 0 > X-Age: 0 > > # second request > $ curl -I -H "Host: www.flaviotorres.com.br" > http://www.flaviotorres.com.br > HTTP/1.1 200 OK > Last-Modified: Fri, 01 Oct 2010 11:25:07 GMT > X-Mod-Pagespeed: 0.9.1.1-171 > Cache-Control: max-age=60 > Expires: Wed, 12 Jan 2011 12:11:55 GMT > Vary: Accept-Encoding > Content-Type: text/html; charset=UTF-8 > VID: 01 > Content-Length: 3 > Date: Wed, 12 Jan 2011 12:10:59 GMT > X-Varnish: 1268025442 1268025441 > Connection: keep-alive > X-Cache: HIT > X-Cache-Hits: 1 > X-Age: 4 > > # third request > $ curl -I -H "Host: www.flaviotorres.com.br" > http://www.flaviotorres.com.br > HTTP/1.1 200 OK > Last-Modified: Fri, 01 Oct 2010 11:25:07 GMT > X-Mod-Pagespeed: 0.9.1.1-171 > Cache-Control: max-age=60 > Expires: Wed, 12 Jan 2011 12:11:55 GMT > Vary: Accept-Encoding > Content-Type: text/html; charset=UTF-8 > VID: 01 > Content-Length: 3 > Date: Wed, 12 Jan 2011 12:11:02 GMT > X-Varnish: 1268025443 1268025441 > Connection: keep-alive > X-Cache: HIT > X-Cache-Hits: 2 > X-Age: 7 > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iEYEARECAAYFAk0tuuAACgkQNRQApncg297klgCfQuU9I2w/BfBIvFhddz6P9MIe > 
NF4AoK5BA/ovvvsUnxcFA0ZCNLWD3CH5 > =hyIE > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.sansbury at lullabot.com Wed Jan 12 21:49:50 2011 From: james.sansbury at lullabot.com (James Sansbury) Date: Wed, 12 Jan 2011 16:49:50 -0500 Subject: POST requests to ESIs In-Reply-To: <7F0AA702B8A85A4A967C4C8EBAD6902CC69FB2@TMG-EVS02.torstar.net> References: <7F0AA702B8A85A4A967C4C8EBAD6902CC69F18@TMG-EVS02.torstar.net> <7F0AA702B8A85A4A967C4C8EBAD6902CC69FB2@TMG-EVS02.torstar.net> Message-ID: On Wed, Jan 12, 2011 at 4:03 PM, Caunter, Stefan wrote: > > On Wed, Jan 12, 2011 at 1:39 PM, Caunter, Stefan > wrote: > > Even on heavily commented sites, the number of POSTs for new comments will > be low per second. Issue always seems to be pulling comments for each page > load. Loading comments with a GET, with a ttl > 0 seems to work well for us > (no ESI). Is there a reason you are running with TTL at zero? > > > > Yeah, sorry, it was not so much about the TTL as the ability to still > lookup the cached page and pass the POST through to the ESI. I understand we > can increase the TTL of the ESI, but the real goal here is to have a > separate ESI only server that handles all authenticated traffic, while > keeping the frontend server served completely from the Varnish cache. > > > > Ah yes. Consider a subdomain. We pipe for cases like this. > Can you describe a bit more what you mean? We have our frontend site, origin.example.com, which has ESIs pointing to ugc.example.com (ugc == user generated content). If you pipe in vcl_recv, ESIs are not processed. Maybe I'm missing something. :) Thanks! James -------------- next part -------------- An HTML attachment was scrubbed... 
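One way to get both of the things James asks about above - ESI processing on cached pages, and POSTs that still reach the backend - is to pass (rather than pipe) the write requests, since pass, unlike pipe, keeps the request in Varnish's normal delivery path. This is only a minimal sketch for Varnish 2.x; which requests to pass and which URLs get ESI treatment are placeholder assumptions, not details from the thread:

```vcl
# Sketch (Varnish 2.x): pass POSTs instead of piping them, so ESI
# processing of cached GET responses elsewhere is unaffected.
sub vcl_recv {
    if (req.request == "POST") {
        # pass fetches from the backend without caching, but stays in
        # the normal delivery path (pipe would bypass it entirely)
        return (pass);
    }
}

sub vcl_fetch {
    # Varnish 2.x statement enabling ESI processing for this object;
    # the URL pattern here is purely illustrative
    if (req.url ~ "\.html$") {
        esi;
    }
}
```

With this shape, a cached page containing esi:include tags is still assembled by Varnish on every hit, while the comment POST itself goes straight through to the backend.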
URL: From scaunter at topscms.com Wed Jan 12 21:51:02 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Wed, 12 Jan 2011 16:51:02 -0500 Subject: POST requests to ESIs In-Reply-To: References: <7F0AA702B8A85A4A967C4C8EBAD6902CC69F18@TMG-EVS02.torstar.net><7F0AA702B8A85A4A967C4C8EBAD6902CC69FB2@TMG-EVS02.torstar.net> Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902CC69FE7@TMG-EVS02.torstar.net> From: jsansbury at lullabot.com [mailto:jsansbury at lullabot.com] On Behalf Of James Sansbury Sent: January-12-11 4:50 PM To: Caunter, Stefan; varnish-misc at varnish-cache.org Subject: Re: POST requests to ESIs On Wed, Jan 12, 2011 at 4:03 PM, Caunter, Stefan wrote: On Wed, Jan 12, 2011 at 1:39 PM, Caunter, Stefan wrote: Even on heavily commented sites, the number of POSTs for new comments will be low per second. Issue always seems to be pulling comments for each page load. Loading comments with a GET, with a ttl > 0 seems to work well for us (no ESI). Is there a reason you are running with TTL at zero? Yeah, sorry, it was not so much about the TTL as the ability to still lookup the cached page and pass the POST through to the ESI. I understand we can increase the TTL of the ESI, but the real goal here is to have a separate ESI only server that handles all authenticated traffic, while keeping the frontend server served completely from the Varnish cache. Ah yes. Consider a subdomain. We pipe for cases like this. Can you describe a bit more what you mean? We have our frontend site, origin.example.com, which has ESIs pointing to ugc.example.com (ugc == user generated content). If you pipe in vcl_recv, ESIs are not processed. Maybe I'm missing something. :) Indeed, I'm not using ESI, but if I need varnish to handle something I pass. -------------- next part -------------- An HTML attachment was scrubbed... URL: From slackmoehrle at me.com Thu Jan 6 00:42:33 2011 From: slackmoehrle at me.com (Jason T. 
Slack-Moehrle) Date: Wed, 05 Jan 2011 16:42:33 -0800 Subject: a few intro to Varnish questions Message-ID: Hello All, I was told about Varnish today. I have a growing Apple fan website that as more and more videos get added my thought is to keep the most popular videos in cache. The machine this site is running on is CentOS 5.5 64 bit, Apache, PHP, MySQL 5. It is a dual core machine with 12gb of RAM, max is 16gb and I will max it out over the next month or so probably as I find good deals on 4gb DDR3 sticks. The site's size will be about 300gb (about 60gb now) I have LVM running with 300gb allotted to /var/www/html. I have a MySQL backend that stores paths and data about the video's, the videos themselves are housed on the filesystem. Can anyone provide insight on setup and optimization of Varnish? I have some confusion. 1. Looking at: http://www.varnish-cache.org/docs/2.1/tutorial/putting_varnish_on_port_80.html So I have to run varnish on 80 and my site on an alternate port (8080 as example)? Or do they both run on port 80? 2. Apache listens on my public IP and Varnish should too, correct? or do I use 127.0.0.1? 3. I must make additions to vcl_recv I assume to cache what I want? 4. Do I have to make changes to my web pages to add meta-tags to trigger Varnish? Best, -Jason PS - I see 2 addresses for this list: varnish-misc at projects.linpro.no and varnish-misc at varnish-cache.org a ping shows different IP's although I suppose that would not be a definitive answer. From hyeh at rupaz.com Fri Jan 7 05:25:06 2011 From: hyeh at rupaz.com (Harry Yeh) Date: Thu, 6 Jan 2011 21:25:06 -0800 Subject: Rewriting URL's or Content inside a req object using regsub or regsuball Message-ID: I am currently having some success with the Reverse Proxying features of Varnish, and the only thing left that I need to be able to do is essentially rewrite some of the URL's in the body. 
For example, we have a URL internally that might be wp1.rupaz.com and we need the URLs in the HTML page to be rewritten to http://www.rupaz.com/blogs Right now I am kind of stuck but I am assuming I should be doing something similar to the following? I have no idea which beresp object I should use for the body of the content since there is no documentation. sub vcl_fetch { if (req.http.host == "www.rupaz.com" && req.url ~ "^/blogs"){ set beresp = regsuball(beresp, "^wp1.rupaz.com", "www.rupaz.com/forums"); return(deliver); } if (!beresp.cacheable) { return (pass); } if (beresp.http.Set-Cookie) { return (pass); } return (deliver); } ______________________________ Harry Yeh CEO / CTO Rupaz Twitter Facebook When you think thong, think Rupaz! Web: http://www.rupaz.com Me: http://www.linkedin.com/in/harryyeh Twitter: http://twitter.com/harryyeh Confidentiality Notice: This electronic mail transmission and any accompanying attachments contain confidential information intended only for the use of the individual or entity named above. Any dissemination, distribution, copying or action taken in reliance on the contents of this communication by anyone other than the intended recipient is strictly prohibited. If you have received this communication in error please immediately delete the E-mail and notify the sender at the above E-mail address. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathanlopez at blackslot.com Tue Jan 11 10:22:28 2011 From: jonathanlopez at blackslot.com (Jonathan Lopez) Date: Tue, 11 Jan 2011 02:22:28 -0800 Subject: Virtual host based on includes Message-ID: Hello, I'm trying to create a main config file that includes another file with the specific configuration for each virtual host, an example: sub vcl_recv { set req.http.Host = regsub(req.http.Host, "^www\.", ""); include "/etc/varnish/" req.http.host ".vcl"; } Then, each domain has a customized VCL in its own file. Is this possible?
I have tried everything but with no success. I don't want to make a huge if/else condition for each domain (virtual host). Thanks a lot for your help. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathanlopez at blackslot.com Tue Jan 11 15:00:20 2011 From: jonathanlopez at blackslot.com (Jonathan Lopez) Date: Tue, 11 Jan 2011 07:00:20 -0800 Subject: Virtual host based on includes Message-ID: Hello, I'm trying to create a main config file that includes another file with the specific configuration for each virtual host, an example: sub vcl_recv { set req.http.Host = regsub(req.http.Host, "^www\.", ""); include "/etc/varnish/" req.http.host ".vcl"; } Then, each domain has a customized VCL in its own file. Is this possible? I have tried everything but with no success. I don't want to make a huge if/else condition for each domain (virtual host). Thanks a lot for your help. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathanlopez at blackslot.com Wed Jan 12 12:18:16 2011 From: jonathanlopez at blackslot.com (Jonathan Lopez) Date: Wed, 12 Jan 2011 04:18:16 -0800 Subject: vHosts (name or ip based) with varnish In-Reply-To: References: Message-ID: Hi Frank, Yesterday I tried to send a mail to the list to ask about a similar topic, but I don't know why it is still awaiting approval by the moderator. In this mail I ask for a way to do virtual hosts with something like this: sub vcl_recv { set req.http.Host = regsub(req.http.Host, "^www\.", ""); include "/etc/varnish/" req.http.host ".vcl"; } Then, each domain has a customized VCL in its own file. I think this method is better than a lot of if/else. Is this possible? I have tried everything but with no success. I don't want to make a huge if/else condition for each domain (virtual host).
Regards -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On behalf of Frank Helmschrott Sent: Wednesday, 12 January 2011 9:59 To: varnish-misc at varnish-cache.org Subject: vHosts (name or ip based) with varnish Hi, via Google I came to this post on varnish-software.com regarding vhost solutions for varnish: http://www.varnish-software.com/blog/virtual-hosts-varnish I'd like to have a similar solution with an 'as small as possible' master.vcl that does the if/elsif part. I'd like to have mostly different configurations for each host. Does anyone know where to place this if/elsif part? Varnish expects at least acl, sub, backend or director in the vcl and I'm quite unsure how to keep this vcl as simple/small as possible and load most of the stuff through the other vcls. Thanks! -- Frank _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From kristian at varnish-software.com Fri Jan 14 09:03:52 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Fri, 14 Jan 2011 10:03:52 +0100 Subject: vHosts (name or ip based) with varnish In-Reply-To: References: Message-ID: <20110114090352.GA2899@freud> On Wed, Jan 12, 2011 at 04:18:16AM -0800, Jonathan Lopez wrote: > In this mail I ask for a way to do this virtual host with something like > this: > > sub vcl_recv { > set req.http.Host = regsub(req.http.Host, "^www\.", ""); > include "/etc/varnish/" req.http.host ".vcl"; > } It's not possible and never will be. Sorry. First of all, include can be considered as more of a pre-processor directive. It's evaluated only at compile time (just like C #include's). What you are asking for _could_, theoretically, be done with a vmod, but I doubt you'll see it any time soon.
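Because include is resolved when the VCL is compiled, the per-host dispatch has to exist in the VCL text before Varnish loads it. As a minimal sketch of that compile-time layout (hostnames, backend names and paths below are illustrative, not taken from the thread), a master.vcl could look like:

```vcl
# Each included file is expected to define its backend ("site1",
# "site2") plus any site-specific subs; the dispatch below is baked in
# when the VCL is compiled, much like a C #include.
include "/etc/varnish/site1.example.com.vcl";
include "/etc/varnish/site2.example.com.vcl";

sub vcl_recv {
    set req.http.Host = regsub(req.http.Host, "^www\.", "");
    if (req.http.host == "site1.example.com") {
        set req.backend = site1;
    } elsif (req.http.host == "site2.example.com") {
        set req.backend = site2;
    }
}
```

The if/elsif block is the only part that grows per vhost, so it is also the natural thing to regenerate with a small script or template whenever a site is added.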
It opens up file-system access based on client input, which is an area you really do not want to get into. - Kristian From ask at develooper.com Fri Jan 14 09:04:50 2011 From: ask at develooper.com (Ask Bjørn Hansen) Date: Fri, 14 Jan 2011 01:04:50 -0800 Subject: Virtual host based on includes In-Reply-To: References: Message-ID: <8B03C796-2CB8-4FD2-BE78-9FF7FFEABFA6@develooper.com> On Jan 11, 2011, at 7:00, Jonathan Lopez wrote: > sub vcl_recv { > > set req.http.Host = regsub(req.http.Host, "^www\.", ""); > > include "/etc/varnish/" req.http.host ".vcl"; > > } > > Then, each domain has a customized VCL in its own file. > > Is this possible? I have tried everything but with no success. I don't want to make a huge if/else condition for each domain (virtual host). Just generate the final configuration file with Perl, Python or whatever your favorite tool is (even m4 could do it -- like we did in the olden days!). - ask From mgervais at agaetis.fr Fri Jan 14 09:29:54 2011 From: mgervais at agaetis.fr (Mickaël GERVAIS) Date: Fri, 14 Jan 2011 10:29:54 +0100 Subject: Varnish and time out on backend (first_byte_timeout). In-Reply-To: References: <6e98c6388f7efe33b5a68796dda1408d@localhost> <20110112133158.GF2952@freud> Message-ID: <879e03d7013caeec18ed874552d6252f@localhost> Hi, It's me again; apparently, my configuration doesn't work. When the backend is down, vcl_error is called, but it's not due to a timeout, so I get a 503 error and then I call restart. But even though I've specified a grace mode of 3h in fetch, my object is available for at least 2 min... Does this trick work (in order to handle timeouts) with grace mode when the backend is down? My config is attached. The request is restarted 4 times (see config) but the object is not retrieved from cache... Thanks. Mickael P.S: Sorry for my English... On Wed, 12 Jan 2011 16:08:13 +0100, Mickaël GERVAIS wrote: > Thanks a lot!! Apparently it works...
(I've taken my earplugs...) > Here is my config if somebody needs it: > > backend fake { > .host = "xxxxxxxxx"; > .port = "80"; > .probe = { > .url = "/fake.html"; > .interval = 60s; > .timeout = 0.1s; > .window = 1; > .threshold = 1; > .initial = 1; > } > } > > sub vcl_recv { > [...] > if ( req.http.magicmarker && req.http.magicmarker == "fake" ) { > unset req.http.magicmarker; > set req.backend = fake; > } else { > set req.backend = yyyy; > } > [...] > } > > sub vcl_error { > log "[Error ] ( ) " req.url "(Status: " obj.status ", Restarts: " > req.restarts ")"; > if (obj.status == 503 && req.restarts < 5) { > log "--- Restart url: " req.url "(Status: " obj.status ", Restarts: > " req.restarts ")"; > set obj.http.X-Restarts = req.restarts; > if ( req.restarts == 0 ){ > log "--- First restart add fake."; > set req.http.magicmarker = "fake"; > } > restart; > } > } > > On Wed, 12 Jan 2011 14:31:58 +0100, Kristian Lyngstol > wrote: >> On Wed, Jan 12, 2011 at 11:34:37AM +0100, Mickaël GERVAIS wrote: >>> If a timeout occurs (first_byte_timeout reached) the function vcl_error >>> is >>> called, I'd like to use the saint mode to retrieve the response from the >>> cache, but saint mode is only available on beresp. >>> >>> Is there a way to tell varnish to use a dirty object from the cache? Maybe >>> this is >>> not the correct way to handle this kind of error. >> >> You are correct - that is a weakness. I have a nasty hack, though. >> >> 1. Declare a second, bogus backend which will always be sick. >> 2. In vcl_error if restarts is 0, set a magic marker and restart. >> 3. Look for the magic marker in vcl_recv - if it's present, tell Varnish > to >> use the bogus backend. Grace will then kick in because that backend is >> marked as sick. >> 4. If the object exists in cache (graced) - it will be used. Otherwise, > you >> will hit vcl_error again. (Thus the check of req.restarts in step 2).
>> It's a nasty, yet brilliant hack, if I might say so myself ;) >> >> It adds latency and doesn't utilize saintmode, but it gets the job done > in >> a way that will also make little children cry. >> >> - Kristian -- ::::::::::::::::::::::::::::::::::::::::::::::: MICKAËL GERVAIS Agaetis 10 allée Evariste Galois 63 000 Clermont-Ferrand Email: mgervais at agaetis.fr Telephone: 04 73 44 56 51 Mobile: 06 82 35 52 82 Site: http://www.agaetis.fr ::::::::::::::::::::::::::::::::::::::::::::::: -------------- next part -------------- ################# # Back-end. # ################# backend www { .host = "xxxxxxxxx"; .port = "8081"; .connect_timeout = 10s; .first_byte_timeout = 30s; .between_bytes_timeout = 1s; .probe = { .request = "GET /loader.gif HTTP/1.1" "Host: xxxxxxxxx:8081" "Connection: close"; .interval = 2s; .timeout = 1s; .window = 20; .threshold = 19; .initial = 19; } } ####################################### # Fake back-end which is always sick. # ####################################### backend fake { .host = "xxxxxxxxx"; .port = "8081"; .probe = { .url = "/fake.html"; .interval = 60s; .timeout = 0.1s; .window = 2; .threshold = 2; .initial = 0; } } ######################################################################################################################### # Called at the beginning of a request, after the complete request has been received and parsed. # # Its purpose is to decide whether or not to serve the request, how to do it, and, if applicable, which backend to use. # ######################################################################################################################### sub vcl_recv { # Section to purge a URL. if ( req.request == "PURGE" ) { purge("req.url ~ " req.url); error 200 "Purged"; } # Add a unique header containing the client address unset req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; if (req.http.host ~ "beta.(xxxxxxxxxx).fr$") { # Redirection for mobile site.
if ( req.http.user-agent ~ "^((Fly|FLY|HTC|LG|MAUI|MOT|SEC|SIE)|(.*(ACS-NF|Android|Alcatel|Amoi|BENQ|BenQ|BlackBerry|Cellphone|DoCoMo|Ericsson|Hutchison|iPAQ|iPhone|MIDP|Mitsu|mobile'|Mobile'|Motorola|Nokia|Palm|Panasonic|PHILIPS|portalmmm|SAGEM|Samsung|SAMSUNG|Sanyo|SANYO|SCH\-|Sendo|SHARP|SmartPhone|Smartphone|Symbian\ OS|SymbianOS|Toshiba|UP\.Browser|Vodafone|Windows\ CE)))" && req.url == "/" ) { set req.http.mobilehost = regsub(req.http.host, "^beta\.(.+)$", "http://m.\1"); log "[Receive] " req.url " Redirection to mobile URL. (" req.http.mobilehost ")"; error 750 req.http.mobilehost; } if ( req.http.magicmarker && req.http.magicmarker == "fake" ) { unset req.http.magicmarker; set req.backend = fake; } else { set req.backend = www; } } # Force cache for static files even if a cookie exists. if ( req.url ~ "\.(js|css|jpg|JPG|jpeg|JPEG|png|gif|swf|ico)(\?|$)" ) { remove req.http.cookie; } if ( req.request != "GET" && req.request != "HEAD" ) { log "[Receive] " req.url " not cached."; return (pass); } if ( req.url ~ ".*/(pdf|acces|ajax|.*\.shtml|json-rpc|captcha\.jpg|balancer-manager).*" ) { log "[Receive] " req.url " not cached."; return (pass); } if ( req.backend.healthy ) { set req.grace = 15s; log "[Receive] " req.url "(Back-end " req.backend " healthy, Grace: " req.grace ")"; } else { set req.grace = 1m; log "[Receive] " req.url "(Back-end " req.backend " not healthy, Grace: " req.grace ")"; } return (lookup); } ############################################################################################################# # Called upon entering pipe mode. In this mode, the request is passed on to the backend, and any # # further data from either client or backend is passed on unaltered until either end closes the connection. 
# ############################################################################################################# sub vcl_pipe { log "[Pipe ] " req.url; } ############################################################################################# # Called upon entering pass mode. In this mode, the request is passed on to the backend, # # and the backend's response is passed on to the client, but is not entered into the cache. # # Subsequent requests submitted over the same client connection are handled normally. # ############################################################################################# sub vcl_pass { log "[Pass ] " req.url; } #################################################################################################### # Use req.hash += req.http.Cookie or similar to include the Cookie HTTP header in the hash string. # #################################################################################################### sub vcl_hash { # log "[Hash ] " req.url; } ################################################################################# # Called after a cache lookup if the requested document was found in the cache. # ################################################################################# sub vcl_hit { if ( obj.cacheable ) { log "[Hit ] " req.url " (Cacheable: YES)"; } else { log "[Hit ] " req.url " (Cacheable: NO)"; } } ########################################################################################################################### # Called after a cache lookup if the requested document was not found in the cache. # # Its purpose is to decide whether or not to attempt to retrieve the document from the backend, and which backend to use.
# ########################################################################################################################### sub vcl_miss { log "[Miss ] " req.url; if ( req.request == "PURGE" ) { error 200 "Not in cache"; } } ############################################################################# # Called after a document has been successfully retrieved from the backend. # ############################################################################# sub vcl_fetch { if ( beresp.status == 500 ) { set beresp.saintmode = 20s; log "[Fetch ] " bereq.url " (Saint: 20s)"; restart; } set beresp.grace = 3h; # These status code 404 should always pass through and never cache. if ( beresp.status == 404 ) { log "[Fetch ] " bereq.url " (Status:" beresp.status " not cached -> Pass.)"; set beresp.http.X-Cacheable = "NO: beresp.status"; set beresp.http.X-Cacheable-status = beresp.status; return (pass); } if( beresp.cacheable ) { log "[Fetch ] " bereq.url " (Grace:" beresp.grace ", TTL:" beresp.ttl ", Status:" beresp.status ", Cacheable: YES)"; } else { log "[Fetch ] " bereq.url " (Grace:" beresp.grace ", TTL:" beresp.ttl ", Status:" beresp.status ", Cacheable: NO)"; } } ############################################################# # Called before a cached object is delivered to the client. # ############################################################# sub vcl_deliver { # Add cache hit data if ( obj.hits > 0 ) { set resp.http.X-Cache = "HIT"; set resp.http.X-Cache-Hits = obj.hits; } else { set resp.http.X-Cache = "MISS"; } set resp.http.X-BackEnd = req.backend; set resp.http.X-Restarts = req.restarts; } sub vcl_error { # Redirect, error sent from vcl_receive. 
if ( obj.status == 750 ) { set obj.http.Location = obj.response; set obj.status = 302; return (deliver); } log "[Error ] " req.url " (Status: " obj.status ", Restarts: " req.restarts ")"; if ( obj.status == 503 && req.restarts < 5 ) { set obj.http.X-Restarts = req.restarts; if ( req.restarts == 0 ){ set req.http.magicmarker = "fake"; } restart; } set obj.http.Content-Type = "text/html; charset=utf-8"; if (req.http.host ~ "beta.(xxxxxxxxxxxxxx).fr$") { synthetic {" "} obj.status " " obj.response {" Error "} obj.status " " obj.response {" "} obj.response {"
"}; } return (deliver); } From kristian at varnish-software.com Fri Jan 14 09:57:39 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Fri, 14 Jan 2011 10:57:39 +0100 Subject: Object no cached when TTL expires In-Reply-To: References: <20110112125921.GC2952@freud> Message-ID: <20110114095739.GA4418@freud> On Wed, Jan 12, 2011 at 11:34:54AM -0300, Roberto O. Fernández Crisial wrote: > Here is the VCL and varnishstats requested. Unfortunately the varnishlog > runs so fast that I can't catch the HIT/MISS logs just for one request. Then you need to filter it... > # VCL > > sub vcl_pass { > return (pass); > } Why define vcl_pass when you only do this? It's just in your way. > sub vcl_hash { > if (req.http.Cookie) { > set req.hash += req.http.Cookie; > } Cookies in the hash == misses. We need varnishlog to confirm if your cookie normalization works or not. > sub vcl_hit { > return (deliver); > } > > sub vcl_miss { > return (fetch); > } Don't define these if you aren't doing anything with them. Let the default VCL do its job. > sub vcl_fetch { > unset beresp.http.Etag; > if (!beresp.cacheable) { > set beresp.http.X-Cacheable = "NO:Not Cacheable"; > } > elsif(req.http.Cookie ~"(UserID|_session)") { > set beresp.http.X-Cacheable = "NO:Got Session"; > return (pass); This decision will stick for the hash. > } > elsif ( beresp.http.Cache-Control ~ "private") { > set beresp.http.X-Cacheable = "NO:Cache-Control=private"; > return (pass); As will this. > } > elsif ( beresp.ttl < 1s ) { > set beresp.ttl = 5s; > set beresp.grace = 5s; > set beresp.http.X-Cacheable = "YES:FORCED"; > } > else { > set beresp.http.X-Cacheable = "YES"; > } > if (req.url ~ > "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|flv|swf|mp4|mp3|js|xml)$") { > unset beresp.http.set-cookie; > } > if (beresp.http.Set-Cookie) { > return (pass); And this.
> # varnishstats -1 > > client_conn 9514820 148.86 Client connections accepted > client_drop 0 0.00 Connection dropped, no sess/wrk > client_req 102670372 1606.26 Client requests received > cache_hit 96847567 1515.16 Cache hits > cache_hitpass 5716515 89.43 Cache hits for pass 90 hitpasses/second is a pretty large number. You are most likely passing something in vcl_fetch that you really don't want to create a hitpass object for. Based on the VCL and varnishstat, this seems like excessive tweaking to VCL resulting in unexpected passing in vcl_fetch and subsequent creation of hitpass objects - and there's the cache misses to things you would normally expect to cache. Keep in mind that VCL is like most other coding: You're not finished when there's nothing more you can add - you're finished when there's nothing more you can remove. Varnishlog is needed to verify if the theory above is true or not. - Kristian From lin.support at gmail.com Fri Jan 14 11:21:01 2011 From: lin.support at gmail.com (linuxsupport) Date: Fri, 14 Jan 2011 11:21:01 +0000 Subject: a few intro to Varnish questions In-Reply-To: References: Message-ID: Hi Jason, see below, On Thu, Jan 6, 2011 at 12:42 AM, Jason T. Slack-Moehrle wrote: > Hello All, > > I was told about Varnish today. I have a growing Apple fan website that as > more and more videos get added my thought is to keep the most popular videos > in cache. > > The machine this site is running on is CentOS 5.5 64 bit, Apache, PHP, > MySQL 5. It is a dual core machine with 12gb of RAM, max is 16gb and I > will max it out over the next month or so probably as I find good > deals on 4gb DDR3 sticks. > > The site's size will be about 300gb (about 60gb now) I have LVM > running with 300gb allotted to /var/www/html. I have a MySQL backend that > stores paths and data about the video's, the videos themselves are housed on > the filesystem. > > Can anyone provide insight on setup and optimization of Varnish? > > I have some confusion. > 1. 
Looking at: > http://www.varnish-cache.org/docs/2.1/tutorial/putting_varnish_on_port_80.html > > So I have to run varnish on 80 and my site on an alternate port (8080 as > example)? Or do they both run on port 80? > Yes, you need to run varnish on port 80 so that all requests hit Varnish first; Varnish will then fetch content from your Apache server running on any other port, i.e. 8080, cache it and send the response to the requesting client. > > 2. Apache listens on my public IP and Varnish should too, correct? or do I > use 127.0.0.1? > Varnish should run on the public IP; Apache can also run on the public IP as long as the ports are different, but you could simply run it on 127.0.0.1:8080. > > 3. I must make additions to vcl_recv I assume to cache what I want? > In vcl_recv, you can select what you want to serve from cache; you might want to serve php pages directly and cache only static content, depending on your requirements. In vcl_fetch you need to set how long you want content to be cached by setting the TTL; you can also set the TTL based on content types. Please read the doc first: http://www.varnish-cache.org/docs/2.1/ > > 4. Do I have to make changes to my web pages to add meta-tags to trigger > Varnish? > That depends, you may and you may not; http headers can also be set in Varnish. > > Best, > -Jason > > PS - I see 2 addresses for this list: varnish-misc at projects.linpro.no and > varnish-misc at varnish-cache.org a ping shows different IP's although I > suppose that would not be a definitive answer. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Aniruddh -------------- next part -------------- An HTML attachment was scrubbed... URL: From se at plista.com Fri Jan 14 12:11:29 2011 From: se at plista.com (Simon Effenberg) Date: Fri, 14 Jan 2011 13:11:29 +0100 Subject: Problem with "Backend conn.
not attempted" Message-ID: <1295007089.26206.68.camel@niles> Hi, since I upgraded from 2.1.2 to 2.1.4 about a week ago no problems occurred, but yesterday the "Backend conn. not attempted" count rose and I have no idea what is going on (see Figure 2). It seems to be the backends (and yes, last night I switched from haproxy 1.3 to 1.4, although varnish only connects to the local haproxy daemon, and the problem occurred half a day before that), but the log says nothing about an unhealthy backend (see Figure 1, and remember it is 'localhost'). Any idea what could be the problem? Is there any information you would need? /simon PS: no fd problem, I think, because "Connection: close" is in use and an 'lsof' on the varnish pid shows around 70 open fds.

Figure 1:
$ varnishlog -r /tmp/debugging.binlog | grep Backend_health | grep -v "Still healthy"
# no output
$ varnishlog -r /tmp/debugging.binlog | grep Backend_health | head -n 5
0 Backend_health - default Still healthy 4--X-RH 10 4 10 0.040812 0.011764 HTTP/1.1 200 OK
0 Backend_health - default Still healthy 4--X-RH 10 4 10 0.000876 0.009042 HTTP/1.1 200 OK
0 Backend_health - default Still healthy 4--X-RH 10 4 10 0.001131 0.007064 HTTP/1.1 200 OK
0 Backend_health - default Still healthy 4--X-RH 10 4 10 0.000887 0.005520 HTTP/1.1 200 OK
0 Backend_health - default Still healthy 4--X-RH 10 4 10 0.000855 0.004354 HTTP/1.1 200 OK
$ varnishlog -r /tmp/debugging.binlog | grep Backend_health | tail -n 5
0 Backend_health - default Still healthy 4--X-RH 10 4 10 0.003904 0.001866 HTTP/1.1 200 OK
0 Backend_health - default Still healthy 4--X-RH 10 4 10 0.000964 0.001640 HTTP/1.1 200 OK
0 Backend_health - default Still healthy 4--X-RH 10 4 10 0.000781 0.001426 HTTP/1.1 200 OK
0 Backend_health - default Still healthy 4--X-RH 10 4 10 0.001127 0.001351 HTTP/1.1 200 OK
0 Backend_health - default Still healthy 4--X-RH 10 4 10 0.001301 0.001338 HTTP/1.1 200 OK

Figure 2:
$ varnishstat -1:
--------------------------------- client_conn 53645273
87.99 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 53201813 87.27 Client requests received cache_hit 51129547 83.87 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 2092386 3.43 Cache misses backend_conn 1819100 2.98 Backend conn. success backend_unhealthy 278118 0.46 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused fetch_head 0 0.00 Fetch head fetch_length 1790403 2.94 Fetch with Length fetch_chunked 21484 0.04 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 24 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 14 0.00 Fetch failed n_sess_mem 13000 . N struct sess_mem n_sess 12907 . N struct sess n_object 92065 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 92438 . N struct objectcore n_objecthead 76044 . N struct objecthead n_smf 0 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 0 . N large free smf n_vbe_conn 0 . N struct vbe_conn n_wrk 600 . N worker threads n_wrk_create 600 0.00 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 746 0.00 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 1 . N backends n_expired 1678862 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 16107120 . N LRU moved objects n_deathrow 0 . 
N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 52363470 85.89 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 53645264 87.99 Total Sessions s_req 53201813 87.27 Total Requests s_pipe 4203 0.01 Total pipe s_pass 633 0.00 Total pass s_fetch 1771016 2.90 Total fetch s_hdrbytes 17059440767 27982.21 Total header bytes s_bodybytes 367428478958 602684.61 Total body bytes sess_closed 36642075 60.10 Session Closed sess_pipeline 5 0.00 Session Pipeline sess_readahead 5 0.00 Session Read Ahead sess_linger 50364562 82.61 Session Linger sess_herd 19035641 31.22 Session herd shm_records 2311880141 3792.12 SHM records shm_writes 238329715 390.93 SHM writes shm_flushes 21 0.00 SHM flushes due to overflow shm_cont 2464869 4.04 SHM MTX contention shm_cycles 924 0.00 SHM cycles through buffer sm_nreq 0 0.00 allocator requests sm_nobj 0 . outstanding allocations sm_balloc 0 . bytes allocated sm_bfree 0 . bytes free sma_nreq 3582284 5.88 SMA allocator requests sma_nobj 184130 . SMA outstanding allocations sma_nbytes 895131255 . SMA outstanding bytes sma_balloc 19762542699 . SMA bytes allocated sma_bfree 18867411444 . SMA bytes free sms_nreq 297047 0.49 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 123833356 . SMS bytes allocated sms_bfree 123833356 . SMS bytes freed backend_req 1814892 2.98 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 2 . 
N total active purges n_purge_add 2 0.00 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 55396 0.09 N objects tested n_purge_re_test 55396 0.09 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 53181129 87.23 HCB Lookups without lock hcb_lock 1458761 2.39 HCB Lookups with lock hcb_insert 1458761 2.39 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) accept_fail 3 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 609653 1.00 Client uptime backend_retry 0 0.00 Backend conn. retry dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache --------------------------------- From nicholas.tang at livestream.com Fri Jan 14 14:13:11 2011 From: nicholas.tang at livestream.com (Nicholas Tang) Date: Fri, 14 Jan 2011 09:13:11 -0500 Subject: Load balancing streaming (rtsp) servers In-Reply-To: <20110107101848.GA2153@freud> References: <20110107101848.GA2153@freud> Message-ID: I've been stuck on other issues, so I'm curious - did you have any luck? I'll probably be doing some testing next week or the week after depending on my schedule, so if you haven't tried it by then I can update people. :) Nicholas *Nicholas Tang**:* VP, Dev Ops nicholas.tang at livestream.com | t: +1 (646) 495 9707 x164 | m: +1 (347) 410 6066 | 111 8th Avenue, Floor 15, New York, NY 10011 [image: www.livestream.com] On Fri, Jan 7, 2011 at 5:18 AM, Kristian Lyngstol < kristian at varnish-software.com> wrote: > On Wed, Dec 29, 2010 at 05:59:15PM -0500, Nicholas Tang wrote: > > Question: is it possible to load balance rtsp servers using Varnish? > They'd > > need to "stick" based on client ip. My thought was to try something like > > this: > > Well, RTSP is two-way and keeps state. 
HTTP only allows clients to send > requests and doesn't keep state.... > > I wouldn't rule it out - RTSP is specced to "support the same sort of > caching as HTTP" - but it's probably going to be a hack. > > Ask me in a few days - though - by a WILD coincidence, I'm hacking on rtsp > anyway. > > - Kristian > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From fhelmschrott at gmail.com Sat Jan 15 08:31:22 2011 From: fhelmschrott at gmail.com (Frank Helmschrott) Date: Sat, 15 Jan 2011 09:31:22 +0100 Subject: HTTP_X_FORWARDED_FOR handling Message-ID: Hi, I'm using varnish-2.1.4 SVN 5447M and wonder how HTTP_X_FORWARDED_FOR gets treated by varnish. In my VCL (which I partly copied from elsewhere) there are some lines that I found in many VCLs around the net: -- snip -- # Add a unique header containing the client address remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = req.http.rlnclientipaddr; --/snip-- I think this should do what I need: set an HTTP_X_FORWARDED_FOR header containing the real client IP address and add it to the requests that hit the backend. These lines are within sub vcl_recv. I need the client IP at the backend for some statistics (IP-based timeouts). Unfortunately I don't even get an empty HTTP_X_FORWARDED_FOR header. It basically doesn't exist. I tried commenting these lines out and also tried client.ip instead of req.http.rlnclientipaddr; as I found this somewhere else - I don't know which the correct syntax is. Is there anything else wrong? Or maybe some switch in my varnish version that I need to set to make HTTP_X_FORWARDED_FOR appear for my backend? Thanks for helping -- Frank
From lin.support at gmail.com Sat Jan 15 09:55:45 2011 From: lin.support at gmail.com (linuxsupport) Date: Sat, 15 Jan 2011 15:25:45 +0530 Subject: HTTP_X_FORWARDED_FOR handling In-Reply-To: References: Message-ID: In vcl_recv, put the following.
remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; Thanks On Sat, Jan 15, 2011 at 2:01 PM, Frank Helmschrott wrote: > I'm using varnish-2.1.4 SVN 5447M and wonder how HTTP_X_FORWARDED_FOR > gets treated by varnish. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From scaunter at topscms.com Sat Jan 15 17:47:12 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Sat, 15 Jan 2011 12:47:12 -0500 Subject: HTTP_X_FORWARDED_FOR handling In-Reply-To: References: Message-ID: <7F0AA702B8A85A4A967C4C8EBAD6902CC6A465@TMG-EVS02.torstar.net> -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Frank Helmschrott Sent: January-15-11 3:31 AM To: varnish-misc at varnish-cache.org Subject: HTTP_X_FORWARDED_FOR handling ----- Frank, You don't get X-F-F unless you are going through some kind of appliance load balancer. As mentioned earlier, rewrite it to be client.ip if you need it.
Stefan Caunter e: scaunter at topscms.com :: m: (416) 561-4871 www.thestar.com
From TFigueiro at au.westfield.com Sun Jan 16 21:32:54 2011 From: TFigueiro at au.westfield.com (Thiago Figueiro) Date: Mon, 17 Jan 2011 08:32:54 +1100 Subject: HTTP_X_FORWARDED_FOR handling In-Reply-To: References: Message-ID: <64E73E81AAC26A49AC9EA28CBE65365107D46555@AUSYDEVS01.au.ad.westfield.com> > in vcl_recv put following. > remove req.http.X-Forwarded-For; > set req.http.X-Forwarded-For = client.ip; Note that this doesn't handle existing XFF headers, so there's a chance multiple clients can be identified as coming from the same IP (which they are, technically). If all you're doing is timing out based on source IP and you're happy to pack a few clients in the same group, you should be good. -------------- next part -------------- An HTML attachment was scrubbed...
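Following up on the note above about preserving existing XFF entries: a vcl_recv sketch (untested, Varnish 2.x syntax) that appends the client address to any X-Forwarded-For header already present instead of overwriting it would look roughly like this:

```vcl
sub vcl_recv {
    if (req.http.X-Forwarded-For) {
        # Keep the chain set by upstream proxies and append this hop's client
        set req.http.X-Forwarded-For = req.http.X-Forwarded-For ", " client.ip;
    } else {
        set req.http.X-Forwarded-For = client.ip;
    }
}
```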
URL: From j.vanarragon at lukkien.com Mon Jan 17 11:44:15 2011 From: j.vanarragon at lukkien.com (Jaap van Arragon) Date: Mon, 17 Jan 2011 12:44:15 +0100 Subject: Multiple varnish daemons and logging Message-ID: Hello, We have two varnish daemons for two customers. We want to log the request for each daemon separately in apache format. Can we do this with separate varnishncsa daemons or are there other ways to log the requests? Thank you. Grt Jaap -------------- next part -------------- An HTML attachment was scrubbed... URL: From sime at sime.net.au Mon Jan 17 15:44:28 2011 From: sime at sime.net.au (Simon Males) Date: Tue, 18 Jan 2011 02:44:28 +1100 Subject: Multiple varnish daemons and logging In-Reply-To: References: Message-ID: On Mon, Jan 17, 2011 at 10:44 PM, Jaap van Arragon wrote: > We have two varnish daemons for two customers. We want to log the request > for each daemon separately in apache format. > > Can we do this with separate varnishncsa daemons or are there other ways to > log the requests? I believe the -n flag is what you'll need to specify for each daemon. >From the man page: Specifies the name of the varnishd instance to get logs from. This is based on the assumption that your varnishd instances are differentiated by the -n flag. -- Simon Males From ismail at namtrac.org Mon Jan 17 19:47:20 2011 From: ismail at namtrac.org (=?UTF-8?B?xLBzbWFpbCBEw7ZubWV6?=) Date: Mon, 17 Jan 2011 21:47:20 +0200 Subject: Failing tests on OSX Message-ID: Hi; Tested using OSX 10.6.6 and Varnish 2.1.4 release. 
Two tests are failing; tests/a00004.vtc Error log is; #### top macro def tmpdir=/tmp/vtc.33934.6b8b4567 #### top macro def bad_ip=10.255.255.255 # top TEST tests/a00004.vtc starting # top TEST dual shared server HTTP transactions ## s1 Starting server #### s1 macro def s1_addr=127.0.0.1 #### s1 macro def s1_port=54466 #### s1 macro def s1_sock=127.0.0.1 54466 # s1 Listen on 127.0.0.1 54466 ## c1 Starting client ## s1 Started on 127.0.0.1 54466 ### s1 Iteration 0 ## c2 Starting client ## c1 Waiting for client ### c1 Connect to 127.0.0.1 54466 ### c2 Connect to 127.0.0.1 54466 ### c1 Connected to 127.0.0.1 54466 fd is 4 ### c2 Connected to 127.0.0.1 54466 fd is 5 ### s1 rxreq #### c1 txreq| PUT /foo HTTP/1.0\r\n #### c1 txreq| \r\n #### c2 txreq| PUT /foo HTTP/1.0\r\n #### c2 txreq| \r\n ---- c1 Write failed: Broken pipe Sometimes c2 fails with the same error. tests/c00005.vtc Error log is; ---- v1 FAIL VCL does not compile #### top macro def tmpdir=/tmp/vtc.14805.6b8b4567 #### top macro def bad_ip=10.255.255.255 # top TEST tests/c00005.vtc starting # top TEST Test simple ACL ## s1 Starting server #### s1 macro def s1_addr=127.0.0.1 #### s1 macro def s1_port=53086 #### s1 macro def s1_sock=127.0.0.1 53086 # s1 Listen on 127.0.0.1 53086 ## s1 Started on 127.0.0.1 53086 ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/vtc.14805.6b8b4567/v1 -p auto_restart=off -p syslog_cli_traffic=off -a '127.0.0.1:0' -S /tmp/vtc.14805.6b8b4567/v1/_S -M '127.0.0.1 53087' -P /tmp/vtc.14805.6b8b4567/v1/varnishd.pid -sfile,/tmp/vtc.14805.6b8b4567/v1,10M -p vcl_trace=on ### v1 debug| storage_file: filename: /tmp/vtc.14805.6b8b4567/v1/varnish.8L0ttO size 10 MB.\n ### v1 debug| Creating new SHMFILE\n ### v1 debug| Varnish on Darwin,10.6.0,i386,-sfile,-hcritbit\n ### v1 debug| 200 230 \n ### v1 debug| -----------------------------\n ### v1 debug| Varnish HTTP accelerator CLI.\n ### v1 debug| -----------------------------\n ### v1 debug| Darwin,10.6.0,i386,-sfile,-hcritbit\n 
### v1 debug| \n ### v1 debug| Type 'help' for command list.\n ### v1 debug| Type 'quit' to close CLI session.\n ### v1 debug| Type 'start' to launch worker process.\n ### v1 debug| \n ### v1 CLI connection fd = 5 ### v1 CLI RX 107 #### v1 CLI RX| cgvegwfwlmpryzghgzpdxhtknbgiijwl\n #### v1 CLI RX| \n #### v1 CLI RX| Authentication required.\n #### v1 CLI TX| auth 758b3e7662c9f2c102219616d4b7a4074a15049b945efe0a97fd5de4de7f2836\n ### v1 CLI RX 200 #### v1 CLI RX| -----------------------------\n #### v1 CLI RX| Varnish HTTP accelerator CLI.\n #### v1 CLI RX| -----------------------------\n #### v1 CLI RX| Darwin,10.6.0,i386,-sfile,-hcritbit\n #### v1 CLI RX| \n #### v1 CLI RX| Type 'help' for command list.\n #### v1 CLI RX| Type 'quit' to close CLI session.\n #### v1 CLI RX| Type 'start' to launch worker process.\n #### v1 CLI TX| vcl.inline vcl1 "backend s1 { .host = \"127.0.0.1\"; .port = \"53086\"; }\n\n\tacl acl1 {\n\t\t\"localhost\";\n\t}\n\n\tsub vcl_recv {\n\t\tif (client.ip ~ acl1) {\n\t\t\tset req.url = \"/\";\n\t\t}\n\t}\n" ### v1 CLI RX 106 #### v1 CLI RX| Message from VCC-compiler:\n #### v1 CLI RX| DNS lookup(localhost): nodename nor servname provided, or not known\n #### v1 CLI RX| (input Line 4 Pos 17)\n #### v1 CLI RX| "localhost";\n #### v1 CLI RX| ----------------###########-\n #### v1 CLI RX| Running VCC-compiler failed, exit 1\n #### v1 CLI RX| VCL compilation failed ---- v1 FAIL VCL does not compile Looks like a DNS failure but I made sure "localhost" resolves to 127.0.0.1 correctly. Regards, ismail From eentzel at localmatters.com Tue Jan 18 00:13:24 2011 From: eentzel at localmatters.com (Eric Entzel) Date: Mon, 17 Jan 2011 17:13:24 -0700 Subject: Failing tests on OSX In-Reply-To: References: Message-ID: Just the other day I saw a similar test failure while trying to build Varnish 2.1.4 on OSX 10.5.8. 
I ended up installing 2.0.4 from Macports instead, but here's my error log in case it helps diagnose the issue: # top TEST tests/a00000.vtc passed (0.001s) # top TEST tests/a00001.vtc passed (0.001s) # top TEST tests/a00002.vtc passed (0.001s) # top TEST tests/a00003.vtc passed (0.001s) ---- c1 HTTP rx failed (read: Connection reset by peer) # top TEST tests/a00004.vtc starting # top TEST dual shared server HTTP transactions ## s1 Starting server #### s1 macro def s1_addr=127.0.0.1 #### s1 macro def s1_port=53036 #### s1 macro def s1_sock=127.0.0.1 53036 # s1 Listen on 127.0.0.1 53036 ## c1 Starting client ## s1 Started on 127.0.0.1 53036 ### s1 Iteration 0 ## c2 Starting client ### c1 Connect to 127.0.0.1 53036 ## c1 Waiting for client ### c2 Connect to 127.0.0.1 53036 ### c1 Connected to 127.0.0.1 53036 fd is 4 ### c2 Connected to 127.0.0.1 53036 fd is 5 #### c1 txreq| PUT /foo HTTP/1.0\r\n #### c1 txreq| \r\n ### c1 rxresp ### s1 rxreq #### c2 txreq| PUT /foo HTTP/1.0\r\n #### c2 txreq| \r\n ### c2 rxresp ---- c1 HTTP rx failed (read: Connection reset by peer) # top RESETTING after tests/a00004.vtc ## s1 Waiting for server #### s1 macro undef s1_addr #### s1 macro undef s1_port #### s1 macro undef s1_sock ## c2 Waiting for client # top TEST tests/a00004.vtc FAILED - Eric On Mon, Jan 17, 2011 at 12:47 PM, İsmail Dönmez wrote: > Tested using OSX 10.6.6 and Varnish 2.1.4 release. Two tests are failing; > Regards, > ismail _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
From slackmoehrle.lists at gmail.com Tue Jan 18 22:25:55 2011 From: slackmoehrle.lists at gmail.com (Jason S-M) Date: Tue, 18 Jan 2011 14:25:55 -0800 Subject: a few intro to Varnish questions In-Reply-To: References: Message-ID: <61E29D77-78DA-4333-8799-7FB39B05C4EE@gmail.com> Hi, Thanks for the reply. Very helpful.
>> So I have to run varnish on 80 and my site on an alternate port (8080 as example)? Or do they both run on port 80? > > Yes, you need to run varnish on port 80, so that all requests hit Varnish first, Varnish will then fetch contents from your Apache server running on any other port ie. 8080, cache it and send response to requesting client. I put Varnish on 80 using: varnishd -f /etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000 -a 75.149.56.27:80 and I changed httpd.conf to: Listen 127.0.0.1:8080 in /etc/varnish/default.vcl I have: backend default { .host = "127.0.0.1"; .port = "8080"; } Upon doing this and hitting my url: http://6colors.net it seems to be in an infinite loop, Chrome just seems to loop and loop and loop. >> 3. I must make additions to vcl_recv I assume to cache what I want? > > In vcl_recv, you can select what you want to serve from cache, you might want to serve php pages directly and cache only static contents, depending on your requirements, in vcl_fetch you need to set how long you want content to be cached, setting TTL, you can also set TTL based on content types, please read the doc first: http://www.varnish-cache.org/docs/2.1/ I am confused. Can we use an example? Say I wanted to cache these: /mediaroom/video/keynotes/WWDC_2010/Apple_Special_Event.mp4 /mediaroom/video/keynotes/WWDC_2010/Apple_Special_Event.ogv /mediaroom/video/keynotes/WWDC_2009/Apple_Special_Event.mp4 /mediaroom/video/keynotes/WWDC_2009/Apple_Special_Event.ogv /mediaroom/video/special_events/2010/October/Apple_Special_Event.mp4 /mediaroom/video/special_events/2010/October/Apple_Special_Event.ogv Thanks for explaining, I appreciate it. Some of these docs seem confusing! -Jason
From t at tylr.org Tue Jan 18 23:58:12 2011 From: t at tylr.org (Tyler Love) Date: Tue, 18 Jan 2011 18:58:12 -0500 Subject: Best approach for expiring objects Message-ID: I know this is a commonly misunderstood aspect of Varnish, but I think I have the hang of it.
I am using varnish on a very high traffic website and have yet to find a satisfactory approach to expiring objects from the cache. My goal is to increase ttls (1 day, maybe even a week), to maximize hit rates, and expire objects when content is changed. This has proven to be more difficult than I anticipated. The first approach was to use the "purge" function that varnish gives you. The caveat that defeated this approach was that purge is basically a memory leak, which also slows down requests. Please correct me if I have a misunderstanding of purge, but when you call purge you add either a url or a regex to a list stored within varnish, and then it checks all incoming requests against this url and makes sure to not serve them from the cache. This makes sense with an understanding of how objects are stored in varnish, but the name "purge" is less than ideal and even misleading (I read this is being changed in trunk?). Second, banned items are not served in grace mode, which is behavior I do not want. The alternative is to set the ttl on an object to 0 (zero). The first caveat was being fully aware of every single resource/url that needs to be called for a given purge/expiration. One big problem area is caching on paginated urls. Second was the sheer volume of purges I need to send. At the scale I am at, I have to purge approximately 400-800 urls per second (I have multiple instances of varnish running, by the way). Lastly, purging the objects with different compression (Accept-Encoding: gzip, deflate, etc) in the cache. I now have to multiply the amount of urls for every possible encoding I may or may not use. So there you have it, removing objects from varnish is tedious, and full of pitfalls. I understand the architecture of varnish just well enough to know why it works this way, but I still wish it was better. If I am approaching this problem the wrong way, I am more than happy to hear some thoughts.
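For reference, the list-based "purge" behaviour described above is what the 2.x VCL purge() call implements. A minimal HTTP PURGE handler sketch (untested; the purgers ACL is hypothetical, syntax is 2.1-era, and purge() was renamed ban() in 3.0):

```vcl
acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purgers) {
            error 405 "Not allowed.";
        }
        # Adds an entry to the ban list; matching objects are
        # invalidated lazily, on their next cache lookup.
        purge("req.url == " req.url);
        error 200 "Ban added.";
    }
}
```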
Tyler -------------- next part -------------- An HTML attachment was scrubbed... URL:
From g.georgovassilis at gmail.com Wed Jan 19 11:11:56 2011 From: g.georgovassilis at gmail.com (George Georgovassilis) Date: Wed, 19 Jan 2011 12:11:56 +0100 Subject: Best approach for expiring objects In-Reply-To: References: Message-ID: <4D36C6FC.6080205@gmail.com> Hello Tyler, > So there you have it, removing objects from varnish is tedious, and full of > pitfalls. I understand the architecture of varnish just well enough to know > why it works this way, but I still wish it was better. > > If I am approaching this problem the wrong way, I am more than happy to hear > some thoughts. I had a similar issue a while back and solved it with the ETag header. I must admit though that in my case it was rather easy to compute, either based on the last modified timestamp of a static file that was changed or a timestamp in the database - you always can compute an ETag from a timestamp, you don't necessarily need to md5 the contents. If you can't intervene in the backend architecture or it turns out to be too slow, then this approach can't help you of course. Regards, G.
From perbu at varnish-software.com Thu Jan 20 12:51:28 2011 From: perbu at varnish-software.com (Per Buer) Date: Thu, 20 Jan 2011 13:51:28 +0100 Subject: Source repository in flux Message-ID: Migrating from Subversion to Git. Things should be available read only. -- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish?
http://www.varnish-software.com/whitepapers
From scaunter at topscms.com Thu Jan 20 13:01:36 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Thu, 20 Jan 2011 08:01:36 -0500 Subject: Best approach for expiring objects In-Reply-To: References: Message-ID: <25618738-3FF2-4060-AD3F-A5134E387D2B@topscms.com> On 2011-01-19, at 2:50 AM, "Tyler Love" wrote: > I know this is a commonly misunderstood aspect of Varnish, but I think I have the hang of it. > > I am using varnish on a very high traffic website and have yet to find a satisfactory approach to expiring objects from the cache. You need a reasonable default that balances performance and freshness. > > My goal is to increase ttls (1 day, maybe even a week), to maximize hit rates, and expire objects when content is changed. This has proven to be more difficult than I anticipated. > > Indeed. Admin wants long cache, business wants instant everything. > > The first approach was to use the "purge" function that varnish gives you. The caveat that defeated this approach was that purge is basically a memory leak, that also slows down requests. > > Please correct me if I have a misunderstanding of purge but, when you call purge you add either a url or a regex to a list stored within varnish, and then it checks all incoming requests against this url and makes sure to not serve them from the cache. True, it is a ban list. We do not get to manage the cache directly, we simply steer the ship. > > This makes sense with an understanding of how objects are stored in varnish, but the name "purge" is less than ideal and even misleading (I read this is being changed in trunk?). > English words convey extra meaning, which unfortunately does not resonate with machines. > Second, banned items are not served in grace mode, which is behavior I do not want. > > So purge if backend is healthy? Should be possible in vcl. > > The alternative is to set the ttl on an object to 0 (zero).
The first caveat was being fully aware of every single resource/url that needs to be called for a given purge/expiration. One big problem area is caching on paginated urls. Sounds like your app needs to set a header which you can use in recv. > > Second was the sheer volume of purges I need to send. At the scale I am at, I have to purge approximately 400-800 urls per second (I have multiple instances of varnish running by the way). > Your app needs to communicate with your vcl better. > Lastly, purging the objects with different compression (Accept-Encoding: gzip, deflate, etc) in the cache. I now have to multiply the amount of urls for every possible encoding I may or may not use. > Normalize these where possible or again it will get complex. > > > So there you have it, removing objects from varnish is tedious, and full of pitfalls. I understand the architecture of varnish just well enough to know why it works this way, but I still wish it was better. Again, if you are trying to manage the objects you aren't getting what you should from varnish. Your app and your vcl should get you a nice hit rate. > > If I am approaching this problem the wrong way, I am more than happy to hear some thoughts. > > Tyler > ______________________________ Stefan Caunter Operations TorstarDigital 416.561.4871 From philip.prince at oxil.co.uk Thu Jan 20 13:26:37 2011 From: philip.prince at oxil.co.uk (Philip Prince) Date: Thu, 20 Jan 2011 13:26:37 +0000 Subject: Set my own value in C for use in VCL Message-ID: Dear List, I am using Varnish 2.1.3 on Ubuntu 10.10. I would like to use a C routine to set a value which I could pass on to VCL. I have made this simple start derived from examples I have seen on the web. I direct the request to a PHP file that streams out simulated data. If I comment out the VRT_SetHdr I get the expected data, otherwise I get an empty file. C{ #include }C # This is a basic VCL configuration file for varnish. 
See the vcl(7) # man page for details on VCL syntax and semantics. # # Default backend definition. Set this to point to your content # server. # backend default { .host = "89.151.80.99"; .port = "80"; } sub vcl_miss { C{ void setmyownvalue () { VRT_SetHdr(sp, HDR_REQ, "\013X-My-Own-Value:", "1", vrt_magic_string_end); } setmyownvalue(); }C if (req.http.X-My-Own-Value=="1"){ error 418 "Short and Stout"; } return (fetch); } It has to be easy; my apologies for being so thick. I have looked for VRT documentation but I must be looking in all the wrong places... Thank you, Philip Prince Oxford Information Labs Limited The Magdalen Centre Oxford OX4 4GA t: 01865 784294 d: 01865 582040 m: 07595 894469 -------------- next part -------------- An HTML attachment was scrubbed...
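For comparison, the header set above via VRT_SetHdr can be set in plain VCL, with no inline C at all — a sketch of an equivalent vcl_miss (untested):

```vcl
sub vcl_miss {
    # Plain-VCL equivalent of the VRT_SetHdr call
    set req.http.X-My-Own-Value = "1";
    if (req.http.X-My-Own-Value == "1") {
        error 418 "Short and Stout";
    }
    return (fetch);
}
```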
URL: From kristian at varnish-software.com Thu Jan 20 13:47:56 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Thu, 20 Jan 2011 14:47:56 +0100 Subject: Set my own value in C for use in VCL In-Reply-To: References: Message-ID: <20110120134756.GB2723@freud> On Thu, Jan 20, 2011 at 01:26:37PM +0000, Philip Prince wrote: > I would like to use a C routine to set a value which I could pass on to > VCL. I have made this simple start derived from examples I have seen on > the web. I direct the request to a PHP file that streams out simulated > data. If I comment out the VRT_SetHdr I get the expected data, otherwise > I get an empty file. > > C{ > #include > }C (...) > sub vcl_miss { > > C{ > void setmyownvalue () { > VRT_SetHdr(sp, HDR_REQ, "\013X-My-Own-Value:", "1", vrt_magic_string_end); > } > setmyownvalue(); > }C You shouldn't define a function inside vcl_miss. And that function has no knowledge of (sp). You have to move the function definition out of vcl_miss if you intend to use a function, then prototype it properly and pass sp along. As for the actual VRT_SetHdr, I didn't really verify it, but the easiest test is to try exactly what you want in normal VCL and run "varnishd -C -f foo.vcl" which will give the resulting C code. > It has to be easy; my apologies for being so thick. I have looked for VRT > documentation but I must be looking in all the wrong places... There's a reason for that: We don't encourage in-line C unless you: A. Know C B. Understand Varnish C. Are willing to look through the source code. This is likely to change somewhat with Varnish 3.0 and vmods. As for A, the majority of issues I've seen people have with in-line C is a lack of basic C experience. Not prototyping would be one such thing (which any compiler will warn you about). Almost every time I get asked to look at in-line C, there's also a NULL-check missing which would result in a segfault. And for B... 
Well, if you don't properly understand Varnish, it's unfair to assume you will be able to write C code for it in a manner that won't increase deforestation, segfault varnish or harm little children. And it's likely that you can do what you want without in-line C. The last one, C, might require some more explanation. In-line C was designed as an emergency escape hatch that we got for free. Because VCL is translated to C, in-line C isn't so much a "feature" as a short-cut. The interfaces of VRT (Varnish Run Time, which VCL and in-line C use) are not guaranteed to be stable, even between stable versions of Varnish. It's simply not a design goal. If we create extensive documentation for it, we create an expectation that any in-line C you write will work on the next release too. That makes development harder. Vmods are designed to solve that problem: We make it easier to write _good_ in-line C code for Varnish that can be re-used and maintained. It will also make it easier to see what interfaces we need to keep and which ones we can drop. Hope this both helped you with the original problem and explained why in-line C documentation isn't available :) - Kristian -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From dev at soeren-soerries.de Thu Jan 20 13:51:02 2011 From: dev at soeren-soerries.de (Soesoe) Date: Thu, 20 Jan 2011 14:51:02 +0100 Subject: Sticky Load Balancing with Varnish - sick backends Message-ID: Hello everybody, we tried to enable "Sticky Load Balancing" with varnish 2.0.6 like this thread: http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-April/004111.html But we changed the custom cookie to the JSESSIONID cookie from Tomcat and select the right backend via the jvmRoute of the session: works fine - if the selected backend doesn't go sick.
If the selected backend goes sick, varnish still tries to choose the sick backend, or sends an error 503. We tried to catch this error and "restart" by setting a round-robin director, but the redirect still goes to our error page, and not to one of the other healthy backends. Do you have any hints for me? Thanks Soeren this is our config (in parts) ----------------------------------------------------------------------------------------------------------------------------- sub recv_loadBalancingOnStickyCookie { if (req.http.Cookie ~ "JSESSIONID=") { set req.http.StickyVarnish = regsub( req.http.Cookie, "^.*?JSESSIONID=([^;]*);*.*$","\1" ); call chooseBackend; unset req.http.StickyVarnish; } else { call chooseBackend; } } sub chooseBackend { if (req.http.StickyVarnish ~ "back01a$") { set req.backend = back01a; } else if (req.http.StickyVarnish ~ "back01b$") { set req.backend = back01b; } else if (req.http.StickyVarnish ~ "back02a$") { set req.backend = back02a; } else if (req.http.StickyVarnish ~ "back02b$") { set req.backend = back02b; } else { ##default set req.backend = wwwround; } } sub vcl_error{ if (obj.status == 404) { set obj.http.Location = " http://server.de/404/index.html?WT.mc_id=varnish_error404"; set obj.status = 302; } if (obj.status == 503 && req.restarts < 4) { set req.backend = wwwround; restart; } else if (obj.status == 503 && req.restarts >= 4) { set obj.http.Location = " http://server.de/500/index.html?WT.mc_id=varnish_error503"; set obj.status = 302; } } -------------- next part -------------- An HTML attachment was scrubbed...
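One way to avoid bouncing sick-backend requests through vcl_error as in the config above is to break stickiness at selection time. A sketch, assuming each backend has a `.probe` defined and a Varnish version that exposes `req.backend.healthy` (2.1 does; the 2.0.6 used in this thread may not):

```vcl
sub chooseBackend {
    if (req.http.StickyVarnish ~ "back01a$") {
        set req.backend = back01a;
    } else if (req.http.StickyVarnish ~ "back01b$") {
        set req.backend = back01b;
    } else if (req.http.StickyVarnish ~ "back02a$") {
        set req.backend = back02a;
    } else if (req.http.StickyVarnish ~ "back02b$") {
        set req.backend = back02b;
    } else {
        ## default
        set req.backend = wwwround;
    }
    # If the sticky choice is sick, fall back to the director
    # immediately instead of waiting for the 503/restart path.
    if (!req.backend.healthy) {
        set req.backend = wwwround;
    }
}
```

The 503 handling in vcl_error can then stay as a last resort, since unhealthy backends are already routed around before the first fetch attempt.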
URL: From kristian at varnish-software.com Thu Jan 20 13:58:35 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Thu, 20 Jan 2011 14:58:35 +0100 Subject: Sticky Load Balancing with Varnish - sick backends In-Reply-To: References: Message-ID: <20110120135835.GC2723@freud> On Thu, Jan 20, 2011 at 02:51:02PM +0100, Soesoe wrote: > we tried to enable "Sticky Load Balancing" with varnish 2.0.6 like this > thread: > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-April/004111.html > > But changed a custom cookie to the JSESSION Cookie from Tomcat and select > the right backend via the jvmrout of the session: > works fine - if the selected backend doesn't go sick. Have you considered using 2.1.4 instead? That will give you the client/hash director and req.identity which is so much simpler to use and takes sick backends into account. Not the answer you were asking for, but perhaps the solution you were looking for... - Kristian -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From wido at widodh.nl Thu Jan 20 13:59:09 2011 From: wido at widodh.nl (Wido den Hollander) Date: Thu, 20 Jan 2011 14:59:09 +0100 Subject: Sticky Load Balancing with Varnish - sick backends In-Reply-To: References: Message-ID: <1295531949.2195.24.camel@wido-desktop> Hi. On Thu, 2011-01-20 at 14:51 +0100, Soesoe wrote: > Hello everybody, > > > we tried to enable "Sticky Load Balancing" with varnish 2.0.6 like > this > thread: http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-April/004111.html > Any particular reason why you are still using 2.0.6? > > But changed a custom cookie to the JSESSION Cookie from Tomcat and > select the right backend via the jvmrout of the session: > works fine - if the selected backend doesn't go sick. 
> > > if the selected backend goes sick, varnish still tries to choose the > sick backend > or sends an error 503 - Correct, since you specify that you want to use that backend. > > do you have any hints for me? > Thanks Soeren Yes, my advice is to upgrade to at least Varnish 2.1.3 and use the client director: http://www.varnish-cache.org/trac/wiki/LoadBalancing Wido From l.barszcz at gadu-gadu.pl Thu Jan 20 14:55:52 2011 From: l.barszcz at gadu-gadu.pl (Łukasz Barszcz / Gadu-Gadu) Date: Thu, 20 Jan 2011 15:55:52 +0100 Subject: Set my own value in C for use in VCL In-Reply-To: References: Message-ID: <4D384CF8.9080106@gadu-gadu.pl> On 20.01.2011 14:26, Philip Prince wrote: > VRT_SetHdr(sp, HDR_REQ, "\013X-My-Own-Value:", "1", > vrt_magic_string_end); You probably get a panic here, since the length of "X-My-Own-Value:" isn't 013oct ( = 11dec). More detailed info on panic reasons should be in your syslog. The argument for VRT_SetHdr should be "\020X-My-Own-Value:". -- Łukasz Barszcz web architect Pion Aplikacji Internetowych GG Network S.A http://www.gadu-gadu.pl ul. Kamionkowska 45 03-812 Warszawa tel.: +48 22 4277900 fax.: +48 22 5146498 gg:16210 Spółka zarejestrowana w Sądzie Rejonowym dla m. st. Warszawy, XIII Wydział Gospodarczy KRS pod numerem 0000264575, NIP 867-19-48-977. Kapitał zakładowy: 1 758 461,10 zł - wpłacony w całości. -------------- next part -------------- An HTML attachment was scrubbed... URL: From philip.prince at oxil.co.uk Thu Jan 20 14:58:25 2011 From: philip.prince at oxil.co.uk (Philip Prince) Date: Thu, 20 Jan 2011 14:58:25 +0000 Subject: Set my own value in C for use in VCL In-Reply-To: <20110120134756.GB2723@freud> References: <20110120134756.GB2723@freud> Message-ID: <0F51D0B5-1676-4565-B59A-C4D055E00565@oxil.co.uk> Dear Kristian, It is very kind of you to respond so quickly! I apologise for the C error; my colleague looking over my shoulder and reading this email with me had a laugh about it.
However, even when I remove the line above and the two lines below (which could bring sp back into scope) the behaviour is identical. Unpacking the C code sounds like great fun but in the meantime, shouldn't the call to VRT_SetHdr with the simple arguments (taken from examples by Poul and others) be benign (I have tried it with and without vrt-magic-string)? C{ VRT_SetHdr(sp, HDR_REQ, "\013X-My-Own-Value:", "1"); }C That segfault-ing Varnish may be one of my guilty pleasures notwithstanding, I have now searched for VMODs and have found a reference to them in the 'Documentation for trunk.' Curiously, they don't appear to be documented in the 'Documentation for the latest stable release' which was the path I was stumbling down. Unfortunately, our preferred development environment (PHP on Macs for delivery on Ubuntu) does not lend itself to rapid prototyping of C modules. The inline C seemed amenable to me faffing about for a bit until I get something working. I have a client who is very concerned about the very first access to a not-yet-cached bit of information by many, many people all at the same time. Their preference is to receive a retry response rather than queue and wait for the backend to respond (it is an expensive request) if someone else has already accessed the URL but the cache has not yet been populated with its content. Is there a built-in that would be ideal for this scenario? Many thanks, Philip On 20 Jan 2011, at 13:47, Kristian Lyngstol wrote: > On Thu, Jan 20, 2011 at 01:26:37PM +0000, Philip Prince wrote: >> I would like to use a C routine to set a value which I could pass on to >> VCL. I have made this simple start derived from examples I have seen on >> the web. I direct the request to a PHP file that streams out simulated >> data. If I comment out the VRT_SetHdr I get the expected data, otherwise >> I get an empty file. >> >> C{ >> #include >> }C > > (...)
> >> sub vcl_miss { >> >> C{ >> void setmyownvalue () { >> VRT_SetHdr(sp, HDR_REQ, "\013X-My-Own-Value:", "1", vrt_magic_string_end); >> } >> setmyownvalue(); >> }C > > You shouldn't define a function inside vcl_miss. And that function has no > knowledge of (sp). > > You have to move the function definition out of vcl_miss if you intend to > use a function, then prototype it properly and pass sp along. > > As for the actual VRT_SetHdr, I didn't really verify it, but the easiest > test is to try exactly what you want in normal VCL and run "varnishd -C -f > foo.vcl" which will give the resulting C code. > >> It has to be easy; my apologies for being so thick. I have looked for VRT >> documentation but I must be looking in all the wrong places... > > There's a reason for that: We don't encourage in-line C unless you: > > A. Know C > B. Understand Varnish > C. Are willing to look through the source code. > > This is likely to change somewhat with Varnish 3.0 and vmods. > > As for A, the majority of issues I've seen people have with in-line C is a > lack of basic C experience. Not prototyping would be one such thing (which > any compiler will warn you about). Almost every time I get asked to look at > in-line C, there's also a NULL-check missing which would result in a > segfault. > > And for B... Well, if you don't properly understand Varnish, it's unfair to > assume you will be able to write C code for it in a manner that wont > increase deforestation, segfault varnish or harm little children. And it's > likely that you can do what you want without in-line C. > > The last one, C, might require some more explanation. In-line C was > designed as an emergency escape hatch that we got for free. Because VCL is > translated to C, in-line C isn't so much a "feature" as a short-cut. The > interfaces of VRT (Varnish Run Time, which VCL and in-line C use) are not > guaranteed to be stable, even between stable version of Varnish. It's > simply not a design goal. 
If we create extensive documentation for it, we > create an expectation that any in-line C you write will work on the next > release too. That makes development harder. > > Vmods are designed to solve that problem: We make it easier to write _good_ > in-line C code for Varnish that can be re-used and maintained. It will also > make it easier to see what interfaces we need to keep and which ones we can > drop. > > Hope this both helped you with the original problem and explained why > in-line C documentation isn't available :) > > - Kristian Philip Prince SB MA MBCS CITP Network Architect, Director Oxford Information Labs Limited The Magdalen Centre Oxford OX4 4GA t: 01865 784294 d: 01865 582040 m: 07595 894469 -------------- next part -------------- An HTML attachment was scrubbed...
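A pure-VCL sketch of the "retry response rather than queue" idea raised above, assuming Varnish 2.1.4's `req.hash_ignore_busy` flag. The `X-No-Wait` marker header is illustrative, and at least one request without it must still be allowed through so the cache actually gets populated:

```vcl
sub vcl_recv {
    # Clients sending X-No-Wait skip the waiting list for
    # objects that another request is currently fetching.
    if (req.http.X-No-Wait) {
        set req.hash_ignore_busy = true;
    }
}

sub vcl_miss {
    # Instead of starting (or queueing behind) an expensive
    # backend fetch, tell impatient clients to retry shortly.
    if (req.http.X-No-Wait) {
        error 503 "Busy, retry shortly";
    }
    return (fetch);
}
```

A vcl_error stanza could additionally attach a Retry-After header to that 503 so well-behaved clients know when to come back.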
URL: From kristian at varnish-software.com Thu Jan 20 15:14:47 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Thu, 20 Jan 2011 16:14:47 +0100 Subject: Set my own value in C for use in VCL In-Reply-To: <0F51D0B5-1676-4565-B59A-C4D055E00565@oxil.co.uk> References: <20110120134756.GB2723@freud> <0F51D0B5-1676-4565-B59A-C4D055E00565@oxil.co.uk> Message-ID: <20110120151447.GE2723@freud> On Thu, Jan 20, 2011 at 02:58:25PM +0000, Philip Prince wrote: > I apologise for the C error; my colleague looking over my shoulder and > reading this email with me had a laugh about it. However, even when I > remove the line above and the two lines below (which could bring sp back > into scope) the behaviour is identical. I believe there was another post explaining that the \013 was incorrect in this situation, so I'll leave it at that :) > That segfault-ing Varnish may be one of my guilty pleasures > notwithstanding, I have now searched for VMODs and have found a reference > to them in the 'Documentation for trunk.' Curiously, they don't appear > to be documented in the 'Documentation for the latest stable release' > which was the path I was stumbling down. That is correct. VMods are not part of any release yet, and we are just now trying them out ourselves and thus expect them to change a bit before release of 3.0.
So the reason there's no documentation of in-line C is intentional, but the lack of documentation for VMods is just because we haven't gotten to it yet :) > Unfortunately, our preferred development environment (PHP on Macs for > delivery on Ubuntu) does not lend itself to rapid prototyping of C modules. > The inline C seemed amenable to me faffing about for a bit until I get > something working. Quite understandable, this is why we've realized we need to make it easier. > I have a client who is very concerned about the very first access to a > not-yet-cached bit of information by many, many people all at the same > time. Their preference is to receive a retry response rather than queue > and wait for the backend to respond (it is an expensive request) if > someone else has already accessed the URL but the cache has not yet been > populated with its content. Is there a built-in that would be ideal for > this scenario? Hmm, not quite. But 2.1.4 has req.ignore_busy, which will by-pass the waiting list, but then you have to ensure (in vcl_miss) that you don't request the same object multiple times... I suppose you could set a marker in vcl_miss when an object is being requested, but all of this quickly gets dirty. This is somewhat outside the scope of in-line C, as it affects multiple concurrent requests... It's doable, but much much easier by just hacking HSH_Lookup in the right place... - Kristian From dev at soeren-soerries.de Thu Jan 20 15:25:43 2011 From: dev at soeren-soerries.de (Soesoe) Date: Thu, 20 Jan 2011 16:25:43 +0100 Subject: Sticky Load Balancing with Varnish - sick backends In-Reply-To: <192926231.405557.1295536939748.JavaMail.open-xchange@oxltgw14.schlund.de> References: <192926231.405557.1295536939748.JavaMail.open-xchange@oxltgw14.schlund.de> Message-ID: Hi, oh that was quick - thanks for the fast answers. Right now we can't easily upgrade to a newer varnish version.
The stickiness should only apply if a session cookie exists (when you are logged in); if you don't have this session cookie, there should be no stickiness - for better performance. THX Soeren > ---------- Ursprüngliche Nachricht ---------- > Von: Wido den Hollander > An: dev at soeren-soerries.de > Cc: varnish-misc at varnish-cache.org > Datum: 20. Januar 2011 um 14:59 > Betreff: Re: Sticky Load Balancing with Varnish - sick backends > > Hi. > > On Thu, 2011-01-20 at 14:51 +0100, Soesoe wrote: > > Hello everybody, > > > > > > we tried to enable "Sticky Load Balancing" with varnish 2.0.6 like > > this > > thread: > http://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-April/004111.html > > > > Any particular reason why you are still using 2.0.6? > > > > > But we changed the custom cookie to the JSESSIONID cookie from Tomcat and > > select the right backend via the jvmRoute of the session: > > works fine - if the selected backend doesn't go sick. > > > > > > if the selected backend goes sick, varnish still tries to choose the > > sick backend > > or sends an error 503 - > > Correct, since you specify that you want to use that backend. > > > > do you have any hints for me? > > Thanks Soeren > > Yes, my advice is to upgrade to at least Varnish 2.1.3 and use the > client director: http://www.varnish-cache.org/trac/wiki/LoadBalancing > > Wido > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From philip.prince at oxil.co.uk Thu Jan 20 15:46:01 2011 From: philip.prince at oxil.co.uk (Philip Prince) Date: Thu, 20 Jan 2011 15:46:01 +0000 Subject: Set my own value in C for use in VCL In-Reply-To: <4D384CF8.9080106@gadu-gadu.pl> References: <4D384CF8.9080106@gadu-gadu.pl> Message-ID: Dear Lukasz, That was it! It did not occur to me to look at the syslog (I kept running varnishlog...).
Many thanks, Philip On 20 Jan 2011, at 14:55, Łukasz Barszcz / Gadu-Gadu wrote: > On 20.01.2011 14:26, Philip Prince wrote: >> >> VRT_SetHdr(sp, HDR_REQ, "\013X-My-Own-Value:", "1", vrt_magic_string_end); > You probably get a panic here, since the length of "X-My-Own-Value:" isn't 013oct ( = 11dec). More detailed info on panic reasons should be in your syslog. > > The argument for VRT_SetHdr should be "\020X-My-Own-Value:". > > -- > Łukasz Barszcz > web architect > Pion Aplikacji Internetowych > GG Network S.A http://www.gadu-gadu.pl > ul. Kamionkowska 45 03-812 Warszawa > tel.: +48 22 4277900 fax.: +48 22 5146498 gg:16210 > > Spółka zarejestrowana w Sądzie Rejonowym dla m. st. Warszawy, XIII > Wydział Gospodarczy KRS pod numerem 0000264575, NIP 867-19-48-977. > Kapitał zakładowy: 1 758 461,10 zł - wpłacony w całości. > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc Philip Prince SB MA MBCS CITP Network Architect, Director Oxford Information Labs Limited The Magdalen Centre Oxford OX4 4GA t: 01865 784294 d: 01865 582040 m: 07595 894469 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at varnish-software.com Thu Jan 20 16:23:44 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Thu, 20 Jan 2011 17:23:44 +0100 Subject: Source repository in flux In-Reply-To: (Per Buer's message of "Thu, 20 Jan 2011 13:51:28 +0100") References: Message-ID: <87fwsnv90v.fsf@qurzaw.varnish-software.com> ]] Per Buer | Migrating from Subversion to Git. Things should be available read only. New read-only git URL: git clone git://git.varnish-cache.org/varnish-cache/ If you have a developer account and want to push to the repository, use git clone ssh://git.varnish-cache.org/git/varnish-cache I'll update the documentation on varnish-cache.org as well, it still refers to svn. Please note that the SVN tree is still available, but read-only. It'll stay that way for the foreseeable future. Best regards, -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From TFigueiro at au.westfield.com Fri Jan 21 03:28:03 2011 From: TFigueiro at au.westfield.com (Thiago Figueiro) Date: Fri, 21 Jan 2011 14:28:03 +1100 Subject: Best approach for expiring objects In-Reply-To: <25618738-3FF2-4060-AD3F-A5134E387D2B@topscms.com> References: <25618738-3FF2-4060-AD3F-A5134E387D2B@topscms.com> Message-ID: <64E73E81AAC26A49AC9EA28CBE65365107DCF3A6@AUSYDEVS01.au.ad.westfield.com> From: Caunter, Stefan > So purge if backend is healthy? Should be possible in vcl. sub vcl_recv { (...)
# Allow purging if backend is healthy if (req.backend.healthy) { if (req.http.Pragma ~ "no-cache" || req.http.Cache-Control ~ "no-cache") { purge("req.url == " req.url " && req.http.host == " req.http.host ); } } ______________________________________________________ CONFIDENTIALITY NOTICE This electronic mail message, including any and/or all attachments, is for the sole use of the intended recipient(s), and may contain confidential and/or privileged information, pertaining to business conducted under the direction and supervision of the sending organization. All electronic mail messages, which may have been established as expressed views and/or opinions (stated either within the electronic mail message or any of its attachments), are left to the sole responsibility of that of the sender, and are not necessarily attributed to the sending organization. Unauthorized interception, review, use, disclosure or distribution of any such information contained within this electronic mail message and/or its attachment(s), is (are) strictly prohibited. If you are not the intended recipient, please contact the sender by replying to this electronic mail message, along with the destruction all copies of the original electronic mail message (along with any attachments). ______________________________________________________ From harimetkari at gmail.com Fri Jan 21 11:04:40 2011 From: harimetkari at gmail.com (Hari Metkari) Date: Fri, 21 Jan 2011 16:34:40 +0530 Subject: Configuration varnish cache for multiple domain Message-ID: Hi, This is Hari Metkari from India. I am a community member of varnish cache. I am implementing varnish cache for multiple separate domains, i.e. sitetwo.com and site3.com. I have configured varnish cache on the varnish server IP address (192.168.126.30); my site one IP address is 192.168.126.20 and my site two IP address is 192.168.126.60. I added multiple backends in the VCL file. When I hit site one, it calls the backend of site two instead of backend one.
For a single domain it's working fine, but multiple domains do not work. Please see some of my varnish cache configuration files below. *1)/etc/sysconfig/varnish file * VARNISH_VCL_CONF=/etc/varnish/default.vcl # # # Default address and port to bind to # # Blank address means all IPv4 and IPv6 interfaces, otherwise specify # # a host name, an IPv4 dotted quad, or an IPv6 address in brackets. VARNISH_LISTEN_ADDRESS=192.168.126.30 #localhost default by hari VARNISH_LISTEN_PORT=80 #6081 #by default by hari # # # Telnet admin interface listen address and port VARNISH_ADMIN_LISTEN_ADDRESS=192.168.126.30 #127.0.0.1 #by default by hari VARNISH_ADMIN_LISTEN_PORT=6082 # # # The minimum number of worker threads to start VARNISH_MIN_THREADS=1 # # # The Maximum number of worker threads to start VARNISH_MAX_THREADS=1000 # # # Idle timeout for worker threads VARNISH_THREAD_TIMEOUT=120 # # # Cache file location VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin # # # Cache file size: in bytes, optionally using k / M / G / T suffix, # # or in percentage of available disk space using the % suffix. VARNISH_STORAGE_SIZE=1G # # # Backend storage specification VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}" # # # Default TTL used when the backend does not specify one VARNISH_TTL=120 # # # DAEMON_OPTS is used by the init script. If you add or remove options, make # # sure you update this section, too.
DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
             -f ${VARNISH_VCL_CONF} \
             -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
             -t ${VARNISH_TTL} \
             -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
             -u varnish -g varnish \
             -s ${VARNISH_STORAGE}"
#

*2) /etc/varnish/default.vcl file*

backend site2 {
    .host = "sitetwo.com";
    .port = "8081";
}
backend site3 {
    .host = "site3.com";
    .port = "8080";
}
sub vcl_recv {
    if (req.request != "GET" &&
        req.request != "HEAD" &&
        req.request != "PUT" &&
        req.request != "POST" &&
        req.request != "TRACE" &&
        req.request != "OPTIONS" &&
        req.request != "DELETE") {
        # /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }
    if (req.request != "GET" && req.request != "HEAD") {
        # /* We only deal with GET and HEAD by default */
        return (pass);
    }
    if (req.http.Authorization || req.http.Cookie) {
        # /* Not cacheable by default */
        return (pass);
    }
    if (req.http.host == "site3.com") {
        # You will need the following line only if your backend has multiple virtual host names
        set req.http.host = "site3.com";
        set req.http.X-Orig-Host = req.http.host;
        set req.backend = site3;
        return (lookup);
    }
    if (req.http.host == "sitetwo.com") {
        # You will need the following line only if your backend has multiple virtual host names
        set req.http.host = "sitetwo.com";
        set req.http.X-Orig-Host = req.http.host;
        set req.backend = site2;
        return (lookup);
    }
    return (lookup);
}
#
sub vcl_pipe {
    # # Note that only the first request to the backend will have
    # # X-Forwarded-For set. If you use X-Forwarded-For and want to
    # # have it set for all requests, make sure to have:
    # # set req.http.connection = "close";
    # # here. It is not set by default as it might break some broken web
    # # applications, like IIS with NTLM authentication.
    return (pipe);
}
#
sub vcl_pass {
    return (pass);
}
#
sub vcl_hash {
    set req.hash += req.url;
    if (req.http.host) {
        set req.hash += req.http.host;
    } else {
        set req.hash += server.ip;
    }
    return (hash);
}
#
sub vcl_hit {
    if (!obj.cacheable) {
        return (pass);
    }
    return (deliver);
}
#
sub vcl_miss {
    return (fetch);
}
sub vcl_fetch {
    if (!obj.cacheable) {
        return (pass);
    }
    if (obj.http.Set-Cookie) {
        return (pass);
    }
    ##set obj.prefetch = 30s;
    # The following VCL makes Varnish serve expired objects: every object is
    # kept up to two minutes past its expiration time, or until a fresh object
    # is fetched.
    set obj.grace = 2m;
    return (deliver);
}
#
sub vcl_deliver {
    return (deliver);
}

Thanks in advance.

Thanks,
Hari Metkari
+91-9881462183
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 328.png
Type: image/png
Size: 569 bytes
Desc: not available
URL: 

From rtshilston at gmail.com  Fri Jan 21 16:27:25 2011
From: rtshilston at gmail.com (Robert Shilston)
Date: Fri, 21 Jan 2011 16:27:25 +0000
Subject: ESI hits vs misses
Message-ID: <99B167D0-9DEB-4360-B694-56E8841464CD@gmail.com>

Hi,

We've been reviewing the varnishlog output (admittedly on 2.0.6), looking at ESI details. We can see the full transaction details for misses, but hits are elusive. Is there any way to see the details of ESI hits?

Thanks

Rob

From racemd at verizon.net  Fri Jan 21 17:52:28 2011
From: racemd at verizon.net (Roland Rebstock)
Date: Fri, 21 Jan 2011 12:52:28 -0500
Subject: How to setup MaxMind Mod_GeoIP in Varnish
Message-ID: <000f01cbb993$f8b28050$ea1780f0$@net>

All, I want to be able to set up Mod_GeoIP to restrict countries in Varnish. I tried on the web server, but of course the web server is fronted by my Varnish server. Does anyone have instructions on how to set up Varnish with Mod_GeoIP to only allow certain countries?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jdzstz at gmail.com  Sun Jan 23 17:30:13 2011
From: jdzstz at gmail.com (jdzstz - gmail dot com)
Date: Sun, 23 Jan 2011 18:30:13 +0100
Subject: Fwd: Wiki page for "Installing Varnish from source code"
In-Reply-To: 
References: 
Message-ID: 

In the Trac wiki there is a page about "Installing Varnish from source code" at http://varnish-cache.org/trac/wiki/Installation, with information about compiling varnish from source code. This page also has a lot of information for Mac OS X, OpenSolaris and Solaris 10.

This page is not reachable from any other wiki page; in the past, WikiStart had a link to it, which was deleted in version 146:

 - http://varnish-cache.org/trac/wiki/WikiStart?action=diff&version=146&old_version=145

So I think a link should be created in WikiStart or on some other wiki page. But I have also noticed that there is another documentation page about compiling varnish from source at:

 - http://www.varnish-cache.org/docs/trunk/installation/install.html

The problem with this page is that it does not help much with some platforms, like Mac OS X, OpenSolaris and Solaris 10, which are covered in http://varnish-cache.org/trac/wiki/Installation

So I think both pages should be merged. If that is not possible, only the platform-specific compilation instructions could be kept in the wiki, with a link to http://www.varnish-cache.org/docs/trunk/installation/install.html

From straightflush at gmail.com  Sun Jan 23 22:55:27 2011
From: straightflush at gmail.com (AD)
Date: Sun, 23 Jan 2011 17:55:27 -0500
Subject: Inconsistency in VRT_SetHdr/GetHdr field sizes
Message-ID: 

Hello,

I understand that when using VRT_GetHdr, the prefix to the header field is the size of the field (like "\005X-AB:"). However, I noticed when using the -C flag that this does not seem to be consistent, and it caused me some issues when trying to access these fields using inline C.
Here is an example vcl:

backend default {
    .host = "127.0.0.1";
    .port = "80";
}

sub vcl_recv {
    set req.http.X-Cache-Key = "test";
}

This should mean the field is \012X-Cache-Key:, however when running this vcl through -C it shows up as 14 (2 more bytes than the full size of the field). Any ideas why this would be?

# varnishd -C -f test.vcl | grep Cache
VRT_SetHdr(sp, HDR_REQ, "\014X-Cache-Key:", "test", vrt_magic_string_end);
"        set req.http.X-Cache-Key = \"test\"; \n"

Thanks

AD
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From phk at phk.freebsd.dk  Sun Jan 23 23:48:15 2011
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Sun, 23 Jan 2011 23:48:15 +0000
Subject: Inconsistency in VRT_SetHdr/GetHdr field sizes
In-Reply-To: Your message of "Sun, 23 Jan 2011 17:55:27 EST."
Message-ID: <72263.1295826495@critter.freebsd.dk>

In message , AD writes:

>This should mean the field is \012X-Cache-Key: however when running this vcl
>through -C it shows up as 14 (2 more bytes than the full size of the field).
> Any ideas why this would be ?

Because \### is octal notation, \014 means twelve, not fourteen: 1*8 + 4 = 12, which is exactly the length of "X-Cache-Key:" (eleven characters plus the colon).

>        VRT_SetHdr(sp, HDR_REQ, "\014X-Cache-Key:", "test",

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From harimetkari at gmail.com  Mon Jan 24 04:40:35 2011
From: harimetkari at gmail.com (Hari Metkari)
Date: Mon, 24 Jan 2011 10:10:35 +0530
Subject: Configuration varnish cache for multiple domain
Message-ID: 

Hi,

I am implementing varnish cache for multiple separate domains, i.e. sitetwo.com and site3.com.

I have configured varnish cache on the varnish server local IP address (192.168.126.30); my site one IP address is 192.168.126.20 and my site two IP address is 192.168.126.60.

I added multiple backends in the VCL file. When I hit site one, it calls the backend of site two instead of backend one.
For a single domain it works fine, but multiple domains do not work.

Please help me; below are some of my varnish cache configuration files.

1) /etc/sysconfig/varnish file

VARNISH_VCL_CONF=/etc/varnish/default.vcl
#
# # Default address and port to bind to
# # Blank address means all IPv4 and IPv6 interfaces, otherwise specify
# # a host name, an IPv4 dotted quad, or an IPv6 address in brackets.
VARNISH_LISTEN_ADDRESS=192.168.126.30 #localhost default by hari
VARNISH_LISTEN_PORT=80 #6081 #by default by hari
#
# # Telnet admin interface listen address and port
VARNISH_ADMIN_LISTEN_ADDRESS=192.168.126.30 #127.0.0.1 #by default by hari
VARNISH_ADMIN_LISTEN_PORT=6082
#
# # The minimum number of worker threads to start
VARNISH_MIN_THREADS=1
#
# # The maximum number of worker threads to start
VARNISH_MAX_THREADS=1000
#
# # Idle timeout for worker threads
VARNISH_THREAD_TIMEOUT=120
#
# # Cache file location
VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
#
# # Cache file size: in bytes, optionally using k / M / G / T suffix,
# # or in percentage of available disk space using the % suffix.
VARNISH_STORAGE_SIZE=1G
#
# # Backend storage specification
VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
#
# # Default TTL used when the backend does not specify one
VARNISH_TTL=120
#
# # DAEMON_OPTS is used by the init script. If you add or remove options, make
# # sure you update this section, too.
DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
             -f ${VARNISH_VCL_CONF} \
             -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
             -t ${VARNISH_TTL} \
             -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
             -u varnish -g varnish \
             -s ${VARNISH_STORAGE}"
#

2) /etc/varnish/default.vcl file

backend site2 {
    .host = "sitetwo.com";
    .port = "8081";
}
backend site3 {
    .host = "site3.com";
    .port = "8080";
}
sub vcl_recv {
    if (req.request != "GET" &&
        req.request != "HEAD" &&
        req.request != "PUT" &&
        req.request != "POST" &&
        req.request != "TRACE" &&
        req.request != "OPTIONS" &&
        req.request != "DELETE") {
        # /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }
    if (req.request != "GET" && req.request != "HEAD") {
        # /* We only deal with GET and HEAD by default */
        return (pass);
    }
    if (req.http.Authorization || req.http.Cookie) {
        # /* Not cacheable by default */
        return (pass);
    }
    if (req.http.host == "site3.com") {
        # You will need the following line only if your backend has multiple virtual host names
        set req.http.host = "site3.com";
        set req.http.X-Orig-Host = req.http.host;
        set req.backend = site3;
        return (lookup);
    }
    if (req.http.host == "sitetwo.com") {
        # You will need the following line only if your backend has multiple virtual host names
        set req.http.host = "sitetwo.com";
        set req.http.X-Orig-Host = req.http.host;
        set req.backend = site2;
        return (lookup);
    }
    return (lookup);
}
#
sub vcl_pipe {
    # # Note that only the first request to the backend will have
    # # X-Forwarded-For set. If you use X-Forwarded-For and want to
    # # have it set for all requests, make sure to have:
    # # set req.http.connection = "close";
    # # here. It is not set by default as it might break some broken web
    # # applications, like IIS with NTLM authentication.
    return (pipe);
}
#
sub vcl_pass {
    return (pass);
}
#
sub vcl_hash {
    set req.hash += req.url;
    if (req.http.host) {
        set req.hash += req.http.host;
    } else {
        set req.hash += server.ip;
    }
    return (hash);
}
#
sub vcl_hit {
    if (!obj.cacheable) {
        return (pass);
    }
    return (deliver);
}
#
sub vcl_miss {
    return (fetch);
}
sub vcl_fetch {
    if (!obj.cacheable) {
        return (pass);
    }
    if (obj.http.Set-Cookie) {
        return (pass);
    }
    ##set obj.prefetch = 30s;
    # The following VCL makes Varnish serve expired objects: every object is
    # kept up to two minutes past its expiration time, or until a fresh object
    # is fetched.
    set obj.grace = 2m;
    return (deliver);
}
#
sub vcl_deliver {
    return (deliver);
}

Thanks in advance.

Thanks,
Hari Metkari
+91-9881462183
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bjorn at ruberg.no  Mon Jan 24 06:21:12 2011
From: bjorn at ruberg.no (=?ISO-8859-1?Q?Bj=F8rn_Ruberg?=)
Date: Mon, 24 Jan 2011 07:21:12 +0100
Subject: Configuration varnish cache for multiple domain
In-Reply-To: 
References: 
Message-ID: <4D3D1A58.1090603@ruberg.no>

On 01/24/2011 05:40 AM, Hari Metkari wrote:
> Hi,
>
> I am implementing varnish cache for multiple seprate domain.i.e
> sitetwo.com and site3.com
>
> I am configured varnish cache on varnish server local ip address
> (192.168.126.30)
> and my one site ip address 192.168.126.20 and two site ip address
> 192.168.126.60
>
> I added multiple backend in VCL file.when I hit site one it calls to
> backend of site two instead of backend one.
> for single domain it's working fine but multiple domain not work.

What do you mean by "not work"? Please describe what happens. The best
evidence is an extract from varnishlog showing the transactions, both
successful ones and failed ones.

[...]

> 2) /etc/varnish/default.vcl file
>
> backend site2 {
>     .host = "sitetwo.com";
>     .port = "8081";
> }
> backend site3 {
>     .host = "site3.com";
>     .port = "8080";
> }

I suggest you use plain text when posting code, not HTML.
[...]

> if (req.http.host == "site3.com") {
>     # You will need the following line only if your backend has
>     # multiple virtual host names
>     set req.http.host = "site3.com";
>     set req.http.X-Orig-Host = req.http.host;
>     set req.backend = site3;
>     return (lookup);
> }
> if (req.http.host == "sitetwo.com") {
>     # You will need the following line only if your backend has
>     # multiple virtual host names
>     set req.http.host = "sitetwo.com";
>     set req.http.X-Orig-Host = req.http.host;
>     set req.backend = site2;
>     return (lookup);
> }

I'm unsure whether "return (lookup)" is a good idea at this stage. Apart
from that, the above should work as long as the clients use the exact
hostnames mentioned in your config. E.g. www.site3.com will fall through.

Without varnishlog evidence, there's not more we can do.

-- 
Bjørn

From npf-mlists at eurotux.com  Mon Jan 24 10:04:05 2011
From: npf-mlists at eurotux.com (Nuno Fernandes)
Date: Mon, 24 Jan 2011 10:04:05 +0000
Subject: Configuration varnish cache for multiple domain
In-Reply-To: 
References: 
Message-ID: <201101241004.05126.npf-mlists@eurotux.com>

Hum.. at the very start of vcl_recv put:

if (req.http.host == "site3.com") {
    set req.backend = site3;
}
if (req.http.host == "sitetwo.com") {
    set req.backend = site2;
}

You are setting req.backend very late in the configuration, so if, for
example, a user sends a cookie,

if (req.http.Authorization || req.http.Cookie) {
    # /* Not cacheable by default */
    return (pass);
}

you return immediately without setting the backend (so it's choosing the
first backend).
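[Editor's note: a minimal sketch of vcl_recv with the backend chosen first, combining this reordering with the host-matching caveat above. Host names and ports are the ones from the thread; the regexes are an untested illustration, not the poster's actual config.]

```vcl
backend site2 { .host = "192.168.126.60"; .port = "8081"; }
backend site3 { .host = "192.168.126.20"; .port = "8080"; }

sub vcl_recv {
    # Select the backend before any return (pass) or return (pipe)
    # can short-circuit, so passed requests also reach the right origin.
    # The regexes also match www.* variants, which an exact string
    # comparison would miss.
    if (req.http.host ~ "(^|\.)site3\.com$") {
        set req.backend = site3;
    } else {
        set req.backend = site2;
    }

    if (req.http.Authorization || req.http.Cookie) {
        return (pass);    # now passes to the correct backend
    }
    return (lookup);
}
```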
Best regads, Nuno Fernandes On Monday 24 January 2011, Hari Metkari wrote: > Hi, > > > I am implementing varnish cache for multiple seprate domain.i.e > sitetwo.comand site3.com > > I am configured varnish cache on varnish server local ip address > (192.168.126.30) > and my one site ip address 192.168.126.20 and two site ip address > 192.168.126.60 > > I added multiple backend in VCL file.when I hit site one it calls to > backend of site two instead of backend one. > for single domain it's working fine but multiple domain not work. > > Please help me below some varnish cache configuration files. > 1)/etc/sysconfig/varnish file > > > VARNISH_VCL_CONF=/etc/varnish/default.vcl > # > # # Default address and port to bind to > # # Blank address means all IPv4 and IPv6 interfaces, otherwise specify > # # a host name, an IPv4 dotted quad, or an IPv6 address in brackets. > VARNISH_LISTEN_ADDRESS=192.168.126.30 > #localhost default by hari > VARNISH_LISTEN_PORT=80 > #6081 #by default by hari > # > # # Telnet admin interface listen address and port > VARNISH_ADMIN_LISTEN_ADDRESS=192.168.126.30 > #127.0.0.1 #by default by hari > VARNISH_ADMIN_LISTEN_PORT=6082 > # > # # The minimum number of worker threads to start > VARNISH_MIN_THREADS=1 > # > # # The Maximum number of worker threads to start > VARNISH_MAX_THREADS=1000 > # > # # Idle timeout for worker threads > VARNISH_THREAD_TIMEOUT=120 > # > # # Cache file location > VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin > # > # # Cache file size: in bytes, optionally using k / M / G / T suffix, > # # or in percentage of available disk space using the % suffix. > VARNISH_STORAGE_SIZE=1G > # > # # Backend storage specification > VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}" > # > # # Default TTL used when the backend does not specify one > VARNISH_TTL=120 > # > # # DAEMON_OPTS is used by the init script. If you add or remove options, > make > # # sure you update this section, too. 
> DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ > -f ${VARNISH_VCL_CONF} \ > -T > ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \ > -t ${VARNISH_TTL} \ > -w > ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \ > -u varnish -g varnish \ > -s ${VARNISH_STORAGE}" > # > 2) /etc/varnish/default.vcl file > > backend site2 { > .host = "sitetwo.com"; > .port = "8081"; > } > backend site3 { > .host = "site3.com"; > .port = "8080"; > } > sub vcl_recv { > if (req.request != "GET" && > req.request != "HEAD" && > req.request != "PUT" && > req.request != "POST" && > req.request != "TRACE" && > req.request != "OPTIONS" && > req.request != "DELETE") { > # /* Non-RFC2616 or CONNECT which is weird. */ > return (pipe); > } > if (req.request != "GET" && req.request != "HEAD") { > # /* We only deal with GET and HEAD by default */ > return (pass); > } > if (req.http.Authorization || req.http.Cookie) { > # /* Not cacheable by default */ > return (pass); > } > if (req.http.host == "site3.com") { > #You will need the following line only if your backend has multiple > virtual host names > set req.http.host = "site3.com"; > set req.http.X-Orig-Host = req.http.host; > set req.backend = site3; > return (lookup); > } > if (req.http.host == "sitetwo.com") { > #You will need the following line only if your backend has multiple > virtual host names > set req.http.host = "sitetwo.com"; > set req.http.X-Orig-Host = req.http.host; > set req.backend = site2; > return (lookup); > } > return (lookup); > } > # > sub vcl_pipe { > # # Note that only the first request to the backend will have > # # X-Forwarded-For set. If you use X-Forwarded-For and want to > # # have it set for all requests, make sure to have: > # # set req.http.connection = "close"; > # # here. It is not set by default as it might break some broken web > # # applications, like IIS with NTLM authentication. 
> return (pipe);
> }
> #
> sub vcl_pass {
>     return (pass);
> }
> #
> sub vcl_hash {
>     set req.hash += req.url;
>     if (req.http.host) {
>         set req.hash += req.http.host;
>     } else {
>         set req.hash += server.ip;
>     }
>     return (hash);
> }
> #
> sub vcl_hit {
>     if (!obj.cacheable) {
>         return (pass);
>     }
>     return (deliver);
> }
> #
> sub vcl_miss {
>     return (fetch);
> }
> sub vcl_fetch {
>     if (!obj.cacheable) {
>         return (pass);
>     }
>     if (obj.http.Set-Cookie) {
>         return (pass);
>     }
>     ##set obj.prefetch = 30s;
>     # The following vcl code will make Varnish serve expired objects. All
>     # objects will be kept up to two minutes past their expiration time, or
>     # until a fresh object is generated.
>     set obj.grace = 2m;
>     return (deliver);
> }
> #
> sub vcl_deliver {
>     return (deliver);
> }
>
> Thanks in advance
>
> Thanks,
> Hari Metkari
> +91-9881462183

From sfoutrel at bcstechno.com  Mon Jan 24 13:27:21 2011
From: sfoutrel at bcstechno.com (=?iso-8859-1?Q?S=E9bastien_FOUTREL?=)
Date: Mon, 24 Jan 2011 14:27:21 +0100
Subject: logic representation.
Message-ID: 

Hello,

I found this link while googling, but did not find a more recent version: http://phk.freebsd.dk/misc/varnish.gif

Is that schema still current or not? Would it be possible to have it (or an up-to-date version) included in the documentation sites?

Thank you for your work. (Doing research on 503 errors, like many.)

-- 
Sébastien FOUTREL
BCS Technologies
-------------- next part --------------
An HTML attachment was scrubbed...
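[Editor's note: for quick reference, the request flow in that diagram reduces to the following simplified sketch of the Varnish 2.x states; pass and pipe short-circuit the cache.]

```
vcl_recv
  |-- return (lookup) --> cache lookup
  |       |-- hit  --> vcl_hit  --> vcl_deliver
  |       `-- miss --> vcl_miss --> vcl_fetch --> vcl_deliver
  |-- return (pass) --> vcl_pass --> vcl_fetch --> vcl_deliver
  `-- return (pipe) --> vcl_pipe  (bytes shuffled verbatim, nothing cached)
```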
URL: 

From indranilc at rediff-inc.com  Mon Jan 24 13:39:07 2011
From: indranilc at rediff-inc.com (Indranil Chakravorty)
Date: 24 Jan 2011 13:39:07 -0000
Subject: Re: logic representation.
Message-ID: <1295875588.S.6749.H.WVPpYmFzdGllbiBGT1VUUkVMAGxvZ2ljIHJlcHJlc2VudGF0aW9uLg__.44134.pro-237-175.old.1295876347.20945@webmail.rediffmail.com>

I think this has a little more detail: http://www.varnish-cache.org/trac/wiki/VCLExampleDefault

Thanks,
Neel

On Mon, 24 Jan 2011 18:56:28 +0530 Sébastien FOUTREL <sfoutrel at bcstechno.com> wrote:
>Hello,
>I found this link googling but did not found a recent version :
>http://phk.freebsd.dk/misc/varnish.gif
>Is that schema always current or not ?
>Is it possible to have it (or an actual version) included in the documentation sites ?
>Thank you for your job. (Doing research on 503 like many)
>
>-- Sébastien FOUTREL
>BCS Technologies
>_______________________________________________
>varnish-misc mailing list
>varnish-misc at varnish-cache.org
>http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From straightflush at gmail.com  Mon Jan 24 14:56:45 2011
From: straightflush at gmail.com (AD)
Date: Mon, 24 Jan 2011 09:56:45 -0500
Subject: MD5 Hash function for Varnish
Message-ID: 

Hey guys,

I was messing around with the load_module functionality and managed to get a working C library to integrate into VCL for calling the MD5 function. I know this is said to be coming in 3.0, but hopefully it will be of use to someone who needs to call MD5 inside their VCL.

https://github.com/denen99/libmd5varnish

Feel free to fork, modify, update.

Adam
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From v.bilek at 1art.cz  Mon Jan 24 15:00:37 2011
From: v.bilek at 1art.cz (=?ISO-8859-1?Q?V=E1clav_B=EDlek?=)
Date: Mon, 24 Jan 2011 16:00:37 +0100
Subject: How to setup MaxMind Mod_GeoIP in Varnish
In-Reply-To: <000f01cbb993$f8b28050$ea1780f0$@net>
References: <000f01cbb993$f8b28050$ea1780f0$@net>
Message-ID: <4D3D9415.3020208@1art.cz>

Forward the client IP and solve it on the backend.

Roland Rebstock wrote:
> All, I want to be able to setup Mod_GeoIP to restrict Countries in
> Varnish, I tried on the web server but of course the web server is
> fronted by my Varnish server. Anyone have instructions on how to setup
> Varnish with Mod_GEOIP to only allow certain countries?
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From harimetkari at gmail.com  Mon Jan 24 15:41:23 2011
From: harimetkari at gmail.com (Hari Metkari)
Date: Mon, 24 Jan 2011 22:41:23 +0700
Subject: varnish cache configuration for multiple domain
Message-ID: 

Hi All,

I am implementing varnish cache for multiple separate domains, i.e. sitetwo.tieto.com and site3.tieto.com.

I have configured varnish cache on the varnish server local IP address (192.168.126.30); my sitetwo.tieto.com IP address is 192.168.126.60 and my site3.tieto.com IP address is 192.168.126.20, as per the snap varnush-cache-server-setup.bmp. Here I make the varnish cache server act as a central server and cache content.
When I add only a single backend for a single domain in the VCL file, it works fine; see the varnish logs in snap site3.bmp. But when I add multiple backends for multiple domains in the VCL file and then hit sitetwo.tieto.com, the varnish logs (snap sitetwo.bmp) show a call to the backend of site3 instead of backend site2. I expected backend site2 to be used; this is my main problem.

Please find attached the varnish configuration files and snaps of the varnish logs.

Please look into this issue and help me find where I am wrong in the varnish cache configuration.

Thanks in advance.

Thanks,
Hari Metkari
Software Engineer.
+91-9881462183
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 328.png
Type: image/png
Size: 569 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: default.vcl
Type: application/octet-stream
Size: 4524 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: site3.bmp
Type: image/bmp
Size: 3932214 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: sitetwo.bmp
Type: image/bmp
Size: 3932214 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: varnish
Type: application/octet-stream
Size: 3170 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: varnush-cache-server-setup.bmp
Type: image/bmp
Size: 631718 bytes
Desc: not available
URL: 

From npf-mlists at eurotux.com  Mon Jan 24 16:15:16 2011
From: npf-mlists at eurotux.com (Nuno Fernandes)
Date: Mon, 24 Jan 2011 16:15:16 +0000
Subject: varnish cache configuration for multiple domain
In-Reply-To: 
References: 
Message-ID: <201101241615.16515.npf-mlists@eurotux.com>

Your request has a cookie, so it gets picked up by this code:

if (req.http.Authorization || req.http.Cookie) {
    # /* Not cacheable by default */
    return (pass);    <<<<-------- it returns here without setting the backend
}
if (req.http.host == "site3.tieto.com") {
    # You will need the following line only if your backend has multiple
    # virtual host names
    set req.http.host = "site3.tieto.com";
    set req.http.X-Orig-Host = req.http.host;
    set req.backend = site3;
    return (lookup);
}
if (req.http.host == "sitetwo.tieto.com") {
    # You will need the following line only if your backend has multiple
    # virtual host names
    set req.http.host = "sitetwo.tieto.com";
    set req.http.X-Orig-Host = req.http.host;
    set req.backend = site2;
    return (lookup);
}

Please check the configuration I've sent you privately...

Best regards,
Nuno Fernandes

On Monday 24 January 2011, Hari Metkari wrote:
> Hi All,
>
> I am implementing varnish cache for multiple separate domain.i.e
> sitetwo.tieto.com and site3.tieto.com
>
> I am configured varnish cache on varnish server local ip address
> (192.168.126.30) and my sitetwo.tieto.com ip address 192.168.126.60 and
> site3.tieto.com ip address 192.168.126.20 as per snap
> varnush-cache-server-setup.bmp. Here I make varnish cache server act as a
> central server and cache content.
>
> when I am add only single backend for single domain in VCL file that time
> it's working fine with varnish logs.see in snap site3.bmp but when I added
> multiple backend for multiple domain in vcl file that time i hit
> sitetwo.tieto.com and check varnish logs see snap sitetwo.bmp(here I hit
> sitetwo.tieto.com it calls to backend of site3 instead of backend site2 in
> varnish logs,here I am expected backend open site2,here is main my
> problem).
>
> Please find attached varnish configuration files and snap of varnish logs.
>
> Please look this issue and please help me where i am wrong in varnish cache
> configuration.
>
> Thanks in advance
>
> Thanks,
> Hari Metkari
> Software Engineer.
> +91-9881462183

From jeanmarc.pouchoulon at gmail.com  Mon Jan 24 19:33:20 2011
From: jeanmarc.pouchoulon at gmail.com (jean-marc pouchoulon)
Date: Mon, 24 Jan 2011 20:33:20 +0100
Subject: How to setup MaxMind Mod_GeoIP in Varnish
In-Reply-To: <000f01cbb993$f8b28050$ea1780f0$@net>
References: <000f01cbb993$f8b28050$ea1780f0$@net>
Message-ID: <4D3DD400.2020809@gmail.com>

Le 21/01/2011 18:52, Roland Rebstock a écrit :
> All, I want to be able to setup Mod_GeoIP to restrict Countries in Varnish,
> I tried on the web server but of course the web server is fronted by my
> Varnish server. Anyone have instructions on how to setup Varnish with
> Mod_GEOIP to only allow certain countries?

You can have a look at this recipe:
http://drcarter.info/2010/07/another-way-to-link-varnish-and-maxmind-geoip/

jmp

> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From racemd at verizon.net  Tue Jan 25 02:10:24 2011
From: racemd at verizon.net (Roland Rebstock)
Date: Mon, 24 Jan 2011 21:10:24 -0500
Subject: How to setup MaxMind Mod_GeoIP in Varnish
In-Reply-To: <4D3DD400.2020809@gmail.com>
References: <000f01cbb993$f8b28050$ea1780f0$@net> <4D3DD400.2020809@gmail.com>
Message-ID: <004801cbbc35$07db9230$1792b690$@net>

I saw that, but I don't know how to set it to only allow certain countries, or to block them for that matter. Where do you specify the countries you want to allow or block?

From: jean-marc pouchoulon [mailto:jeanmarc.pouchoulon at gmail.com]
Sent: Monday, January 24, 2011 2:33 PM
To: Roland Rebstock
Cc: varnish-misc at varnish-cache.org
Subject: Re: How to setup MaxMind Mod_GeoIP in Varnish

Le 21/01/2011 18:52, Roland Rebstock a écrit :

All, I want to be able to setup Mod_GeoIP to restrict Countries in Varnish, I tried on the web server but of course the web server is fronted by my Varnish server. Anyone have instructions on how to setup Varnish with Mod_GEOIP to only allow certain countries?

You can have a look at this recipe:
http://drcarter.info/2010/07/another-way-to-link-varnish-and-maxmind-geoip/

jmp

_______________________________________________
varnish-misc mailing list
varnish-misc at varnish-cache.org
http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
-------------- next part --------------
An HTML attachment was scrubbed...
Where do you specify the countries
> you want to allow or block?

In vcl_recv:

C{
    const char *pays = NULL;
    pays = (*get_country_code)(VRT_IP_string(sp, VRT_r_client_ip(sp)));
    VRT_log(sp, pays, vrt_magic_string_end);
    if (!strcmp(pays, "FR")) {
        VRT_error(sp, 504, "IP not authorized");
        VRT_done(sp, VCL_RET_ERROR);
    }
}C

hth
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From carrot at carrotis.com  Tue Jan 25 06:50:02 2011
From: carrot at carrotis.com (Calvin Park)
Date: Tue, 25 Jan 2011 15:50:02 +0900
Subject: [varnish-misc] How to see log info. as Squid style ?
Message-ID: 

Hello Varnish users~

I did the following:

/usr/bin/varnishncsa -a -c -w /var/log/varnish/varnishncsa.log -D
tail -f /var/log/varnish/varnishncsa.log

There is no information about HIT, MISS, IMS ... etc. How can I see it?

From phk at phk.freebsd.dk  Tue Jan 25 11:04:02 2011
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Tue, 25 Jan 2011 11:04:02 +0000
Subject: Please help break Varnish GZIP/ESI support before 3.0
Message-ID: <87903.1295953442@critter.freebsd.dk>

One of the major features of Varnish 3.0 is now feature complete, and I need people to start beating it up and help me find the bugs before we go into the 3.0 release cycle.

GZIP support
------------
Varnish will ask the backend for gzip'ed objects by default and, for the minority of clients that do not grok that, ungzip during delivery.

If the backend cannot or will not gzip the objects, varnish can be told in VCL to gzip during fetch from the backend. (It can also gunzip, but I don't know why you would do that?)

In addition to bandwidth, this should save varnish storage (one gzip'ed copy, rather than two copies, one gzip'ed and one not).

GZIP support is on by default, but can be disabled with a parameter.

ESI support
-----------
Well, we have ESI support already; the difference is that it now also understands GZIP'ing.
This required a total rewrite of the ESI parser, much improving the readability of it, I might add.

So now you can use ESI with compression, something that has hitherto been a Faustian bargain, often requiring an afterburner of some kind to do the compression.

There are a lot of weird corner cases in this code (such as including a gzip'ed object in an uncompressed object), so this code really needs to be beaten up.

How you can help
----------------

The code is newly written, and bugs are to be expected, so I do not expect you to put it in production right away, but rather to run some stand-alone tests to see that it works for your site and content.

The code is feature complete, but still lacks sensible stats counters, debug handles and so on; these will be added in coming days.

The reports I am looking for are, in order of priority:

1. How to crash varnish by sending legit traffic through it.
2. How to crash varnish with worst-case traffic.
3. How to make varnish send wrong content.
4. How to make varnish use a lot of resources.
5. Any other pertinent observations of trouble.

I have written the beginning of the documentation in our sphinx docs:

http://www.varnish-cache.org/docs/trunk/phk/gzip.html

To test this, you need to be aware that we switched from SVN to GIT last week, so to pull a copy of -trunk the magic command now is:

git clone git://git.varnish-cache.org/varnish-cache/

Thank you for using Varnish, and thank you for helping make 3.0 our best release ever.

Poul-Henning

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
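[Editor's note] For anyone setting up the stand-alone tests PHK asks for, here is a minimal vcl_fetch sketch for a backend that does not gzip on its own. This is a hedged illustration: only `beresp.do_gzip` is confirmed elsewhere in this thread; `beresp.do_esi` and the content-type patterns are assumptions about the trunk syntax of the day.

```vcl
sub vcl_fetch {
    # Assumed trunk-era flag: let the rewritten parser process ESI
    # on markup responses (do_esi is an assumption, not confirmed here).
    if (beresp.http.Content-Type ~ "text/html") {
        set beresp.do_esi = true;
    }
    # The backend sends plain responses, so have Varnish gzip during
    # fetch; only compress text-ish types (images are already compressed).
    if (beresp.http.Content-Type ~ "text/(html|css|plain)") {
        set beresp.do_gzip = true;
    }
}
```

Clients that omit "Accept-Encoding: gzip" should then get the object gunzipped on delivery, which is exactly the corner-case behaviour the announcement asks testers to exercise.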
From kristian at varnish-software.com Tue Jan 25 16:15:05 2011 From: kristian at varnish-software.com (Kristian Lyngstol) Date: Tue, 25 Jan 2011 17:15:05 +0100 Subject: Please test 2.1.5 Message-ID: <20110125161505.GC21248@freud> I've uploaded a 2.1.5-tar-ball and rpm/Debian/Ubuntu packages to http://repo.varnsih-cache.org/test/ and would like some feedback before we call it a release. Particularly because it's the first time I'm doing the final packaging. So if you can, please give the packages a spin and see if there's something amiss. Both with regards to Varnish itself and the packaging. If we get enough feedback, those packages will become the official release of Varnish 2.1.5 this week, and will be moved to the "proper" repositories so any server using repo.varnish-cache.org for .deb/.rpm packages will pick them up. For a ChangeLog, either get it from the tar-ball, git (2.1 branch), or wait for the official release. - Kristian PS: Due to timing-issues under a virtual machine making rpm's, I still haven't been able to upload the 32-bit rpms yet. They are being built while I'm writing this, so hopefully they'll be uploaded later tonight. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From bedis9 at gmail.com Tue Jan 25 21:38:11 2011 From: bedis9 at gmail.com (Bedis 9) Date: Tue, 25 Jan 2011 22:38:11 +0100 Subject: Please help testing gzip code in Varnish In-Reply-To: <48090.1294238746@critter.freebsd.dk> References: <48090.1294238746@critter.freebsd.dk> Message-ID: On Wed, Jan 5, 2011 at 3:45 PM, Poul-Henning Kamp wrote: > > I have added the first part of gzip support to varnish-trunk. > > This is new code with semi-twisted logic under the hood, so > I am very dependent on you guys helping to test it out. > > If you set the paramter http_gzip_support to true, varnish > will always send "Accept-encoding: gzip" to the backend. 
>
> If the client does not understand gzip, varnish will gunzip
> the object during delivery.
>
> This means that you will only cache the gzip'ed version of objects.
>
> The responsibility for gzip'ing the object is with your backend;
> Varnish doesn't know which objects you want to gzip and which
> not (ie: images: no, html: yes, but what about .cgi ?)
>
> ESI is not supported with gzip mode yet, that is the next and
> even more involved step.
>
> When you file tickets, please use "version = trunk" in trac
>
> Thanks in advance,
>
> Poul-Henning
>
> PS: Also be aware that "purge" is now called "ban" in -trunk.
>
> --
> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> phk at FreeBSD.ORG         | TCP/IP since RFC 956
> FreeBSD committer       | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>

Hi,

I've tested the new http_gzip_support parameter. First of all, I can confirm that an "Accept-Encoding: gzip" header is indeed sent to the backend.

My VCL file is empty; I have just the vcl_recv tip to normalize the Accept-Encoding:

backend www {
    .host = "127.0.0.1";
    .port = "81";
}

sub vcl_recv {
    ### parse accept encoding rulesets to normalize
    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # unknown algorithm
            remove req.http.Accept-Encoding;
        }
    }
}

I ran two requests:

Req1: wget -S http://127.0.0.1:80/ --header="Accept-Encoding: gzip"
Req2: wget -S http://127.0.0.1:80/

For both requests:
- I had a MISS from Varnish.
- Varnish sent an "Accept-Encoding: gzip" header (looks normal)

For the first request I got a gzipped file, while I got a flat HTML file for the second.
Everything seems to work as expected. I just wonder why the second request is a MISS while the gzipped object is already in memory. Can't Varnish use it to deliver a gunzipped object? (I know it will break the Vary: Accept-Encoding rule)

cheers

From phk at phk.freebsd.dk Tue Jan 25 22:07:50 2011
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Tue, 25 Jan 2011 22:07:50 +0000
Subject: Please help testing gzip code in Varnish
In-Reply-To: Your message of "Tue, 25 Jan 2011 22:38:11 +0100."
Message-ID: <11006.1295993270@critter.freebsd.dk>

In message , Bedis 9 writes:

>I just wonder why the second request is a MISS while the gzipped
>object is already in memory.

It shouldn't have been.

The normal cause is cookies; by default varnish does not cache anything that comes with cookies, since we don't know what they mean.

>Can't Varnish use it to deliver a gunzipped object?

Yes, that's the entire point.

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From kristian at varnish-software.com Wed Jan 26 01:42:33 2011
From: kristian at varnish-software.com (Kristian Lyngstol)
Date: Wed, 26 Jan 2011 02:42:33 +0100
Subject: Please test 2.1.5
In-Reply-To: <20110125161505.GC21248@freud>
References: <20110125161505.GC21248@freud>
Message-ID: <20110126014233.GA27828@freud>

Greetings.

I've re-built the Debian and Ubuntu packages. The new versions are 2.1.5-1~3 (don't ask where ~..2 went).

Bjørn pointed out that /etc/default/varnish and /etc/init.d/varnish now introduce a START variable which defaults to not starting, which could cause an otherwise auto-starting Varnish to cease starting after upgrade. The new packages should fix this. Feel free to verify that I got it right.
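[Editor's note] Regarding PHK's point above about cookies preventing caching: while testing, one common way to keep stray cookies from defeating the test is to strip them in vcl_recv. This is purely illustrative; the URL patterns below are assumptions, not anything from this thread.

```vcl
sub vcl_recv {
    # Illustrative only: drop cookies on request types that are
    # known cacheable, so the gzip test isn't turned into a pass.
    if (req.url ~ "\.(html|css|js|png|jpg|gif)$") {
        remove req.http.Cookie;
    }
}
```

With the cookies gone, a repeated wget against the same URL should produce a HIT rather than a MISS, making the gzip/gunzip delivery path observable.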
Jens also pointed out that I got the URL wrong in the previous mail (thanks); the correct URL is:

http://repo.varnish-cache.org/test/

Thanks so far, guys. Keep it coming.

Oh, and still no-go on the 32-bit rpms. I think I will have to re-think my strategy tomorrow... But nobody here uses 32-bit anyway, right? ;)

- Kristian

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 490 bytes
Desc: Digital signature
URL: 

From kristian at varnish-software.com Wed Jan 26 07:08:47 2011
From: kristian at varnish-software.com (Kristian Lyngstol)
Date: Wed, 26 Jan 2011 08:08:47 +0100
Subject: Please test 2.1.5
In-Reply-To: <20110126014233.GA27828@freud>
References: <20110125161505.GC21248@freud> <20110126014233.GA27828@freud>
Message-ID: <20110126070847.GA2097@freud>

On Wed, Jan 26, 2011 at 02:42:33AM +0100, Kristian Lyngstol wrote:
> Oh, and still no-go on 32-bit rpms, I think I will have to re-think my
> strategy tomorrow... But nobody here uses 32-bit anyway, right? ;)

There we go. Hooray for just looping over rpmbuild for 9 hours until a few i386 rpms pop out at the other end (all due to timing issues with the regression tests on a virtualized i386 box). So now all the packages are available.

- Kristian

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 490 bytes
Desc: Digital signature
URL: 

From bedis9 at gmail.com Wed Jan 26 09:07:21 2011
From: bedis9 at gmail.com (Bedis 9)
Date: Wed, 26 Jan 2011 10:07:21 +0100
Subject: Please help testing gzip code in Varnish
In-Reply-To: <11006.1295993270@critter.freebsd.dk>
References: <11006.1295993270@critter.freebsd.dk>
Message-ID: 

On Tue, Jan 25, 2011 at 11:07 PM, Poul-Henning Kamp wrote:
> In message , Bedis 9 writes:
>
>>I just wonder why the second request is a MISS while the gzipped
>>object is already in memory.
>
> It shouldn't have been.
>
> The normal cause is cookies; by default varnish does not cache
> anything that comes with cookies, since we don't know what they
> mean.
>
>>Can't Varnish use it to deliver a gunzipped object?
>
> Yes, that's the entire point.
>
> --
> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
> phk at FreeBSD.ORG         | TCP/IP since RFC 956
> FreeBSD committer       | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.
>

I did not use any cookies, only a basic wget request with minimal client headers.

Do you want me to create a ticket for that?

From phk at phk.freebsd.dk Wed Jan 26 09:08:33 2011
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 26 Jan 2011 09:08:33 +0000
Subject: Please help testing gzip code in Varnish
In-Reply-To: Your message of "Wed, 26 Jan 2011 10:07:21 +0100."
Message-ID: <32615.1296032913@critter.freebsd.dk>

In message , Bedis 9 writes:

>I did not use any cookies, only a basic wget request with minimal client
>headers.
>
>Do you want me to create a ticket for that?

Capture varnishlog output and mail it to me first.

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From ksorensen at nordija.com Thu Jan 27 20:13:30 2011
From: ksorensen at nordija.com (Kristian Grønfeldt Sørensen)
Date: Thu, 27 Jan 2011 21:13:30 +0100
Subject: Please help break Varnish GZIP/ESI support before 3.0
In-Reply-To: <87903.1295953442@critter.freebsd.dk>
References: <87903.1295953442@critter.freebsd.dk>
Message-ID: <1296159210.19735.51.camel@localhost>

On tir, 2011-01-25 at 11:04 +0000, Poul-Henning Kamp wrote:
> One of the major features of Varnish 3.0 is now feature complete, and
> I need people to start beating it up and help me find the bugs before
> we go into the 3.0 release cycle.
> > > GZIP support > ------------ > > Varnish will ask the backend for gzip'ed objects by default and for > the minority of clients that do not grok that, ungzip during delivery. > > If the backend can not or will not gzip the objects, varnish can > be told in VCL to gzip during fetch from the backend. (It can also > gunzip, but I don't know why would you do that ?) > > In addition to bandwidth, this should save varnish storage (one > gzip copy, rather than two copies, one gzip'ed one not). > > GZIP support is on by default, but can be disabled with a parameter. > > > > ESI support > ----------- > > Well, we have ESI support already, the difference is that it also > understands GZIP'ing. This required a total rewrite of the ESI > parser, much improving the readability of it, I might add. > > So now you can use ESI with compression, something that has hitherto > been a faustian bargain, often requiring an afterburner of some kind > to do the compression. > > There are a lot of weird cornercases in this code, (such as including > a gzip'ed object in an uncomressed object) so this code really > needs beaten up. > > > How you can help > ---------------- > > The code is newly written, and bugs are to be expected, so I do not > expect you to put it in production right away, but rather to run > some stand alone tests, to see that it works for your site and > content. > > The code is feature complete, but still lacks sensible stats counters, > debug handles and so on, these will be added in coming days. > > The reports I am looking for are, in order of priority: > > 1. How to crash varnish by sending legit traffic through it. > > 2. How to crash varnish with worst-case traffic. > > 3. How to make varnish send wrong content > > 4. How to make varnish use a lot of resources > > 5. Any other pertinent observations of trouble. > I believe I've found a case of Varnish returning wrong content. We use ESI to populate a json-array. 
The new ESI-code seems to handle this just as well as it did in 2.1.x, as long as we keep gzip out of the equation. When I use "set beresp.do_gzip=true" in vcl_fetch() some parts of the document disappears from the response. My backend is not configured to use gzip, so I let varnish do the gzip'ing. We use ESI in mode 0x00000001 for ESI to work with json objects. It doesn't seem to matter whether or not the response is served from cache or not. It only seem to depend on the setting of beresp.do_gzip when the objects are put in to the cache. I've attached 3 files: json.nogzip.txt showing the correct ESI-parsed response when gzip is disabled. json.gzip.txt showing the ESI-parsed response when beresp.do_gzip=true is actived in the VCL. json.esi.txt showing the response from the backend that Varnish tries to parse. All files are requested using wget with no "Accept-Encoding"-headers. Note that the segment just before the ESI-tags is missing from the gzipped response. Regards Kristian S?rensen -------------- next part -------------- {"10":[{"program":true,"id":859105,"vendorId":"15210857","starttime":1296168300000,"endtime":1296171900000,"title":"DR K Jazz: Minh og Alex Riel","oTitle":null,"genre":"0x30x0","timeshiftEnabled":true},{"program":true,"id":859309,"vendorId":"15210858","starttime":1296171900000,"endtime":1296173400000,"title":"Great Artists","oTitle":"true","genre":"0x20x3","timeshiftEnabled":true},{"program":true,"id":859258,"vendorId":"15210859","starttime":1296173400000,"endtime":1296173700000,"title":"Dagens sang","oTitle":null,"genre":"0x30x0","timeshiftEnabled":true},{"program":false,"id":-1,"vendorId":"-1","starttime":1296173700000,"endtime":1296184499000,"timeshiftEnabled":false}],"16":[{"program":false,"id":-1,"vendorId":"-1","starttime":1296171899000,"endtime":1296184499000,"timeshiftEnabled":false}],"start":1296171899000,"end":1296184499000,"epgSetId":-1,"size":50} -------------- next part -------------- 
[{"program":true,"id":859105,"vendorId":"15210857","starttime":1296168300000,"endtime":1296171900000,"title":"DR K Jazz: Minh og Alex Riel","oTitle":null,"genre":"0x30x0","timeshiftEnabled":true},{"program":true,"id":859309,"vendorId":"15210858","starttime":1296171900000,"endtime":1296173400000,"title":"Great Artists","oTitle":"true","genre":"0x20x3","timeshiftEnabled":true},{"program":true,"id":859258,"vendorId":"15210859","starttime":1296173400000,"endtime":1296173700000,"title":"Dagens sang","oTitle":null,"genre":"0x30x0","timeshiftEnabled":true},{"program":false,"id":-1,"vendorId":"-1","starttime":1296173700000,"endtime":1296184499000,"timeshiftEnabled":false}] -------------- next part -------------- {"10":,"16":,"start":1296171899000,"end":1296184499000,"epgSetId":-1,"size":50} From yanghatespam at gmail.com Fri Jan 28 04:25:01 2011 From: yanghatespam at gmail.com (Yang Zhang) Date: Thu, 27 Jan 2011 20:25:01 -0800 Subject: Understanding persistent storage Message-ID: I've been playing around with the experimental persistent storage in varnish-2.1.5 SVN 0843d7a, but I'm finding that the cache doesn't seem to survive across restarts. This matches up with hints like "When storage is full, Varnish should restart, cleaning storage" from http://www.varnish-cache.org/trac/wiki/changelog_2.0.6-2.1.0. Can anyone clarify what's persistent about persistent storage, how it differs from -sfile, etc.? I tried looking up info but didn't find much beyond implementation details in http://www.varnish-cache.org/trac/wiki/ArchitecturePersistentStorage. Thanks in advance. -- Yang Zhang http://yz.mit.edu/ From sgeorge.ml at gmail.com Fri Jan 28 10:46:59 2011 From: sgeorge.ml at gmail.com (Siju George) Date: Fri, 28 Jan 2011 16:16:59 +0530 Subject: How to set up varnish not be a single point of failure Message-ID: Hi, I understand that varnish does not support cache peering like Squid. 
My planned set up is something like ---- Webserver1 --- ------- Cache --- ------ API LB ----| |---- LB----| |---- LB ----| ---- Webserver2 --- ------- Cache --- ------ API So if I am using Varnish as Cache what is the best way to configure them so that there is redundancy and the setup can continue even if one Cache fails? Thanks --Siju -------------- next part -------------- An HTML attachment was scrubbed... URL: From stewsnooze at gmail.com Fri Jan 28 11:25:34 2011 From: stewsnooze at gmail.com (Stewart Robinson) Date: Fri, 28 Jan 2011 11:25:34 +0000 Subject: How to set up varnish not be a single point of failure In-Reply-To: References: Message-ID: Other people have configured two Varnish servers to be backends for each other. When you see the other Varnish cache as your remote IP you then point the request to the real backend. This duplicates your cache items in each cache. Be aware of http://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy Stew On 28 January 2011 10:46, Siju George wrote: > Hi, > > I understand that varnish does not support cache peering like Squid. > My planned set up is something like > > > ????????? ---- Webserver1 ---????????????? ------- Cache --- > ------ API > LB ----| ???????????????????????? |---- LB----|??????????????????? |---- LB > ----| > ????????? ---- Webserver2 ---????????????? ------- Cache --- > ------ API > > So if I am using Varnish as Cache what is the best way to configure them so > that there is redundancy and the setup can continue even if one Cache fails? 
> > Thanks > > --Siju > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From scaunter at topscms.com Fri Jan 28 13:38:18 2011 From: scaunter at topscms.com (Caunter, Stefan) Date: Fri, 28 Jan 2011 08:38:18 -0500 Subject: How to set up varnish not be a single point of failure In-Reply-To: References: Message-ID: On 2011-01-28, at 6:26 AM, "Stewart Robinson" wrote: > Other people have configured two Varnish servers to be backends for > each other. When you see the other Varnish cache as your remote IP you > then point the request to the real backend. This duplicates your cache > items in each cache. > > Be aware of http://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy > > Stew > > On 28 January 2011 10:46, Siju George wrote: >> Hi, >> >> I understand that varnish does not support cache peering like Squid. >> My planned set up is something like >> >> >> ---- Webserver1 --- ------- Cache --- >> ------ API >> LB ----| |---- LB----| |---- LB >> ----| >> ---- Webserver2 --- ------- Cache --- >> ------ API >> >> So if I am using Varnish as Cache what is the best way to configure them so >> that there is redundancy and the setup can continue even if one Cache fails? >> >> Thanks >> >> --Siju Put two behind LB. Caches are cooler but you get high availability. Easy to do maintenance this way. 
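[Editor's note] The mutual-backend arrangement Stewart describes above can be sketched in VCL 2.x terms. All names and addresses here are invented for illustration; adapt them to your own hosts.

```vcl
# Hypothetical peer setup: this Varnish is 192.0.2.1, its twin is 192.0.2.2.
acl peer {
    "192.0.2.2";
}

backend twin_cache  { .host = "192.0.2.2";  .port = "80"; }
backend real_origin { .host = "192.0.2.10"; .port = "8080"; }

sub vcl_recv {
    if (client.ip ~ peer) {
        # The request came from the other Varnish (it missed and is
        # asking us): go to the real backend, otherwise the two caches
        # would forward to each other forever.
        set req.backend = real_origin;
    } else {
        # End-user traffic: on a local miss, try the twin cache first.
        set req.backend = twin_cache;
    }
}
```

Backend selection only matters on a cache miss, so each Varnish still serves its own hits locally; the VCLExampleHashIgnoreBusy wiki page Stewart links covers the request-serialization pitfall this setup can run into.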
Stefan Caunter Operations TorstarDigital 416.561.4871 >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From a.hongens at netmatch.nl Fri Jan 28 13:42:06 2011 From: a.hongens at netmatch.nl (=?ISO-8859-1?Q?Angelo_H=F6ngens?=) Date: Fri, 28 Jan 2011 14:42:06 +0100 Subject: How to set up varnish not be a single point of failure In-Reply-To: References: Message-ID: <4D42C7AE.2070602@netmatch.nl> On 28-1-2011 14:38, Caunter, Stefan wrote: > > > > On 2011-01-28, at 6:26 AM, "Stewart Robinson" wrote: > >> Other people have configured two Varnish servers to be backends for >> each other. When you see the other Varnish cache as your remote IP you >> then point the request to the real backend. This duplicates your cache >> items in each cache. >> >> Be aware of http://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy >> >> Stew >> >> On 28 January 2011 10:46, Siju George wrote: >>> Hi, >>> >>> I understand that varnish does not support cache peering like Squid. >>> My planned set up is something like >>> >>> >>> ---- Webserver1 --- ------- Cache --- >>> ------ API >>> LB ----| |---- LB----| |---- LB >>> ----| >>> ---- Webserver2 --- ------- Cache --- >>> ------ API >>> >>> So if I am using Varnish as Cache what is the best way to configure them so >>> that there is redundancy and the setup can continue even if one Cache fails? >>> >>> Thanks >>> >>> --Siju > > > Put two behind LB. Caches are cooler but you get high availability. > Easy to do maintenance this way. We use Varnish on CentOS machines. We use Pacemaker for high-availability (multiple virtual ip's) and DNSRR for balancing end-users to the caches. 
see http://blog.hongens.nl/guides/setting-up-a-pacemaker-cluster-on-centosrhel/ for the pacemaker part.. -- With kind regards, Angelo H?ngens systems administrator MCSE on Windows 2003 MCSE on Windows 2000 MS Small Business Specialist ------------------------------------------ NetMatch tourism internet software solutions Ringbaan Oost 2b 5013 CA Tilburg +31 (0)13 5811088 +31 (0)13 5821239 A.Hongens at netmatch.nl www.netmatch.nl ------------------------------------------ From malevo at gmail.com Fri Jan 28 13:51:50 2011 From: malevo at gmail.com (Pablo Garcia Melga) Date: Fri, 28 Jan 2011 10:51:50 -0300 Subject: How to set up varnish not be a single point of failure In-Reply-To: References: Message-ID: Sounds good to me, check if you LB has the ability to persists to the same cache based on the URL, that would prevent duplicate objects. Regards, Pablo On Fri, Jan 28, 2011 at 7:46 AM, Siju George wrote: > Hi, > > I understand that varnish does not support cache peering like Squid. > My planned set up is something like > > > ---- Webserver1 --- ------- Cache --- > ------ API > LB ----| |---- LB----| |---- LB > ----| > ---- Webserver2 --- ------- Cache --- > ------ API > > So if I am using Varnish as Cache what is the best way to configure them so > that there is redundancy and the setup can continue even if one Cache fails? > > Thanks > > --Siju > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From AGresens at Scholastic.com Fri Jan 28 14:01:03 2011 From: AGresens at Scholastic.com (Gresens, August) Date: Fri, 28 Jan 2011 09:01:03 -0500 Subject: How to set up varnish not be a single point of failure In-Reply-To: <4D42C7AE.2070602@netmatch.nl> Message-ID: <8D333BF67F814C4D9803C99961BE5E70036675DB@corpex07.corp.scholasticinc.local> We have two varnish servers behind the load balancer (nginx). Each varnish server has an identical configuration and load balances the actual backends (web servers). Traffic for particular url patterns are routed to one of the varnish servers by the load balancer. For each url pattern the secondary source is the alternate varnish server. In this way we can we partition traffic between the two varnish servers and avoid redundant caching but the second one will act as a failover if the primary goes down. Best, A -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Angelo H?ngens Sent: Friday, January 28, 2011 8:42 AM To: varnish-misc at varnish-cache.org Subject: Re: How to set up varnish not be a single point of failure On 28-1-2011 14:38, Caunter, Stefan wrote: > > > > On 2011-01-28, at 6:26 AM, "Stewart Robinson" wrote: > >> Other people have configured two Varnish servers to be backends for >> each other. When you see the other Varnish cache as your remote IP you >> then point the request to the real backend. This duplicates your cache >> items in each cache. >> >> Be aware of http://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy >> >> Stew >> >> On 28 January 2011 10:46, Siju George wrote: >>> Hi, >>> >>> I understand that varnish does not support cache peering like Squid. 
>>> My planned set up is something like >>> >>> >>> ---- Webserver1 --- ------- Cache --- >>> ------ API >>> LB ----| |---- LB----| |---- LB >>> ----| >>> ---- Webserver2 --- ------- Cache --- >>> ------ API >>> >>> So if I am using Varnish as Cache what is the best way to configure them so >>> that there is redundancy and the setup can continue even if one Cache fails? >>> >>> Thanks >>> >>> --Siju > > > Put two behind LB. Caches are cooler but you get high availability. > Easy to do maintenance this way. We use Varnish on CentOS machines. We use Pacemaker for high-availability (multiple virtual ip's) and DNSRR for balancing end-users to the caches. see http://blog.hongens.nl/guides/setting-up-a-pacemaker-cluster-on-centosrhel/ for the pacemaker part.. -- With kind regards, Angelo H?ngens systems administrator MCSE on Windows 2003 MCSE on Windows 2000 MS Small Business Specialist ------------------------------------------ NetMatch tourism internet software solutions Ringbaan Oost 2b 5013 CA Tilburg +31 (0)13 5811088 +31 (0)13 5821239 A.Hongens at netmatch.nl www.netmatch.nl ------------------------------------------ _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc SCHOLASTIC Read Every Day. Lead a Better Life. From bedis9 at gmail.com Fri Jan 28 14:50:44 2011 From: bedis9 at gmail.com (Bedis 9) Date: Fri, 28 Jan 2011 15:50:44 +0100 Subject: How to set up varnish not be a single point of failure In-Reply-To: <8D333BF67F814C4D9803C99961BE5E70036675DB@corpex07.corp.scholasticinc.local> References: <4D42C7AE.2070602@netmatch.nl> <8D333BF67F814C4D9803C99961BE5E70036675DB@corpex07.corp.scholasticinc.local> Message-ID: On Fri, Jan 28, 2011 at 3:01 PM, Gresens, August wrote: > We have two varnish servers behind the load balancer (nginx). Each varnish server has an identical configuration and load balances the actual backends (web servers). 
> > Traffic for particular url patterns are routed to one of the varnish servers by the load balancer. For each url pattern the secondary source is the alternate varnish server. In this way we can we partition traffic between the two varnish servers and avoid redundant caching but the second one will act as a failover if the primary goes down. > > Best, > > A > > -----Original Message----- > From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Angelo H?ngens > Sent: Friday, January 28, 2011 8:42 AM > To: varnish-misc at varnish-cache.org > Subject: Re: How to set up varnish not be a single point of failure > > On 28-1-2011 14:38, Caunter, Stefan wrote: >> >> >> >> On 2011-01-28, at 6:26 AM, "Stewart Robinson" wrote: >> >>> Other people have configured two Varnish servers to be backends for >>> each other. When you see the other Varnish cache as your remote IP you >>> then point the request to the real backend. This duplicates your cache >>> items in each cache. >>> >>> Be aware of http://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy >>> >>> Stew >>> >>> On 28 January 2011 10:46, Siju George wrote: >>>> Hi, >>>> >>>> I understand that varnish does not support cache peering like Squid. >>>> My planned set up is something like >>>> >>>> >>>> ? ? ? ? ? ---- Webserver1 --- ? ? ? ? ? ? ?------- Cache --- >>>> ------ API >>>> LB ----| ? ? ? ? ? ? ? ? ? ? ? ? ?|---- LB----| ? ? ? ? ? ? ? ? ? ?|---- LB >>>> ----| >>>> ? ? ? ? ? ---- Webserver2 --- ? ? ? ? ? ? ?------- Cache --- >>>> ------ API >>>> >>>> So if I am using Varnish as Cache what is the best way to configure them so >>>> that there is redundancy and the setup can continue even if one Cache fails? >>>> >>>> Thanks >>>> >>>> --Siju >> >> >> Put two behind LB. Caches are cooler but you get high availability. >> Easy to do maintenance this way. > > > We use Varnish on CentOS machines. 
We use Pacemaker for
> high-availability (multiple virtual ip's) and DNSRR for balancing
> end-users to the caches.
>
> see
> http://blog.hongens.nl/guides/setting-up-a-pacemaker-cluster-on-centosrhel/
> for the pacemaker part..
>
> --
>
> With kind regards,
>
> Angelo Höngens
> systems administrator
>
> MCSE on Windows 2003
> MCSE on Windows 2000
> MS Small Business Specialist
> ------------------------------------------
> NetMatch
> tourism internet software solutions
>
> Ringbaan Oost 2b
> 5013 CA Tilburg
> +31 (0)13 5811088
> +31 (0)13 5821239
>
> A.Hongens at netmatch.nl
> www.netmatch.nl
> ------------------------------------------
>

Hey,

You can use HAproxy for your LB. It has a hash metric, useful for caches (and much more functionality).

cheers

From jdzstz at gmail.com Fri Jan 28 14:55:47 2011
From: jdzstz at gmail.com (jdzstz - gmail dot com)
Date: Fri, 28 Jan 2011 15:55:47 +0100
Subject: How to set up varnish not be a single point of failure
In-Reply-To: <8D333BF67F814C4D9803C99961BE5E70036675DB@corpex07.corp.scholasticinc.local>
References: <4D42C7AE.2070602@netmatch.nl> <8D333BF67F814C4D9803C99961BE5E70036675DB@corpex07.corp.scholasticinc.local>
Message-ID: 

In my opinion, the problem of having separate caching based on URL is that in case of problems, the secondary failover server has an empty cache for the rest of the URLs, which can affect throughput.

Our architecture is the following:

[1. F5 LB] => [2. Varnish] => [3.
Tomcat] 1) F5 Big IP Hardware Load Balancer 2) Four Varnish cache in diferent machines 3) Four Tomcat servers in diferent machines We don't care to have redundant caching because: - we don't have resource problems - in case of problems, all varnish instances has the cache already populated 2011/1/28 Gresens, August: > We have two varnish servers behind the load balancer (nginx). Each varnish server has an identical configuration and load balances the actual backends (web servers). > > Traffic for particular url patterns are routed to one of the varnish servers by the load balancer. For each url pattern the secondary source is the alternate varnish server. In this way we can we partition traffic between the two varnish servers and avoid redundant caching but the second one will act as a failover if the primary goes down. > > Best, > > A > > -----Original Message----- > From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Angelo H?ngens > Sent: Friday, January 28, 2011 8:42 AM > To: varnish-misc at varnish-cache.org > Subject: Re: How to set up varnish not be a single point of failure > > On 28-1-2011 14:38, Caunter, Stefan wrote: >> >> >> >> On 2011-01-28, at 6:26 AM, "Stewart Robinson" wrote: >> >>> Other people have configured two Varnish servers to be backends for >>> each other. When you see the other Varnish cache as your remote IP you >>> then point the request to the real backend. This duplicates your cache >>> items in each cache. >>> >>> Be aware of http://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy >>> >>> Stew >>> >>> On 28 January 2011 10:46, Siju George wrote: >>>> Hi, >>>> >>>> I understand that varnish does not support cache peering like Squid. >>>> My planned set up is something like >>>> >>>> >>>> ? ? ? ? ? ---- Webserver1 --- ? ? ? ? ? ? ?------- Cache --- >>>> ------ API >>>> LB ----| ? ? ? ? ? ? ? ? ? ? ? ? ?|---- LB----| ? ? ? ? ? ? ? ? ? ?|---- LB >>>> ----| >>>> ? ? ? ? ? 
---- Webserver2 --- ------- Cache --- >>>> ------ API >>>> >>>> So if I am using Varnish as Cache, what is the best way to configure them so >>>> that there is redundancy and the setup can continue even if one Cache fails? >>>> >>>> Thanks >>>> >>>> --Siju >> >> >> Put two behind the LB. Caches are colder but you get high availability. >> Easy to do maintenance this way. > > > We use Varnish on CentOS machines. We use Pacemaker for > high-availability (multiple virtual IPs) and DNSRR for balancing > end-users to the caches. > > see > http://blog.hongens.nl/guides/setting-up-a-pacemaker-cluster-on-centosrhel/ > for the pacemaker part.. > > -- > > > With kind regards, > > > Angelo Höngens > systems administrator > > MCSE on Windows 2003 > MCSE on Windows 2000 > MS Small Business Specialist > ------------------------------------------ > NetMatch > tourism internet software solutions > > Ringbaan Oost 2b > 5013 CA Tilburg > +31 (0)13 5811088 > +31 (0)13 5821239 > > A.Hongens at netmatch.nl > www.netmatch.nl > ------------------------------------------ > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > SCHOLASTIC > Read Every Day. > Lead a Better Life. > > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From bret at iwin.com Fri Jan 28 15:34:55 2011 From: bret at iwin.com (Bret A. Barker) Date: Fri, 28 Jan 2011 10:34:55 -0500 Subject: How to set up varnish not be a single point of failure In-Reply-To: References: <4D42C7AE.2070602@netmatch.nl> <8D333BF67F814C4D9803C99961BE5E70036675DB@corpex07.corp.scholasticinc.local> Message-ID: <20110128153455.GK98322@iwin.com> For some of our clusters we use a slightly different approach with the F5s: [1. F5] -> [2. Varnish pool] -> [3. F5] -> [4.
Tomcat pool] By going back through the F5, this setup allows us to keep all of our backend selection logic (not to mention other iRule goodness) together in the F5 configs instead of VCL. We've been using this scheme for quite some time w/good results - the extra hop is negligible in terms of latency vs. the average backend response times for dynamic requests. And we likewise don't have an issue w/redundant cache data for our use-cases. The extra backend request per Varnish instance per TTL period is minor vs. the impact of losing a Varnish instance that is the sole cache for a large percentage of your URL space. I think hash-based balancing is generally better for static content. -bret On Fri, Jan 28, 2011 at 03:55:47PM +0100, jdzstz - gmail dot com wrote: > In my opinion, the problem of having separate caching based on URL is > that in case of problems, the secondary failover server has an empty cache > for the rest of the URLs, which can affect throughput. > > Our architecture is the following: > > [1. F5 LB] => [2. Varnish] => [3. Tomcat] > > 1) F5 Big IP Hardware Load Balancer > 2) Four Varnish caches in different machines > 3) Four Tomcat servers in different machines > > We don't care to have redundant caching because: > - we don't have resource problems > - in case of problems, all varnish instances have the cache already populated > From jdzstz at gmail.com Fri Jan 28 16:56:10 2011 From: jdzstz at gmail.com (jdzstz - gmail dot com) Date: Fri, 28 Jan 2011 17:56:10 +0100 Subject: Please help break Varnish GZIP/ESI support before 3.0 In-Reply-To: <87903.1295953442@critter.freebsd.dk> References: <87903.1295953442@critter.freebsd.dk> Message-ID: I have compiled the new varnish 3.0 trunk on Solaris 10 (32 bits and 64 bits) and Cygwin and also run varnishtest. Solaris 10 64 bits compiles successfully, but it coredumps at boot time: SMA.s0: max size 100 MB.
Message from C-compiler: gcc: unrecognized option `-Kpic' Platform: -smalloc,-smalloc,-hcritbit 200 214 ----------------------------- Varnish Cache CLI 1.0 ----------------------------- -smalloc,-smalloc,-hcritbit Type 'help' for command list. Type 'quit' to close CLI session. Type 'start' to launch worker process. start child (17086) Started Pushing vcls failed: CLI communication error (hdr) Stopping Child 200 0 Child (17086) died signal=10 (core dumped) Child (-1) said Child (-1) said Child starts Child cleanup complete Varnish 2.1.2 works ok on the same machine, so I will inspect the core file and look for the error. On Solaris 32 bits and Cygwin, the following tests FAILED: Solaris 10 - 32 bits: # top TEST ./tests/e00021.vtc FAILED (2.656) # top TEST ./tests/e00022.vtc FAILED (3.479) # top TEST ./tests/e00023.vtc FAILED (2.517) # top TEST ./tests/e00024.vtc FAILED (2.440) # top TEST ./tests/g00002.vtc FAILED (3.455) # top TEST ./tests/m00004.vtc FAILED (2.431) # top TEST ./tests/v00006.vtc FAILED (7.090) # top TEST ./tests/v00012.vtc FAILED (30.017) # top TEST ./tests/v00017.vtc FAILED (3.180) Cygwin: # top TEST tests/e00022.vtc FAILED (8.336) # top TEST tests/g00002.vtc FAILED (6.281) # top TEST tests/s00002.vtc FAILED (30.111) (s00002.vtc may be an unresolved cygwin issue) Test logs are attached to the email; I will run some real tests on both platforms and report back. It seems that, at least, there are problems with e00022.vtc and g00002.vtc because they fail on both systems. -------------- next part -------------- A non-text attachment was scrubbed... Name: errores_cygwin Type: application/octet-stream Size: 25268 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: errores_solaris Type: application/octet-stream Size: 87428 bytes Desc: not available URL: From jdzstz at gmail.com Fri Jan 28 17:17:39 2011 From: jdzstz at gmail.com (jdzstz - gmail dot com) Date: Fri, 28 Jan 2011 18:17:39 +0100 Subject: Please help break Varnish GZIP/ESI support before 3.0 In-Reply-To: References: <87903.1295953442@critter.freebsd.dk> Message-ID: About my coredump problems at startup on Solaris 10 - 64 bits: I have recompiled in debug mode and inspected the core. The root of the problem is in VSC_main: it isn't initialized and generates a bus error in: vsl_wrap () at cache_shmlog.c:87 ( VSC_main->shm_cycles++;) Solaris version is: 5.10 Generic_120011-14 sun4v sparc SUNW,Sun-Fire-T2000 Varnish is compiled with gcc version 3.4.6 GDB result: GNU gdb 6.8 Copyright (C) 2008 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "sparc-sun-solaris2.10"... Reading symbols from /lib/sparcv9/libumem.so.1...done. Loaded symbols for /lib/64/libumem.so.1 Reading symbols from /export/home/web1/varnish3_64/lib/libvarnish.so.1...done. Loaded symbols for /export/home/web1/varnish3_64/lib/libvarnish.so.1 Reading symbols from /export/home/web1/varnish3_64/lib/libvarnishcompat.so.1...done. Loaded symbols for /export/home/web1/varnish3_64/lib/libvarnishcompat.so.1 Reading symbols from /export/home/web1/varnish3_64/lib/libvcl.so.1...done. Loaded symbols for /export/home/web1/varnish3_64/lib/libvcl.so.1 Reading symbols from /lib/sparcv9/librt.so.1...done. Loaded symbols for /lib/64/librt.so.1 Reading symbols from /export/home/web1/local/lib/libpcre.so.0...done. Loaded symbols for /export/home/web1/local/lib/libpcre.so.0 Reading symbols from /export/home/web1/varnish3_64/lib/libvgz.so.1...done.
Loaded symbols for /export/home/web1/varnish3_64/lib/libvgz.so.1 Reading symbols from /lib/sparcv9/libdl.so.1... warning: Lowest section in /lib/sparcv9/libdl.so.1 is .hash at 0000000000000120 done. Loaded symbols for /lib/64/libdl.so.1 Reading symbols from /lib/sparcv9/libnsl.so.1...done. Loaded symbols for /lib/64/libnsl.so.1 Reading symbols from /lib/sparcv9/libsocket.so.1...done. Loaded symbols for /lib/64/libsocket.so.1 Reading symbols from /lib/sparcv9/libm.so.2...done. Loaded symbols for /lib/64/libm.so.2 Reading symbols from /lib/sparcv9/libpthread.so.1... warning: Lowest section in /lib/sparcv9/libpthread.so.1 is .dynamic at 00000000000000b0 done. Loaded symbols for /lib/64/libpthread.so.1 Reading symbols from /lib/sparcv9/libc.so.1...done. Loaded symbols for /lib/64/libc.so.1 Reading symbols from /usr/local/lib/sparcv9/libgcc_s.so.1...done. Loaded symbols for /usr/local/lib/sparcv9/libgcc_s.so.1 Reading symbols from /lib/sparcv9/libaio.so.1...done. Loaded symbols for /lib/64/libaio.so.1 Reading symbols from /lib/sparcv9/libmd.so.1...done. Loaded symbols for /lib/64/libmd.so.1 Reading symbols from /platform/sun4v/lib/sparcv9/libc_psr.so.1...done. Loaded symbols for /platform/SUNW,Sun-Fire-T200/lib/sparcv9/libc_psr.so.1 Reading symbols from /lib/sparcv9/ld.so.1...done. Loaded symbols for /lib/sparcv9/ld.so.1 Core was generated by `/export/home/web1/varnish3_64/sbin/varnishd -d -a :7002 :8082 -T :8802 -f /expo'. Program terminated with signal 10, Bus error. 
[New process 82622 ] #0 0x0000000100061650 in vsl_wrap () at cache_shmlog.c:87 87 VSC_main->shm_cycles++; (gdb) bt #0 0x0000000100061650 in vsl_wrap () at cache_shmlog.c:87 #1 0x0000000100063124 in VSL_Init () at cache_shmlog.c:289 #2 0x0000000100052ee8 in child_main () at cache_main.c:105 #3 0x000000010007d678 in start_child (cli=0x1002b3ef0) at mgt_child.c:408 #4 0x000000010007f8f0 in mcf_server_startstop (cli=0x1002b3ef0, av=0x100211b00, priv=0x0) at mgt_child.c:657 #5 0xffffffff7f30a2f8 in cls_dispatch (cli=0x1002b3ef0, clp=0x1001c13b0, av=0x100211b00, ac=1) at cli_serve.c:231 #6 0xffffffff7f30a8bc in cls_vlu2 (priv=0x1002b3ec0, av=0x100211b00) at cli_serve.c:287 #7 0xffffffff7f30af18 in cls_vlu (priv=0x1002b3ec0, p=0x100208010 "start") at cli_serve.c:342 #8 0xffffffff7f3121b4 in LineUpProcess (l=0x1001f3890) at vlu.c:157 #9 0xffffffff7f3124b8 in VLU_Fd (fd=0, l=0x1001f3890) at vlu.c:182 #10 0xffffffff7f30c410 in CLS_PollFd (cs=0x1002b1f40, fd=0, timeout=0) at cli_serve.c:493 #11 0x0000000100084384 in mgt_cli_callback2 (e=0x1002b5f50, what=1) at mgt_cli.c:389 #12 0xffffffff7f31132c in vev_schedule_one (evb=0x10021ff10) at vev.c:501 #13 0xffffffff7f3106b0 in vev_schedule (evb=0x10021ff10) at vev.c:366 #14 0x000000010007f744 in MGT_Run () at mgt_child.c:639 #15 0x00000001000a564c in main (argc=0, argv=0xffffffff7ffff938) at varnishd.c:650 (gdb) print VSC_main $1 = (struct vsc_main *) 0xffffffff7601007c (gdb) print *VSC_main Cannot access memory at address 0xffffffff7601007c (gdb) print VSC_main->shm_cycles Cannot access memory at address 0xffffffff7601028c (gdb) 2011/1/28 jdzstz - gmail dot com: > I have compiled new varnish 3.0 trunk in Solaris 10 (32 bits and 64 > bits) and Cygwin and also executed varnishtests. > > Solaris 10 64 bits compiles successfully, but it coredumps at boot time: > > SMA.s0: max size 100 MB. 
> Message from C-compiler: > gcc: unrecognized option `-Kpic' > Platform: -smalloc,-smalloc,-hcritbit > 200 214 > ----------------------------- > Varnish Cache CLI 1.0 > ----------------------------- > -smalloc,-smalloc,-hcritbit > > Type 'help' for command list. > Type 'quit' to close CLI session. > Type 'start' to launch worker process. > > start > child (17086) Started > Pushing vcls failed: CLI communication error (hdr) > Stopping Child > 200 0 > > Child (17086) died signal=10 (core dumped) > Child (-1) said > Child (-1) said Child starts > Child cleanup complete > > Varnish 2.1.2 works ok on the same machine, so I will inspect the core file > and look for the error. > > On Solaris 32 bits and Cygwin, the following tests FAILED: > > Solaris 10 - 32 bits: > > # top TEST ./tests/e00021.vtc FAILED (2.656) > # top TEST ./tests/e00022.vtc FAILED (3.479) > # top TEST ./tests/e00023.vtc FAILED (2.517) > # top TEST ./tests/e00024.vtc FAILED (2.440) > # top TEST ./tests/g00002.vtc FAILED (3.455) > # top TEST ./tests/m00004.vtc FAILED (2.431) > # top TEST ./tests/v00006.vtc FAILED (7.090) > # top TEST ./tests/v00012.vtc FAILED (30.017) > # top TEST ./tests/v00017.vtc FAILED (3.180) > > Cygwin: > > # top TEST tests/e00022.vtc FAILED (8.336) > # top TEST tests/g00002.vtc FAILED (6.281) > # top TEST tests/s00002.vtc FAILED (30.111) > > (s00002.vtc may be an unresolved cygwin issue) > > Test logs are attached to the email; I will run some real tests on both > platforms and report back. > > It seems that, at least, there are problems with e00022.vtc and > g00002.vtc because they fail on both systems. > From phk at phk.freebsd.dk Fri Jan 28 18:08:33 2011 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 28 Jan 2011 18:08:33 +0000 Subject: Please help break Varnish GZIP/ESI support before 3.0 In-Reply-To: Your message of "Fri, 28 Jan 2011 18:17:39 +0100."
Message-ID: <16283.1296238113@critter.freebsd.dk> In message , jdzstz - gmail dot com writes: >The problem root is in VSC_main, it isn't initialized and generates a >bus error in: vsl_wrap () at cache_shmlog.c:87 ( >VSC_main->shm_cycles++;) VSC_main is initialized in the manager process and inherited by the child process. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From shirin.hossain at gmail.com Sun Jan 30 13:18:47 2011 From: shirin.hossain at gmail.com (Shirin Hossain) Date: Sun, 30 Jan 2011 19:18:47 +0600 Subject: Like to know Message-ID: Dear Sir I really glad to join with you. I like to join your activities I will stay in New york on March 21st to June 20 that's why my wish I will take any kind of work with you if you can Chance me then. Thanks & regards -- -------------------------------------- Ms. Shirin Hossain Chairman Bridge International Dhaka, Bangladesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnson at nmr.mgh.harvard.edu Sun Jan 30 15:12:51 2011 From: johnson at nmr.mgh.harvard.edu (Chris Johnson) Date: Sun, 30 Jan 2011 10:12:51 -0500 (EST) Subject: Varnish and virtual hosts Message-ID: Hi. VERY new to varnish (2.0.5) here. I need to pass some virtual host requests through to the Apache server which is set up to serve them. I've read the wiki. Not sure, but I THINK I need something like this in vcl_recv: if (req.http.host ~ "^virtual-domain.edu") { return (pass); } Or possibly return(lookup) as I would like this cached. Currently varnish is just swallowing all the virtual host requests. Do I need a similar entry in vcl_fetch? Am I even close? I'm guessing here. If anyone has a pointer to more detailed and thorough documentation, that would be appreciated also. Thank you.
------------------------------------------------------------------------------- Chris Johnson |Internet: johnson at nmr.mgh.harvard.edu Systems Administrator |Web: http://www.nmr.mgh.harvard.edu/~johnson NMR Center |Voice: 617.726.0949 Mass. General Hospital |FAX: 617.726.7422 149 (2301) 13th Street |Life, a bad idea whose time is past. Charlestown, MA., 02129 USA | Me ------------------------------------------------------------------------------- The information in this e-mail is intended only for the person to whom it is addressed. If you believe this e-mail was sent to you in error and the e-mail contains patient information, please contact the Partners Compliance HelpLine at http://www.partners.org/complianceline . If the e-mail was sent to you in error but does not contain patient information, please contact the sender and properly dispose of the e-mail. From perbu at varnish-software.com Sun Jan 30 15:20:09 2011 From: perbu at varnish-software.com (Per Buer) Date: Sun, 30 Jan 2011 16:20:09 +0100 Subject: Varnish and virtual hosts In-Reply-To: References: Message-ID: Hi, On Sun, Jan 30, 2011 at 4:12 PM, Chris Johnson wrote: > > VERY new to varnish (2.0.5) here. I need to pass some virtual host > requests through to the Apache server which is set up to serve them. > I've read the wiki. I would recommend you read through Using Varnish - http://www.varnish-cache.org/docs/master/tutorial/index.html - it will answer most of your questions and give you a good understanding. > Not sure, but I THINK I need something like this in > vcl_recv: > > if (req.http.host ~ "^virtual-domain.edu") { > return (pass); > } > > Or possibly return(lookup) as I would like this cached. Ideally Varnish should determine cacheability from the request and response headers. If you need to override the headers, there is specific information in the tutorial on how to do so. If you have any comments regarding the tutorial, please voice them here. Good luck, Per.
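For the virtual-host question above, a minimal vcl_recv sketch in Varnish 2.x syntax. The backend name "apache" and its address/port are assumptions for illustration, not taken from this thread:

```vcl
# Hypothetical backend for the virtual host; point it at
# wherever Apache actually listens.
backend apache {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    if (req.http.host ~ "^virtual-domain.edu") {
        set req.backend = apache;
        # lookup serves from (and populates) the cache;
        # use return (pass) instead to always hit the backend.
        return (lookup);
    }
}
```

Note that whether a response is actually stored still depends on the backend's response headers (Cache-Control, Set-Cookie and so on), as the reply above points out.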
-- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From rfine at finelinellc.com Sun Jan 30 17:02:35 2011 From: rfine at finelinellc.com (Russell Fine) Date: Sun, 30 Jan 2011 09:02:35 -0800 Subject: How To Perform Post-Request Async Processing Message-ID: I'm wondering if I can have Varnish perform any post-processing on a request after returning a response to the client. The project I'm working on is a high volume http request logging and redirection system. I want to make sure that the clients receive a very fast response (usually just an empty file with a 200, optionally a single pixel graphic). The primary reason for the request is for logging, not to perform a client action. However after the request occurs, I want to perform some specialized logging and perhaps transforms/analyses. These actions may take a couple of seconds and I don't want the client response to wait. One option is to just read and continuously parse the log files, but that seems inelegant. 1 - Can Varnish perform any post-processing on a request after returning the response? Thanks, Russell From perbu at varnish-software.com Sun Jan 30 17:51:01 2011 From: perbu at varnish-software.com (Per Buer) Date: Sun, 30 Jan 2011 18:51:01 +0100 Subject: obj.cacheable discontinuity and lifetime sw policy In-Reply-To: References: Message-ID: Hi, On Sun, Jan 30, 2011 at 5:56 PM, Amedeo Salvati wrote: > > Question is: how long statement, object... are supported through minor > version release(e.g. 2.1.x) or major version 2, 3, 4? Or they can > change at any time on every minor version? > > I know that on 2.1.5 released few days ago obj.cacheable it's present, > but on 2.1.6? VCL syntax won't change in a minor version. That is, we might add functionality in a minor release but never take something away.
-- Per Buer, Varnish Software Phone: +47 21 98 92 61 / Mobile: +47 958 39 117 / Skype: per.buer Varnish makes websites fly! Want to learn more about Varnish? http://www.varnish-software.com/whitepapers From perbu at varnish-software.com Sun Jan 30 17:54:55 2011 From: perbu at varnish-software.com (Per Buer) Date: Sun, 30 Jan 2011 18:54:55 +0100 Subject: How To Perform Post-Request Async Processing In-Reply-To: References: Message-ID: On Sun, Jan 30, 2011 at 6:02 PM, Russell Fine wrote: > > However after the request occurs, I want to perform some specialized > logging and perhaps transforms/analyses. These actions may take a > couple of seconds and I don't want the client response to wait. One > option is to just read and continuously parse the log files, but that > seems inelegant. I think that would be an excellent way of doing such operations. Just make a specialized version of varnishlog to suit your needs. > 1 - Can Varnish perform any post-processing on a request after > returning the response? No..
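Short of writing a specialized varnishlog in C, the same idea can be sketched by parsing varnishlog's text output in a separate process (roughly `varnishlog -c | python logsink.py`), so the slow analysis never sits in the client's response path. The sample lines below are invented for illustration; only the general record shape and the client-side tag names (RxRequest, RxURL, RxHeader, ReqEnd) come from varnishlog, and the ReqEnd data fields shown are placeholders:

```python
import re
from collections import defaultdict

# One varnishlog text record looks roughly like:
#   "   12 RxURL        c /pixel.gif"
# i.e. file descriptor, tag, side ('c' client / 'b' backend), data.
LINE_RE = re.compile(r"^\s*(\d+)\s+(\S+)\s+([bc-])\s+(.*)$")

def parse_records(lines):
    """Yield (fd, tag, side, data) tuples from varnishlog-style lines."""
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            yield int(m.group(1)), m.group(2), m.group(3), m.group(4)

def completed_requests(lines):
    """Group client-side records per fd; ReqEnd marks a finished request,
    which is then handed over for the slow, custom logging step."""
    pending = defaultdict(list)
    for fd, tag, side, data in parse_records(lines):
        if side != "c":
            continue
        pending[fd].append((tag, data))
        if tag == "ReqEnd":
            yield fd, pending.pop(fd)

# Invented sample of what one request's client-side records might look like:
sample = [
    "   12 RxRequest    c GET",
    "   12 RxURL        c /pixel.gif",
    "   12 RxHeader     c Host: example.com",
    "   12 ReqEnd       c 1234567890 0.0 0.1",
]

for fd, records in completed_requests(sample):
    urls = [data for tag, data in records if tag == "RxURL"]
    print(fd, urls)  # one completed request per line
```

In a real deployment the loop would read sys.stdin line by line and push each completed request onto a queue consumed by worker threads, keeping the heavy transforms/analyses fully asynchronous.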
When we do massive changes, we will change the major version, and the change you just spotted, is part of the 3.x set of changes. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From tfheen at varnish-software.com Mon Jan 31 08:14:55 2011 From: tfheen at varnish-software.com (Tollef Fog Heen) Date: Mon, 31 Jan 2011 09:14:55 +0100 Subject: obj.cacheable discontinuity and lifetime sw policy In-Reply-To: (Amedeo Salvati's message of "Sun, 30 Jan 2011 17:56:51 +0100") References: Message-ID: <87y6618p6o.fsf@qurzaw.varnish-software.com> ]] Amedeo Salvati | Question is: how long statement, object... are supported through minor | version release(e.g. 2.1.x) or major version 2, 3, 4? Or they can | change at any time on every minor version? | | I know that on 2.1.5 released few days ago ogj.cacheable it's present, | but on 2.1.6? We won't change VCL or parameters in backwards-incompatible ways in minor versions. So this change (and similar ones) won't go into 2.1.6, but can (and in this case will) go into 3.0. -- Tollef Fog Heen Varnish Software t: +47 21 98 92 64 From amedeo.salvati at gmail.com Sun Jan 30 16:56:51 2011 From: amedeo.salvati at gmail.com (Amedeo Salvati) Date: Sun, 30 Jan 2011 17:56:51 +0100 Subject: obj.cacheable discontinuity and lifetime sw policy Message-ID: hi, first i want to tank you and all developers for your work, every time i recommend varnish-cache to our customer, but today i tried to update varnish from source, switching from svn to git repository, and saw that obj.cacheable object was retired with this commit: commit e0db8b06a229058bb759962cb3a3db5e14828e25 Author: Poul-Henning Kamp Date: Fri Jan 28 21:37:38 2011 +0000 Retire the obj.cacheable VCL variable. Whatever our thinking at the time might have been, it clearly was woollen, and all it did was confuse people. 
The question is: how long are statements, objects... supported? Through minor version releases (e.g. 2.1.x) or major versions 2, 3, 4? Or can they change at any time in every minor version? I know that on 2.1.5, released a few days ago, obj.cacheable is present, but what about 2.1.6? best regards a