From des at linpro.no Mon Mar 3 12:16:29 2008
From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=)
Date: Mon, 03 Mar 2008 13:16:29 +0100
Subject: error page not delivered
In-Reply-To: <47B9315B.6090107@chicagosuntimes.com> (Ramon A. Hermon's message of "Mon, 18 Feb 2008 01:18:51 -0600")
References: <200802141232.39278.rhermon@chicagosuntimes.com> <47B9315B.6090107@chicagosuntimes.com>
Message-ID: 

Ramon A Hermon writes:
> I am not sure how you can tell that I have 2 caches running, if by cache
> you mean varnish instances then yes. However I get the same result if I
> shut one down.

Because I misread your email: you wrote "case #1" and "case #2", so I
assumed you meant two different servers.

What you're seeing is an instance of #197, which was fixed in trunk last
year, but (due to an oversight on my part) only recently in 1.1.  You're
running some sort of RedHat / Fedora / CentOS, right?  You can build the
latest 1.1 directly from source as follows:

# mkdir /tmp/varnish
# cd /tmp/varnish
# svn co http://varnish.projects.linpro.no/svn/branches/1.1 varnish-1.1.2
# tar zcf varnish-1.1.2.tar.gz varnish-1.1.2
# rpmbuild -tb varnish-1.1.2.tar.gz
# rpm -U *.rpm

DES
--
Dag-Erling Smørgrav
Senior Software Developer
Linpro AS - www.linpro.no

From des at linpro.no Mon Mar 3 12:21:54 2008
From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=)
Date: Mon, 03 Mar 2008 13:21:54 +0100
Subject: vcl-mode for emacs
In-Reply-To: <7x7ih2y66q.fsf@iostat.linpro.no> (Stig Sandbeck Mathisen's message of "Mon, 18 Feb 2008 10:17:33 +0100")
References: <7x7ih2y66q.fsf@iostat.linpro.no>
Message-ID: 

Stig Sandbeck Mathisen writes:
> I've created a "vcl-mode" for emacs. It does indenting and syntax
> highlighting.

This is great, thanks!
My only complaint so far is that it treats the vcl_* subroutine names as
keywords, when in fact they are identifiers (you can define your own subs
as well, not just override the predefined ones).

DES
--
Dag-Erling Smørgrav
Senior Software Developer
Linpro AS - www.linpro.no

From des at linpro.no Mon Mar 3 12:26:13 2008
From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=)
Date: Mon, 03 Mar 2008 13:26:13 +0100
Subject: VCL purge
In-Reply-To: <47C7EA65.3030602@idium.no> (=?utf-8?B?QW5kcsOpIMOYaWVu?= Langvand's message of "Fri, 29 Feb 2008 12:20:05 +0100")
References: <47BA9704.6060507@idium.no> <47C7EA65.3030602@idium.no>
Message-ID: 

André Øien Langvand writes:
> What I want is when a client sends e.g. Pragma: no-cache, I want
> Varnish to fetch from the backend and insert, not only do a pass.

In order to prevent Varnish from serving a cached object, you need to
purge it already in vcl_recv... which version are you running?

DES
--
Dag-Erling Smørgrav
Senior Software Developer
Linpro AS - www.linpro.no

From des at linpro.no Mon Mar 3 12:29:00 2008
From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=)
Date: Mon, 03 Mar 2008 13:29:00 +0100
Subject: obj.tll as string
In-Reply-To: (Anders =?utf-8?Q?V=C3=A4nnman's?= message of "Tue, 19 Feb 2008 13:10:24 +0100")
References: 
Message-ID: 

"Anders Vännman" writes:
> I tried this code in vcl_fetch in my vcl.conf, but it doesn't
> work. We are running varnish-1.1.2
>
> I get the error
>
> String representation of 'obj.ttl' not implemented yet

Yes, obj.ttl is a double, and Varnish lacks code to convert a double to
a string.  I'll get on it right away.

DES
--
Dag-Erling Smørgrav
Senior Software Developer
Linpro AS - www.linpro.no

From andre at idium.no Mon Mar 3 12:48:40 2008
From: andre at idium.no (=?UTF-8?B?QW5kcsOpIMOYaWVuIExhbmd2YW5k?=)
Date: Mon, 03 Mar 2008 13:48:40 +0100
Subject: VCL purge
In-Reply-To: 
References: <47BA9704.6060507@idium.no> <47C7EA65.3030602@idium.no>
Message-ID: <47CBF3A8.4090506@idium.no>

Yes, that was my thought.
Besides an if (req.http.Pragma ~ "no-cache") in vcl_recv, how can I
purge the requested URL (and host)?  I'm running the latest trunk.

--
André Øien Langvand - PGP: 0x7B1E3468
Systemadministrator - Idium AS - http://www.idium.no


Dag-Erling Smørgrav wrote:
> André Øien Langvand writes:
>> What I want is when a client sends e.g. Pragma: no-cache, I want
>> Varnish to fetch from the backend and insert, not only do a pass.
>
> In order to prevent Varnish from serving a cached object, you need to
> purge it already in vcl_recv... which version are you running?
>
> DES

From des at linpro.no Mon Mar 3 13:07:26 2008
From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=)
Date: Mon, 03 Mar 2008 14:07:26 +0100
Subject: still trying to purge
In-Reply-To: <200802201546.22900.cfarinella@appropriatesolutions.com> (Charlie Farinella's message of "Wed, 20 Feb 2008 15:46:22 -0500")
References: <200802201546.22900.cfarinella@appropriatesolutions.com>
Message-ID: 

Charlie Farinella writes:
> For the moment I have given up trying to do the http purge thing and
> am trying to find a workaround. I have the following shell script:
> [...]

What's wrong with varnishadm(1)?

DES
--
Dag-Erling Smørgrav
Senior Software Developer
Linpro AS - www.linpro.no

From des at linpro.no Mon Mar 3 13:09:50 2008
From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=)
Date: Mon, 03 Mar 2008 14:09:50 +0100
Subject: Varnishlogging
In-Reply-To: (duja@torlen.net's message of "Wed, 27 Feb 2008 10:16:08 +0100")
References: 
Message-ID: 

Erik writes:
> Ah ok, I see :( Unfortunately I'm not a C++ guru so I can just cross
> my fingers and hope that some handy guy writes a patch.

Actually, there isn't a single line of C++ code in Varnish.  You must
be thinking of Squid...
DES
--
Dag-Erling Smørgrav
Senior Software Developer
Linpro AS - www.linpro.no

From des at linpro.no Mon Mar 3 13:11:37 2008
From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=)
Date: Mon, 03 Mar 2008 14:11:37 +0100
Subject: Deliver ReqEnd in http header
In-Reply-To: (duja@torlen.net's message of "Thu, 28 Feb 2008 11:03:38 +0100")
References: 
Message-ID: 

Erik writes:
> Okey... I started to add some things that looked OK to me. But I
> have a feeling that I cannot use "string". Here is what I did:
>
> On line 551 in vcc_fixed_token.c:
> [...]

You need to edit vcc_gen_fixed_token.tcl instead.  That's why it says
the following right at the top:

/*
 * $Id: vcc_fixed_token.c 2461 2008-02-13 17:25:57Z des $
 *
 * NB: This file is machine generated, DO NOT EDIT!
 *
 * Edit vcc_gen_fixed_token.tcl instead
 */

DES
--
Dag-Erling Smørgrav
Senior Software Developer
Linpro AS - www.linpro.no

From des at linpro.no Mon Mar 3 13:13:09 2008
From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=)
Date: Mon, 03 Mar 2008 14:13:09 +0100
Subject: Blank pages with HTTP/1.0
In-Reply-To: <6C00C181-BCF6-4DE9-A525-658DD180CE39@gotamedia.se> (Fredrik Nygren's message of "Fri, 29 Feb 2008 10:23:21 +0100")
References: <6C00C181-BCF6-4DE9-A525-658DD180CE39@gotamedia.se>
Message-ID: 

Fredrik Nygren writes:
> I have a couple of servers with Varnish 1.1.2-5 installed. Some
> visitors have reported a blank page when they are visiting us. From
> what I can see in our logs, the problem seems to be related to the
> visitor's HTTP protocol version: requests with HTTP/1.0 get a blank
> page, but HTTP/1.1 requests do not. Our Apache logs show this:

#197, see my earlier email to Ramon A Hermon.
DES
--
Dag-Erling Smørgrav
Senior Software Developer
Linpro AS - www.linpro.no

From des at linpro.no Mon Mar 3 13:24:43 2008
From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=)
Date: Mon, 03 Mar 2008 14:24:43 +0100
Subject: varnish-misc Digest, Vol 23, Issue 25
In-Reply-To: <3A69629F-222C-40B2-837D-32D02ADDEDD1@gotamedia.se> (Fredrik Nygren's message of "Fri, 29 Feb 2008 13:48:43 +0100")
References: <3A69629F-222C-40B2-837D-32D02ADDEDD1@gotamedia.se>
Message-ID: 

Fredrik Nygren writes:
> I guess it's the same problem that I've had, and there is a patch
> regarding it:
>
> http://varnish.projects.linpro.no/ticket/197

The patch in the ticket is incorrect.  The revision numbers for the
correct fix are in the audit trail.

DES
--
Dag-Erling Smørgrav
Senior Software Developer
Linpro AS - www.linpro.no

From des at linpro.no Mon Mar 3 13:26:10 2008
From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=)
Date: Mon, 03 Mar 2008 14:26:10 +0100
Subject: VCL purge
In-Reply-To: <47CBF3A8.4090506@idium.no> (=?utf-8?B?QW5kcsOpIMOYaWVu?= Langvand's message of "Mon, 03 Mar 2008 13:48:40 +0100")
References: <47BA9704.6060507@idium.no> <47C7EA65.3030602@idium.no> <47CBF3A8.4090506@idium.no>
Message-ID: 

André Øien Langvand writes:
> Yes, that was my thought. Besides an if (req.http.Pragma ~ "no-cache")
> in vcl_recv, how can I purge the requested URL (and host)?

purge_hash() and purge_url(), as documented in vcl(7).

DES
--
Dag-Erling Smørgrav
Senior Software Developer
Linpro AS - www.linpro.no

From andre at idium.no Mon Mar 3 14:54:19 2008
From: andre at idium.no (=?UTF-8?B?QW5kcsOpIMOYaWVuIExhbmd2YW5k?=)
Date: Mon, 03 Mar 2008 15:54:19 +0100
Subject: VCL purge
In-Reply-To: 
References: <47BA9704.6060507@idium.no> <47C7EA65.3030602@idium.no> <47CBF3A8.4090506@idium.no>
Message-ID: <47CC111B.7080008@idium.no>

Yep, and as mentioned in the first mail, I've tried to use both req.url
and req.http.host in different combinations in purge_hash() without any
luck.
Using purge_url(), it seems to work quite nicely with
purge_url(req.url), but I would really prefer to use purge_hash().

--
André Øien Langvand - PGP: 0x7B1E3468
Systemadministrator - Idium AS - http://www.idium.no


Dag-Erling Smørgrav wrote:
> André Øien Langvand writes:
>> Yes, that was my thought. Besides an if (req.http.Pragma ~ "no-cache")
>> in vcl_recv, how can I purge the requested URL (and host)?
>
> purge_hash() and purge_url(), as documented in vcl(7).
>
> DES

From des at linpro.no Mon Mar 3 16:19:02 2008
From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=)
Date: Mon, 03 Mar 2008 17:19:02 +0100
Subject: VCL purge
In-Reply-To: <47CC111B.7080008@idium.no> (=?utf-8?B?QW5kcsOpIMOYaWVu?= Langvand's message of "Mon, 03 Mar 2008 15:54:19 +0100")
References: <47BA9704.6060507@idium.no> <47C7EA65.3030602@idium.no> <47CBF3A8.4090506@idium.no> <47CC111B.7080008@idium.no>
Message-ID: 

André Øien Langvand writes:
> Yep, and as mentioned in the first mail, I've tried to use both
> req.url and req.http.host in different combinations in purge_hash()
> without any luck.

purge_hash() wants the object's hash string, which is something like
req.url + '#' + req.host.  If you only have one virtual host, just use
purge_url() instead.

DES
--
Dag-Erling Smørgrav
Senior Software Developer
Linpro AS - www.linpro.no

From des at linpro.no Mon Mar 3 17:06:09 2008
From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=)
Date: Mon, 03 Mar 2008 18:06:09 +0100
Subject: obj.tll as string
In-Reply-To: (Dag-Erling =?utf-8?Q?Sm?= =?utf-8?Q?=C3=B8rgrav's?= message of "Mon, 03 Mar 2008 13:29:00 +0100")
References: 
Message-ID: 

Dag-Erling Smørgrav writes:
> "Anders Vännman" writes:
> > String representation of 'obj.ttl' not implemented yet
> Yes, obj.ttl is a double, and Varnish lacks code to convert a double
> to a string.  I'll get on it right away.

Fixed in trunk (r2546-2547).  I haven't checked how much work it is to
backport; I'll look into that tomorrow.
DES
--
Dag-Erling Smørgrav
Senior Software Developer
Linpro AS - www.linpro.no

From andre at idium.no Mon Mar 3 17:31:56 2008
From: andre at idium.no (=?UTF-8?B?QW5kcsOpIMOYaWVuIExhbmd2YW5k?=)
Date: Mon, 03 Mar 2008 18:31:56 +0100
Subject: VCL purge
In-Reply-To: 
References: <47BA9704.6060507@idium.no> <47C7EA65.3030602@idium.no> <47CBF3A8.4090506@idium.no> <47CC111B.7080008@idium.no>
Message-ID: <47CC360C.1090607@idium.no>

Thank you, that makes sense now. However, I am still not able to make it
work. E.g. using

purge_hash(req.url + '#' + req.http.host + '#$');

throws a syntax error, failing at the apostrophe (').  I also tried
different variations of quotation marks and backslashes. Is the escaping
wrong?

--
André Øien Langvand - PGP: 0x7B1E3468
Systemadministrator - Idium AS - http://www.idium.no


Dag-Erling Smørgrav wrote:
> André Øien Langvand writes:
>> Yep, and as mentioned in the first mail, I've tried to use both
>> req.url and req.http.host in different combinations in purge_hash()
>> without any luck.
>
> purge_hash() wants the object's hash string, which is something like
> req.url + '#' + req.host.  If you only have one virtual host, just use
> purge_url() instead.
>
> DES

From andre at idium.no Mon Mar 3 17:35:22 2008
From: andre at idium.no (=?UTF-8?B?QW5kcsOpIMOYaWVuIExhbmd2YW5k?=)
Date: Mon, 03 Mar 2008 18:35:22 +0100
Subject: VCL purge
In-Reply-To: <47CC111B.7080008@idium.no>
References: <47BA9704.6060507@idium.no> <47C7EA65.3030602@idium.no> <47CBF3A8.4090506@idium.no> <47CC111B.7080008@idium.no>
Message-ID: <47CC36DA.5020907@idium.no>

Thank you, that makes sense now. However, I am still not able to make it
work. E.g. using

purge_hash(req.url + '#' + req.http.host + '#$');

throws a syntax error, failing at the apostrophe (').  I also tried
different variations of quotation marks and backslashes. Is the escaping
wrong?

--
André Øien Langvand - PGP: 0x7B1E3468
Systemadministrator - Idium AS - http://www.idium.no


André
Øien Langvand wrote:
> Yep, and as mentioned in the first mail, I've tried to use both req.url
> and req.http.host in different combinations in purge_hash() without any
> luck. Using purge_url(), it seems to work quite nicely with
> purge_url(req.url), but I would really prefer to use purge_hash().
>
> --
> André Øien Langvand - PGP: 0x7B1E3468
> Systemadministrator - Idium AS - http://www.idium.no
>
>
> Dag-Erling Smørgrav wrote:
>> André Øien Langvand writes:
>>> Yes, that was my thought. Besides an if (req.http.Pragma ~ "no-cache")
>>> in vcl_recv, how can I purge the requested URL (and host)?
>> purge_hash() and purge_url(), as documented in vcl(7).
>>
>> DES
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc

From duja at torlen.net Mon Mar 3 18:19:43 2008
From: duja at torlen.net (Erik Torlen)
Date: Mon, 03 Mar 2008 19:19:43 +0100
Subject: Varnishlogging
In-Reply-To: 
References: 
Message-ID: <47CC413F.3030108@torlen.net>

Well, C then :P

Dag-Erling Smørgrav wrote:
> Erik writes:
>
>> Ah ok, I see :( Unfortunately I'm not a C++ guru so I can just cross
>> my fingers and hope that some handy guy writes a patch.
>>
>
> Actually, there isn't a single line of C++ code in Varnish.  You must
> be thinking of Squid...
> > DES > From andrew at imeem.com Mon Mar 3 21:03:01 2008 From: andrew at imeem.com (Andrew Knapp) Date: Mon, 3 Mar 2008 13:03:01 -0800 Subject: Child dying with "Too many open files" In-Reply-To: <0A3A6EA86530E64EA3BEA603EC4401CA020EAAE3@dexbe014-8.exch014.msoutlookonline.net> References: <0A3A6EA86530E64EA3BEA603EC4401CA01FC33B0@dexbe014-8.exch014.msoutlookonline.net><86db848d0802201747w68f34006hf95d27f682e46892@mail.gmail.com><0A3A6EA86530E64EA3BEA603EC4401CA01FC3463@dexbe014-8.exch014.msoutlookonline.net><0A3A6EA86530E64EA3BEA603EC4401CA020732BA@dexbe014-8.exch014.msoutlookonline.net><86db848d0802281356l7cf41a5g9dc24327bb065ba8@mail.gmail.com><0A3A6EA86530E64EA3BEA603EC4401CA020EA84C@dexbe014-8.exch014.msoutlookonline.net><86db848d0802281551j3dfa0221v5e9867bd340b40e3@mail.gmail.com> <0A3A6EA86530E64EA3BEA603EC4401CA020EAAE3@dexbe014-8.exch014.msoutlookonline.net> Message-ID: <0A3A6EA86530E64EA3BEA603EC4401CA020EAF43@dexbe014-8.exch014.msoutlookonline.net> I'm actually getting this a lot more frequently while running trunk (r2544). Every time the child dies it's cleaning out the cache and starting over. Right now it's happening about every 15 seconds, which seems crazy. Any ideas? -Andy > -----Original Message----- > From: varnish-misc-bounces at projects.linpro.no [mailto:varnish-misc- > bounces at projects.linpro.no] On Behalf Of Andrew Knapp > Sent: Friday, February 29, 2008 12:54 PM > To: Michael S. Fischer > Cc: varnish-misc at projects.linpro.no > Subject: RE: Child dying with "Too many open files" > > I'm still getting the "Too many open files" error on the child. > > $ sudo sysctl -a | grep file > fs.file-max = 131072 > > NFILES is also set to 131072. Any ideas? > > -Andy > > > -----Original Message----- > > From: michaelonlaw at gmail.com [mailto:michaelonlaw at gmail.com] On > Behalf > > Of Michael S. 
Fischer > > Sent: Thursday, February 28, 2008 3:51 PM > > To: Andrew Knapp > > Cc: varnish-misc at projects.linpro.no > > Subject: Re: Child dying with "Too many open files" > > > > I can't help but wonder if you'd set it too high. What happens when > > you set NFILES and fs.file-max both to 131072? I've tested that as a > > known good value. > > > > --Michael > > > > On Thu, Feb 28, 2008 at 2:58 PM, Andrew Knapp > wrote: > > > Yup, it is. Here's some output: > > > > > > $ ps auxwww | grep varnish > > > root 12036 0.0 0.0 27704 648 ? Ss 14:54 0:00 > > > /usr/sbin/varnishd -a :80 -f /etc/varnish/photo.vcl -T > > :6082 > > > > > > -t 120 -w 10,700,30 -s file,/c01/varnish/varnish_storage.bin,12G -u > > > varnish -g varnish -P /var/run/varnish.pid > > > varnish 12037 1.2 0.4 13119108 39936 ? Sl 14:54 0:00 > > > /usr/sbin/varnishd -a :80 -f /etc/varnish/photo.vcl -T > > :6082 > > > > > > -t 120 -w 10,700,30 -s file,/c01/varnish/varnish_storage.bin,12G -u > > > varnish -g varnish -P /var/run/varnish.pid > > > > > > -Andy > > > > > > > > > > -----Original Message----- > > > > From: michaelonlaw at gmail.com [mailto:michaelonlaw at gmail.com] On > > Behalf > > > > Of Michael S. Fischer > > > > > > > > > > Sent: Thursday, February 28, 2008 1:57 PM > > > > To: Andrew Knapp > > > > Cc: varnish-misc at projects.linpro.no > > > > Subject: Re: Child dying with "Too many open files" > > > > > > > > Is varnishd being started as root? (even if it drops privileges > > > > later) Only root can have > 1024 file descriptors open, to my > > > > knowledge. > > > > > > > > --Michael > > > > > > > > On Thu, Feb 28, 2008 at 11:48 AM, Andrew Knapp > > > > > wrote: > > > > > Didn't really get a answer to this, so I'm trying again. > > > > > > > > > > I've done some testing with the NFILES variable, and I keep > > getting > > > > the > > > > > same error as before ("Too many open files"). 
I've also > > verified > > > > that > > > > > the limit is actually being applied by putting a ulimit -a in > > the > > > > > /etc/init.d/varnish script. > > > > > > > > > > Anyone have any ideas? I'm running the 1.1.2-5 rpms from > sf.net > > on > > > > > Centos 5.1. > > > > > > > > > > Thanks, > > > > > Andy > > > > > > > > > > > > > > > > -----Original Message----- > > > > > > From: varnish-misc-bounces at projects.linpro.no > > [mailto:varnish- > > > > misc- > > > > > > bounces at projects.linpro.no] On Behalf Of Andrew Knapp > > > > > > > > > > > Sent: Wednesday, February 20, 2008 5:52 PM > > > > > > To: Michael S. Fischer > > > > > > Cc: varnish-misc at projects.linpro.no > > > > > > > > > > > > > > > > Subject: RE: Child dying with "Too many open files" > > > > > > > > > > > > Here's the output: > > > > > > > > > > > > $ sysctl fs.file-max > > > > > > fs.file-max = 767606 > > > > > > > > > > > > > -----Original Message----- > > > > > > > From: michaelonlaw at gmail.com > > [mailto:michaelonlaw at gmail.com] On > > > > > > Behalf > > > > > > > Of Michael S. Fischer > > > > > > > Sent: Wednesday, February 20, 2008 5:48 PM > > > > > > > To: Andrew Knapp > > > > > > > Cc: varnish-misc at projects.linpro.no > > > > > > > Subject: Re: Child dying with "Too many open files" > > > > > > > > > > > > > > Does 'sysctl fs.file-max' say? It should be >= the > ulimit. > > > > > > > > > > > > > > --Michael > > > > > > > > > > > > > > On Wed, Feb 20, 2008 at 4:04 PM, Andrew Knapp > > > > > > > > > wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Hello, > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > I'm getting this error when running varnishd: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > >> > > > > > > > > > > > > > > > > Child said (2, 15369): < > > > > cache_pool.c > > > > > > > line > > > > > > > > 217: > > > > > > > > > > > > > > > > Condition((pipe(w->pipe)) == 0) not true. 
> > > > > > > > > > > > > > > > errno = 24 (Too many open files) > > > > > > > > > > > > > > > > >> > > > > > > > > > > > > > > > > Cache child died pid=15369 status=0x6 > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > uname -a: > > > > > > > > > > > > > > > > Linux 2.6.18-53.1.4.el5 #1 SMP Fri Nov 30 > > 00:45:55 > > > > EST > > > > > > > 2007 > > > > > > > > x86_64 x86_64 x86_64 GNU/Linux > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > command used to start varnish: > > > > > > > > > > > > > > > > /usr/sbin/varnishd -d -d -a :80 -f > /etc/varnish/photo.vcl > > -T > > > > > > > > :6082 -t 120 -w 10,700,30 -s > > > > > > > > file,/c01/varnish/varnish_storage.bin,12G -u varnish -g > > > > varnish -P > > > > > > > > /var/run/varnish.pid > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > I have NFILES=270000 set in /etc/sysconfig/varnish. Do > I > > just > > > > need > > > > > > to > > > > > > > up > > > > > > > > that value? > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Thanks, > > > > > > > > > > > > > > > > Andy > > > > > > > > _______________________________________________ > > > > > > > > varnish-misc mailing list > > > > > > > > varnish-misc at projects.linpro.no > > > > > > > > http://projects.linpro.no/mailman/listinfo/varnish- > misc > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > varnish-misc mailing list > > > > > > varnish-misc at projects.linpro.no > > > > > > http://projects.linpro.no/mailman/listinfo/varnish-misc > > > > > _______________________________________________ > > > > > varnish-misc mailing list > > > > > varnish-misc at projects.linpro.no > > > > > http://projects.linpro.no/mailman/listinfo/varnish-misc > > > > > > > > > > > > > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc From andrew at 
imeem.com Tue Mar 4 03:25:58 2008
From: andrew at imeem.com (Andrew Knapp)
Date: Mon, 3 Mar 2008 19:25:58 -0800
Subject: Child dying with "Too many open files"
In-Reply-To: <0A3A6EA86530E64EA3BEA603EC4401CA020EAF43@dexbe014-8.exch014.msoutlookonline.net>
References: <0A3A6EA86530E64EA3BEA603EC4401CA01FC33B0@dexbe014-8.exch014.msoutlookonline.net><86db848d0802201747w68f34006hf95d27f682e46892@mail.gmail.com><0A3A6EA86530E64EA3BEA603EC4401CA01FC3463@dexbe014-8.exch014.msoutlookonline.net><0A3A6EA86530E64EA3BEA603EC4401CA020732BA@dexbe014-8.exch014.msoutlookonline.net><86db848d0802281356l7cf41a5g9dc24327bb065ba8@mail.gmail.com><0A3A6EA86530E64EA3BEA603EC4401CA020EA84C@dexbe014-8.exch014.msoutlookonline.net><86db848d0802281551j3dfa0221v5e9867bd340b40e3@mail.gmail.com><0A3A6EA86530E64EA3BEA603EC4401CA020EAAE3@dexbe014-8.exch014.msoutlookonline.net> <0A3A6EA86530E64EA3BEA603EC4401CA020EAF43@dexbe014-8.exch014.msoutlookonline.net>
Message-ID: <0A3A6EA86530E64EA3BEA603EC4401CA020EB22E@dexbe014-8.exch014.msoutlookonline.net>

I've been looking at this more, and no combination of NFILES and
fs.file-max seems to fix the problem.

When I run `varnishlog -i Debug` alongside the varnishd process, I get
tons and tons of these:

"Accept failed errno=24"

which is the same as the "Too many open files" error, I believe. Is
anyone else having this error?

Here's a look at varnishstat (after about 8 mins, which is on the high
end of time between crashes):

client_conn          26744        57.02 Client connections accepted
client_req           64444       137.41 Client requests received
cache_hit            30529        65.09 Cache hits
cache_hitpass            0         0.00 Cache hits for pass
cache_miss           33913        72.31 Cache misses
backend_conn         33914        72.31 Backend connections success
backend_fail             0         0.00 Backend connections failures
backend_reuse        31815        67.84 Backend connections reuses
backend_recycle      31935        68.09 Backend connections recycles
backend_unused           0         0.00 Backend connections unused
n_srcaddr             2145          .   N struct srcaddr
n_srcaddr_act          306          .   N active struct srcaddr
n_sess_mem             525          .   N struct sess_mem
n_sess                 439          .   N struct sess
n_object             34061          .   N struct object
n_objecthead         34061          .   N struct objecthead
n_smf                67916          .   N struct smf
n_smf_frag               0          .   N small free smf
n_smf_large              1          .   N large free smf
n_vbe_conn              11          .   N struct vbe_conn
n_bereq                139          .   N struct bereq
n_wrk                  199          .   N worker threads
n_wrk_create           316         0.67 N worker threads created
n_wrk_failed             0         0.00 N worker threads not created
n_wrk_max                0         0.00 N worker threads limited
n_wrk_queue              0         0.00 N queued work requests
n_wrk_overflow         316         0.67 N overflowed work requests
n_wrk_drop               0         0.00 N dropped work requests
n_backend                1          .   N backends
n_expired                0          .   N expired objects
n_lru_nuked              0          .   N LRU nuked objects
n_lru_saved              0          .   N LRU saved objects
n_lru_moved          15941          .   N LRU moved objects
n_deathrow               0          .   N objects on deathrow
losthdr                  0         0.00 HTTP header overflows
n_objsendfile            0         0.00 Objects sent with sendfile
n_objwrite           63764       135.96 Objects sent with write
s_sess               26603        56.72 Total Sessions
s_req                64287       137.07 Total Requests
s_pipe                   0         0.00 Total pipe
s_pass                   0         0.00 Total pass
s_fetch              33831        72.13 Total fetch
s_hdrbytes        20767331     44280.02 Total header bytes
s_bodybytes     2076771265   4428083.72 Total body bytes
sess_closed           2658         5.67 Session Closed
sess_pipeline            0         0.00 Session Pipeline
sess_readahead          92         0.20 Session Read Ahead
sess_herd            61921       132.03 Session herd
shm_records        3795728      8093.24 SHM records
shm_writes          144882       308.92 SHM writes
shm_cont               142         0.30 SHM MTX contention
sm_nreq              67971       144.93 allocator requests
sm_nobj              67915          .   outstanding allocations
sm_balloc       1611931648          .   bytes allocated
sm_bfree       11272970240          .   bytes free
backend_req          33914        72.31 Backend requests made

> -----Original Message-----
> From: varnish-misc-bounces at projects.linpro.no [mailto:varnish-misc-
> bounces at projects.linpro.no] On Behalf Of Andrew Knapp
> Sent: Monday, March 03, 2008 1:03 PM
> To: Michael S.
Fischer > Cc: varnish-misc at projects.linpro.no > Subject: RE: Child dying with "Too many open files" > > I'm actually getting this a lot more frequently while running trunk > (r2544). Every time the child dies it's cleaning out the cache and > starting over. Right now it's happening about every 15 seconds, which > seems crazy. > > Any ideas? > > -Andy > > > -----Original Message----- > > From: varnish-misc-bounces at projects.linpro.no [mailto:varnish-misc- > > bounces at projects.linpro.no] On Behalf Of Andrew Knapp > > Sent: Friday, February 29, 2008 12:54 PM > > To: Michael S. Fischer > > Cc: varnish-misc at projects.linpro.no > > Subject: RE: Child dying with "Too many open files" > > > > I'm still getting the "Too many open files" error on the child. > > > > $ sudo sysctl -a | grep file > > fs.file-max = 131072 > > > > NFILES is also set to 131072. Any ideas? > > > > -Andy > > > > > -----Original Message----- > > > From: michaelonlaw at gmail.com [mailto:michaelonlaw at gmail.com] On > > Behalf > > > Of Michael S. Fischer > > > Sent: Thursday, February 28, 2008 3:51 PM > > > To: Andrew Knapp > > > Cc: varnish-misc at projects.linpro.no > > > Subject: Re: Child dying with "Too many open files" > > > > > > I can't help but wonder if you'd set it too high. What happens > when > > > you set NFILES and fs.file-max both to 131072? I've tested that as > a > > > known good value. > > > > > > --Michael > > > > > > On Thu, Feb 28, 2008 at 2:58 PM, Andrew Knapp > > wrote: > > > > Yup, it is. Here's some output: > > > > > > > > $ ps auxwww | grep varnish > > > > root 12036 0.0 0.0 27704 648 ? Ss 14:54 0:00 > > > > /usr/sbin/varnishd -a :80 -f /etc/varnish/photo.vcl -T > > > :6082 > > > > > > > > -t 120 -w 10,700,30 -s file,/c01/varnish/varnish_storage.bin,12G > -u > > > > varnish -g varnish -P /var/run/varnish.pid > > > > varnish 12037 1.2 0.4 13119108 39936 ? 
Sl 14:54 0:00 > > > > /usr/sbin/varnishd -a :80 -f /etc/varnish/photo.vcl -T > > > :6082 > > > > > > > > -t 120 -w 10,700,30 -s file,/c01/varnish/varnish_storage.bin,12G > -u > > > > varnish -g varnish -P /var/run/varnish.pid > > > > > > > > -Andy > > > > > > > > > > > > > -----Original Message----- > > > > > From: michaelonlaw at gmail.com [mailto:michaelonlaw at gmail.com] > On > > > Behalf > > > > > Of Michael S. Fischer > > > > > > > > > > > > > Sent: Thursday, February 28, 2008 1:57 PM > > > > > To: Andrew Knapp > > > > > Cc: varnish-misc at projects.linpro.no > > > > > Subject: Re: Child dying with "Too many open files" > > > > > > > > > > Is varnishd being started as root? (even if it drops > privileges > > > > > later) Only root can have > 1024 file descriptors open, to my > > > > > knowledge. > > > > > > > > > > --Michael > > > > > > > > > > On Thu, Feb 28, 2008 at 11:48 AM, Andrew Knapp > > > > > > > wrote: > > > > > > Didn't really get a answer to this, so I'm trying again. > > > > > > > > > > > > I've done some testing with the NFILES variable, and I keep > > > getting > > > > > the > > > > > > same error as before ("Too many open files"). I've also > > > verified > > > > > that > > > > > > the limit is actually being applied by putting a ulimit -a > in > > > the > > > > > > /etc/init.d/varnish script. > > > > > > > > > > > > Anyone have any ideas? I'm running the 1.1.2-5 rpms from > > sf.net > > > on > > > > > > Centos 5.1. > > > > > > > > > > > > Thanks, > > > > > > Andy > > > > > > > > > > > > > > > > > > > -----Original Message----- > > > > > > > From: varnish-misc-bounces at projects.linpro.no > > > [mailto:varnish- > > > > > misc- > > > > > > > bounces at projects.linpro.no] On Behalf Of Andrew Knapp > > > > > > > > > > > > > Sent: Wednesday, February 20, 2008 5:52 PM > > > > > > > To: Michael S. 
Fischer > > > > > > > Cc: varnish-misc at projects.linpro.no > > > > > > > > > > > > > > > > > > > Subject: RE: Child dying with "Too many open files" > > > > > > > > > > > > > > Here's the output: > > > > > > > > > > > > > > $ sysctl fs.file-max > > > > > > > fs.file-max = 767606 > > > > > > > > > > > > > > > -----Original Message----- > > > > > > > > From: michaelonlaw at gmail.com > > > [mailto:michaelonlaw at gmail.com] On > > > > > > > Behalf > > > > > > > > Of Michael S. Fischer > > > > > > > > Sent: Wednesday, February 20, 2008 5:48 PM > > > > > > > > To: Andrew Knapp > > > > > > > > Cc: varnish-misc at projects.linpro.no > > > > > > > > Subject: Re: Child dying with "Too many open files" > > > > > > > > > > > > > > > > Does 'sysctl fs.file-max' say? It should be >= the > > ulimit. > > > > > > > > > > > > > > > > --Michael > > > > > > > > > > > > > > > > On Wed, Feb 20, 2008 at 4:04 PM, Andrew Knapp > > > > > > > > > > > wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Hello, > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > I'm getting this error when running varnishd: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > >> > > > > > > > > > > > > > > > > > > Child said (2, 15369): < wrk_thread(), > > > > > > cache_pool.c > > > > > > > > line > > > > > > > > > 217: > > > > > > > > > > > > > > > > > > Condition((pipe(w->pipe)) == 0) not true. 
> > > > > > > > > > > > > > > > > > errno = 24 (Too many open files) > > > > > > > > > > > > > > > > > > >> > > > > > > > > > > > > > > > > > > Cache child died pid=15369 status=0x6 > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > uname -a: > > > > > > > > > > > > > > > > > > Linux 2.6.18-53.1.4.el5 #1 SMP Fri Nov 30 > > > 00:45:55 > > > > > EST > > > > > > > > 2007 > > > > > > > > > x86_64 x86_64 x86_64 GNU/Linux > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > command used to start varnish: > > > > > > > > > > > > > > > > > > /usr/sbin/varnishd -d -d -a :80 -f > > /etc/varnish/photo.vcl > > > -T > > > > > > > > > :6082 -t 120 -w 10,700,30 -s > > > > > > > > > file,/c01/varnish/varnish_storage.bin,12G -u varnish > -g > > > > > varnish -P > > > > > > > > > /var/run/varnish.pid > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > I have NFILES=270000 set in /etc/sysconfig/varnish. > Do > > I > > > just > > > > > need > > > > > > > to > > > > > > > > up > > > > > > > > > that value? 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Thanks, > > > > > > > > > > > > > > > > > > Andy > > > > > > > > > _______________________________________________ > > > > > > > > > varnish-misc mailing list > > > > > > > > > varnish-misc at projects.linpro.no > > > > > > > > > http://projects.linpro.no/mailman/listinfo/varnish- > > misc > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > > varnish-misc mailing list > > > > > > > varnish-misc at projects.linpro.no > > > > > > > http://projects.linpro.no/mailman/listinfo/varnish-misc > > > > > > _______________________________________________ > > > > > > varnish-misc mailing list > > > > > > varnish-misc at projects.linpro.no > > > > > > http://projects.linpro.no/mailman/listinfo/varnish-misc > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at projects.linpro.no > > http://projects.linpro.no/mailman/listinfo/varnish-misc > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc From h.stener at betradar.com Tue Mar 4 09:53:13 2008 From: h.stener at betradar.com (Henning Stener) Date: Tue, 04 Mar 2008 10:53:13 +0100 Subject: Tuning varnish for high load In-Reply-To: <86db848d0802291223n2400e8c5ya4ef0e648f4d124b@mail.gmail.com> References: <658de5590802282152x6002e77fn93345953acbba4cb@mail.gmail.com> <86db848d0802291223n2400e8c5ya4ef0e648f4d124b@mail.gmail.com> Message-ID: <1204624394.28563.514.camel@henning-desktop> Are you sending one request per connection and closing it, or are you serving a number of requests to 10K different connections? In the last case how many requests/sec are you seeing? 
I have run a small test against the 1.1.2 release on an 8-CPU (dual quad-core) system with 4GB RAM running Debian etch, and for objects larger than a few KB, the gigabit link is the bottleneck. However, most of our requests are for very small files and I am hitting some other limitation that I haven't yet figured out fully. First of all, when running one request per connection, I hit the flood protection in the switch so I haven't gotten around to testing this properly yet, but for what it's worth varnish is chugging along at 7K requests/s or thereabouts in this situation. When bumping up the number of requests per connection, I am seeing 17K reqs/sec. After turning some knobs on the system, mainly net.core.somaxconn net.core.netdev_max_backlog net.ipv4.tcp_max_syn_backlog it jumps to ~25K reqs/sec. At this point, it refuses to go any higher even if I tune the sysctl settings higher, run the benchmark from more client machines or even add another network card. Load is 1.1, CPU usage in top is 30% (no single core is used 100% either) and there is no I/O wait, so unless there is something obvious I have missed, this is as fast as the system goes. Moving away from the synthetic tests to some real-world observations however, things become a bit different. In production, with a 98% hit rate, varnish sometimes becomes flaky around 6-7K requests/sec because of resource leaks, it seems. The virtual memory usage suddenly jumps to 80G or more, varnish stops serving new requests and the child restarts. This might have been plugged in trunk, but all later revisions I have tried have had other problems and have been really unstable for me. -Henning On Fri, 2008-02-29 at 12:23 -0800, Michael S. Fischer wrote: > On Thu, Feb 28, 2008 at 9:52 PM, Mark Smallcombe wrote: > > > What tuning recommendations do you have for varnish to help it handle high load? > > Funny you should ask, I've been spending a lot of time with Varnish in > the lab.
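(For anyone wanting to try Henning's tuning above, the three backlog sysctls he names can be collected into a sysctl.conf-style fragment. The values below are illustrative guesses, not the ones Henning used — the post does not say — and the fragment is written to a local file rather than /etc/sysctl.conf so it can be inspected first:)

```shell
# Illustrative values only; Henning does not state what he used.
# On a real system this fragment would be merged into /etc/sysctl.conf
# and applied with `sysctl -p`, which requires root.
cat > backlog-tuning.conf <<'EOF'
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 4096
net.ipv4.tcp_max_syn_backlog = 8192
EOF
wc -l < backlog-tuning.conf
```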
Here are a few observations I've made: > > (N.B. We're using 4-CPU Xeon hardware running RHEL 4.5, which runs > the 2.6.9 Linux kernel. All machines have at least 4GB RAM and run > the 64-bit Varnish build, but our results are equally applicable to > 32-bit builds) > > - When the cache hit ratio is very high (i.e. 100%), we discovered > that Varnish's default configuration of thread_pool_max is too high. > When there are too many worker threads, Varnish spends an inordinate > amount of time in system call space. We're not sure whether this is > due to some flaw in Varnish, our ancient Linux kernel (we were unable > to test with a modern 2.6.22 or later kernel that apparently has a > better scheduler), or is just a fundamental problem when a threaded > daemon like Varnish tries to service thousands of concurrent > connections. After much tweaking we determined that, on our hardware, > the optimal ratio of threads per CPU is about 16, or around 48-50 > threads on a 4-CPU box. To eliminate dropping work requests, it is > also advisable to raise overflow_max to a significantly higher ratio > than the default (e.g. 10000%). This will cause Varnish to consume > somewhat more RAM, but will provide outstanding performance. With > these tweaks, we were able to get Varnish to serve 10,000 concurrent > connections, flooding a Gigabit Ethernet channel with 5 KB cached > objects. > > - Conversely, when the cache hit ratio is 0, the default of 100 > threads is too low. (To create this scenario, we used 2 Varnish boxes: > the front-end proxy was configured to "pass" all requests to an > optimized backend Varnish instance that served all requests from > cache.) On the same 4-CPU hardware, we found that the optimal > thread_pool_max value in this situation is about 750. Again, we were > able to serve 10,000 concurrent connections after optimizing the > settings.
> > I find this interesting, because one would think that Varnish would be > making the system spend much more time in the scheduler in the second > scenario because it is doing significantly less work (no lookups, just > handing off connections to the appropriate backend). I suspect that > there may be some thread-scalability issues with the cache lookup > process. If someone with a suitably powerful lab setup (i.e. Gigabit > Ethernet, big hardware) can test with a more modern Linux kernel, I'd > be very interested in the results. Feel free to contact me if you > need assistance with setup/analysis. > > Finally: Varnish performance is absolutely atrocious on an 8-CPU RHEL > 4.5 system -- so bad that I have to turn down thread_pool_max to 4 or > restrict it to run only on 4 CPUs via taskset(1). I've heard that > MySQL has similar problems, so I suspect that this is a Linux kernel > issue. > > Best regards, > > --Michael > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc From des at linpro.no Tue Mar 4 11:45:50 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Tue, 04 Mar 2008 12:45:50 +0100 Subject: VCL purge In-Reply-To: <47CC36DA.5020907@idium.no> (=?utf-8?B?QW5kcsOpIMOYaWVu?= Langvand's message of "Mon, 03 Mar 2008 18:35:22 +0100") References: <47BA9704.6060507@idium.no> <47C7EA65.3030602@idium.no> <47CBF3A8.4090506@idium.no> <47CC111B.7080008@idium.no> <47CC36DA.5020907@idium.no> Message-ID: André Øien Langvand writes: > Thank you, that makes sense now. However, I am still not able to make it > work. E.g. using purge_hash(req.url + '#' + req.http.host + '#$'); throws > me a syntax error, failing at the apostrophe ('). Also tried different > variations of quotation marks and backslashes. Is the escaping wrong? Sorry, my bad: those should be double quotes, not single quotes. I'm not sure you need the final #, btw.
An easy way to check is to add a custom header with obj.hash (should work in the latest trunk) to see what it really looks like. DES -- Dag-Erling Smørgrav Senior Software Developer Linpro AS - www.linpro.no From andre at idium.no Tue Mar 4 13:04:16 2008 From: andre at idium.no (=?UTF-8?B?QW5kcsOpIMOYaWVuIExhbmd2YW5k?=) Date: Tue, 04 Mar 2008 14:04:16 +0100 Subject: VCL purge In-Reply-To: References: <47BA9704.6060507@idium.no> <47C7EA65.3030602@idium.no> <47CBF3A8.4090506@idium.no> <47CC111B.7080008@idium.no> <47CC36DA.5020907@idium.no> Message-ID: <47CD48D0.3010408@idium.no> obj.hash is very useful, thank you. Sadly, using double quotes doesn't help. purge_hash(req.url + "#" + req.http.host); -------------------#---------------------- Error: Expected ')' got '+' purge_hash(req.url + '#' + req.http.host); ---------------------#-------------------- Error: Syntax error at -- André Øien Langvand - PGP: 0x7B1E3468 Systemadministrator - Idium AS - http://www.idium.no Dag-Erling Smørgrav wrote: > André Øien Langvand writes: >> Thank you, that makes sense now. However, I am still not able to make it >> work. E.g. using purge_hash(req.url + '#' + req.http.host + '#$'); throws >> me a syntax error, failing at the apostrophe ('). Also tried different >> variations of quotation marks and backslashes. Is the escaping wrong? > > Sorry, my bad: those should be double quotes, not single quotes. > > I'm not sure you need the final #, btw. An easy way to check is to > add a custom header with obj.hash (should work in the latest trunk) to > see what it really looks like.
> > DES From des at linpro.no Tue Mar 4 13:55:05 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Tue, 04 Mar 2008 14:55:05 +0100 Subject: VCL purge In-Reply-To: <47CD48D0.3010408@idium.no> (=?utf-8?B?QW5kcsOpIMOYaWVu?= Langvand's message of "Tue, 04 Mar 2008 14:04:16 +0100") References: <47BA9704.6060507@idium.no> <47C7EA65.3030602@idium.no> <47CBF3A8.4090506@idium.no> <47CC111B.7080008@idium.no> <47CC36DA.5020907@idium.no> <47CD48D0.3010408@idium.no> Message-ID: André Øien Langvand writes: > obj.hash is very useful, thank you. > > Sadly, using double quotes doesn't help. > > purge_hash(req.url + "#" + req.http.host); > -------------------#---------------------- > Error: Expected ')' got '+' > > purge_hash(req.url + '#' + req.http.host); > ---------------------#-------------------- > Error: Syntax error at I'm going to have to pass this on to Poul-Henning. I thought we supported string concatenation, but perhaps not in function calls? DES -- Dag-Erling Smørgrav Senior Software Developer Linpro AS - www.linpro.no From michael at dynamine.net Tue Mar 4 16:24:05 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Tue, 4 Mar 2008 08:24:05 -0800 Subject: Tuning varnish for high load In-Reply-To: <1204624394.28563.514.camel@henning-desktop> References: <658de5590802282152x6002e77fn93345953acbba4cb@mail.gmail.com> <86db848d0802291223n2400e8c5ya4ef0e648f4d124b@mail.gmail.com> <1204624394.28563.514.camel@henning-desktop> Message-ID: <86db848d0803040824j5ca83e50hde0a3aa584d243a0@mail.gmail.com> On Tue, Mar 4, 2008 at 1:53 AM, Henning Stener wrote: > > Are you sending one request per connection and closing it, or are you > serving a number of requests to 10K different connections? In the last > case how many requests/sec are you seeing? In our test, we sent about 200 requests per connection, and achieved around 16,000-18,000 requests/sec. Trying to issue one request per connection will quickly exhaust the number of available open TCP sockets.
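(A back-of-the-envelope calculation shows why one request per connection runs out of sockets so quickly; the numbers below are assumed Linux-era defaults, not figures from this thread:)

```shell
# Each closed connection holds its ephemeral port in TIME_WAIT for a
# while, so a single client IP can only open so many new connections
# per second. Assumed defaults, not measured values:
ephemeral_ports=28232   # default Linux range 32768-61000 of that era
time_wait=60            # seconds a closed socket lingers in TIME_WAIT
echo $(( ephemeral_ports / time_wait ))   # roughly 470 new connections/s
```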
--Michael From anders.vannman at vk.se Wed Mar 5 11:00:06 2008 From: anders.vannman at vk.se (Anders =?ISO-8859-1?Q?V=E4nnman?=) Date: Wed, 05 Mar 2008 12:00:06 +0100 Subject: varnish-misc Digest, Vol 24, Issue 4 Message-ID: I am away today, back tomorrow. Email support at vk.se or call 090-70 84 70 if it concerns a support matter. From des at linpro.no Wed Mar 5 11:21:42 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Wed, 05 Mar 2008 12:21:42 +0100 Subject: VCL purge In-Reply-To: ("Dag-Erling =?utf-8?Q?Sm?= =?utf-8?Q?=C3=B8rgrav=22's?= message of "Tue\, 04 Mar 2008 14\:55\:05 +0100") References: <47BA9704.6060507@idium.no> <47C7EA65.3030602@idium.no> <47CBF3A8.4090506@idium.no> <47CC111B.7080008@idium.no> <47CC36DA.5020907@idium.no> <47CD48D0.3010408@idium.no> Message-ID: <87lk4xza9l.fsf@des.linpro.no> Dag-Erling Smørgrav writes: > André Øien Langvand writes: > > purge_hash(req.url + "#" + req.http.host); > > -------------------#---------------------- > > Error: Expected ')' got '+' > > > > purge_hash(req.url + '#' + req.http.host); > > ---------------------#-------------------- > > Error: Syntax error at > I'm going to have to pass this on to Poul-Henning. I thought we > supported string concatenation, but perhaps not in function calls? phk says to remove the + :) I tested it with the following VCL code: sub vcl_fetch { set obj.http.X-Varnish-Hash = obj.hash; set obj.http.X-Varnish-Foo = req.url "#" req.http.host "#"; } which gives X-Varnish-Hash: /index.html#www.des.no# X-Varnish-Foo: /index.html#www.des.no# showing that the two are identical.
DES -- Dag-Erling Sm?rgrav Senior Software Developer Linpro AS - www.linpro.no From des at linpro.no Wed Mar 5 11:22:45 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Wed, 05 Mar 2008 12:22:45 +0100 Subject: Child dying with "Too many open files" In-Reply-To: <0A3A6EA86530E64EA3BEA603EC4401CA020EB22E@dexbe014-8.exch014.msoutlookonline.net> (Andrew Knapp's message of "Mon\, 3 Mar 2008 19\:25\:58 -0800") References: <0A3A6EA86530E64EA3BEA603EC4401CA01FC33B0@dexbe014-8.exch014.msoutlookonline.net> <86db848d0802201747w68f34006hf95d27f682e46892@mail.gmail.com> <0A3A6EA86530E64EA3BEA603EC4401CA01FC3463@dexbe014-8.exch014.msoutlookonline.net> <0A3A6EA86530E64EA3BEA603EC4401CA020732BA@dexbe014-8.exch014.msoutlookonline.net> <86db848d0802281356l7cf41a5g9dc24327bb065ba8@mail.gmail.com> <0A3A6EA86530E64EA3BEA603EC4401CA020EA84C@dexbe014-8.exch014.msoutlookonline.net> <86db848d0802281551j3dfa0221v5e9867bd340b40e3@mail.gmail.com> <0A3A6EA86530E64EA3BEA603EC4401CA020EAAE3@dexbe014-8.exch014.msoutlookonline.net> <0A3A6EA86530E64EA3BEA603EC4401CA020EAF43@dexbe014-8.exch014.msoutlookonline.net> <0A3A6EA86530E64EA3BEA603EC4401CA020EB22E@dexbe014-8.exch014.msoutlookonline.net> Message-ID: <87hcflza7u.fsf@des.linpro.no> "Andrew Knapp" writes: > I've been looking at this more, and no combination of NFILES and > fs.file-max seem to fix the problem. When I run `varnishlog -i Debug` > along with the varnishd process, I get tons and tons of these: > > "Accept failed errno=24" > > Which is the same as the "Too many open files" error I believe. add 'ulimit -a' to your init script, right before the line that actually starts varnishd, and send us the output. 
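(A sketch of the change DES is asking for; the limit value and the varnishd start line are placeholders, not Andrew's actual configuration:)

```shell
# Hypothetical init-script excerpt: log the effective limits, try to
# raise the open-file limit, then start the daemon.
ulimit -a                      # send this output back to the list
ulimit -n 1024 2>/dev/null || echo "could not set open-file limit" >&2
echo "effective open-file limit: $(ulimit -n)"
# exec /usr/sbin/varnishd ...  # the real varnishd start line follows here
```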
DES -- Dag-Erling Sm?rgrav Senior Software Developer Linpro AS - www.linpro.no From andre at idium.no Wed Mar 5 11:52:21 2008 From: andre at idium.no (=?UTF-8?B?QW5kcsOpIMOYaWVuIExhbmd2YW5k?=) Date: Wed, 05 Mar 2008 12:52:21 +0100 Subject: VCL purge In-Reply-To: <87lk4xza9l.fsf@des.linpro.no> References: <47BA9704.6060507@idium.no> <47C7EA65.3030602@idium.no> <47CBF3A8.4090506@idium.no> <47CC111B.7080008@idium.no> <47CC36DA.5020907@idium.no> <47CD48D0.3010408@idium.no> <87lk4xza9l.fsf@des.linpro.no> Message-ID: <47CE8975.5030502@idium.no> Wonderful, however, doing this from the purge_hash() function still fails: purge_hash(req.url "#" req.http.host "#"); -------------------###-------------------- Error: Expected ')' got '"#"' -- Andr? ?ien Langvand - PGP: 0x7B1E3468 Systemadministrator - Idium AS - http://www.idium.no Dag-Erling Sm?rgrav wrote: > Dag-Erling Sm?rgrav writes: >> Andr? ?ien Langvand writes: >>> purge_hash(req.url + "#" + req.http.host); >>> -------------------#---------------------- >>> Error: Expected ')' got '+' >>> >>> purge_hash(req.url + '#' + req.http.host); >>> ---------------------#-------------------- >>> Error: Syntax error at >> I'm going to have to pass this on to Poul-Henning. I thought we >> supported string concatenation, but perhaps not in function calls? > > phk says to remove the + :) > > I tested it with the following VCL code: > > sub vcl_fetch { > set obj.http.X-Varnish-Hash = obj.hash; > set obj.http.X-Varnish-Foo = req.url "#" req.http.host "#"; > } > > which gives > > X-Varnish-Hash: /index.html#www.des.no# > X-Varnish-Foo: /index.html#www.des.no# > > showing that the two are identical. 
> > DES From ssm at linpro.no Wed Mar 5 15:06:47 2008 From: ssm at linpro.no (Stig Sandbeck Mathisen) Date: Wed, 05 Mar 2008 16:06:47 +0100 Subject: vcl-mode for emacs In-Reply-To: (Dag-Erling =?utf-8?Q?Sm?= =?utf-8?Q?=C3=B8rgrav's?= message of "Mon, 03 Mar 2008 13:21:54 +0100") References: <7x7ih2y66q.fsf@iostat.linpro.no> Message-ID: <7xbq5tmcqg.fsf@iostat.linpro.no> Dag-Erling Smørgrav writes: > My only complaint so far is that it considers vcl_* as keywords when in > fact they are identifiers (you can define your own subs as well, not > just override the predefined ones) I'll need to match on something like "sub ([[:alpha:]])", and use the capture group to highlight, then. I don't think "identifier" is available as a font-lock-mode colouring, but "function" is. That may be better than "keyword". -- Stig Sandbeck Mathisen, Linpro From phk at phk.freebsd.dk Wed Mar 5 20:29:41 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 05 Mar 2008 20:29:41 +0000 Subject: Tuning varnish for high load In-Reply-To: Your message of "Fri, 29 Feb 2008 12:23:34 PST." <86db848d0802291223n2400e8c5ya4ef0e648f4d124b@mail.gmail.com> Message-ID: <28889.1204748981@critter.freebsd.dk> In message <86db848d0802291223n2400e8c5ya4ef0e648f4d124b at mail.gmail.com>, "Michael S. Fischer" writes: >Funny you should ask, I've been spending a lot of time with Varnish in >the lab. Here are a few observations I've made: >- When the cache hit ratio is very high (i.e. 100%), we discovered >that Varnish's default configuration of thread_pool_max is too high. That's unexpected for me, I wouldn't expect many threads to be created in this case. You don't say which version of varnish you are running, but there have been some serious changes in -trunk recently, so if you can check if this still happens with -trunk, that would help me understand the problem better. Do notice, that if you have a high hit ratio under high traffic, adding more thread pools is recommended to lower the mutex congestion.
In -trunk you can set the diag_bitmap parameter to 0x10 and run varnishtop -i debug -I MTX to get a view of contested mutexes; output from that would be welcome. >When there are too many worker threads, Varnish spends an inordinate >amount of time in system call space. Yes, having more worker threads than you need is not recommended, it just gives the scheduler too much work. >I find this interesting, because one would think that Varnish would be >making the system spend much more time in the scheduler in the second >scenario because it is doing significantly less work (no lookups, just >handing off connections to the appropriate backend). Connections are not "handed off"; the data is transferred between the two connections by varnish. >Finally: Varnish performance is absolutely atrocious on a 8-CPU RHEL >4.5 system -- so bad that I have to turn down thread_pool_max to 4 or >restrict it to run only on 4 CPUs via taskset(1). I've heard that >MySQL has similar problems, so I suspect that this is a Linux kernel >issue. I'm not up to date on the various Linux kernels' performance. You can find some benchmarking material from Kris Kennaway, who's keeping an eye on FreeBSD's scalability and often compares to various Linux kernels here: http://people.freebsd.org/~kris/scaling/ -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From andrew at imeem.com Wed Mar 5 20:36:06 2008 From: andrew at imeem.com (Andrew Knapp) Date: Wed, 5 Mar 2008 12:36:06 -0800 Subject: Varnish died while running in debug mode Message-ID: <0A3A6EA86530E64EA3BEA603EC4401CA0218FD8C@dexbe014-8.exch014.msoutlookonline.net> I have an instance of varnish running on FreeBSD 7.0-RELEASE, and it crashed last night, and I was wondering if anyone could shed some light on this.
Here are the details (I'm running trunk as of r2554): Error message: start child pid 3155 Child said (2, 3155): <> Child said (2, 3155): <> Child said (2, 3155): <> Child not responding to ping Child not responding to ping Child not responding to ping Child not responding to ping Child not responding to ping Cache child died pid=3155 status=0x9 Clean child Child cleaned start child pid 3360 Pushing vcls failed: CLI communication error Child said (1, 3360): <> unlink ./vcl.1P9zoqAU.o #uname -a FreeBSD 7.0-RELEASE FreeBSD 7.0-RELEASE #0: Sun Feb 24 10:35:36 UTC 2008 root at driscoll.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64 Varnish command line: /usr/local/sbin/varnishd -d -d -a :80 -f /etc/varnish/photo.vcl -T :6082 -t 120 -w 10,700,30 -s file,/c01/varnish/varnish_storage.bin,40% -u varnish -g varnish -p lru_interval=3600 -h classic,500009 -P /var/run/varnish.pid I can include my VCL file if that's useful. The child repeatedly died last night also, with the same "Child not responding to ping" error message, but this was the only time that the master crashed. For what it's worth, I followed all the performance tips on the wiki relating to freebsd. Also, the 40% of the /c01 mount is ~ 138GB. Thanks, Andy -- Andrew Knapp | systems engineer | imeem, inc. | 415.373.1857 o | 415.994.5752 m | 415.836.0088 f -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at longhill.org.uk Fri Mar 7 12:16:54 2008 From: alex at longhill.org.uk (Alex Harrington) Date: Fri, 7 Mar 2008 12:16:54 -0000 Subject: URL Rewriting Message-ID: <2779A35BE6EBB14A808809C7E825052990CBB5@mcexchange.mail.longhill.brighton-hove.sch.uk> Hi all I have a webserver over which I have no control. It's a managed CMS solution running Apache, however the vendor will not allow us to add URL rewriting rules or virtual hosts to the configuration. 
It serves different websites at the following URLs: http://example.com http://example.com/dlc http://example.com/wyc I'm hoping I can use Varnish to sit between the web server and my users to disguise the subfolders into separate domains - e.g. Client URL => Backend URL http://example.com => http://example.com http://example.com/index.phtml?d=12345 => http://10.108.1.59/index.phtml?d=12345 http://www.deansleisurecentre.com => http://10.108.1.59/dlc http://www.deansleisurecentre.com/index.phtml?d=12121 => http://10.108.1.59/index.phtml?d=12121 http://www.woodingdeanyc.org.uk => http://10.108.1.59/wyc http://www.woodingdeanyc.org.uk/index.phtml?d=54321 => http://10.108.1.59/index.phtml?d=54321 DNS for deansleisurecentre.com and woodingdeanyc.org.uk would point to the server running Varnish on port 80. The client should not see the redirect to /dlc or /wyc (i.e. so that it thinks it's looking at the root of the domain). Once the client has connected, /dlc is redirected by the CMS to a URL like /index.phtml?d=12345 and all subsequent pages use that format. The CMS will be happy with deansleisurecentre.com/index.phtml?d=12345. I've set up a Debian box using testing and installed the varnish package 1.1.2. Here is my current config: default.vcl backend default { set backend.host = "10.108.1.59"; set backend.port = "80"; } sub vcl_recv { if (req.http.host ~ "^(www.)?deansleisurecentre.com") { set req.http.host = "deansleisurecentre.com"; set req.http.url = regsub(req.url, "^/$", "/dlc"); } if (req.request != "GET" && req.request != "HEAD") { pipe; } if (req.http.Expect) { pipe; } if (req.http.Authenticate || req.http.Cookie) { pass; } lookup; } I've edited my local PC's hosts file to point deansleisurecentre.com to the box running Varnish, however when I access deansleisurecentre.com, I get http://example.com instead without the rewrite.
Looking at the logs, it appears Varnish is doing the rewrite in the headers correctly, but the RxURL and TxURL fields always match - surely shouldn't the TxURL change to match the URL shown in the header (which selectively includes my URL rewrite?). I saw an example using berew.http.url however I can only seem to get that to work in vcl_hit subroutine (currently commented out in my config) and it seems to make no difference when used. Can Varnish do what I want, and if so, does anyone have a pointer that would help me track down the right config?! Thanks Alex -- Alex Harrington - Network Manager, Longhill High School t: 01273 304086 | e: alex at longhill.org.uk From des at linpro.no Sat Mar 8 10:35:35 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Sat, 08 Mar 2008 11:35:35 +0100 Subject: Tuning varnish for high load In-Reply-To: <28889.1204748981@critter.freebsd.dk> (Poul-Henning Kamp's message of "Wed\, 05 Mar 2008 20\:29\:41 +0000") References: <28889.1204748981@critter.freebsd.dk> Message-ID: <87r6elcxl4.fsf@des.linpro.no> "Poul-Henning Kamp" writes: > Do notice, that if you have a high hit ratio under high traffic, > adding more thread pools is recommended to lower the mutex > congestion. This is as good an occasion as any to mention that these parameters are poorly named; thread_pool_{min,max} are the minimum and maximum *total* number of threads, while thread_pools is the number of pools. The actual minimum and maximum number of threads *per pool* is thread_pool_{min,max} / thread_pools. 
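(Concretely, with made-up parameter values — not figures from this thread — the per-pool numbers work out like this:)

```shell
# thread_pool_min/max are totals divided across all pools; the names
# suggest per-pool values, but they are not.
thread_pools=4
thread_pool_min=100
thread_pool_max=1000
echo "per-pool min: $(( thread_pool_min / thread_pools ))"   # per-pool min: 25
echo "per-pool max: $(( thread_pool_max / thread_pools ))"   # per-pool max: 250
```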
DES -- Dag-Erling Sm?rgrav Senior Software Developer Linpro AS - www.linpro.no From des at linpro.no Sat Mar 8 10:38:00 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Sat, 08 Mar 2008 11:38:00 +0100 Subject: URL Rewriting In-Reply-To: <2779A35BE6EBB14A808809C7E825052990CBB5@mcexchange.mail.longhill.brighton-hove.sch.uk> (Alex Harrington's message of "Fri\, 7 Mar 2008 12\:16\:54 -0000") References: <2779A35BE6EBB14A808809C7E825052990CBB5@mcexchange.mail.longhill.brighton-hove.sch.uk> Message-ID: <87myp9cxh3.fsf@des.linpro.no> "Alex Harrington" writes: > I'm hoping I can use Varnish to sit between the web server and my users > to disguise the subfolders in to separate domains - eg > > Client URL => Backend URL > http://example.com => http://example.com > http://example.com/index.phtml?d=12345 => > http://10.108.1.59/index.phtml?d=12345 > > http://www.deansleisurecentre.com => http://10.108.1.59/dlc > http://www.deansleisurecentre.com/index.phtml?d=12121 => > http://10.108.1.59/index.phtml?d=12121 > > http://www.woodingdeanyc.org.uk => http://10.108.1.59/wyc > http://www.woodingdeanyc.org.uk/index.phtml?d=54321 => > http://10.108.1.59/index.phtml?d=54321 Yes, Varnish can do that. > sub vcl_recv { > if (req.http.host ~ "^(www.)?deansleisurecentre.com") { you'll want to terminate that regexp with $. > set req.http.host = "deansleisurecentre.com"; > set req.http.url = regsub(req.url, "^/$", "/dlc"); > } The rest of your vcl_recv is redundant and will cause trouble if / when you upgrade to a newer version. > I've edited my local PCs hosts file to point deansleisurecentre.com to > the box running Varnish, however when I access deansleisurecentre.com, I > get http://example.com instead without the rewrite. You need to set up a separate backend for deansleisurecentre.com. 
DES -- Dag-Erling Sm?rgrav Senior Software Developer Linpro AS - www.linpro.no From des at linpro.no Sat Mar 8 10:38:47 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Sat, 08 Mar 2008 11:38:47 +0100 Subject: VCL purge In-Reply-To: <47CE8975.5030502@idium.no> (=?utf-8?B?IkFuZHLDqSDDmGllbg==?= Langvand"'s message of "Wed\, 05 Mar 2008 12\:52\:21 +0100") References: <47BA9704.6060507@idium.no> <47C7EA65.3030602@idium.no> <47CBF3A8.4090506@idium.no> <47CC111B.7080008@idium.no> <47CC36DA.5020907@idium.no> <47CD48D0.3010408@idium.no> <87lk4xza9l.fsf@des.linpro.no> <47CE8975.5030502@idium.no> Message-ID: <87fxv1cxfs.fsf@des.linpro.no> Andr? ?ien Langvand writes: > Wonderful, however, doing this from the purge_hash() function still fails: > > purge_hash(req.url "#" req.http.host "#"); > -------------------###-------------------- > Error: Expected ')' got '"#"' Please submit a ticket. It would appear string concatenation only works in assignments... DES -- Dag-Erling Sm?rgrav Senior Software Developer Linpro AS - www.linpro.no From alex at longhill.org.uk Sat Mar 8 18:08:01 2008 From: alex at longhill.org.uk (Alex Harrington) Date: Sat, 8 Mar 2008 18:08:01 -0000 Subject: URL Rewriting In-Reply-To: <2779A35BE6EBB14A808809C7E825052990CBB5@mcexchange.mail.longhill.brighton-hove.sch.uk> References: <2779A35BE6EBB14A808809C7E825052990CBB5@mcexchange.mail.longhill.brighton-hove.sch.uk> Message-ID: <2779A35BE6EBB14A808809C7E825052990CBBA@mcexchange.mail.longhill.brighton-hove.sch.uk> Hi again Many thanks for all the help on and off list. I've tightened the regexp as suggested and am now setting req.url instead of req.http.url and It's working perfectly in the lab. 
Thanks again Alex sub vcl_recv { if (req.http.host ~ "^(www\.)?deansleisurecentre\.com$") { set req.http.host = "deansleisurecentre.com"; set req.url = regsub(req.url, "^/$", "/dlc"); } } -- Alex Harrington - Network Manager, Longhill High School t: 01273 304086 | e: alex at longhill.org.uk From gsm_lock at viaduk.net Mon Mar 10 10:57:55 2008 From: gsm_lock at viaduk.net (Gsm Lock) Date: Mon, 10 Mar 2008 12:57:55 +0200 Subject: URL rewrite Message-ID: <000e01c8829d$985ff140$acbe7a4d@SUNDUK> Hi! I have a few backend servers. Static documents on the servers have ugly addresses such as http://my-next-back.end/111../785643../blabla/.../myXXXX.doc (mostly unstructured). Some of them do not have unique names. I need them to be accessible from the frontend as http://myfront.end/something/my-new-named.doc There are a few thousand documents... How can I (when I do) configure Varnish for this? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at dynamine.net Mon Mar 10 14:41:32 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Mon, 10 Mar 2008 07:41:32 -0700 Subject: URL rewrite In-Reply-To: <000e01c8829d$985ff140$acbe7a4d@SUNDUK> References: <000e01c8829d$985ff140$acbe7a4d@SUNDUK> Message-ID: <86db848d0803100741j4556331ar3f1b3178a770c3a5@mail.gmail.com> On Mon, Mar 10, 2008 at 3:57 AM, Gsm Lock wrote: > I have a few backend servers. Static documents on the servers have ugly > addresses such as http://my-next-back.end/111../785643../blabla/.../myXXXX.doc > (mostly unstructured). > Some of them do not have unique names. > I need them to be accessible from the frontend as > http://myfront.end/something/my-new-named.doc > There are a few thousand documents... How can I (when I do) configure > Varnish for this? This is easily done with Varnish. See the gsub() method in vcl(7). You'll need a working knowledge of regular expressions.
Best regards, --Michael From michael at dynamine.net Mon Mar 10 14:45:15 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Mon, 10 Mar 2008 07:45:15 -0700 Subject: URL rewrite In-Reply-To: <86db848d0803100741j4556331ar3f1b3178a770c3a5@mail.gmail.com> References: <000e01c8829d$985ff140$acbe7a4d@SUNDUK> <86db848d0803100741j4556331ar3f1b3178a770c3a5@mail.gmail.com> Message-ID: <86db848d0803100745g3abe6e35rbbe2f48795f7978e@mail.gmail.com> On Mon, Mar 10, 2008 at 7:41 AM, Michael S. Fischer wrote: > > On Mon, Mar 10, 2008 at 3:57 AM, Gsm Lock wrote: > > I have a few backend servers . Static documents on servers has ugly > > addresses as http://my-next-back.end/111../785643../blabla/.../myXXXX.doc > > (mostly unstructured). > > Some of them has not unique names. > > I need them to be accessible from frontend as > > http://myfront.end/something/my-new-named.doc > > There are a few thousands of documents... How can I (when I do) configure > > Varnish for this ? > > This is easily done with Varnish. See the gsub() method in vcl(7). > You'll need a working knowledge of regular expressions. I should add: For what I gather are performance reasons, Varnish does not have database lookup support or external map support. You will probably want to generate your vcl file programmatically using a map that contains the mapped url as the key and the backend url as the value (or vice versa). Varnish can switch among VCL files at runtime using the admin console. 
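(A minimal sketch of that generate-the-VCL approach; the map format, file names, and map contents are invented for illustration, and a real setup might emit regsub() rules rather than exact matches:)

```shell
# Hypothetical generator: each "pretty-url ugly-url" pair in the map
# becomes an exact-match rewrite rule in generated VCL, which could
# then be loaded and activated from the admin console.
cat > urlmap.txt <<'EOF'
/something/my-new-named.doc /111/785643/blabla/myXXXX.doc
EOF

{
  echo 'sub vcl_recv {'
  while read -r pretty ugly; do
    printf '    if (req.url == "%s") { set req.url = "%s"; }\n' "$pretty" "$ugly"
  done < urlmap.txt
  echo '}'
} > rewrites.vcl

cat rewrites.vcl
```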
Best regards, --Michael From Kenneth.Rorvik at hio.no Tue Mar 11 11:10:14 2008 From: Kenneth.Rorvik at hio.no (=?ISO-8859-1?Q?Kenneth_R=F8rvik?=) Date: Tue, 11 Mar 2008 12:10:14 +0100 Subject: Blank pages with HTTP/1.0 In-Reply-To: <6C00C181-BCF6-4DE9-A525-658DD180CE39@gotamedia.se> References: <6C00C181-BCF6-4DE9-A525-658DD180CE39@gotamedia.se> Message-ID: <47D66896.7000103@hio.no> Fredrik Nygren wrote: > I have searched the mailinglist and found this thread which seems to > look like our problem but I'm not sure it's the same problem: > http://projects.linpro.no/pipermail/varnish-misc/2008-February/001349.html > > Is there a known problem with the HTTP/1.0 protocol and Varnish? This looks a lot like a problem I had recently with an eZ-publish backend that insists on always setting cookies. Note that your headers do NOT contain a Content-Length, and this may prove to be problematic with HTTP 1.0: http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.4 My solution was simply to strip cookies in vcl_fetch, which restored Content-Length:

sub vcl_fetch {
    remove obj.http.Set-Cookie;
    #
}

Of course, you may NEED these cookies.... YMMV. (I am using RH5.1, with 1.1.2-5 rpms) Kenneth. From fredrik.nygren at gotamedia.se Tue Mar 11 12:13:02 2008 From: fredrik.nygren at gotamedia.se (Fredrik Nygren) Date: Tue, 11 Mar 2008 13:13:02 +0100 Subject: Blank pages with HTTP/1.0 In-Reply-To: <47D66896.7000103@hio.no> References: <6C00C181-BCF6-4DE9-A525-658DD180CE39@gotamedia.se> <47D66896.7000103@hio.no> Message-ID: <1E7F16B5-3D5A-4E17-BB50-DAF1E7B302F9@gotamedia.se> Thank you for taking the time to help. Yes, I need my cookies. I can't remove them. I also did an experiment with checking for HTTP/1.0 and then tried to pass these requests to the backend, unfortunately without success. It looked something like this:

sub vcl_recv {
    if (req.proto == "1.0") {
        pass;
    }
}

By building my own RPMs from the latest trunk I solved the problem.
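One hedged guess about why the exact-match experiment above never fired: req.proto presumably carries the full protocol token (e.g. "HTTP/1.0") rather than the bare version number, so a comparison against "1.0" would never match. A sketch of the variant that assumption suggests:

```vcl
sub vcl_recv {
    # Assumption: req.proto holds "HTTP/1.0", not "1.0",
    # so this is the form an exact match would need.
    if (req.proto == "HTTP/1.0") {
        pass;
    }
}
```

This is untested against 1.1.2; the trunk build Fredrik switched to is the real fix either way.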
Regards

On 11 mar 2008, at 12.10, Kenneth Rørvik wrote: > Fredrik Nygren wrote: > >> I have searched the mailinglist and found this thread which seems >> to look like our problem but I'm not sure it's the same problem: >> http://projects.linpro.no/pipermail/varnish-misc/2008-February/001349.html >> Is there a known problem with the HTTP/1.0 protocol and Varnish? > > This looks a lot like a problem I had recently with an eZ-publish > backend that insists on always setting cookies. > > Note that your headers do NOT contain a Content-Length, and this may > prove to be problematic with HTTP 1.0: > > http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.4 > > My solution was simply to strip cookies in vcl_fetch, which restored > Content-Length: > > sub vcl_fetch { > remove obj.http.Set-Cookie; > # > } > > Of course, you may NEED these cookies.... YMMV. > > (I am using RH5.1, with 1.1.2-5 rpms) > > Kenneth. > From Kenneth.Rorvik at hio.no Tue Mar 11 14:59:09 2008 From: Kenneth.Rorvik at hio.no (=?ISO-8859-1?Q?Kenneth_R=F8rvik?=) Date: Tue, 11 Mar 2008 15:59:09 +0100 Subject: ACL handling and IPv6 Message-ID: <47D69E3D.60807@hio.no> Hi folks. I have a rather well-behaved varnish running. However, I need to do some matching against an ACL on the client IP address, including our local ipv6-space. It seems the acl does not actually MATCH the ip6 spec, given as:

acl hio {
    #snip
    "128.39.89.0"/24;
    "2001:700:700::/48";
}

The test in vcl_recv is:

if (req.http.host ~ "^(www.)?hio.no$" && req.url == "/" &&
    ! req.http.Referer ~ "^http://www\.hio\.no" && client.ip ~ hio) {

Partial example log response:

18 SessionOpen c 2001:700:700:5:21d:9ff:fe10:caac 48995
18 VCL_acl c NO_MATCH hio
18 VCL_acl c NO_MATCH hio
18 ReqStart c 2001:700:700:5:21d:9ff:fe10:caac 48995 2045281282
18 RxRequest c GET
18 RxURL c /

So it seems that either my ip6-spec is wrong, or varnish actually does not handle it correctly. This is Red Hat package varnish-1.1.2-5el5. Any pointers or ideas? Kenneth.
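Independent of the runtime-support question answered below, one small thing that may be worth noting: the IPv6 entry in the acl puts the mask inside the quotes, while the IPv4 entry keeps it outside. A sketch of the entry written in the same shape as the IPv4 one (whether the 1.1.2 parser accepts or requires this form is an assumption):

```vcl
acl hio {
    #snip
    "128.39.89.0"/24;
    # Mask outside the quotes, mirroring the IPv4 line (assumption);
    # per the follow-up, the IPv6 runtime match is not implemented anyway.
    "2001:700:700::"/48;
}
```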
From phk at phk.freebsd.dk Tue Mar 11 19:51:44 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 11 Mar 2008 19:51:44 +0000 Subject: ACL handling and IPv6 In-Reply-To: Your message of "Tue, 11 Mar 2008 15:59:09 +0100." <47D69E3D.60807@hio.no> Message-ID: <7616.1205265104@critter.freebsd.dk> In message <47D69E3D.60807 at hio.no>, =?ISO-8859-1?Q?Kenneth_R=F8rvik?= writes: >However, I need to do some matching against an ACL on client IP address, >including our local ipv6-space. The ipv6 handling runtime part of ACLs is not yet written, sorry. It's probably very trivial, look in VRT_acl_match() in cache_vrt_acl.c but I do not have any IPv6 here, so I cannot test it. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From Kenneth.Rorvik at hio.no Wed Mar 12 09:13:16 2008 From: Kenneth.Rorvik at hio.no (=?ISO-8859-1?Q?Kenneth_R=F8rvik?=) Date: Wed, 12 Mar 2008 10:13:16 +0100 Subject: ACL handling and IPv6 In-Reply-To: <7616.1205265104@critter.freebsd.dk> References: <7616.1205265104@critter.freebsd.dk> Message-ID: <47D79EAC.9040804@hio.no> Poul-Henning Kamp wrote: >> However, I need to do some matching against an ACL on client IP address, >> including our local ipv6-space. > > The ipv6 handling runtime part of ACLs is not yet written, sorry. Thanks for the info :) I was afraid of that. Unfortunately I am no C developer, so I can't contribute there. It's no big deal for me anyway, so I'll leave it at that for now, but I can open a feature request for it if you think it's needed? K. From phk at phk.freebsd.dk Wed Mar 12 10:30:42 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 12 Mar 2008 10:30:42 +0000 Subject: ACL handling and IPv6 In-Reply-To: Your message of "Wed, 12 Mar 2008 10:13:16 +0100." 
<47D79EAC.9040804@hio.no> Message-ID: <66914.1205317842@critter.freebsd.dk> In message <47D79EAC.9040804 at hio.no>, =?ISO-8859-1?Q?Kenneth_R=F8rvik?= writes: >Poul-Henning Kamp wrote: > >>> However, I need to do some matching against an ACL on client IP address, >>> including our local ipv6-space. >> >> The ipv6 handling runtime part of ACLs is not yet written, sorry. > >Thanks for the info :) > >I was afraid of that. Unfortunately I am no C developer, so I can't >contribute there. > >It's no big deal for me anyway, so I'll leave it at that for now, but I >can open a feature request for it if you think it's needed? open a ticket so we don't forget. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From andre2178 at hotmail.com Wed Mar 12 08:50:57 2008 From: andre2178 at hotmail.com (zhongwei wang) Date: Wed, 12 Mar 2008 16:38:54 +0800 Subject: varnishd exit on signal 11 Message-ID: HI I have varnish-1.1.2 running on my FreeBSD7.0 amd64 box, but it's NOT stable. Lots of messages like these "pid 36910 (varnishd), uid 65534: exited on signal 11" in dmesg, then the varnishd-mgr process just disappears. Backtracing the core dump, I get:

[Thread 0xa84419650 (LWP 100874) exited]
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xa83404090 (LWP 100648)]
vbe_sock_conn (ai=0x0) at cache_backend.c:162
162 s = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
(gdb) bt
#0 vbe_sock_conn (ai=0x0) at cache_backend.c:162
#1 0x0000000000408c05 in VBE_GetFd (sp=0xa856bd008) at cache_backend.c:190
#2 0x000000000040b0b2 in Fetch (sp=0xa856bd008) at cache_fetch.c:278
#3 0x0000000000409851 in CNT_Session (sp=0xa856bd008) at cache_center.c:300
#4 0x0000000000410e81 in wrk_do_one (w=0x7fffc1208ad0) at cache_pool.c:194
#5 0x00000000004110ae in wrk_thread (priv=Variable "priv" is not available.) at cache_pool.c:248
#6 0x0000000800a83a88 in pthread_getprio () from /lib/libthr.so.3
#7 0x0000000000000000 in ?? ()
Error accessing memory address 0x7fffc120b000: Bad address.
(gdb)

The ai is NULL, then varnish-chld is killed. Any ideas? Thanks _________________________________________________________________ Express yourself instantly with MSN Messenger! Download today it's FREE! http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From wangmd0127 at gmail.com Wed Mar 12 09:04:37 2008 From: wangmd0127 at gmail.com (mingdawang) Date: Wed, 12 Mar 2008 17:04:37 +0800 Subject: Segmentation fault varnish 1.1.2 on FreeBSD7 Message-ID: HI I have varnish-1.1.2 running on my FreeBSD7.0 amd64 box, but it's NOT stable. Lots of messages like these "pid 36910 (varnishd), uid 65534: exited on signal 11" in dmesg, then the varnishd-mgr process just disappears. Backtracing the core dump, I get:

[Thread 0xa84419650 (LWP 100874) exited]
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xa83404090 (LWP 100648)]
vbe_sock_conn (ai=0x0) at cache_backend.c:162
162 s = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
(gdb) bt
#0 vbe_sock_conn (ai=0x0) at cache_backend.c:162
#1 0x0000000000408c05 in VBE_GetFd (sp=0xa856bd008) at cache_backend.c:190
#2 0x000000000040b0b2 in Fetch (sp=0xa856bd008) at cache_fetch.c:278
#3 0x0000000000409851 in CNT_Session (sp=0xa856bd008) at cache_center.c:300
#4 0x0000000000410e81 in wrk_do_one (w=0x7fffc1208ad0) at cache_pool.c:194
#5 0x00000000004110ae in wrk_thread (priv=Variable "priv" is not available.) at cache_pool.c:248
#6 0x0000000800a83a88 in pthread_getprio () from /lib/libthr.so.3
#7 0x0000000000000000 in ?? ()
Error accessing memory address 0x7fffc120b000: Bad address.
(gdb)

The ai is NULL, then varnish-chld is killed. Any ideas? Thanks -------------- next part -------------- An HTML attachment was scrubbed...
URL: From phk at phk.freebsd.dk Wed Mar 12 19:00:37 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 12 Mar 2008 19:00:37 +0000 Subject: Segmentation fault varnish 1.1.2 on FreeBSD7 In-Reply-To: Your message of "Wed, 12 Mar 2008 17:04:37 +0800." Message-ID: <88549.1205348437@critter.freebsd.dk> In message , mingda wang writes: >The ai is NULL, then varnish-chld is killed. This is an old known bug, it has been fixed long time ago. I suggest you update to 1.2 -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From anders at fupp.net Fri Mar 14 13:43:53 2008 From: anders at fupp.net (Anders Nordby) Date: Fri, 14 Mar 2008 14:43:53 +0100 Subject: FreeBSD 7 and SACK problems Message-ID: <20080314134353.GA81613@fupp.net> Hi, Just a heads-up to those of you thinking to use or upgrade to FreeBSD 7.0 for running Varnish.
On a fairly big site here in Norway, a number of users could not get a TCP connection to our cache servers at all. This was after upgrading all the cache servers to 7.0-RELEASE. By setting this sysctl we could avoid the problem altogether: net.inet.tcp.sack.enable=0 Hopefully this will (if not already) get fixed in RELENG_7, and RELENG_7_0. More info on the freebsd-net list. Bye, -- Anders. From ottolski at web.de Fri Mar 14 20:37:03 2008 From: ottolski at web.de (Sascha Ottolski) Date: Fri, 14 Mar 2008 21:37:03 +0100 Subject: how to...accelarate randon access to millions of images? Message-ID: <200803142137.03828.ottolski@web.de> Hi, I'm relatively new to varnish (I've had an eye on it since it appeared in public, but so far never really used it). Now the time may have come to give it a whirl, and I am wondering if someone could give me a little advice to get me going. The challenge is to serve 20+ million image files, I guess with up to 1500 req/sec at peak. The files tend to be small, most of them in a range of 5-50 k. Currently the image store is about 400 GB in size (and growing every day). The access pattern is very random, so it will be very unlikely that any size of RAM will be big enough... Now my question is: what kind of hardware would I need? Lots of RAM seems to be obvious, whatever "a lot" may be... What about the disk subsystem? Should I look into something like RAID-0 with many disks to push the IO performance? Thanks in advance, Sascha From des at linpro.no Sun Mar 16 14:54:42 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Sun, 16 Mar 2008 15:54:42 +0100 Subject: how to...accelarate randon access to millions of images? In-Reply-To: <200803142137.03828.ottolski@web.de> (Sascha Ottolski's message of "Fri\, 14 Mar 2008 21\:37\:03 +0100") References: <200803142137.03828.ottolski@web.de> Message-ID: <87lk4iiurx.fsf@des.linpro.no> Sascha Ottolski writes: > Now my question is: what kind of hardware would I need?
Lots of RAM > seems to be obvious, whatever "a lot" may be... What about the disk > subsystem? Should I look into something like RAID-0 with many disks to > push the IO performance? First things first: instead of a few large disks, you want lots of small fast ones - 36 GB or 72 GB 10,000 RPM disks - to maximize bandwidth. There are two ways you might configure your storage: one is to place all the disks in a RAID-0 array and use a single file system and storage file on top of that. The alternative is to have a separate file system and storage file on each disk. I honestly don't know which will be the fastest; if you have a chance to run some benchmarks, I'd love to see your numbers and your conclusion. Note that even if a single RAID-0 array turns out to be the fastest option, you may have to compromise and split your disks into two arrays, unless you find a RAID controller that can handle the number of disks you need. DES -- Dag-Erling Smørgrav Senior Software Developer Linpro AS - www.linpro.no From michael at dynamine.net Sun Mar 16 17:00:42 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Sun, 16 Mar 2008 10:00:42 -0700 Subject: how to...accelarate randon access to millions of images? In-Reply-To: <200803142137.03828.ottolski@web.de> References: <200803142137.03828.ottolski@web.de> Message-ID: <86db848d0803161000q3f70f24bo7fa79832f3eaa4c6@mail.gmail.com> On Fri, Mar 14, 2008 at 1:37 PM, Sascha Ottolski wrote: > The challenge is to serve 20+ million image files, I guess with up to > 1500 req/sec at peak. A modern disk drive can service 100 random IOPS (@ 10ms/seek, that's reasonable). Without any caching, you'd need 15 disks to service your peak load, with a bit over 10ms I/O latency (seek + read). > The files tend to be small, most of them in a > range of 5-50 k. Currently the image store is about 400 GB in size (and > growing every day). The access pattern is very random, so it will be > very unlikely that any size of RAM will be big enough...
Are you saying that the hit ratio is likely to be zero? If so, consider whether you want to have caching turned on in the first place. There's little sense buying extra RAM if it's useless to you. > Now my question is: what kind of hardware would I need? Lots of RAM > seems to be obvious, whatever "a lot" may be... What about the disk > subsystem? Should I look into something like RAID-0 with many disks to > push the IO performance? You didn't say what your failure tolerance requirements were. Do you care if you lose data? Do you care if you're unable to serve some requests while a machine is down? Consider dividing up your image store onto multiple machines. Not only would you get better performance, but you would be able to survive hardware failures with fewer catastrophic effects (i.e., you'd lose only 1/n of service). If I were designing such a service, my choices would be:

(1) 4 machines, each with 4-disk RAID 1 (fast, but dangerous)
(2) 4 machines, each with 5-disk RAID 5 (safe, fast reads, but slow writes for your file size - also, RAID 5 should be battery backed, which adds cost)
(3) 4 machines, each with 4-disk RAID 10 (will meet workload requirement, but won't handle peak load in degraded mode)
(4) 5 machines, each with 4-disk RAID 10
(5) 9 machines, each with 2-disk RAID 0

Multiply each of these machine counts by 2 if you want to be resilient to failures other than disk failures. You can then put a Varnish proxy layer in front of your image storage servers, and direct incoming requests to the appropriate backend server. --Michael From michael at dynamine.net Sun Mar 16 17:02:41 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Sun, 16 Mar 2008 10:02:41 -0700 Subject: how to...accelarate randon access to millions of images?
In-Reply-To: <86db848d0803161000q3f70f24bo7fa79832f3eaa4c6@mail.gmail.com> References: <200803142137.03828.ottolski@web.de> <86db848d0803161000q3f70f24bo7fa79832f3eaa4c6@mail.gmail.com> Message-ID: <86db848d0803161002w31a1e9d5m708834465ad35476@mail.gmail.com> On Sun, Mar 16, 2008 at 10:00 AM, Michael S. Fischer wrote: > If I were designing such a service, my choices would be: Corrections: > (1) 4 machines, each with 4-disk RAID 1 (fast, but dangerous) > (2) 4 machines, each with 5-disk RAID 5 (safe, fast reads, but slow > writes for your file size - also, RAID 5 should be battery backed, > which adds cost) > (3) 4 machines, each with 4-disk RAID 10 (will meet workload > requirement, but won't handle peak load in degraded mode) > (4) 5 machines, each with 4-disk RAID 10 > (5) 9 machines, each with 2-disk RAID 0 > > --Michael > From michael at dynamine.net Sun Mar 16 17:03:40 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Sun, 16 Mar 2008 10:03:40 -0700 Subject: how to...accelarate randon access to millions of images? In-Reply-To: <86db848d0803161002w31a1e9d5m708834465ad35476@mail.gmail.com> References: <200803142137.03828.ottolski@web.de> <86db848d0803161000q3f70f24bo7fa79832f3eaa4c6@mail.gmail.com> <86db848d0803161002w31a1e9d5m708834465ad35476@mail.gmail.com> Message-ID: <86db848d0803161003w2c23e4aewad7c6790d4fe624c@mail.gmail.com> On Sun, Mar 16, 2008 at 10:02 AM, Michael S. Fischer wrote: I don't know why I'm having such a problem with this. Sigh! I think I got it right this time. 
> > If I were designing such a service, my choices would be: > > Corrections: > > > > (1) 4 machines, each with 4-disk RAID 0 (fast, but dangerous) > > (2) 4 machines, each with 5-disk RAID 5 (safe, fast reads, but slow > > writes for your file size - also, RAID 5 should be battery backed, > > which adds cost) > > (3) 4 machines, each with 4-disk RAID 10 (will meet workload > > requirement, but won't handle peak load in degraded mode) > > (4) 5 machines, each with 4-disk RAID 10 > > (5) 9 machines, each with 2-disk RAID 1 --Michael From michael at dynamine.net Sun Mar 16 19:38:40 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Sun, 16 Mar 2008 12:38:40 -0700 Subject: Miscellaneous questions In-Reply-To: References: <1167.1202770705@critter.freebsd.dk> Message-ID: <86db848d0803161238l2aba2ba8jc00b3f1d29f1157e@mail.gmail.com> On Feb 13, 2008 7:41 AM, Dag-Erling Smørgrav wrote: > I believe varnishlog -w /var/log/varnish.log is enabled by default if > you install from packages on !FreeBSD. We may want to change this. This was true for my RHEL 4 installation. I was only able to achieve 16,000 connections/second after completely disabling logging to disk (and fine-tuning the thread pool size), which is why I asked my question about further turning down the verbosity of logging to memory. > I think the default timeout on backends connection may be a little > short, though. I assume this is the thread_pool_timeout parameter? > > > (3) Feature request: Request hashing. It would be really cool if > > > Varnish were able to select the origin server (in reality another > > > Varnish proxy) by hashing the Request URI. Having this ability would > > > improve the cache hit ratio overall where a pool of caching proxies is > > > used.
> > We have sort of given up on the peer-to-peer cache fetches using > > dedicated protocols, but if you are able to tell that another > > varnish is a better place to pick up something, nothing prevents > > you from making that a backend of this varnish and doing > > a pass on the request. > > No, I think what he means is selecting the backend based on client-ip > modulo number-of-backends so each client always gets the same backend > (which makes session tracking much easier) That's a good idea, too, and deserves implementation, but I was referring to something else. I think phk understood what I was getting at. I'm dealing with a situation where the working set of cacheable responses is larger than the RAM size of a particular Varnish instance. (I don't want to go to disk because it will incur at least a 10ms penalty.) I also want to maximize the hit ratio. One good way to do this is to put a pass-only Varnish instance (i.e., a content switch) in front of a set of intermediate backends (Varnish caching proxies), each of which is assigned to cache a subset of the possible URI namespace. However, in order to do this, the content switch must make consistent decisions about which cache to direct the incoming requests to. One good way of doing that is implementing a hash function H(U) -> V, where U is the request URI, and V is the intermediate-level proxy. I'd appreciate it if you'd consider adding this as a feature. Best regards, --Michael From des at linpro.no Mon Mar 17 07:42:48 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Mon, 17 Mar 2008 08:42:48 +0100 Subject: Miscellaneous questions In-Reply-To: <86db848d0803161238l2aba2ba8jc00b3f1d29f1157e@mail.gmail.com> (Michael S. Fischer's message of "Sun\, 16 Mar 2008 12\:38\:40 -0700") References: <1167.1202770705@critter.freebsd.dk> <86db848d0803161238l2aba2ba8jc00b3f1d29f1157e@mail.gmail.com> Message-ID: <87prttg5jb.fsf@des.linpro.no> "Michael S. 
Fischer" writes: > Dag-Erling Smørgrav writes: > > I think the default timeout on backends connection may be a little > > short, though. > I assume this is the thread_pool_timeout parameter? No, that's how long an idle worker thread is kept alive. I don't think the backend timeout is configurable, I think it's hardcoded to five seconds. > I'm dealing with a situation where the working set of cacheable > responses is larger than the RAM size of a particular Varnish > instance. (I don't want to go to disk because it will incur at least > a 10ms penalty.) I also want to maximize the hit ratio. My knee-jerk reaction would be "add more RAM, or add more servers" > One good way to do this is to put a pass-only Varnish instance (i.e., > a content switch) in front of a set of intermediate backends (Varnish > caching proxies), each of which is assigned to cache a subset of the > possible URI namespace. > > However, in order to do this, the content switch must make consistent > decisions about which cache to direct the incoming requests to. One > good way of doing that is implementing a hash function H(U) -> V, > where U is the request URI, and V is the intermediate-level proxy. That's actually a pretty good idea... Could you open a ticket for it? DES -- Dag-Erling Smørgrav Senior Software Developer Linpro AS - www.linpro.no From augustin at waw.com Mon Mar 17 09:04:14 2008 From: augustin at waw.com (Augustin Amann) Date: Mon, 17 Mar 2008 10:04:14 +0100 Subject: Miscellaneous questions In-Reply-To: <87prttg5jb.fsf@des.linpro.no> References: <1167.1202770705@critter.freebsd.dk> <86db848d0803161238l2aba2ba8jc00b3f1d29f1157e@mail.gmail.com> <87prttg5jb.fsf@des.linpro.no> Message-ID: <47DE340E.9000102@waw.com> Dag-Erling Smørgrav wrote: > "Michael S. Fischer" writes: > >> Dag-Erling Smørgrav writes: >> >>> I think the default timeout on backends connection may be a little >>> short, though. >>> >> I assume this is the thread_pool_timeout parameter?
>> > > No, that's how long an idle worker thread is kept alive. I don't think > the backend timeout is configurable, I think it's hardcoded to five > seconds. > > >> I'm dealing with a situation where the working set of cacheable >> responses is larger than the RAM size of a particular Varnish >> instance. (I don't want to go to disk because it will incur at least >> a 10ms penalty.) I also want to maximize the hit ratio. >> > > My knee-jerk reaction would be "add more RAM, or add more servers" > > >> One good way to do this is to put a pass-only Varnish instance (i.e., >> a content switch) in front of a set of intermediate backends (Varnish >> caching proxies), each of which is assigned to cache a subset of the >> possible URI namespace. >> >> However, in order to do this, the content switch must make consistent >> decisions about which cache to direct the incoming requests to. One >> good way of doing that is implementing a hash function H(U) -> V, >> where U is the request URI, and V is the intermediate-level proxy. >> > That's actually a pretty good idea... Could you open a ticket for it? > > DES > I'm thinking about the same idea of a reverse-proxy cache cluster for work. I think that one way of doing that is to use HaProxy (haproxy.1wt.eu), which implements such a hash function. You could use it in front of Varnish, with a URL-based balance algorithm ... Should work OK for this job, even though it would be great to do that in varnish directly. Augustin. From fragfutter at gmail.com Mon Mar 17 09:56:44 2008 From: fragfutter at gmail.com (C. Handel) Date: Mon, 17 Mar 2008 10:56:44 +0100 Subject: Miscellaneous questions In-Reply-To: <47DE340E.9000102@waw.com> References: <1167.1202770705@critter.freebsd.dk> <86db848d0803161238l2aba2ba8jc00b3f1d29f1157e@mail.gmail.com> <87prttg5jb.fsf@des.linpro.no> <47DE340E.9000102@waw.com> Message-ID: <3d62bd5f0803170256n6aa85259s42d43639be277a8b@mail.gmail.com> On Mon, Mar 17, 2008 at 10:04 AM, Augustin Amann wrote: > I'm thinking about the same idea of a reverse-proxy cache cluster for work. > I think that one way of doing that is to use HaProxy (haproxy.1wt.eu), > which implements such a hash function. You could use it in front of > Varnish, with a URL-based balance algorithm ... Should work OK for this > job, even though it would be great to do that in varnish directly. You might be interested in using http://www.linuxvirtualserver.org/ to build the load-balancer part. You could route the traffic using sticky connections to the right backend server. It is even possible that the LVS servers see only the incoming traffic, not the replies. This is an important feature if the limit of one gigabit interface is reached. Greetings Christoph From cherife at dotimes.com Mon Mar 17 11:15:55 2008 From: cherife at dotimes.com (Cherife Li) Date: Mon, 17 Mar 2008 19:15:55 +0800 Subject: Miscellaneous questions In-Reply-To: <3d62bd5f0803170256n6aa85259s42d43639be277a8b@mail.gmail.com> References: <1167.1202770705@critter.freebsd.dk> <86db848d0803161238l2aba2ba8jc00b3f1d29f1157e@mail.gmail.com> <87prttg5jb.fsf@des.linpro.no> <47DE340E.9000102@waw.com> <3d62bd5f0803170256n6aa85259s42d43639be277a8b@mail.gmail.com> Message-ID: <47DE52EB.3070600@dotimes.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 2008-3-17 17:56, C. Handel wrote: | On Mon, Mar 17, 2008 at 10:04 AM, Augustin Amann wrote: |> I'm thinking about the same idea of a reverse-proxy cache cluster for work. |> I think that one way of doing that is to use HaProxy (haproxy.1wt.eu), |> which implements such a hash function.
You could use it in front of |> Varnish, with a URL-based balance algorithm ... Should work OK for this |> job, even though it would be great to do that in varnish directly. | | You might be interested in using http://www.linuxvirtualserver.org/ | to build the load-balancer part. You could route the traffic using | sticky connections to the right backend server. It is even possible | that the LVS servers see only the incoming traffic, not the replies. | This is an important feature if the limit of one gigabit interface is | reached. I'm afraid that LVS cannot do URI-based load balancing. LVS is an IP-level rather than application-level LBer. If you have different domains, that's troublesome. | Greetings | Christoph | _______________________________________________ | varnish-misc mailing list | varnish-misc at projects.linpro.no | http://projects.linpro.no/mailman/listinfo/varnish-misc - -- Rgds, Cherife. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (MingW32) iD8DBQFH3lLrD+6zYVqA6cMRAgpkAJ9oJwanTXF/ghuwLQTmPUraMjx69ACfRmm9 giOaPzp095FpexsVGTq+Rco= =Vp1q -----END PGP SIGNATURE----- From michael at dynamine.net Mon Mar 17 15:30:26 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Mon, 17 Mar 2008 08:30:26 -0700 Subject: Miscellaneous questions In-Reply-To: <87prttg5jb.fsf@des.linpro.no> References: <1167.1202770705@critter.freebsd.dk> <86db848d0803161238l2aba2ba8jc00b3f1d29f1157e@mail.gmail.com> <87prttg5jb.fsf@des.linpro.no> Message-ID: <86db848d0803170830t30170616re9c49d681f13fdf3@mail.gmail.com> On Mon, Mar 17, 2008 at 12:42 AM, Dag-Erling Smørgrav wrote: > "Michael S. Fischer" writes: > > Dag-Erling Smørgrav writes: > > > I think the default timeout on backends connection may be a little > > > short, though. > > I assume this is the thread_pool_timeout parameter? > > No, that's how long an idle worker thread is kept alive. I don't think > the backend timeout is configurable, I think it's hardcoded to five > seconds.
What does the timeout pertain to? Connect time? Response time? --Michael From phk at phk.freebsd.dk Mon Mar 17 15:35:59 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 17 Mar 2008 15:35:59 +0000 Subject: Miscellaneous questions In-Reply-To: Your message of "Mon, 17 Mar 2008 08:30:26 MST." <86db848d0803170830t30170616re9c49d681f13fdf3@mail.gmail.com> Message-ID: <4093.1205768159@critter.freebsd.dk> In message <86db848d0803170830t30170616re9c49d681f13fdf3 at mail.gmail.com>, "Michael S. Fischer" writes: >On Mon, Mar 17, 2008 at 12:42 AM, Dag-Erling Smørgrav wrote: >> "Michael S. Fischer" writes: >> >> > Dag-Erling Smørgrav writes: >> > > I think the default timeout on backends connection may be a little >> > > short, though. >> > I assume this is the thread_pool_timeout parameter? >> >> No, that's how long an idle worker thread is kept alive. I don't think >> the backend timeout is configurable, I think it's hardcoded to five >> seconds. > >What does the timeout pertain to? Connect time? Response time? Actually, I don't think we have any non-default timeout on the backends now: connect timeout is whatever the kernel uses and we don't set any socketopts like SO_RCVTIMEO on the tcp connection. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From des at linpro.no Mon Mar 17 15:56:20 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Mon, 17 Mar 2008 16:56:20 +0100 Subject: Miscellaneous questions In-Reply-To: <4093.1205768159@critter.freebsd.dk> (Poul-Henning Kamp's message of "Mon\, 17 Mar 2008 15\:35\:59 +0000") References: <4093.1205768159@critter.freebsd.dk> Message-ID: <87abkxuyxn.fsf@des.linpro.no> "Poul-Henning Kamp" writes: > "Michael S. Fischer" writes: > > "Dag-Erling Smørgrav" writes: > > > "Michael S.
Fischer" writes: > > > > "Dag-Erling Sm?rgrav" writes: > > > > > I think the default timeout on backends connection may be a > > > > > little short, though. > > > > I assume this is the thread_pool_timeout parameter? > > > No, that's how long an idle worker thread is kept alive. I don't > > > think the backend timeout is configurable, I think it's hardocded > > > to five seconds. > > What does the timeout pertain to? Connect time? Response time? > Actually, I don't think we have any non-default timeout on the > backends now: connect timeout is whatever the kernel uses and we don't > set any socketopts like SO_RCVTIMEO on the tcp connection. No, we were talking about how long an idle backend connection is kept open (or at least I was). Here's some additional context: "Dag-Erling Sm?rgrav" writes: > "Poul-Henning Kamp" writes: > > "Michael S. Fischer" writes: > > > (2) HTTP/1.1 keep-alive connection reuse: Does Varnish have the > > > ability to reuse origin server connections (assuming they are HTTP/1.1 > > > Keep-Alive connections)? Or, is there a strict 1:1 mapping between > > > client-proxy connections and proxy-origin server connections? > > They should already be reused by default. > > Maybe something is preventing backend session reuse in his > installation; that can easily be determined from logs. > > I think the default timeout on backends connection may be a little > short, though. DES -- Dag-Erling Sm?rgrav Senior Software Developer Linpro AS - www.linpro.no From phk at phk.freebsd.dk Mon Mar 17 15:57:54 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 17 Mar 2008 15:57:54 +0000 Subject: Miscellaneous questions In-Reply-To: Your message of "Mon, 17 Mar 2008 16:56:20 +0100." <87abkxuyxn.fsf@des.linpro.no> Message-ID: <4252.1205769474@critter.freebsd.dk> In message <87abkxuyxn.fsf at des.linpro.no>, =?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?= writes: >> > What does the timeout pertain to? Connect time? Response time? 
>> Actually, I don't think we have any non-default timeout on the >> backends now: connect timeout is whatever the kernel uses and we don't >> set any socketopts like SO_RCVTIMEO on the tcp connection. > >No, we were talking about how long an idle backend connection is kept >open (or at least I was). Yes I know :-) And we don't do anything to close those before the backend closes on us, we have no reason to, the longer we keep it, the more connection setups we avoid. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From michael at dynamine.net Mon Mar 17 16:47:11 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Mon, 17 Mar 2008 09:47:11 -0700 Subject: Miscellaneous questions In-Reply-To: <4252.1205769474@critter.freebsd.dk> References: <87abkxuyxn.fsf@des.linpro.no> <4252.1205769474@critter.freebsd.dk> Message-ID: <86db848d0803170947q13b5aa40u1593d885aabf6553@mail.gmail.com> On Mon, Mar 17, 2008 at 8:57 AM, Poul-Henning Kamp wrote: > >No, we were talking about how long an idle backend connection is kept > >open (or at least I was). > > Yes I know :-) > > And we don't do anything to close those before the backend closes on > us, we have no reason to, the longer we keep it, the more connection > setups we avoid. So does Varnish close HTTP Keep-Alive backend connections after an idle period? or not? Best regards, --Michael From phk at phk.freebsd.dk Mon Mar 17 16:51:53 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 17 Mar 2008 16:51:53 +0000 Subject: Miscellaneous questions In-Reply-To: Your message of "Mon, 17 Mar 2008 09:47:11 MST." <86db848d0803170947q13b5aa40u1593d885aabf6553@mail.gmail.com> Message-ID: <4555.1205772713@critter.freebsd.dk> In message <86db848d0803170947q13b5aa40u1593d885aabf6553 at mail.gmail.com>, "Mich ael S. 
Fischer" writes: >On Mon, Mar 17, 2008 at 8:57 AM, Poul-Henning Kamp wrote: > >> >No, we were talking about how long an idle backend connection is kept >> >open (or at least I was). >> >> Yes I know :-) >> >> And we don't do anything to close those before the backend closes on >> us, we have no reason to, the longer we keep it, the more connection >> setups we avoid. > >So does Varnish close HTTP Keep-Alive backend connections after an >idle period? or not? We _never_ close a backend connection until the backend closed its end. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ottolski at web.de Mon Mar 17 18:00:09 2008 From: ottolski at web.de (Sascha Ottolski) Date: Mon, 17 Mar 2008 19:00:09 +0100 Subject: how to...accelarate randon access to millions of images? In-Reply-To: <87lk4iiurx.fsf@des.linpro.no> References: <200803142137.03828.ottolski@web.de> <87lk4iiurx.fsf@des.linpro.no> Message-ID: <200803171900.09780.ottolski@web.de> Am Sonntag 16 März 2008 15:54:42 schrieben Sie: > Sascha Ottolski writes: > > Now my question is: what kind of hardware would I need? Lots of RAM > > seems to be obvious, what ever "a lot" may be...What about the disk > > subsystem? Should I look into something like RAID-0 with many disk > > to push the IO-performance? > > First things first: instead of a few large disks, you want lots of > small fast ones - 36 GB or 72 GB 10,000 RPM disks - to maximize > bandwidth. > > There are two ways you might configure your storage: one is to place > all the disks in a RAID-0 array and use a single file system and > storage file on top of that. The alternative is to have a separate > file system and storage file on each disk. I honestly don't know > which will be the fastest; if you have a chance to run some > benchmarks, I'd love to see your numbers and your conclusion.
Dag, thanks for your hints. Just curious, how would i tell the varnish-process that I have several filesystem to put the cache on? I had the feeling that you give exactly one directory as option. Cheers, Sascha > > Note that even if a single RAID-0 array turns out to be the fastest > option, you may have to compromise and split your disks into two > arrays, unless you find a RAID controller that can handle the number > of disks you need. > > DES From ottolski at web.de Mon Mar 17 18:19:08 2008 From: ottolski at web.de (Sascha Ottolski) Date: Mon, 17 Mar 2008 19:19:08 +0100 Subject: how to...accelarate randon access to millions of images? In-Reply-To: <86db848d0803161000q3f70f24bo7fa79832f3eaa4c6@mail.gmail.com> References: <200803142137.03828.ottolski@web.de> <86db848d0803161000q3f70f24bo7fa79832f3eaa4c6@mail.gmail.com> Message-ID: <200803171919.08817.ottolski@web.de> Michael, thanks a lot for taking the time to give me such a detailed answer. please see my replies below. Am Sonntag 16 M?rz 2008 18:00:42 schrieb Michael S. Fischer: > On Fri, Mar 14, 2008 at 1:37 PM, Sascha Ottolski wrote: > > The challenge is to server 20+ million image files, I guess with > > up to 1500 req/sec at peak. > > A modern disk drive can service 100 random IOPS (@ 10ms/seek, that's > reasonable). Without any caching, you'd need 15 disks to service > your peak load, with a bit over 10ms I/O latency (seek + read). > > > The files tend to be small, most of them in a > > range of 5-50 k. Currently the image store is about 400 GB in size > > (and growing every day). The access pattern is very random, so it > > will be very unlikely that any size of RAM will be big enough... > > Are you saying that the hit ratio is likely to be zero? If so, > consider whether you want to have caching turned on the first place. > There's little sense buying extra RAM if it's useless to you. well, wo far I have analyzed the webserver logs of one week. 
this indicates that indeed there would be at least some cache hits. we have about 20 mio. images on our storage, and in one week about 3.5 mio images were repeadetly requested. to be more precise: 272,517,167 requests made to a total of 7,489,059 different URLs 3,226,150 URLs were requested at least 10 times, accounting for 257,306,351 "repeated" request so, if I made my analysis not to lousy, I guess there is quite a opportunity that a cache will help. roughly, the currently 20 mio images use 400 GB of storage; so 3.5 mio images may account for 17.5% of 400 GB ~70 GB. well, but 70 GB RAM is still a lot. but may be a mix of "enough" RAM and fast disks may be the way to go; may be in addition to a content based load-balancing to several caches (say, one for thumbnails, one for larger size images). currently, at peak times we only serve about 350 images/sec, due to the bottleneck of the storage backend. so the targed of 1500 req/sec may be bit of wishful thinking, as I don't know what the real peak would look like without the bottleneck; may very well be more like 500-1000 req/sec; but of course I'd like to leave room for growth :-) Thanks a lot, Sascha > > > Now my question is: what kind of hardware would I need? Lots of > > RAM seems to be obvious, what ever "a lot" may be...What about the > > disk subsystem? Should I look into something like RAID-0 with many > > disk to push the IO-performance? > > You didn't say what your failure tolerance requirements were. Do you > care if you lose data? Do you care if you're unable to serve some > requests while a machine is down? well, it's a cache, after all. the real image store is in place and high available and backed up and all the like. but, the webservers can't get the images fast enough of the storage. we just enabled apache's mod_cache, which seems to help a bit, but I suspect a dedicated tool like varnish could perform better (plus, you don't get any runtime information to learn how efficient the apache cache is). 
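[Editorial note: the cache-sizing arithmetic in Sascha's message above can be sanity-checked with a short script. This is only a back-of-the-envelope sketch using the figures quoted in the email; the 70 GB estimate assumes the hot images are of roughly average size, which the message itself treats as a rough guess.]

```python
# Sanity check of the cache-sizing arithmetic quoted above.
# All figures are taken from the log analysis described in the email;
# nothing here reads a real log file.

total_requests = 272_517_167   # requests in one week
repeated_reqs  = 257_306_351   # requests to URLs seen at least 10 times

store_size_gb       = 400.0    # total size of the image store
store_urls_millions = 20.0     # ~20 million images in the store
hot_millions        = 3.5      # rough count of "hot" images

# Fraction of traffic that could, in principle, be answered from cache.
potential_hit_ratio = repeated_reqs / total_requests

# Rough working-set size, assuming hot images are of average size.
working_set_gb = hot_millions / store_urls_millions * store_size_gb

print(f"potential hit ratio: {potential_hit_ratio:.1%}")  # ~94.4%
print(f"working set: ~{working_set_gb:.0f} GB")           # ~70 GB
```

A 94% potential hit ratio only pays off if the ~70 GB working set actually fits in the cache, which is why the thread keeps coming back to RAM size and disk seek time.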
> > Consider dividing up your image store onto multiple machines. Not > only would you get better performance, but you would be able to > survive hardware failures with fewer catastropic effects (i.e., you'd > lose only 1/n of service). > > If I were designing such a service, my choices would be: > > (1) 4 machines, each with 4-disk RAID 1 (fast, but dangerous) > (2) 4 machines, each with 5-disk RAID 5 (safe, fast reads, but slow > writes for your file size - also, RAID 5 should be battery backed, > which adds cost) > (3) 4 machines, each with 4-disk RAID 10 (will meet workload > requirement, but won't handle peak load in degraded mode) > (4) 5 machines, each with 4-disk RAID 10 > (5) 9 machines, each with 2-disk RAID 0 > > Multiply each of these machine counts by 2 if you want to be > resilient to failures other than disk failures. > > You can then put a Varnish proxy layer in front of your image storage > servers, and direct incoming requests to the appropriate backend > server. > > --Michael From fragfutter at gmail.com Mon Mar 17 19:01:54 2008 From: fragfutter at gmail.com (C. Handel) Date: Mon, 17 Mar 2008 20:01:54 +0100 Subject: how to...accelarate randon access to millions of images? In-Reply-To: <200803171919.08817.ottolski@web.de> References: <200803142137.03828.ottolski@web.de> <86db848d0803161000q3f70f24bo7fa79832f3eaa4c6@mail.gmail.com> <200803171919.08817.ottolski@web.de> Message-ID: <3d62bd5f0803171201l1fcadfean7744f62dd9641b7@mail.gmail.com> > > On Fri, Mar 14, 2008 at 1:37 PM, Sascha Ottolski > wrote: > > > The challenge is to server 20+ million image files, I guess with > > > up to 1500 req/sec at peak. > well, wo far I have analyzed the webserver logs of one week. this > indicates that indeed there would be at least some cache hits. we have > about 20 mio. images on our storage, and in one week about 3.5 mio > images were repeadetly requested. to be more precise: I just wonder why would you use varnish? 
Is there any extensive database or other processing power you need to complete before serving the image? Wouldn't you be better off using some kind of Content Delivery Network, possibly using shards of your image store? Serving static files using a http-accelerator seems odd to me. Greetings Christoph From phk at phk.freebsd.dk Mon Mar 17 19:06:35 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 17 Mar 2008 19:06:35 +0000 Subject: how to...accelarate randon access to millions of images? In-Reply-To: Your message of "Mon, 17 Mar 2008 20:01:54 +0100." <3d62bd5f0803171201l1fcadfean7744f62dd9641b7@mail.gmail.com> Message-ID: <5347.1205780795@critter.freebsd.dk> In message <3d62bd5f0803171201l1fcadfean7744f62dd9641b7 at mail.gmail.com>, "C. Handel" writes: >Serving static files using a http-accelerator seems odd to me. Uhm, this is a pretty weird statement if you stop to think about it :-) There is no way you could serve a dynamic file from an accelerator without defining it as 'static' for a minimum short period. The advantage to running varnish in front of static content is that you can avoid loading of your disk system, and, as far as I can tell, concentrate your investment in the bits of hardware you get most benefit from: RAM instead of CPU. Finally, I would advise you guys to seriously look at flash-"disk" drives. The virtual elimination of seektime is just what you want from a web server or cache. I have had the Mtron PRO7000 series recommended, and will be getting one RSN to play with, I'll let you know what I find. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From fragfutter at gmail.com Mon Mar 17 19:34:09 2008 From: fragfutter at gmail.com (C. Handel) Date: Mon, 17 Mar 2008 20:34:09 +0100 Subject: how to...accelarate randon access to millions of images?
In-Reply-To: <5347.1205780795@critter.freebsd.dk> References: <3d62bd5f0803171201l1fcadfean7744f62dd9641b7@mail.gmail.com> <5347.1205780795@critter.freebsd.dk> Message-ID: <3d62bd5f0803171234q26fa6157q26ae1916e5a37e55@mail.gmail.com> On Mon, Mar 17, 2008 at 8:06 PM, Poul-Henning Kamp wrote: > >Serving static files using a http-accelerator seems odd to me. > > Uhm, this is a pretty weird statement if you stop to think about it :-) > > There is no way you could serve a dynamic file from an accelerator > without defining it as 'static' for a minimum short period. I didn't mean the caching-timeouts of a dynamically calculated content. We talked about images which sit in a file on the harddisk and I assumed that they don't change that often. Actually I have a system where content is never changing once written to the disk (saving images with names of their md5 hashes) > The advantage to running varnish in front of static content is that > you can avoid loading of your disk system, and, as far as I can > tell, concentrate your investment in the bits of hardware you get > most benefit from: RAM instead of CPU. another advantage would be, that you don't need to think how you would push the content into your CDN. > Finally, I would advise you guys to seriously look at flash-"disk" > drives. The virtual elimination of seektime is just what you want > from a web server or cache. Having Flash Drives for 400GB of content could kill some budgets ;) Greetings Christoph From phk at phk.freebsd.dk Mon Mar 17 19:40:09 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 17 Mar 2008 19:40:09 +0000 Subject: how to...accelarate randon access to millions of images? In-Reply-To: Your message of "Mon, 17 Mar 2008 20:34:09 +0100." <3d62bd5f0803171234q26fa6157q26ae1916e5a37e55@mail.gmail.com> Message-ID: <5543.1205782809@critter.freebsd.dk> In message <3d62bd5f0803171234q26fa6157q26ae1916e5a37e55 at mail.gmail.com>, "C.
Handel" writes: > Finally, I would advise you guys to seriously look at flash-"disk" >> drives. The virtual elimination of seektime is just what you want >> from a web server or cache. > >Having Flash Drives for 400GB of content could kill some budgets ;) It's a price performance issue: I think the 32GB 2.5" Mtron is in the $1K area, so you'd need about 15 of those + some carriers and controllers. Call it $20K in total. For that you get a compact disk system with virtually no seek-time, a power dissipation of around 40W and no disk-crashes. I know people who nearly cry when they realize that is possible :-) Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From varnish-list at itiva.com Mon Mar 17 22:32:03 2008 From: varnish-list at itiva.com (DHF) Date: Mon, 17 Mar 2008 15:32:03 -0700 Subject: Miscellaneous questions In-Reply-To: <87prttg5jb.fsf@des.linpro.no> References: <1167.1202770705@critter.freebsd.dk> <86db848d0803161238l2aba2ba8jc00b3f1d29f1157e@mail.gmail.com> <87prttg5jb.fsf@des.linpro.no> Message-ID: <47DEF163.9000604@itiva.com> Dag-Erling Smørgrav wrote: >> One good way to do this is to put a pass-only Varnish instance (i.e., >> a content switch) in front of a set of intermediate backends (Varnish >> caching proxies), each of which is assigned to cache a subset of the >> possible URI namespace. >> >> However, in order to do this, the content switch must make consistent >> decisions about which cache to direct the incoming requests to. One >> good way of doing that is implementing a hash function H(U) -> V, >> where U is the request URI, and V is the intermediate-level proxy. >> > > That's actually a pretty good idea... Could you open a ticket for it? > > DES > This is called CARP/"Cache Array Routing Protocol" in squid land.
Here's a link to some info on it: http://docs.huihoo.com/gnu_linux/squid/html/x2398.html It works quite well for reducing the number of globally duplicated objects in an multilayer accelerator setup, as you can add additional machines in the interstitial space between the frontline caches and the origin as a cheap and easy way to increase the overall ram available to hot objects without having to use some front end load balancer like perlbal, big ip or whatever to direct the individual clients to specific frontlines to accomplish the same thing ( though you usually still have a load balancer for fault tolerance ). Though in squid there are some bugs with their implementation ... --DHF From michael at dynamine.net Mon Mar 17 23:07:59 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Mon, 17 Mar 2008 16:07:59 -0700 Subject: Miscellaneous questions In-Reply-To: <47DEF163.9000604@itiva.com> References: <1167.1202770705@critter.freebsd.dk> <86db848d0803161238l2aba2ba8jc00b3f1d29f1157e@mail.gmail.com> <87prttg5jb.fsf@des.linpro.no> <47DEF163.9000604@itiva.com> Message-ID: <86db848d0803171607j56449124ubbe2ef8bd53896@mail.gmail.com> On Mon, Mar 17, 2008 at 3:32 PM, DHF wrote: > This is called CARP/"Cache Array Routing Protocol" in squid land. > Here's a link to some info on it: > > http://docs.huihoo.com/gnu_linux/squid/html/x2398.html > > It works quite well for reducing the number of globally duplicated > objects in an multilayer accelerator setup, as you can add additional > machines in the interstitial space between the frontline caches and the > origin as a cheap and easy way to increase the overall ram available to > hot objects without having to use some front end load balancer like > perlbal, big ip or whatever to direct the individual clients to specific > frontlines to accomplish the same thing ( though you usually still have > a load balancer for fault tolerance ). Though in squid there are some > bugs with their implementation ... 
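[Editorial note: the hash function H(U) -> V discussed in this thread can be sketched with rendezvous ("highest random weight") hashing, the scheme CARP is based on. This is an illustrative sketch only — not Squid's CARP implementation and not existing Varnish functionality — and the backend names are invented.]

```python
import hashlib

def carp_route(url, backends):
    """Pick the backend with the highest hash score for this URL.

    Rendezvous hashing: every (backend, url) pair gets a deterministic
    score and the highest score wins.  md5 is used only as a stable,
    well-mixed hash across runs, not for security.
    """
    def score(backend):
        digest = hashlib.md5((backend + "|" + url).encode()).digest()
        return int.from_bytes(digest, "big")
    return max(backends, key=score)

# Invented names, standing in for the intermediate Varnish caching layer.
backends = ["cache-a:6081", "cache-b:6081", "cache-c:6081"]
urls = [f"/images/{i}.jpg" for i in range(1000)]

# Routing is deterministic: the same URL always lands on the same cache.
assert all(carp_route(u, backends) == carp_route(u, backends) for u in urls)

# Remove one backend: only the URLs that were mapped to it get remapped,
# which is what keeps the surviving caches' contents valid.
survivors = backends[:-1]
moved = sum(1 for u in urls if carp_route(u, backends) != carp_route(u, survivors))
lost = sum(1 for u in urls if carp_route(u, backends) == backends[-1])
assert moved == lost
```

This stability property is what makes the intermediate layer cheap to grow: adding or removing one cache node only disturbs that node's share of the URI namespace instead of reshuffling every object.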
Thanks for the reminder. I'll file RFEs for both the static and CARP implementations. I presume the static configuration will be done first (if at all), as it's probably significantly easier to implement. --Michael From des at linpro.no Wed Mar 19 08:43:16 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Wed, 19 Mar 2008 09:43:16 +0100 Subject: Miscellaneous questions In-Reply-To: <4555.1205772713@critter.freebsd.dk> (Poul-Henning Kamp's message of "Mon\, 17 Mar 2008 16\:51\:53 +0000") References: <4555.1205772713@critter.freebsd.dk> Message-ID: <873aqnumsb.fsf@des.linpro.no> "Poul-Henning Kamp" writes: > "Michael S. Fischer" writes: > > So does Varnish close HTTP Keep-Alive backend connections after an > > idle period? or not? > We _never_ close a backend connection until the backend closed its > end. OK, my mistake. I thought we did, because I've rarely seen a backend connection that lasted longer than a request or two. This is probably a backend issue. BTW, this gives me the idea that VCL should support a "vcl_start" (or perhaps "vcl_use") function that is run only once, when you vcl.use this particular VCL script; and that it should support setting any run-time parameter (or at least those that don't require a restart). Thus, when you switch from your standard config to your emergency config, there is no need to manually change timeouts or other parameters. While we're all gathered around the wishing well, I wish "remove" was named "unset" (see attached patch) and "unset" on a run-time parameter should reset it to its default value. DES -- Dag-Erling Sm?rgrav Senior Software Developer Linpro AS - www.linpro.no -------------- next part -------------- A non-text attachment was scrubbed... 
Name: unset.diff Type: text/x-diff Size: 977 bytes Desc: not available URL: From phk at phk.freebsd.dk Wed Mar 19 12:09:47 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 19 Mar 2008 12:09:47 +0000 Subject: Miscellaneous questions In-Reply-To: Your message of "Wed, 19 Mar 2008 09:43:16 +0100." <873aqnumsb.fsf@des.linpro.no> Message-ID: <9567.1205928587@critter.freebsd.dk> In message <873aqnumsb.fsf at des.linpro.no>, =?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?= writes: >While we're all gathered around the wishing well, I wish "remove" was >named "unset" (see attached patch) and "unset" on a run-time parameter >should reset it to its default value. I decide that "remove" was a better word when it came to http headers, which would be the most common use of it. I'm fine with simply making "unset" an synonym for "remove". -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From des at linpro.no Wed Mar 19 12:28:57 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Wed, 19 Mar 2008 13:28:57 +0100 Subject: Miscellaneous questions In-Reply-To: <9567.1205928587@critter.freebsd.dk> (Poul-Henning Kamp's message of "Wed\, 19 Mar 2008 12\:09\:47 +0000") References: <9567.1205928587@critter.freebsd.dk> Message-ID: <87zlsusxrq.fsf@des.linpro.no> "Poul-Henning Kamp" writes: > "Dag-Erling Sm?rgrav" writes: > > While we're all gathered around the wishing well, I wish "remove" > > was named "unset" (see attached patch) and "unset" on a run-time > > parameter should reset it to its default value. > I decide that "remove" was a better word when it came to http headers, > which would be the most common use of it. It doesn't really matter, except for symmetry and POLA. I've seen you use "unset" instead of "remove" yourself in discussions on this list :) Anyway, patch committed. 
DES -- Dag-Erling Sm?rgrav Senior Software Developer Linpro AS - www.linpro.no From ottolski at web.de Wed Mar 19 12:50:42 2008 From: ottolski at web.de (Sascha Ottolski) Date: Wed, 19 Mar 2008 13:50:42 +0100 Subject: how to...accelarate randon access to millions of images? In-Reply-To: <87hcf3t0g7.fsf@des.linpro.no> References: <200803142137.03828.ottolski@web.de> <200803191119.42379.ottolski@web.de> <87hcf3t0g7.fsf@des.linpro.no> Message-ID: <200803191350.42274.ottolski@web.de> Am Mittwoch 19 M?rz 2008 12:31:04 schrieb Dag-Erling Sm?rgrav: > > and, sorry if this is FAQ, are the storage files persistent, that > > is, will they survive a restart of varnish or reboot of the > > machine, or do you always start with an empty cache? > > Varnish always startes with an empty cache. Cache persistence is on > the roadmap for 2.1, no release date at this point. uh, too sad, is it such a complicated task? sounds trivial, but I don't know anything about the architecture :-) Cheers, Sascha From ric at digitalmarbles.com Thu Mar 20 01:32:39 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Wed, 19 Mar 2008 18:32:39 -0700 Subject: Specification out of date? Message-ID: From previous discussions on this list, I've been operating on the understanding that Varnish ignores all Cache-Control tokens in the response except for max-age and s-maxage. But the following snippet from the varnish specification document seems to suggest otherwise. Does this document need to be updated or is the intent still to implement this in the future? Specifically, are there plans for Varnish to support 'public', 'private', and 'no-cache' tokens? From varnish-doc/en/varnish-specification/article.xml ... __________________ Cacheability A request which includes authentication headers must not be served from cache. Varnish must interpret Cache-Control directives received from content servers as follows: public: the document will be cached even if authentication headers are present. 
private: the document will not be cached, since Varnish is a shared cache. no-cache: the document will not be cached. no-store: XXX s-maxage: overrides max-age, since Varnish is a shared cache. max-age: overrides the Expires header. min-fresh: ignored. max-stale: ignored. only-if-cached: ignored. must-revalidate: as specified in RFC2616 ?14.9.4. proxy-revalidate: as must-revalidate. no-transform: ignored. Varnish must ignore Cache-Control directives received from clients. ________________ Ric From ric at digitalmarbles.com Thu Mar 20 03:26:14 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Wed, 19 Mar 2008 20:26:14 -0700 Subject: !obj.cacheable passes? Message-ID: I'm looking at the default vcl and I see the following stanza: sub vcl_hit { if (!obj.cacheable) { pass; } deliver; According to the vcl man page: obj.cacheable True if the request resulted in a cacheable response. A response is considered cacheable if it is valid (see above), the HTTP status code is 200, 203, 300, 301, 302, 404 or 410 and it has a non-zero time-to-live when Expires and Cache-Control headers are taken into account. Something about this seems odd. Perhaps someone can clear it up for me. We drop into "vcl_hit" if the object is found in the cache -- before we attempt to fetch from the backend. And a "pass" of course doesn't cache the response. Why do we not attempt to cache the response if the copy in our cache is not "cacheable"? Couldn't a subsequent response otherwise be cacheable? In other words, shouldn't this stanza instead be something like this: sub vcl_hit { if (!obj.cacheable) { fetch; } deliver; } And let "vcl_fetch" determine whether the new copy should be inserted into the cache? If I understand correctly, 'vcl_hit' cannot currently be terminated with 'fetch'. Why is that? Ric From des at linpro.no Thu Mar 20 09:15:58 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Thu, 20 Mar 2008 10:15:58 +0100 Subject: Specification out of date? 
In-Reply-To: (Ricardo Newbery's message of "Wed\, 19 Mar 2008 18\:32\:39 -0700") References: Message-ID: <878x0dsqlt.fsf@des.linpro.no> Ricardo Newbery writes: > [...] Yes, the spec is two years out of date. If you want Varnish to obey Cache-Control, it is trivial to implement in VCL. DES -- Dag-Erling Sm?rgrav Senior Software Developer Linpro AS - www.linpro.no From ric at digitalmarbles.com Thu Mar 20 11:26:21 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Thu, 20 Mar 2008 04:26:21 -0700 Subject: Specification out of date? In-Reply-To: <878x0dsqlt.fsf@des.linpro.no> References: <878x0dsqlt.fsf@des.linpro.no> Message-ID: On Mar 20, 2008, at 2:15 AM, Dag-Erling Sm?rgrav wrote: > Ricardo Newbery writes: >> [...] > > Yes, the spec is two years out of date. Right. That much was apparent. My question again is shouldn't this document be updated? And is there still an intent to implement any of this? > If you want Varnish to obey Cache-Control, it is trivial to > implement in > VCL. Well... perhaps. I think I can implement 'no-cache' and 'private' with the following stanza in vcl_fetch: if (obj.http.Cache-Control ~ "(no-cache|private)") { pass; } But this behavior is trivial to duplicate in Varnish with just s- maxage=0 so there is probably no advantage to this unless my backend can't set s-maxage for some reason. I'm actually more interested in trying to reproduce the semantics of the 'public' token. But I'm having trouble figuring out how to implement this one in vcl. In the default vcl, authenticated requests are passed through before any cache check or backend fetch is attempted. If I rearrange this a bit so that the authenticate test comes later, I think I run into a vcl limitation. For example, the following seems like it should work: 1) Remove from vcl_recv the following... if (req.http.Authenticate || req.http.Cookie) { pass; } 2) Add to vcl_hit the following (after the !obj.cacheable test)... 
if (obj.http.Cache-Control ~ "public" ) { deliver; } if (req.http.Authenticate) { fetch; } 3) Add to vcl_fetch the following (after the other tests)... if (obj.http.Cache-Control ~ "public" ) { insert; } if (req.http.Authenticate) { pass; } But the vcl man page appears to tell me that 'fetch' is not a valid keyword in vcl_hit so, if I believe the docs, then this is not going to work. Do you have any suggestions on how to implement the 'public' token in vcl? Ric From des at linpro.no Thu Mar 20 11:34:59 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Thu, 20 Mar 2008 12:34:59 +0100 Subject: Specification out of date? In-Reply-To: (Ricardo Newbery's message of "Thu\, 20 Mar 2008 04\:26\:21 -0700") References: <878x0dsqlt.fsf@des.linpro.no> Message-ID: <87tzj1r5lo.fsf@des.linpro.no> Ricardo Newbery writes: > I'm actually more interested in trying to reproduce the semantics of > the 'public' token. But I'm having trouble figuring out how to > implement this one in vcl. In the default vcl, authenticated requests > are passed through before any cache check or backend fetch is > attempted. If I rearrange this a bit so that the authenticate test > comes later, I think I run into a vcl limitation. > > For example, the following seems like it should work: > > 1) Remove from vcl_recv the following... > > if (req.http.Authenticate || req.http.Cookie) { > pass; > } > > 2) Add to vcl_hit the following (after the !obj.cacheable test)... > > if (obj.http.Cache-Control ~ "public" ) { > deliver; > } > if (req.http.Authenticate) { > fetch; > } Uh, no. Why do you want to "fetch" here? Why do you even want to do anything in vcl_hit? The correct place to check Cache-Control is in vcl_fetch, *before* the object enters the cache. 
DES -- Dag-Erling Sm?rgrav Senior Software Developer Linpro AS - www.linpro.no From ric at digitalmarbles.com Thu Mar 20 11:58:20 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Thu, 20 Mar 2008 04:58:20 -0700 Subject: Specification out of date? In-Reply-To: <87tzj1r5lo.fsf@des.linpro.no> References: <878x0dsqlt.fsf@des.linpro.no> <87tzj1r5lo.fsf@des.linpro.no> Message-ID: <52C33AB4-A441-43DC-86E4-BC192FECD201@digitalmarbles.com> On Mar 20, 2008, at 4:34 AM, Dag-Erling Sm?rgrav wrote: > Ricardo Newbery writes: >> I'm actually more interested in trying to reproduce the semantics of >> the 'public' token. But I'm having trouble figuring out how to >> implement this one in vcl. In the default vcl, authenticated >> requests >> are passed through before any cache check or backend fetch is >> attempted. If I rearrange this a bit so that the authenticate test >> comes later, I think I run into a vcl limitation. >> >> For example, the following seems like it should work: >> >> 1) Remove from vcl_recv the following... >> >> if (req.http.Authenticate || req.http.Cookie) { >> pass; >> } >> >> 2) Add to vcl_hit the following (after the !obj.cacheable test)... >> >> if (obj.http.Cache-Control ~ "public" ) { >> deliver; >> } >> if (req.http.Authenticate) { >> fetch; >> } > > Uh, no. Why do you want to "fetch" here? Why do you even want to do > anything in vcl_hit? The correct place to check Cache-Control is in > vcl_fetch, *before* the object enters the cache. Of course #3 in the list does indeed check Cache-Control in vcl_fetch. But I don't believe this is enough. If an authenticated request comes in and I have a valid cached copy, Varnish should not return the cached copy *unless* the copy contains a 'public' token. It's not enough that Varnish previously tested for the public token before insertion as the previous request may have been a regular non-authenticated request which should be cached regardless. 
So I need to test for the public token before both insertion and delivery from cache. Ric From des at linpro.no Thu Mar 20 13:07:52 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Thu, 20 Mar 2008 14:07:52 +0100 Subject: Specification out of date? In-Reply-To: <52C33AB4-A441-43DC-86E4-BC192FECD201@digitalmarbles.com> (Ricardo Newbery's message of "Thu\, 20 Mar 2008 04\:58\:20 -0700") References: <878x0dsqlt.fsf@des.linpro.no> <87tzj1r5lo.fsf@des.linpro.no> <52C33AB4-A441-43DC-86E4-BC192FECD201@digitalmarbles.com> Message-ID: <87prtpr1av.fsf@des.linpro.no> Ricardo Newbery writes: > If an authenticated request comes in and I have a valid cached copy, > Varnish should not return the cached copy *unless* the copy contains a > public' token. It's not enough that Varnish previously tested for > the public token before insertion as the previous request may have > been a regular non-authenticated request which should be cached > regardless. So I need to test for the public token before both > insertion and delivery from cache. I still don't understand why you want to go from hit to fetch. Just pass it. DES -- Dag-Erling Sm?rgrav Senior Software Developer Linpro AS - www.linpro.no From ric at digitalmarbles.com Thu Mar 20 14:01:26 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Thu, 20 Mar 2008 07:01:26 -0700 Subject: Specification out of date? In-Reply-To: <87prtpr1av.fsf@des.linpro.no> References: <878x0dsqlt.fsf@des.linpro.no> <87tzj1r5lo.fsf@des.linpro.no> <52C33AB4-A441-43DC-86E4-BC192FECD201@digitalmarbles.com> <87prtpr1av.fsf@des.linpro.no> Message-ID: <966A64CC-7375-431D-AE3E-14EF45DB8D1D@digitalmarbles.com> On Mar 20, 2008, at 6:07 AM, Dag-Erling Sm?rgrav wrote: > Ricardo Newbery writes: >> If an authenticated request comes in and I have a valid cached copy, >> Varnish should not return the cached copy *unless* the copy >> contains a >> public' token. 
It's not enough that Varnish previously tested for >> the public token before insertion as the previous request may have >> been a regular non-authenticated request which should be cached >> regardless. So I need to test for the public token before both >> insertion and delivery from cache. > > I still don't understand why you want to go from hit to fetch. Just > pass it. Because a pass will not store the response in cache when it otherwise should if it contains a public token. Hmm, perhaps I'm making some error in logic. If an item is in the cache and it doesn't have a 'public' token, then can I safely assume that authenticated version of the same item will also not contain a 'public' token? My first thought was that I can't make this assumption. But it's late now and my thinking is getting fuzzy. I'll have to pick this up again later. But if I tentatively accept this assumption for now, then do you see any problem with the same solution but with a 'pass' instead of 'fetch'? Like so... 1) Remove from vcl_recv the following... if (req.http.Authenticate || req.http.Cookie) { pass; } 2) Add to vcl_hit the following (after the !obj.cacheable test)... if (obj.http.Cache-Control ~ "public" ) { deliver; } if (req.http.Authenticate) { pass; } 3) Add to vcl_fetch the following (after the other tests)... if (obj.http.Cache-Control ~ "public" ) { insert; } if (req.http.Authenticate) { pass; } From ric at digitalmarbles.com Fri Mar 21 10:36:34 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Fri, 21 Mar 2008 03:36:34 -0700 Subject: what if a header I'm testing is missing? Message-ID: <802A8790-28D3-4D78-A6D4-11D93C8E7149@digitalmarbles.com> This is a minor thing but I'm wondering if I'm making an incorrect assumption. In my vcl file, I have lines similar to the following... if (req.http.Cookie && req.http.Cookie ~ "(__ac=|_ZopeId=)") { pass; } and I'm wondering if the first part of this is unnecessary. For example, what happens if I have this... 
if (req.http.Cookie ~ "(__ac=|_ZopeId=)") { pass; } but no Cookie header is present in the request. Is Varnish flexible enough to realize that the test fails without throwing an error? Ric From des at linpro.no Fri Mar 21 12:08:44 2008 From: des at linpro.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Fri, 21 Mar 2008 13:08:44 +0100 Subject: Specification out of date? In-Reply-To: <966A64CC-7375-431D-AE3E-14EF45DB8D1D@digitalmarbles.com> (Ricardo Newbery's message of "Thu\, 20 Mar 2008 07\:01\:26 -0700") References: <878x0dsqlt.fsf@des.linpro.no> <87tzj1r5lo.fsf@des.linpro.no> <52C33AB4-A441-43DC-86E4-BC192FECD201@digitalmarbles.com> <87prtpr1av.fsf@des.linpro.no> <966A64CC-7375-431D-AE3E-14EF45DB8D1D@digitalmarbles.com> Message-ID: <87lk4cqnxv.fsf@des.linpro.no> Ricardo Newbery writes: > Dag-Erling Sm?rgrav writes: > > I still don't understand why you want to go from hit to fetch. Just > > pass it. > Because a pass will not store the response in cache when it otherwise > should if it contains a public token. Dude, it's already in the cache. That's how you ended up in vcl_hit in the first place. DES -- Dag-Erling Sm?rgrav Senior Software Developer Linpro AS - www.linpro.no From michael at dynamine.net Fri Mar 21 15:05:27 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Fri, 21 Mar 2008 08:05:27 -0700 Subject: what if a header I'm testing is missing? In-Reply-To: <802A8790-28D3-4D78-A6D4-11D93C8E7149@digitalmarbles.com> References: <802A8790-28D3-4D78-A6D4-11D93C8E7149@digitalmarbles.com> Message-ID: <86db848d0803210805o466c975ay560f40dec854831b@mail.gmail.com> On Fri, Mar 21, 2008 at 3:36 AM, Ricardo Newbery wrote: > and I'm wondering if the first part of this is unnecessary. For > example, what happens if I have this... > > > if (req.http.Cookie ~ "(__ac=|_ZopeId=)") { > pass; > } > > but no Cookie header is present in the request. Is Varnish flexible > enough to realize that the test fails without throwing an error? 
Why don't you try it and report your findings back to us? --Michael From ric at digitalmarbles.com Fri Mar 21 18:45:58 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Fri, 21 Mar 2008 11:45:58 -0700 Subject: Specification out of date? In-Reply-To: <87lk4cqnxv.fsf@des.linpro.no> References: <878x0dsqlt.fsf@des.linpro.no> <87tzj1r5lo.fsf@des.linpro.no> <52C33AB4-A441-43DC-86E4-BC192FECD201@digitalmarbles.com> <87prtpr1av.fsf@des.linpro.no> <966A64CC-7375-431D-AE3E-14EF45DB8D1D@digitalmarbles.com> <87lk4cqnxv.fsf@des.linpro.no> Message-ID: On Mar 21, 2008, at 5:08 AM, Dag-Erling Sm?rgrav wrote: > Ricardo Newbery writes: >> Dag-Erling Sm?rgrav writes: >>> I still don't understand why you want to go from hit to fetch. Just >>> pass it. >> Because a pass will not store the response in cache when it otherwise >> should if it contains a public token. > > Dude, it's already in the cache. That's how you ended up in vcl_hit > in > the first place. Doesn't matter. An authenticated request should not pull from cache *unless* the public token is present. Ric From ric at digitalmarbles.com Fri Mar 21 19:41:39 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Fri, 21 Mar 2008 12:41:39 -0700 Subject: Specification out of date? In-Reply-To: References: <878x0dsqlt.fsf@des.linpro.no> <87tzj1r5lo.fsf@des.linpro.no> <52C33AB4-A441-43DC-86E4-BC192FECD201@digitalmarbles.com> <87prtpr1av.fsf@des.linpro.no> <966A64CC-7375-431D-AE3E-14EF45DB8D1D@digitalmarbles.com> <87lk4cqnxv.fsf@des.linpro.no> Message-ID: On Mar 21, 2008, at 11:45 AM, Ricardo Newbery wrote: > > On Mar 21, 2008, at 5:08 AM, Dag-Erling Sm?rgrav wrote: > >> Ricardo Newbery writes: >>> Dag-Erling Sm?rgrav writes: >>>> I still don't understand why you want to go from hit to fetch. >>>> Just >>>> pass it. >>> Because a pass will not store the response in cache when it >>> otherwise >>> should if it contains a public token. >> >> Dude, it's already in the cache. 
That's how you ended up in vcl_hit >> in >> the first place. > > > Doesn't matter. An authenticated request should not pull from cache > *unless* the public token is present. Also, if the authenticated response *does* contain a public token, then it should replace the version in cache. Note, I've already acknowledged that this second part of the logic might be flawed. If *any* response contains a 'public' token, then theoretically *all* responses to the same request should contain the token (ignoring transient differences caused by backend changes). So conversely, if we don't find the public token in the cache version, then it may be okay to assume that any subsequent request will continue to be non public. Ric From phk at phk.freebsd.dk Fri Mar 21 19:53:19 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 21 Mar 2008 19:53:19 +0000 Subject: what if a header I'm testing is missing? In-Reply-To: Your message of "Fri, 21 Mar 2008 08:05:27 MST." <86db848d0803210805o466c975ay560f40dec854831b@mail.gmail.com> Message-ID: <7151.1206129199@critter.freebsd.dk> In message <86db848d0803210805o466c975ay560f40dec854831b at mail.gmail.com>, "Mich ael S. Fischer" writes: >> and I'm wondering if the first part of this is unnecessary. For >> example, what happens if I have this... >> >> if (req.http.Cookie ~ "(__ac=|_ZopeId=)") { >> pass; >> } >> >> but no Cookie header is present in the request. Then the comparison always fails. If this is not the case, it's a bug. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Sat Mar 22 20:15:58 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Sat, 22 Mar 2008 20:15:58 +0000 Subject: !obj.cacheable passes? In-Reply-To: Your message of "Wed, 19 Mar 2008 20:26:14 MST." 
Message-ID: <2626.1206216958@critter.freebsd.dk> In message , Ricardo Newbery writes: >I'm looking at the default vcl and I see the following stanza: > > sub vcl_hit { > if (!obj.cacheable) { > pass; > } > deliver; An object can be cached as "not cacheable". When we have a cache-miss, the client goes to the backend. If another client asks for the same object before that first client has got a response from the backend, it is put on hold. If the response was not cacheable, we insert a "not cacheable" object, so that clients that ask for this object go to vcl_hit and hit the backend right away, even if other clients are already hitting the backend for that object. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From john at jensen.net Sat Mar 22 23:15:04 2008 From: john at jensen.net (John Jensen) Date: Sat, 22 Mar 2008 16:15:04 -0700 Subject: Problem with concatenated response headers with 1.1.2 Message-ID: Hello, I have a curious problem with Varnish 1.1.2 response headers on Ubuntu Gutsy. I think it is best summarized with some telnet excerpts: Telnet to backend server: john at rush:~> telnet localhost 8080 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. GET /static/images/7/6/9/4257486967_large_processed.jpg HTTP/1.1 HTTP/1.1 200 Content-type: image/jpeg Content-Length: 8453 Last-Modified: Sat, 23 Feb 2008 05:44:28 GMT Date: Sat, 22 Mar 2008 23:03:30 GMT Server: CherryPy/3.0.1 Now when I try it through varnish I get the following messed up status header: john at rush:~> telnet localhost 6081 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. 
GET /static/images/7/6/9/4257486967_large_processed.jpg HTTP/1.1 HTTP/1.1 200 Content-type: image/jpeg Last-Modified: Sat, 23 Feb 2008 05:44:28 GMT Server: CherryPy/3.0.1 Content-Length: 8453 Date: Sat, 22 Mar 2008 23:05:26 GMT X-Varnish: 768656770 Age: 0 Via: 1.1 varnish Connection: keep-alive You will see the issue in the HTTP/1.200 status line. I've played with the configuration of the backend server application, and no matter what header is placed first, it is concatenated to the status line in Varnish's response. The vcl.conf is very simple, pretty well just this: sub vcl_recv { if (req.request == "GET" && req.url ~ "\.(png|gif|jpg|swf|css|js)$") { lookup; } if (req.request == "GET" && req.http.cookie) { remove obj.http.Set-Cookie; lookup; } if (req.request == "POST") { pipe; } if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") { remove req.http.Accept-Encoding; } elsif (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } elsif (req.http.Accept-Encoding ~ "deflate") { set req.http.Accept-Encoding = "deflate"; } else { remove req.http.Accept-Encoding; } } pass } I've looked at this a half-dozen ways and can't see what the problem is. Any ideas? John -- John Jensen john at jensen.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Sun Mar 23 06:42:20 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Sun, 23 Mar 2008 06:42:20 +0000 Subject: Problem with concatenated response headers with 1.1.2 In-Reply-To: Your message of "Sat, 22 Mar 2008 16:15:04 MST." Message-ID: <5278.1206254540@critter.freebsd.dk> In message , "John Jensen" writes: >john at rush:~> telnet localhost 8080 >Trying 127.0.0.1... >Connected to localhost. >Escape character is '^]'. >GET /static/images/7/6/9/4257486967_large_processed.jpg HTTP/1.1 > >HTTP/1.1 200 Your backend is not up to spec, the third field is missing. 
>HTTP/1.1 200 Content-type: image/jpeg But I agree, varnish shouldn't do this. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From ay at vg.no Wed Mar 26 10:36:14 2008 From: ay at vg.no (Audun Ytterdal) Date: Wed, 26 Mar 2008 11:36:14 +0100 Subject: default.vcl Message-ID: <47EA271E.1070102@vg.no> According to trunk/varnish-cache/bin/varnishd/mgt_vcc.c (line 80 to 160) this is the default.vcl running if you have just defined a default backend and nothing else: Is the fix only to adjust the default.vcl and the man page to reflect this? (I could make a document-patch :-) ( #*# are my comments) sub vcl_recv { if (req.request != \"GET\" && req.request != \"HEAD\" && req.request != \"PUT\" && req.request != \"POST\" && req.request != \"TRACE\" && req.request != \"OPTIONS\" && req.request != \"DELETE\") { /* Non-RFC2616 or CONNECT which is weird. */ pipe; #*# This probably prevents squid like PURGE commands as mentioned in the #*# man page } if (req.http.Expect) { /* Expect is just too hard at present. */ pipe; } if (req.request != \"GET\" && req.request != \"HEAD\") { /* We only deal with GET and HEAD by default */ pass; #*# This one does pipe in default.vcl and pass here.... } if (req.http.Authenticate || req.http.Cookie) { /* Not cacheable by default */ pass; } lookup; } sub vcl_pipe { pipe; } sub vcl_pass { pass; } sub vcl_hash { set req.hash += req.url; if (req.http.host) { set req.hash += req.http.host; } else { set req.hash += server.ip; } hash; #*# Is set.req.hash += obj.http.Vary done internaly in varnish? } sub vcl_hit { if (!obj.cacheable) { pass; } deliver; } sub vcl_miss { fetch; } sub vcl_fetch { if (!obj.valid) { error obj.status; } if (!obj.cacheable) { pass; } if (obj.http.Set-Cookie) { pass; } set obj.prefetch = -30s;" #*# This is new. Will it do it for all objects. 
Even the rare just #*# requested once. They would be in the cache forever? insert; } sub vcl_deliver { deliver; } sub vcl_discard { discard; } sub vcl_prefetch { fetch; } sub vcl_timeout { discard; }; ***************************************************************** Denne fotnoten bekrefter at denne e-postmeldingen ble skannet av MailSweeper og funnet fri for virus. ***************************************************************** This footnote confirms that this email message has been swept by MailSweeper for the presence of computer viruses. ***************************************************************** From ay at vg.no Wed Mar 26 10:41:28 2008 From: ay at vg.no (Audun Ytterdal) Date: Wed, 26 Mar 2008 11:41:28 +0100 Subject: default.vcl In-Reply-To: <47EA271E.1070102@vg.no> References: <47EA271E.1070102@vg.no> Message-ID: <47EA2858.8000609@vg.no> Audun Ytterdal wrote: > According to trunk/varnish-cache/bin/varnishd/mgt_vcc.c (line 80 to 160) > this is the default.vcl running if you have just defined a default > backend and nothing else: > > Is the fix only to adjust the default.vcl and the man page to reflect > this? (I could make a document-patch :-) http://varnish.projects.linpro.no/ticket/135 -- Audun ***************************************************************** Denne fotnoten bekrefter at denne e-postmeldingen ble skannet av MailSweeper og funnet fri for virus. ***************************************************************** This footnote confirms that this email message has been swept by MailSweeper for the presence of computer viruses. 
***************************************************************** From phk at phk.freebsd.dk Wed Mar 26 11:05:41 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 26 Mar 2008 11:05:41 +0000 Subject: A new varnish slide-show Message-ID: <2126.1206529541@critter.freebsd.dk> I was in Aalborg (Denmark) yesterday to give a presentation about Varnish and in addition to the previous two Varnish presentations, I added a new one, with a focus on VCL: http://phk.freebsd.dk/pubs/varnish_vcl.pdf -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Wed Mar 26 22:15:54 2008 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 26 Mar 2008 22:15:54 +0000 Subject: A new varnish slide-show In-Reply-To: Your message of "Wed, 26 Mar 2008 14:36:44 MST." Message-ID: <5239.1206569754@critter.freebsd.dk> In message , Ricardo N ewbery writes: >> With respect to pass: >> >> if you choose pass in vcl_recv(), nothing is cached. >> >> if you choose pass in vcl_fetch(), a "cannot be cached" >> pseudo-object is cached. > > >Is this behavior controlled in vcl at all? In the default vcl, I also >see vcl_fetch directing to "pass" in a couple of places. This looks >like a loop if I believe the flowchart. No, pass from vcl_fetch() means "the object we just got cannot be cached", it does not go to the vcl_pass() because it already has picked up the object, we don't need to do so again. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From f.engelhardt at 21torr.com Thu Mar 27 14:55:09 2008 From: f.engelhardt at 21torr.com (Florian Engelhardt) Date: Thu, 27 Mar 2008 15:55:09 +0100 Subject: Varnish vs. 
X-JSON header Message-ID: <20080327155509.57d470c2@21torr.com> Hello, i've got a problem with the X-JSON HTTP-Header not being delivered by varnish in pipe and pass mode. My application runs on PHP with lighttpd; when querying the lighty directly (via port :81), the header is present in the response. PHP script is as follows: <?php header('X-JSON: foobar'); echo 'foobar'; ?> Requesting with curl shows the following: $ curl http://server.net/test.php -D - HTTP/1.1 200 OK Expires: Fri, 28 Mar 2008 14:49:29 GMT Cache-Control: max-age=86400 Content-type: text/html Server: lighttpd Content-Length: 6 Date: Thu, 27 Mar 2008 14:49:29 GMT Age: 0 Via: 1.1 varnish Connection: keep-alive foobar Requesting on port 81 (where lighty listens on): $ curl http://server.net:81/test.php -D - HTTP/1.1 200 OK Transfer-Encoding: chunked Expires: Fri, 28 Mar 2008 14:51:45 GMT Cache-Control: max-age=86400 X-JSON: foobar Content-type: text/html Date: Thu, 27 Mar 2008 14:51:45 GMT Server: lighttpd foobar Why is this X-JSON header missing when requested via varnish? Kind Regards Flo PS: my vcl file: backend default { .host = "127.0.0.1"; .port = "81"; } sub vcl_recv { if (req.url ~ "^/media\.php.*" || req.url == "/status/") { pass; } if (req.url ~ "^/ADMIN.*") { pipe; } if (req.url == "/test.php") { pass; } } # test.php entry just for testing purpose From michael at dynamine.net Thu Mar 27 15:39:00 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Thu, 27 Mar 2008 08:39:00 -0700 Subject: Varnish vs. X-JSON header In-Reply-To: <20080327155509.57d470c2@21torr.com> References: <20080327155509.57d470c2@21torr.com> Message-ID: <86db848d0803270839o49c0c532ya70a1e6ccf7fc394@mail.gmail.com> The Transfer-Encoding: header is missing from the Varnish response as well. --Michael On Thu, Mar 27, 2008 at 7:55 AM, Florian Engelhardt wrote: > Hello, > > i've got a problem with the X-JSON HTTP-Header not being delivered by > varnish in pipe and pass mode.
> My application runs on PHP with lighttpd, when querying the lighty > direct (via port :81), the header is present in the request. PHP Script > is as follows: > > header('X-JSON: foobar'); > echo 'foobar'; > ?> > > Requesting with curl shows the following: > > $ curl http://server.net/test.php -D - > HTTP/1.1 200 OK > Expires: Fri, 28 Mar 2008 14:49:29 GMT > Cache-Control: max-age=86400 > Content-type: text/html > Server: lighttpd > Content-Length: 6 > Date: Thu, 27 Mar 2008 14:49:29 GMT > Age: 0 > Via: 1.1 varnish > Connection: keep-alive > > foobar > > > Requesting on port 81 (where lighty listens on): > > $ curl http://server.net:81/test.php-D - > HTTP/1.1 200 OK > Transfer-Encoding: chunked > Expires: Fri, 28 Mar 2008 14:51:45 GMT > Cache-Control: max-age=86400 > X-JSON: foobar > Content-type: text/html > Date: Thu, 27 Mar 2008 14:51:45 GMT > Server: lighttpd > > foobar > > > Why is this X-JSON header missing when requested via varnish? > > Kind Regards > > Flo > > PS: my vcl file: > > backend default { > .host = "127.0.0.1"; > .port = "81"; > } > > sub vcl_recv { > if (req.url ~ "^/media\.php.*" || req.url == "/status/") { > pass; > } > if (req.url ~ "^/ADMIN.*") { > pipe; > } > if (req.url == "/test.php") { > pass; > } > } > > # test.php entry just for testing purpose > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ric at digitalmarbles.com Thu Mar 27 22:47:00 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Thu, 27 Mar 2008 15:47:00 -0700 Subject: Authenticate or Authorization? Message-ID: <3E075161-8AED-449C-B5DB-D4320EAAE987@digitalmarbles.com> In the default vcl, we have the following test... if (req.http.Authenticate || req.http.Cookie) { pass; } What issues an Authenticate header? Was this supposed to be Authorization? 
Ric From cherife at dotimes.com Fri Mar 28 00:50:41 2008 From: cherife at dotimes.com (Cherife Li) Date: Fri, 28 Mar 2008 08:50:41 +0800 Subject: Authenticate or Authorization? In-Reply-To: <3E075161-8AED-449C-B5DB-D4320EAAE987@digitalmarbles.com> References: <3E075161-8AED-449C-B5DB-D4320EAAE987@digitalmarbles.com> Message-ID: <47EC40E1.3000602@dotimes.com> On 03/28/08 06:47, Ricardo Newbery wrote: > > In the default vcl, we have the following test... > > if (req.http.Authenticate || req.http.Cookie) { > pass; > } > > > What issues an Authenticate header? Was this supposed to be > Authorization? > I'm also wondering that whether this http.Authenticate means Proxy-Authenticate , Proxy-Authorization, and WWW-Authenticate headers defined in RFC 2616. > Ric > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc -- Rgds, Cherife. From ric at digitalmarbles.com Fri Mar 28 01:20:20 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Thu, 27 Mar 2008 18:20:20 -0700 Subject: Authenticate or Authorization? In-Reply-To: <47EC40E1.3000602@dotimes.com> References: <3E075161-8AED-449C-B5DB-D4320EAAE987@digitalmarbles.com> <47EC40E1.3000602@dotimes.com> Message-ID: <902EE44E-C59E-457D-A8F6-4B25DD7023F1@digitalmarbles.com> On Mar 27, 2008, at 5:50 PM, Cherife Li wrote: > On 03/28/08 06:47, Ricardo Newbery wrote: >> In the default vcl, we have the following test... >> if (req.http.Authenticate || req.http.Cookie) { >> pass; >> } >> What issues an Authenticate header? Was this supposed to be >> Authorization? > I'm also wondering that whether this http.Authenticate means Proxy- > Authenticate > , Proxy-Authorization, and WWW-Authenticate headers defined in RFC > 2616. WWW-Authenticate and Proxy-Authenticate are response headers, not request headers. 
And they are supposed to accompany a 401 or 407 response, neither of which should be cacheable in any event. Proxy-Authorization is a request header but it would only be sent by a browser if Varnish first requested it with a 407 response, which I'm pretty sure Varnish does not do. Ric From ssm at linpro.no Fri Mar 28 05:27:22 2008 From: ssm at linpro.no (Stig Sandbeck Mathisen) Date: Fri, 28 Mar 2008 06:27:22 +0100 Subject: Varnish vs. X-JSON header In-Reply-To: <20080327155509.57d470c2@21torr.com> (Florian Engelhardt's message of "Thu, 27 Mar 2008 15:55:09 +0100") References: <20080327155509.57d470c2@21torr.com> Message-ID: <7xfxub8ll1.fsf@iostat.linpro.no> On Thu, 27 Mar 2008 15:55:09 +0100, Florian Engelhardt said: > Why is this X-JSON header missing when requested via varnish? It would help if you include output from varnishlog which shows both the client and the backend communication from one request. That'll provide sufficient detail of all request and response headers transferred between the backend, varnish, and the client during that transaction. -- Stig Sandbeck Mathisen, Linpro From ssm at linpro.no Fri Mar 28 05:35:56 2008 From: ssm at linpro.no (Stig Sandbeck Mathisen) Date: Fri, 28 Mar 2008 06:35:56 +0100 Subject: Authenticate or Authorization? In-Reply-To: <3E075161-8AED-449C-B5DB-D4320EAAE987@digitalmarbles.com> (Ricardo Newbery's message of "Thu, 27 Mar 2008 15:47:00 -0700") References: <3E075161-8AED-449C-B5DB-D4320EAAE987@digitalmarbles.com> Message-ID: <7xbq4z8l6r.fsf@iostat.linpro.no> On Thu, 27 Mar 2008 15:47:00 -0700, Ricardo Newbery said: > What issues an Authenticate header? Was this supposed to be > Authorization? Maybe, not sure. However, in order to check for HTTP authenticated connections, the headers look something like: GET / HTTP/1.1 Host: http://login.example.com Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ= ...so you'll probably need to change that to match for "Authorization" instead, to not cache these documents. 
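As a sketch, the corrected test in the default vcl_recv would then read (Varnish 1.x syntax; whether to keep the Cookie check alongside it is a site-specific choice):

```vcl
sub vcl_recv {
    /* "Authorization" is the header browsers actually send with
       HTTP basic/digest credentials; the "Authenticate" header in
       the shipped default never appears on requests. */
    if (req.http.Authorization || req.http.Cookie) {
        pass;
    }
    lookup;
}
```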
-- Stig Sandbeck Mathisen, Linpro From ssm at linpro.no Fri Mar 28 05:38:18 2008 From: ssm at linpro.no (Stig Sandbeck Mathisen) Date: Fri, 28 Mar 2008 06:38:18 +0100 Subject: Authenticate or Authorization? In-Reply-To: <47EC40E1.3000602@dotimes.com> (Cherife Li's message of "Fri, 28 Mar 2008 08:50:41 +0800") References: <3E075161-8AED-449C-B5DB-D4320EAAE987@digitalmarbles.com> <47EC40E1.3000602@dotimes.com> Message-ID: <7x7ifn8l2t.fsf@iostat.linpro.no> On Fri, 28 Mar 2008 08:50:41 +0800, Cherife Li said: > I'm also wondering that whether this http.Authenticate means > Proxy-Authenticate , Proxy-Authorization, and WWW-Authenticate > headers defined in RFC 2616. req.http.Authenticate would refer to a single request http header called "Authenticate:". It is not a substring match. -- Stig Sandbeck Mathisen, Linpro From f.engelhardt at 21torr.com Fri Mar 28 07:15:59 2008 From: f.engelhardt at 21torr.com (Florian Engelhardt) Date: Fri, 28 Mar 2008 08:15:59 +0100 Subject: Varnish vs. X-JSON header In-Reply-To: <7xfxub8ll1.fsf@iostat.linpro.no> References: <20080327155509.57d470c2@21torr.com> <7xfxub8ll1.fsf@iostat.linpro.no> Message-ID: <20080328081559.5fb93025@21torr.com> On Fri, 28 Mar 2008 06:27:22 +0100 Stig Sandbeck Mathisen wrote: > On Thu, 27 Mar 2008 15:55:09 +0100, Florian Engelhardt > said: > > > Why is this X-JSON header missing when requested via varnish? > > It would help if you include output from varnishlog which shows both > the client and the backend communication from one request. That'll > provide sufficient detail of all request and response headers > transferred between the backend, varnish, and the client during that > transaction. 
Here it is:

 0 CLI          - Rd ping
 0 CLI          - Wr 0 200 PONG 1206687625
 0 CLI          - Rd ping
 0 CLI          - Wr 0 200 PONG 1206687628
 0 CLI          - Rd ping
 0 CLI          - Wr 0 200 PONG 1206687631
 0 CLI          - Rd ping
 0 CLI          - Wr 0 200 PONG 1206687634
 0 CLI          - Rd ping
 0 CLI          - Wr 0 200 PONG 1206687637
 0 WorkThread   - 0x43203c20 start
14 SessionOpen  c xxx.xxx.xxx.xxx 11851
14 ReqStart     c xxx.xxx.xxx.xxx 11851 1310276097
14 RxRequest    c GET
14 RxURL        c /test.php
14 RxProtocol   c HTTP/1.1
14 RxHeader     c User-Agent: curl/7.18.0 (i686-pc-linux-gnu) libcurl/7.18.0 OpenSSL/0.9.8g zlib/1.2.3
14 RxHeader     c Host: server.net
14 RxHeader     c Accept: */*
14 VCL_call     c recv
14 VCL_return   c pass
14 VCL_call     c pass
14 VCL_return   c pass
15 BackendOpen  b default 127.0.0.1 38592 127.0.0.1 81
15 TxRequest    b GET
15 TxURL        b /test.php
15 TxProtocol   b HTTP/1.1
15 TxHeader     b User-Agent: curl/7.18.0 (i686-pc-linux-gnu) libcurl/7.18.0 OpenSSL/0.9.8g zlib/1.2.3
15 TxHeader     b Host: server.net
15 TxHeader     b Accept: */*
15 TxHeader     b X-Varnish: 1310276097
15 TxHeader     b X-Forwarded-for: xxx.xxx.xxx.xxx
15 RxProtocol   b HTTP/1.1
15 RxStatus     b 200
15 RxResponse   b OK
15 RxHeader     b Transfer-Encoding: chunked
15 RxHeader     b Expires: Sat, 29 Mar 2008 07:00:39 GMT
15 RxHeader     b Cache-Control: max-age=86400
15 RxHeader     b X-JSON: foobar
15 RxHeader     b Pragma: test
15 RxHeader     b Content-type: text/html
15 RxHeader     b Date: Fri, 28 Mar 2008 07:00:39 GMT
15 RxHeader     b Server: lighttpd
14 ObjProtocol  c HTTP/1.1
14 ObjStatus    c 200
14 ObjResponse  c OK
14 ObjHeader    c Expires: Sat, 29 Mar 2008 07:00:39 GMT
14 ObjHeader    c Cache-Control: max-age=86400
14 ObjHeader    c X-JSON: foobar
14 ObjHeader    c Pragma: test
14 ObjHeader    c Content-type: text/html
14 ObjHeader    c Date: Fri, 28 Mar 2008 07:00:39 GMT
14 ObjHeader    c Server: lighttpd
15 BackendReuse b default
14 TTL          c 1310276097 RFC 86399 1206687639 1206687639 1206774039 86400 0
14 VCL_call     c fetch
14 VCL_return   c insert
14 Length       c 6
14 VCL_call     c deliver
14 VCL_return   c deliver
14 TxProtocol   c HTTP/1.1
14 TxStatus     c 200
14 TxResponse   c OK
14 TxHeader     c Expires: Sat, 29 Mar 2008 07:00:39 GMT
14 TxHeader     c Cache-Control: max-age=86400
14 TxHeader     c X-JSON: foobar
14 TxHeader     c Pragma: test
14 TxHeader     c Content-type: text/html
14 TxHeader     c Server: lighttpd
14 TxHeader     c Content-Length: 6
14 TxHeader     c Date: Fri, 28 Mar 2008 07:00:39 GMT
14 TxHeader     c X-Varnish: 1310276097
14 TxHeader     c Age: 0
14 TxHeader     c Via: 1.1 varnish
14 TxHeader     c Connection: keep-alive
14 ReqEnd       c 1310276097 1206687639.357743979 1206687639.359473944 0.001337051 0.001684904 0.000045061
 0 StatAddr     - xxx.xxx.xxx.xxx 0 0 1 1 0 0 1 291 6
14 SessionClose c no request
14 StatSess     c xxx.xxx.xxx.xxx 11851 0 1 1 0 0 1 291 6
 0 CLI          - Rd ping
 0 CLI          - Wr 0 200 PONG 1206687640

Hehe, problem solved. It looks like our admin configured our firewall a little bit too restrictive. The header is in the response, but it gets filtered out by the firewall. One thing left: The "Transfer-Encoding" is still missing in the response. Kind regards Flo From ric at digitalmarbles.com Fri Mar 28 05:59:53 2008 From: ric at digitalmarbles.com (Ricardo Newbery) Date: Thu, 27 Mar 2008 22:59:53 -0700 Subject: Authenticate or Authorization? In-Reply-To: <7xbq4z8l6r.fsf@iostat.linpro.no> References: <3E075161-8AED-449C-B5DB-D4320EAAE987@digitalmarbles.com> <7xbq4z8l6r.fsf@iostat.linpro.no> Message-ID: <511FE05A-05F8-45E9-922A-FFBF19A5ADB2@digitalmarbles.com> On Mar 27, 2008, at 10:35 PM, Stig Sandbeck Mathisen wrote: > On Thu, 27 Mar 2008 15:47:00 -0700, Ricardo Newbery > said: > >> What issues an Authenticate header? Was this supposed to be >> Authorization? > > Maybe, not sure. > > However, in order to check for HTTP authenticated connections, the > headers look something like: > > GET / HTTP/1.1 > Host: http://login.example.com > Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ= > > ...so you'll probably need to change that to match for "Authorization" > instead, to not cache these documents. Right... 
and if you wanted to follow RFC 2616 a bit more closely, you could move the test for Authorization to vcl_fetch instead of vcl_recv, since the spec allows a non-authenticated cached response to be served to an authenticated request. Ric From ssm at linpro.no Fri Mar 28 09:41:43 2008 From: ssm at linpro.no (Stig Sandbeck Mathisen) Date: Fri, 28 Mar 2008 10:41:43 +0100 Subject: Varnish vs. X-JSON header In-Reply-To: <20080328081559.5fb93025@21torr.com> (Florian Engelhardt's message of "Fri, 28 Mar 2008 08:15:59 +0100") References: <20080327155509.57d470c2@21torr.com> <7xfxub8ll1.fsf@iostat.linpro.no> <20080328081559.5fb93025@21torr.com> Message-ID: <7xlk435go8.fsf@iostat.linpro.no> On Fri, 28 Mar 2008 08:15:59 +0100, Florian Engelhardt said: Received from backend. > 15 RxHeader b X-JSON: foobar Varnish object contains the header. > 14 ObjHeader c X-JSON: foobar Sent to client. > 14 TxHeader c X-JSON: foobar Lost on the way :P > Hehe, problem solved. It looks like our admin configured our > firewall a little bit to restrictive. The header is in the > response, but it gets filtered out firewall. Good thing you have logs to see what happened. What kind of firewall is it, and what is it trying to do with your HTTP requests? Remove all headers it does not recognize? I remember the Cisco PIX doing something like that with SMTP: it rewrote all non-SMTP commands, including ESMTP, to "XXXX ", and rewrote them back to the original command when the server responded with "XXXX: Command not implemented". It was kind of surprising the first time... > One thing left: The "Transfer-Encoding" is still missing in the > response. "Transfer-Encoding: chunked" is set by the backend, but when the object is sent from Varnish to the client, it's not present. I'm not sure if it is still relevant for the varnish->client connection. Does the absence of the header create problems? 
-- Stig Sandbeck Mathisen, Linpro From f.engelhardt at 21torr.com Fri Mar 28 09:46:02 2008 From: f.engelhardt at 21torr.com (Florian Engelhardt) Date: Fri, 28 Mar 2008 10:46:02 +0100 Subject: Varnish vs. X-JSON header In-Reply-To: <7xlk435go8.fsf@iostat.linpro.no> References: <20080327155509.57d470c2@21torr.com> <7xfxub8ll1.fsf@iostat.linpro.no> <20080328081559.5fb93025@21torr.com> <7xlk435go8.fsf@iostat.linpro.no> Message-ID: <20080328104602.011abab3@21torr.com> On Fri, 28 Mar 2008 10:41:43 +0100 Stig Sandbeck Mathisen wrote: > On Fri, 28 Mar 2008 08:15:59 +0100, Florian Engelhardt > said: > > Received from backend. > > > 15 RxHeader b X-JSON: foobar > > Varnish object contains the header. > > > 14 ObjHeader c X-JSON: foobar > > Sent to client. > > > 14 TxHeader c X-JSON: foobar > > Lost on the way :P > > > Hehe, problem solved. It looks like our admin configured our > > firewall a little bit to restrictive. The header is in the > > response, but it gets filtered out firewall. > > Good thing you have logs to see what happened. What kind of firewall > is it, and what is it trying to do with your HTTP requests? Remove > all headers it does not recognize? It's a Watchguard firewall, configured to remove all headers it does not recognize. > > One thing left: The "Transfer-Encoding" is still missing in the > > response. > > "Transfer-Encoding: chunked" is set by the backend, but when the > object is sent from Varnish to the client, it's not present. I'm not > sure if it is still relevant for the varnish->client connection. > > Does the absense of the header create problems? No, no problems so far. Thanks for helping me. Kind regards Flo From f.engelhardt at 21torr.com Fri Mar 28 09:50:45 2008 From: f.engelhardt at 21torr.com (Florian Engelhardt) Date: Fri, 28 Mar 2008 10:50:45 +0100 Subject: Access Log Message-ID: <20080328105045.2e2f2b18@21torr.com> Hello, I have a question about access logging in varnish. 
On the "old" environment we had just a plain lighttpd on port 80 and its logfile. When caching via varnish in the "new" environment, most of the requests will not hit lighttpd and will therefore not show up in its access.log file. Is there any way to create an access.log file similar to the one that lighty creates, via varnishlog? Or do I have to log everything via varnishlog and create that access.log maybe via a cronjob? Kind regards Flo PS: A lighty access.log entry looks as follows: 127.0.0.1 server.net - [10/Mar/2008:11:48:12 +0100] "GET /server-status HTTP/1.1" 200 4671 "-" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.12) Gecko/20080213 BonEcho/2.0.0.12" From f.engelhardt at 21torr.com Fri Mar 28 09:57:47 2008 From: f.engelhardt at 21torr.com (Florian Engelhardt) Date: Fri, 28 Mar 2008 10:57:47 +0100 Subject: Access Log In-Reply-To: <20080328105045.2e2f2b18@21torr.com> References: <20080328105045.2e2f2b18@21torr.com> Message-ID: <20080328105747.5d7886ff@21torr.com> On Fri, 28 Mar 2008 10:50:45 +0100 Florian Engelhardt wrote: > Hello, > > i have a question about access loggin in varnish. On the "old" > environment we had just a plain lighttpd on port 80 an its logfile. > When caching via varnish in the "new" environment, most of the > requests will not hit lighttpd and will therefor not show up in its > access.log file. Is there any way to create a access.log file similar > to the one that lighty creates, via varnishlog? > Or do i have to log everything via varnishlog and create that > access.log maybe via a cronjob? Shame on me, just ignore my mail, I have found it. "varnishncsa" makes my day ;-) Kind regards Flo From Phuwadon at sanookonline.co.th Fri Mar 28 10:45:12 2008 From: Phuwadon at sanookonline.co.th (Phuwadon Danrahan) Date: Fri, 28 Mar 2008 17:45:12 +0700 Subject: Access Log Message-ID: Hi, I have some questions about "varnishncsa". I would like to split the access log via "varnishncsa" by Host header, because the backend uses name-based virtual hosts. 
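[One workaround, sketched here as an assumption rather than a documented varnishncsa feature: if the log format carries the virtual host as its second field, the way the lighttpd example earlier in this thread does, a combined log can be split into per-host logs after the fact. Field position and file naming below are illustrative.]

```python
# Sketch: split access-log lines into per-virtual-host groups, assuming
# the vhost is the second whitespace-separated field of each entry
# (as in lighttpd's "%h %v ..." style log lines shown above).
from collections import defaultdict

def split_by_host(lines):
    """Group log lines by their vhost (second whitespace-separated field)."""
    per_host = defaultdict(list)
    for line in lines:
        fields = line.split()
        if len(fields) < 2:
            continue  # skip malformed or empty lines
        per_host[fields[1]].append(line)
    return per_host

# Usage sketch (appends each group to a hypothetical access-<host>.log):
#   for host, entries in split_by_host(open("access.log")).items():
#       with open("access-%s.log" % host, "a") as f:
#           f.writelines(entries)
```

Note that varnishncsa's default NCSA output does not include a vhost field, so the log format would have to carry the Host header for this post-processing approach to apply.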
I tried "varnishncsa -i RxHeader -I 'Host: mydomain'" but nothing is displayed on screen. Even "varnishncsa -I RxHeader" cannot show any log. Note, I tried "varnishlog -I RxHeader -I 'Host: mydomain'" and it works. Do you have any example of how to log multiple domains using varnishncsa? Thank you. Phuwadon D. -----Original Message----- From: varnish-misc-bounces at projects.linpro.no [mailto:varnish-misc-bounces at projects.linpro.no] On Behalf Of Florian Engelhardt Sent: Friday, March 28, 2008 4:58 PM To: varnish-misc at projects.linpro.no Subject: Re: Access Log On Fri, 28 Mar 2008 10:50:45 +0100 Florian Engelhardt wrote: > Hello, > > i have a question about access loggin in varnish. On the "old" > environment we had just a plain lighttpd on port 80 an its logfile. > When caching via varnish in the "new" environment, most of the > requests will not hit lighttpd and will therefor not show up in its > access.log file. Is there any way to create a access.log file similar > to the one that lighty creates, via varnishlog? > Or do i have to log everything via varnishlog and create that > access.log maybe via a cronjob? Shame on me, just ignore my mail, i have found it. "varnishncsa" makes my day ;-) Kind regards Flo _______________________________________________ varnish-misc mailing list varnish-misc at projects.linpro.no http://projects.linpro.no/mailman/listinfo/varnish-misc From duja at torlen.net Fri Mar 28 11:05:23 2008 From: duja at torlen.net (duja at torlen.net) Date: Fri, 28 Mar 2008 12:05:23 +0100 Subject: Directors user sessions Message-ID: Hi, I got a question regarding the Directors in varnish vcl. User A logs in to http://mywebsite.com, and the website is using varnish (with directors) in front of 4 backend servers. The 4 backend servers are identical. User A logs in and hits server 1. He then goes to his profile and hits server 2. Server 2 doesn't know that user A is logged in and redirects him to some "Not logged in" page. 
Is there any way for varnish to look up which server user A should be directed to? Some kind of sticky-session function? / Erik From cherife at dotimes.com Fri Mar 28 11:44:41 2008 From: cherife at dotimes.com (Cherife Li) Date: Fri, 28 Mar 2008 19:44:41 +0800 Subject: Directors user sessions In-Reply-To: References: Message-ID: <47ECDA29.1060709@dotimes.com> On 2008-3-28 19:05, duja at torlen.net wrote: > Hi, > > I got a question regarding the Directors in varnish vcl. > If user A is logging in to http://mywebsite.com and the website is using varnish (with directors) in front of 4 backend servers. > The 4 backend servers is identical. > > User A is logging in and hits server 1. He then goes to his profile and hits server 2. The server 2 doesn't know that user A is logged > and redirect him to some "Not logged in"-page. > > Is there any way for varnish to lookup which server that user A should be directed to? Some kind of Sticky Session function? > IMHO, Varnish is for caching, rather than for redirecting. Maybe you could consider HAProxy, or pound, or IPVS, or a similar implementation. Besides, I know that sessions can be shared. > / Erik > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc -- Rgds, Cherife. From f.engelhardt at 21torr.com Fri Mar 28 11:58:52 2008 From: f.engelhardt at 21torr.com (Florian Engelhardt) Date: Fri, 28 Mar 2008 12:58:52 +0100 Subject: Directors user sessions In-Reply-To: References: Message-ID: <20080328125852.708f12f6@21torr.com> On Fri, 28 Mar 2008 12:05:23 +0100 wrote: > Hi, > > I got a question regarding the Directors in varnish vcl. > If user A is logging in to http://mywebsite.com and the website is > using varnish (with directors) in front of 4 backend servers. The 4 > backend servers is identical. > > User A is logging in and hits server 1. He then goes to his profile > and hits server 2. 
The server 2 doesn't know that user A is logged > and redirect him to some "Not logged in"-page. > > Is there any way for varnish to lookup which server that user A > should be directed to? Some kind of Sticky Session function? You could store the sessions on a separate server, for instance in a memcache or in a database, or mount the filesystem where the session is stored via NFS on every backend server. Kind regards Flo From michael at dynamine.net Fri Mar 28 15:27:02 2008 From: michael at dynamine.net (Michael S. Fischer) Date: Fri, 28 Mar 2008 08:27:02 -0700 Subject: Directors user sessions In-Reply-To: <20080328125852.708f12f6@21torr.com> References: <20080328125852.708f12f6@21torr.com> Message-ID: <86db848d0803280827q578b7d3r1a684ee0a562dca6@mail.gmail.com> On Fri, Mar 28, 2008 at 4:58 AM, Florian Engelhardt wrote: > You could store the sessions on a separate server, for instance on a > memcache or in a database Good idea. (Though if you use memcached, you'd probably want to periodically copy the backing store to a file to survive system failure.) > or mount the the filesystem where the > session is stored via nfs on every backend server. Bad idea. NFS file locking is unreliable at best. --Michael From ottolski at web.de Mon Mar 31 18:08:57 2008 From: ottolski at web.de (Sascha Ottolski) Date: Mon, 31 Mar 2008 20:08:57 +0200 Subject: Miscellaneous questions In-Reply-To: <86db848d0803171607j56449124ubbe2ef8bd53896@mail.gmail.com> References: <1167.1202770705@critter.freebsd.dk> <47DEF163.9000604@itiva.com> <86db848d0803171607j56449124ubbe2ef8bd53896@mail.gmail.com> Message-ID: <200803312008.57372.ottolski@web.de> On Tuesday, 18 March 2008 00:07:59, Michael S. Fischer wrote: > On Mon, Mar 17, 2008 at 3:32 PM, DHF wrote: > > This is called CARP/"Cache Array Routing Protocol" in squid land. 
> > Here's a link to some info on it: > > > > http://docs.huihoo.com/gnu_linux/squid/html/x2398.html > > > > It works quite well for reducing the number of globally duplicated > > objects in an multilayer accelerator setup, as you can add > > additional machines in the interstitial space between the frontline > > caches and the origin as a cheap and easy way to increase the > > overall ram available to hot objects without having to use some > > front end load balancer like perlbal, big ip or whatever to direct > > the individual clients to specific frontlines to accomplish the > > same thing ( though you usually still have a load balancer for > > fault tolerance ). Though in squid there are some bugs with their > > implementation ... > > Thanks for the reminder. I'll file RFEs for both the static and CARP > implementations. I presume the static configuration will be done > first (if at all), as it's probably significantly easier to > implement. Probably not exactly the same, but maybe someone finds it useful: I've just started to dive a bit into HAProxy (http://haproxy.1wt.eu/): the development version has the ability to balance load based on a hash of the URI, to decide which backend should receive a request. I guess this could be a nice companion to put in front of several reverse proxies to increase the hit rate of each one. Cheers, Sascha From ottolski at web.de Mon Mar 31 18:10:06 2008 From: ottolski at web.de (Sascha Ottolski) Date: Mon, 31 Mar 2008 20:10:06 +0200 Subject: production ready devel snapshot? Message-ID: <200803312010.06804.ottolski@web.de> Hi, probably a stupid question, but if I'd like to use more recent features like the load-balancer, and since the latest official release is a bit dated: is there anything like a snapshot release that is worth giving a try, especially if my configuration will hopefully stay simple for a while? Thanks, Sascha
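[The URI-hash balancing Sascha describes above can be sketched in a few lines: each URI hashes deterministically to one cache, so any given object is only ever fetched through, and cached on, one of them. The backend names below are made up for illustration, and the hash choice is an assumption, not HAProxy's actual algorithm.]

```python
# Sketch of deterministic URI-hash backend selection, the idea behind
# hashing schemes like CARP or HAProxy's URI balancing: the same URI
# always maps to the same cache, avoiding duplicated objects.
import hashlib

# Hypothetical pool of reverse proxies sitting behind the balancer.
BACKENDS = ["cache1.example.net", "cache2.example.net", "cache3.example.net"]

def backend_for(uri):
    """Pick a backend from the URI's hash; stable for a fixed pool."""
    digest = hashlib.md5(uri.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(BACKENDS)
    return BACKENDS[index]
```

Note the modulo step means the mapping reshuffles when the pool size changes; consistent-hashing variants exist precisely to limit that reshuffling.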