From ssm at redpill-linpro.com Tue Dec 1 09:43:08 2009
From: ssm at redpill-linpro.com (Stig Sandbeck Mathisen)
Date: Tue, 01 Dec 2009 10:43:08 +0100
Subject: sample varnish configuration for ec2?
In-Reply-To: (Joe Van Dyk's message of "Mon, 30 Nov 2009 13:04:25 -0800")
References: 
Message-ID: <7xr5rfuj0z.fsf@fsck.linpro.no>

Joe Van Dyk writes:

> Anyone got a sample varnish configuration that they use on Amazon's
> EC2?

This is a topic more suited for the varnish-misc at projects.linpro.no
mailing list; varnish-dev is used for development of varnish.

The VCL should be configured according to your backend web application,
not necessarily the underlying hardware, virtual or not. The default
configuration should serve as well on an EC2 server as on any other
server.

-- 
Stig Sandbeck Mathisen

From paul.mansfield at taptu.com Mon Dec 7 14:22:46 2009
From: paul.mansfield at taptu.com (Paul Mansfield)
Date: Mon, 07 Dec 2009 14:22:46 +0000
Subject: munin plugin with multiple instances of varnish
Message-ID: <4B1D0FB6.9020706@taptu.com>

Hello,

I've tried out the munin plugin and had to set "env.name varnish1" so
that I can measure the performance of the first of several varnish
instances.

Is there a way of monitoring more than one varnish instance with the
plugin written by Kristian Lyngstøl?

Thanks
Paul

From jon.skarpeteig at gmail.com Fri Dec 4 20:53:36 2009
From: jon.skarpeteig at gmail.com (Jon Skarpeteig)
Date: Fri, 4 Dec 2009 21:53:36 +0100
Subject: 120 of 143 tests failed
Message-ID: <5e98f3430912041253v7f239817s2d59b4e9fffcdbc4@mail.gmail.com>

Assert error in varnish_start(), vtc_varnish.c line 279:
  Condition(u == CLIS_OK) not true.
errno = 29 (Illegal seek)
/bin/bash: line 4: 21154 Aborted ./varnishtest ${dir}$tst
FAIL: ./tests/v00023.vtc
# top TEST ././tests/v00024.vtc starting
# TEST Test that headers can be compared
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## s1 Started on 127.0.0.1:9080
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
connect(): Connection refused
connect(): Connection refused
connect(): Connection refused
connect(): Connection refused
### v1 CLI connection fd = 5
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 101
Assert error in varnish_start(), vtc_varnish.c line 279:
  Condition(u == CLIS_OK) not true.
errno = 29 (Illegal seek)
/bin/bash: line 4: 21206 Aborted ./varnishtest ${dir}$tst
FAIL: ./tests/v00024.vtc

===============================================
120 of 143 tests failed
Please report to varnish-dev at projects.linpro.no
===============================================

make[3]: *** [check-TESTS] Error 1
make[3]: Leaving directory `/root/varnish-2.0.5/bin/varnishtest'
make[2]: *** [check-am] Error 2
make[2]: Leaving directory `/root/varnish-2.0.5/bin/varnishtest'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/root/varnish-2.0.5/bin'
make: *** [check-recursive] Error 1

I am attempting to install Varnish 2.0.5 from sourceforge downloaded
source on Ubuntu Hardy 8.04 LTS. My steps were:

sudo apt-get install xsltproc
sudo apt-get install automake autoconf libtool libncurses5
sudo apt-get install groff-base
./autogen.sh
./configure && make && make check && sudo make install

From paul.mansfield at taptu.com Tue Dec 8 10:34:19 2009
From: paul.mansfield at taptu.com (Paul Mansfield)
Date: Tue, 08 Dec 2009 10:34:19 +0000
Subject: munin plugin with multiple instances of varnish
In-Reply-To: <4B1D0FB6.9020706@taptu.com>
References: <4B1D0FB6.9020706@taptu.com>
Message-ID:
<4B1E2BAB.7040500@taptu.com>

On 07/12/09 14:22, Paul Mansfield wrote:
> hello, I've tried out the munin plugin and had to set "env.name
> varnish1" so that I can measure the performance of the first of several
> varnish instances
>
> is there a way please of monitoring more than one varnish instance with
> the plugin written by Kristian Lyngstøl?

I decided to go ahead and tweak the plugin to do this, by simply
extending the formatting of the plugin name and retaining backwards
compatibility. It's attached.

old format: varnish_hit_rate - still works
varnish_NAME__hit_rate - produces stats for varnish instance NAME

note the double underscore just in case your varnish instance has an
underscore in its name!

so all you have to do is rename existing plugins to reflect your varnish
instance, and of course rename the RRD files on the master node. then
you can create additional instances of the plugins for any additional
instances of varnish!

here's a one-liner that will do the job; I opted to copy the RRDs rather
than rename, just in case...

for Y in $X; do Z=`echo $Y | sed 's/varnish_/varnish_NAME__/'`; echo $Y $Z; cp -p $Y $Z ; done

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: varnish_
URL: 

From paul.mansfield at taptu.com Tue Dec 8 10:36:22 2009
From: paul.mansfield at taptu.com (Paul Mansfield)
Date: Tue, 08 Dec 2009 10:36:22 +0000
Subject: munin plugin with multiple instances of varnish
In-Reply-To: <4B1E2BAB.7040500@taptu.com>
References: <4B1D0FB6.9020706@taptu.com> <4B1E2BAB.7040500@taptu.com>
Message-ID: <4B1E2C26.1090906@taptu.com>

On 08/12/09 10:34, Paul Mansfield wrote:
> here's a one-liner that will do the job; I opted to copy the RRDs rather
> than rename, just in case...
> > for Y in $X; do Z=`echo $Y | sed 's/varnish_/varnish_NAME__/'`; echo $Y
> > $Z; cp -p $Y $Z ; done

oops, there was a preceding line missing, so it should be:

X=`ls HOSTNAME*-varnish_* | grep -v varnish__`
for Y in $X; do Z=`echo $Y | sed 's/varnish_/varnish_NAME__/'`; echo $Y $Z; cp -p $Y $Z ; done

From paul.mansfield at taptu.com Tue Dec 8 10:57:02 2009
From: paul.mansfield at taptu.com (Paul Mansfield)
Date: Tue, 08 Dec 2009 10:57:02 +0000
Subject: munin plugin with multiple instances of varnish
In-Reply-To: <4B1E2C26.1090906@taptu.com>
References: <4B1D0FB6.9020706@taptu.com> <4B1E2BAB.7040500@taptu.com> <4B1E2C26.1090906@taptu.com>
Message-ID: <4B1E30FE.3030306@taptu.com>

d'oh, I realised that you couldn't tell the difference between the
graphs of different instances.

new version attached; unfortunately this means you lose your old graphs.
Maybe someone with better munin fu can fix what I hacked up?

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: varnish_
URL: 

From slink at schokola.de Wed Dec 9 19:29:01 2009
From: slink at schokola.de (Nils Goroll)
Date: Wed, 09 Dec 2009 20:29:01 +0100
Subject: HSH_Lookup entered multiply - why could that happen?
Message-ID: <4B1FFA7D.3020505@schokola.de>

Hi,

I am trying to understand under which circumstances in Varnish 2.0.3
HSH_Lookup could get entered multiple times in succession in a restart
scenario. The effect I am seeing is duplication of session objects on
the waitinglist.
I was also considering duplication of output lines, but if I add a high
resolution time stamp in HSH_Lookup's handling of busy objects,

	if (busy_o != NULL) {
		/* There are one or more busy objects, wait for them */
		if (sp->esis == 0)
			VTAILQ_INSERT_TAIL(&oh->waitinglist, sp, list);
		if (params->diag_bitmap & 0x20)
			WSP(sp, SLT_Debug, "%lld %p on waiting list %p <%s>",
			    gethrtime(), sp, oh, oh->hash);
		sp->objhead = oh;
		Lck_Unlock(&oh->mtx);
		return (NULL);
	}

what I get is this (URL removed to protect the innocent):

   48 VCL_call     c recv lookup
   48 VCL_call     c hash hash
   48 Debug        c 24115201202978841 8d6278 on waiting list 8b72e0 <##URL##>
   48 VCL_call     c hash hash
   48 Debug        c 24115203101868689 8d6278 on waiting list 8b72e0 <##URL##>
   48 VCL_call     c hash hash
   48 Debug        c 24115203106349919 8d6278 on waiting list 8b72e0 <##URL##>
   48 VCL_call     c hash hash
   48 Debug        c 24115205121308416 8d6278 on waiting list 8b72e0 <##URL##>
   48 VCL_call     c hash hash
   48 Debug        c 24115205122330112 8d6278 on waiting list 8b72e0 <##URL##>
   48 VCL_call     c hash hash
   48 Debug        c 24115207141435616 8d6278 on waiting list 8b72e0 <##URL##>
   48 VCL_call     c hash hash
   48 Hit          c 1506286692

the restart VCL code basically is this:

sub vcl_recv {
	if (req.restarts == 0) {
		# set default backend and default grace
	} else if (req.restarts == 1) {
		set req.grace = 24h;
		# set other backend
	} else {
		error 503 "Retry count exceeded";
	}
}

Basically, what I am trying to achieve is to have restarts fall back to
the cache. I'd appreciate any pointers; at this point I really don't
understand why the hash vcl should get invoked multiple times.

Thank you, Nils

From slink at schokola.de Wed Dec 9 19:30:11 2009
From: slink at schokola.de (Nils Goroll)
Date: Wed, 09 Dec 2009 20:30:11 +0100
Subject: Varnish and sticky sessions
In-Reply-To: <38547.1259321048@critter.freebsd.dk>
References: <38547.1259321048@critter.freebsd.dk>
Message-ID: <4B1FFAC3.4030709@schokola.de>

BTW, I am working on something like this. Won't commit on any deadlines
either.
From slink at schokola.de Wed Dec 9 19:55:16 2009
From: slink at schokola.de (Nils Goroll)
Date: Wed, 09 Dec 2009 20:55:16 +0100
Subject: HSH_Lookup entered multiply - why could that happen?
In-Reply-To: <4B1FFA7D.3020505@schokola.de>
References: <4B1FFA7D.3020505@schokola.de>
Message-ID: <4B2000A4.4020408@schokola.de>

Maybe I should also mention that I am trying to test the restart
behavior by simulating a slow and unresponsive backend using this VCL
code:

sub vcl_fetch {
	/* TEST CODE to test restarting REMOVE ME !!!! */
	if ((req.url ~ "MAGIC TESTING URL SUBSTRING") && (req.restarts <= 3)) {
		C{ sleep(2); }C
		return (restart);
	}
}

Is there a better way to do this?

From phk at phk.freebsd.dk Wed Dec 9 19:56:12 2009
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 09 Dec 2009 19:56:12 +0000
Subject: HSH_Lookup entered multiply - why could that happen?
In-Reply-To: Your message of "Wed, 09 Dec 2009 20:29:01 +0100." <4B1FFA7D.3020505@schokola.de>
Message-ID: <7885.1260388572@critter.freebsd.dk>

In message <4B1FFA7D.3020505 at schokola.de>, Nils Goroll writes:
>Hi,
>what I get is this (URL removed to protect the innocent):
>
> 48 VCL_call c recv lookup
> 48 VCL_call c hash hash
> 48 Debug c 24115201202978841 8d6278 on waiting list 8b72e0 <##URL##>

What happens here is that another client/thread holds this object "busy"
while it is being fetched from the backend. Once the object is marked
unbusy, the waiting threads are released, and each calls hash again.

I'm not quite sure why you keep hitting it so many times; it smells like
some weird situation where the object takes a long time to fetch, has no
cacheability, but does not get marked "pass" in vcl_fetch?

Poul-Henning

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG      | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
From slink at schokola.de Wed Dec 9 20:00:05 2009
From: slink at schokola.de (Nils Goroll)
Date: Wed, 09 Dec 2009 21:00:05 +0100
Subject: Solved: HSH_Lookup entered multiply - why could that happen?
In-Reply-To: <4B1FFA7D.3020505@schokola.de>
References: <4B1FFA7D.3020505@schokola.de>
Message-ID: <4B2001C5.7080401@schokola.de>

Sorry, there's an easy answer to this: I was using varnishlog -o.

Sorry, folks, and thanks to all who have started thinking about this.

Nils

From slink at schokola.de Wed Dec 9 20:13:02 2009
From: slink at schokola.de (Nils Goroll)
Date: Wed, 09 Dec 2009 21:13:02 +0100
Subject: scheduling off the waiting list
In-Reply-To: <7885.1260388572@critter.freebsd.dk>
References: <7885.1260388572@critter.freebsd.dk>
Message-ID: <4B2004CE.5080604@schokola.de>

Hi Poul-Henning,

thank you for taking the time to respond.

> What happens here is that another client/thread holds this object
> "busy" while it is being fetched from the backend. Once the object
> is marked unbusy, the waiting threads are released, and each calls
> hash again.

My understanding is that the waiting sessions are re-scheduled on
threads, right?

> I'm not quite sure why you keep hitting it so many times; it smells
> like some weird situation where the object takes a long time to fetch,
> has no cacheability, but does not get marked "pass" in vcl_fetch?

I am seeing hsh_rush getting called much more often than I thought it
should be, and I don't yet understand why.

What I would really like to see is that the waitinglist gets rescheduled
when the busy object actually becomes available in the cache. I am
suspecting this has to do with calling HSH_Deref(&Parent) in HSH_Unbusy
and/or the fact that HSH_Drop calls both Unbusy and Deref, but I don't
understand this yet.
Nils

From phk at phk.freebsd.dk Wed Dec 9 20:23:20 2009
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 09 Dec 2009 20:23:20 +0000
Subject: scheduling off the waiting list
In-Reply-To: Your message of "Wed, 09 Dec 2009 21:13:02 +0100." <4B2004CE.5080604@schokola.de>
Message-ID: <8003.1260390200@critter.freebsd.dk>

In message <4B2004CE.5080604 at schokola.de>, Nils Goroll writes:
>Hi Poul-Henning,
>
>thank you for taking the time to respond.
>
>> What happens here is that another client/thread holds this object
>> "busy" while it is being fetched from the backend. Once the object
>> is marked unbusy, the waiting threads are released, and each calls
>> hash again.
>
>My understanding is that the waiting sessions are re-scheduled on threads, right?

Correct.

>What I would really like to see is that the waitinglist gets rescheduled when
>the busy object actually becomes available in the cache. I am suspecting this has to do
>with calling HSH_Deref(&Parent) in HSH_Unbusy and/or the fact that HSH_Drop
>calls both Unbusy and Deref, but I don't understand this yet.

That is how it is supposed to work, and I believe, how it works.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG      | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From slink at schokola.de Wed Dec 9 20:27:22 2009
From: slink at schokola.de (Nils Goroll)
Date: Wed, 09 Dec 2009 21:27:22 +0100
Subject: scheduling off the waiting list
In-Reply-To: <8003.1260390200@critter.freebsd.dk>
References: <8003.1260390200@critter.freebsd.dk>
Message-ID: <4B20082A.9030505@schokola.de>

>> What I would really like to see is that the waitinglist gets rescheduled when
>> the busy object actually becomes available in the cache. I am suspecting this has to do
>> with calling HSH_Deref(&Parent) in HSH_Unbusy and/or the fact that HSH_Drop
>> calls both Unbusy and Deref, but I don't understand this yet.
>
> That is how it is supposed to work, and I believe, how it works.

Good. Then I am either messing up this behavior with my config, or I've
hit a corner case. I need to have a break now, but I will definitely get
back to you on this when I have gained new insights.

BTW, if you get to this, I would very much appreciate comments on
http://varnish.projects.linpro.no/ticket/599 (WRK_Queue should prefer
thread pools with idle threads / improve thread pool load balancing) and
on http://varnish.projects.linpro.no/ticket/598 (SIGSEGV due to null
pointer dereference in SES_Delete - VSL).

Thank you, Nils

From paul.mansfield at taptu.com Fri Dec 11 17:10:39 2009
From: paul.mansfield at taptu.com (Paul Mansfield)
Date: Fri, 11 Dec 2009 17:10:39 +0000
Subject: munin plugin with multiple instances of varnish
In-Reply-To: <4B1D0FB6.9020706@taptu.com>
References: <4B1D0FB6.9020706@taptu.com>
Message-ID: <4B227D0F.2050607@taptu.com>

I revamped my hacks and have attached my munin plugin; I renamed it to
varnishm_ for multi instance.

Paul

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/http-index-format
Size: 1331 bytes
Desc: not available
URL: 

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: varnishm_
URL: 

From kristian at redpill-linpro.com Tue Dec 15 15:21:48 2009
From: kristian at redpill-linpro.com (Kristian Lyngstol)
Date: Tue, 15 Dec 2009 16:21:48 +0100
Subject: munin plugin with multiple instances of varnish
In-Reply-To: <4B227D0F.2050607@taptu.com>
References: <4B1D0FB6.9020706@taptu.com> <4B227D0F.2050607@taptu.com>
Message-ID: <20091215152148.GA6866@kjeks.linpro.no>

On Fri, Dec 11, 2009 at 05:10:39PM +0000, Paul Mansfield wrote:
> I revamped my hacks and have attached my munin plugin, I renamed it to
> varnishm_ for multi instance.

Could you provide a diff/patch? Easier to read...

Is it compatible with varnish_ syntax? (Ie: should we just replace it?)
Also: you may want to add your own copyright notice below ours if you've
done significant work.

-- 
Kristian Lyngstøl
Redpill Linpro AS
Tlf: +47 21544179
Mob: +47 99014497

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL: 

From jhayter at manta.com Mon Dec 14 15:36:15 2009
From: jhayter at manta.com (Jim Hayter)
Date: Mon, 14 Dec 2009 10:36:15 -0500
Subject: make check errors for varnish 2.0.5 on Solaris 10
Message-ID: 

System is Solaris 10 on intel hardware.

root # uname -a
SunOS ecnext39 5.10 Generic_138889-08 i86pc i386 i86pc

I made the following change based on a fix I found for Solaris, in
lib/libvarnish/tcp.c, line 245: change

AZ(setsockopt(sock, SOL_SOCKET, SO_LINGER, &lin, sizeof lin));

to

setsockopt(sock, SOL_SOCKET, SO_LINGER, &lin, sizeof lin);

'make check' reports 9 of 143 tests failed. The output for the failed
tests is below. I can provide the complete 'make check' output if it is
useful.
Thanks, Jim # top TEST ././tests/b00020.vtc starting # TEST Check the between_bytes_timeout behaves from parameters ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 5 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c1 rxresp ### s1 Accepted socket fd is 4 ### s1 rxreq ### s1 delaying 1.5 second(s) ### s1 shutting fd 4 ## s1 Ending ---- c1 EXPECT resp.status (200) == 503 (503) failed ---- TEST FILE: ././tests/b00020.vtc ---- TEST DESCRIPTION: Check the between_bytes_timeout behaves from parameters FAIL: ./tests/b00020.vtc # top TEST ././tests/b00021.vtc starting # TEST Check the between_bytes_timeout behaves from vcl ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 5 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c1 rxresp ### s1 Accepted socket fd is 4 ### s1 rxreq ### s1 delaying 1.5 second(s) ### s1 shutting fd 4 ## s1 Ending ---- c1 EXPECT resp.status (200) == 503 (503) failed ---- TEST FILE: ././tests/b00021.vtc ---- TEST DESCRIPTION: Check the between_bytes_timeout behaves from vcl FAIL: ./tests/b00021.vtc # top TEST ././tests/b00022.vtc starting # TEST Check the 
between_bytes_timeout behaves from backend definition ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 5 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c1 rxresp ### s1 Accepted socket fd is 4 ### s1 rxreq ### s1 delaying 1.5 second(s) ### s1 shutting fd 4 ## s1 Ending ---- c1 EXPECT resp.status (200) == 503 (503) failed ---- TEST FILE: ././tests/b00022.vtc ---- TEST DESCRIPTION: Check the between_bytes_timeout behaves from backend definition FAIL: ./tests/b00022.vtc # top TEST ././tests/b00023.vtc starting # TEST Check that the first_byte_timeout works from parameters ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 5 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c1 rxresp ### s1 Accepted socket fd is 4 ### s1 rxreq ### s1 delaying 1.5 second(s) ### s1 shutting fd 4 ## s1 Ending ---- c1 EXPECT resp.status (200) == 503 (503) failed ---- TEST FILE: ././tests/b00023.vtc ---- TEST DESCRIPTION: Check that the first_byte_timeout works from parameters FAIL: ./tests/b00023.vtc # top TEST ././tests/b00024.vtc starting # TEST Check that the first_byte_timeout works from vcl ## s1 Starting 
server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 5 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c1 rxresp ### s1 Accepted socket fd is 4 ### s1 rxreq ### s1 delaying 1.5 second(s) ### s1 shutting fd 4 ## s1 Ending ---- c1 EXPECT resp.status (200) == 503 (503) failed ---- TEST FILE: ././tests/b00024.vtc ---- TEST DESCRIPTION: Check that the first_byte_timeout works from vcl FAIL: ./tests/b00024.vtc # top TEST ././tests/b00025.vtc starting # TEST Check that the first_byte_timeout works from backend definition ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 5 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c1 rxresp ### s1 Accepted socket fd is 4 ### s1 rxreq ### s1 delaying 1.5 second(s) ### s1 shutting fd 4 ## s1 Ending ---- c1 EXPECT resp.status (200) == 503 (503) failed ---- TEST FILE: ././tests/b00025.vtc ---- TEST DESCRIPTION: Check that the first_byte_timeout works from backend definition FAIL: ./tests/b00025.vtc ... 
# top TEST ././tests/r00345.vtc starting # TEST #345, ESI waitinglist trouble ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -p diag_bitmap=0x20 ### v1 opening CLI connection ### v1 CLI connection fd = 5 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c2 Starting client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ## c2 Waiting for client ## c2 Started ### c2 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c2 Connected to 127.0.0.1:9081 fd is 10 ### c1 rxresp ### s1 Accepted socket fd is 4 ### s1 rxreq ### s1 rxreq ### c2 rxresp ### s1 delaying 1 second(s) ### s1 shutting fd 4 ## s1 Ending ### c2 Closing fd 10 ## c2 Ending ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 4 ### c1 rxresp ### c1 Closing fd 4 ## c1 Ending # top RESETTING after ././tests/r00345.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ---- ?? HTTP rx failed (Error 0) ---- TEST FILE: ././tests/r00345.vtc ---- TEST DESCRIPTION: #345, ESI waitinglist trouble FAIL: ./tests/r00345.vtc ... 
# top TEST ././tests/v00009.vtc starting # TEST Test round robin director ## s1 Starting server ### s1 listen on 127.0.0.1:2000 (fd 3) ## s2 Starting server ## s1 Started on 127.0.0.1:2000 ### s2 listen on 127.0.0.1:3000 (fd 4) ## s3 Starting server ## s2 Started on 127.0.0.1:3000 ### s3 listen on 127.0.0.1:4000 (fd 6) ## s3 Started on 127.0.0.1:4000 ## s4 Starting server ### s4 listen on 127.0.0.1:5000 (fd 9) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## s4 Started on 127.0.0.1:5000 ### v1 opening CLI connection ### v1 CLI connection fd = 10 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 15 ### c1 rxresp ### s1 Accepted socket fd is 5 ### s1 rxreq ### s1 shutting fd 5 ## s1 Ending ### c1 rxresp ### s2 Accepted socket fd is 7 ### s2 rxreq ### s2 shutting fd 7 ## s2 Ending ### c1 rxresp ### s3 Accepted socket fd is 8 ### s3 rxreq ### s3 shutting fd 8 ## s3 Ending ### c1 rxresp ### s4 Accepted socket fd is 14 ### s4 rxreq ### s4 shutting fd 14 ## s4 Ending ### c1 Closing fd 15 ## c1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:2000 (fd 3) ## s2 Starting server ### s2 listen on 127.0.0.1:3000 (fd 4) ## s1 Started on 127.0.0.1:2000 ## c2 Starting client ## s2 Started on 127.0.0.1:3000 ## c2 Waiting for client ## c2 Started ### c2 Connect to 127.0.0.1:9081 ### c2 Connected to 127.0.0.1:9081 fd is 8 ### c2 rxresp ### s2 Accepted socket fd is 7 ### s2 rxreq ### s2 shutting fd 7 ## s2 Ending ---- c2 EXPECT resp.http.content-length (2) == 1 (1) failed ---- TEST FILE: ././tests/v00009.vtc ---- TEST DESCRIPTION: Test round robin director FAIL: ./tests/v00009.vtc ... 
# top TEST ././tests/v00014.vtc starting # TEST Check req.backend.healthy ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### s1 Iteration 0 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 5 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### s1 Accepted socket fd is 4 ### s1 rxreq Assert error in http_rxchar(), vtc_http.c line 345: Condition(i > 0) not true. bash: line 5: 25119 Abort (core dumped) ./varnishtest ${dir}$tst FAIL: ./tests/v00014.vtc ... =============================================== 9 of 143 tests failed Please report to varnish-dev at projects.linpro.no =============================================== From paul.mansfield at taptu.com Wed Dec 16 10:15:06 2009 From: paul.mansfield at taptu.com (Paul Mansfield) Date: Wed, 16 Dec 2009 10:15:06 +0000 Subject: munin plugin with multiple instances of varnish In-Reply-To: <20091215152148.GA6866@kjeks.linpro.no> References: <4B1D0FB6.9020706@taptu.com> <4B227D0F.2050607@taptu.com> <20091215152148.GA6866@kjeks.linpro.no> Message-ID: <4B28B32A.2020104@taptu.com> On 15/12/09 15:21, Kristian Lyngstol wrote: > On Fri, Dec 11, 2009 at 05:10:39PM +0000, Paul Mansfield wrote: >> I revamped my hacks and have attached my munin plugin, I renamed it to >> vanishm_ for multi instance. > > Could you provide a diff/patch? Easier to read... sure, it's appended > Is it compatible with varnish_ syntax? (Ie: should we just replace it?) 
yes, you can use either "multi-varnish syntax"

varnishm_NAME__ASPECT

or single varnish instance syntax as before

varnishm_ASPECT

since you seem to be happy to replace yours (I didn't want to tread on
your toes and assume my hack was worthy!), I've renamed my plugin to
varnish_; I've attached it too.

> Also: you may want to add your own copyright notice below ours if you've
> done significant work.

I added a comment below your copyright to reflect my simple amendment.

Hope it proves useful to someone. We are using varnish quite extensively
now and it's pretty good, thanks!

Paul

$ diff varnish_.old varnish_
3c3
< # varnish_ - Munin plugin to for Varnish
---
> # varnish_ - Munin plugin for Multiple Varnish Servers
7c7,9
< #
---
> # Modified by Paul Mansfield so that it can be used with multiple
> # varnish instances
> #
641c643,645
< print "$title $values{$value}{$field}\n";
---
> print "$title $values{$value}{$field}";
> print " - $varnishname" if ($title eq 'graph_title');
> print "\n";
684c688,689
< # Read and verify the aspect ($self).
---
> # Read and verify the aspect ($self) -
> # the format is varnish_NAME__aspect or varnish_aspect
688a694,699
> 
> if ($self =~ /^(\w+)__(.*)$/)
> {
> $varnishname = $1;
> $self = $2;
> }
788a800
> #print "$varnishname.$value.value ";
792a805,806
> 
> # end varnish_ plugin

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: varnish_
URL: 

From slink at schokola.de Mon Dec 28 21:20:15 2009
From: slink at schokola.de (Nils Goroll)
Date: Mon, 28 Dec 2009 22:20:15 +0100
Subject: scheduling off the waiting list
In-Reply-To: <4B20082A.9030505@schokola.de>
References: <8003.1260390200@critter.freebsd.dk> <4B20082A.9030505@schokola.de>
Message-ID: <4B39210F.7080908@schokola.de>

Hi Poul and all,

Nils Goroll wrote:
>>> What I would really like to see is that the waitinglist gets rescheduled when
>>> the busy object actually becomes available in the cache.
>>> I am suspecting this has to do
>>> with calling HSH_Deref(&Parent) in HSH_Unbusy and/or the fact that HSH_Drop
>>> calls both Unbusy and Deref, but I don't understand this yet.
>> That is how it is supposed to work, and I believe, how it works.
>
> Good. Then I am either messing up this behavior with my config, or I've hit a
> corner case. I need to have a break now, but I will definitely get back to you
> on this when I have gained new insights.

I'm trying to sort my thoughts on this in public:

- A fundamental issue seems to be that the waitinglist is attached to
  the object head, and if no proper match is found in the cache, we wait
  for whatever is to come, even if this is not what we are going to
  need. On the other hand, while the object is busy, not all selection
  criteria will be known a priori (in particular not the Vary header),
  so this design might just be as good as it can be.

- The only way a session can get onto the waiting list is when there is
  a busy object being waited for - but hsh_rush is not only called when
  an object gets unbusied (HSH_Unbusy), but also whenever it is
  dereferenced (HSH_Deref).

Call trees are:

    cnt_fetch -> HSH_Unbusy->hsh_rush
            ^         |
           /          |
    HSH_Drop (parent) |
           \          |
            V         V
            HSH_Deref->hsh_rush

HSH_Deref is called from cache_expire (EXP_NukeOne and exp_timer), as
well as cache_center (cnt_hit, if not delivering; cnt_lookup, if it's a
pass; and cnt_deliver). HSH_Drop is called from various functions in
cache_center.

So basically there are two different scenarios when hsh_rush is called:

* trigger delivery of an object which just got unbusied
* and trigger delivery of more sessions which did not fire in the first
  round

The point is that when many sessions are waiting on a busy object, there
are many reasons for those to be rescheduled even if the object they are
waiting for has not yet become available - in particular as many
different objects may live under the object head. I think we need to
change that.
The only reason why we need to call hsh_rush outside the
cnt_fetch -> HSH_Unbusy case is that we have the rush_exponent and limit the
number of sessions to be rescheduled with each hsh_rush, so one option would
be to do away with the rush_exponent and let the waiters loose all at once.
This would also solve the case where, once a session gets its thread, the
cached content has become invalidated, so it would itself fetch again.

I am not sure about an alternative solution. When we unbusy an object, we
have a good chance that it's actually worth rescheduling waiting sessions,
but for the other cases, we can't easily tell whether the session would wait
again or not. What if we noted in the object head the number of busy
objects, so hsh_rush would only actually schedule sessions if there aren't
any, or when called from cnt_fetch?

Any better ideas?

Thank you for reading,

Nils

From phk at phk.freebsd.dk  Mon Dec 28 21:24:08 2009
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Mon, 28 Dec 2009 21:24:08 +0000
Subject: scheduling off the waiting list
In-Reply-To: Your message of "Mon, 28 Dec 2009 22:20:15 +0100."
	<4B39210F.7080908@schokola.de>
Message-ID: <46354.1262035448@critter.freebsd.dk>

In message <4B39210F.7080908 at schokola.de>, Nils Goroll writes:

>So basically there are two different scenarios when hsh_rush is called.
>
>* Trigger delivery of an object which just got unbusied
>* and trigger delivery of more sessions which did not fire in the first round

The basic point here is that we do not want to unleash 500 waiting sessions
when the object is unbusied, so we release a few, and when they are done
they each release a few etc.

Sort of inspired by TCP slow-start, but not half as advanced.

I'm not entirely happy with how this works in practice, but within the
current reach I have no better ideas.

Grace mode helps a lot, if you can use it.
Poul-Henning

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From slink at schokola.de  Tue Dec 29 06:58:08 2009
From: slink at schokola.de (Nils Goroll)
Date: Tue, 29 Dec 2009 07:58:08 +0100
Subject: scheduling off the waiting list
In-Reply-To: <46354.1262035448@critter.freebsd.dk>
References: <46354.1262035448@critter.freebsd.dk>
Message-ID: <4B39A880.3030208@schokola.de>

Hi Poul-Henning,

thank you for your quick response.

>> So basically there are two different scenarios when hsh_rush is called.
>>
>> * Trigger delivery of an object which just got unbusied
>> * and trigger delivery of more sessions which did not fire in the first round
>
> The basic point here is that we do not want to unleash 500 waiting sessions
> when the object is unbusied, so we release a few, and when they are done
> they each release a few etc.

Sure, and I think this is a good idea.

I should have mentioned that the reason why I care about this is that in
HSH_Prepare, memory is allocated on sp->http->ws each time a session fires,
so when it doesn't get to delivery, we can run out of session memory. And I
have a dozen coredumps showing the workspace filled up with hashptrs.

The symptoms look very much like those in
http://varnish.projects.linpro.no/ticket/551, and reports of these symptoms
might be related to this issue.

> Grace mode helps a lot, if you can use it.

Yes, absolutely.
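For readers unfamiliar with it, the grace mode being discussed is enabled
from VCL. A minimal sketch using the Varnish 2.x syntax of the time; the
30s values are arbitrary examples, not a recommendation:

```vcl
# Grace lets Varnish serve a slightly stale object while a fresh copy is
# being fetched, keeping sessions off the waiting list.
# Note: in Varnish 2.0 vcl_fetch operates on obj.*; later releases
# renamed this to beresp.*.
sub vcl_recv {
    set req.grace = 30s;    # how stale an object this request will accept
}

sub vcl_fetch {
    set obj.grace = 30s;    # keep objects 30s past TTL for grace serving
}
```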
Nils

From slink at schokola.de  Tue Dec 29 16:49:38 2009
From: slink at schokola.de (Nils Goroll)
Date: Tue, 29 Dec 2009 17:49:38 +0100
Subject: scheduling off the waiting list
In-Reply-To: <4B39A880.3030208@schokola.de>
References: <46354.1262035448@critter.freebsd.dk>
	<4B39A880.3030208@schokola.de>
Message-ID: <4B3A3322.6010003@schokola.de>

Hi Poul-Henning and all,

I have documented my understanding of this issue and a suggested fix in
http://varnish.projects.linpro.no/ticket/610 . I hope this fix will
implement the intended behavior. I have done some basic tests, but the fix
is pretty fresh and I still need to verify it in production.

Poul-Henning Kamp wrote (on December 9th):
>> What I would really like to see is that the waitinglist gets rescheduled when
>> the busy object actually becomes available in the cache. I am suspecting this has to
>> do with calling HSH_Deref(&Parent) in HSH_Unbusy and/or the fact that
>> HSH_Drop calls both Unbusy and Deref, but I don't understand this yet.
>
> That is how it is supposed to work [...]

Again, thank you very much for your fantastic work and this great product.

Nils

From cal at fbsdata.com  Thu Dec 31 22:32:15 2009
From: cal at fbsdata.com (Cal Heldenbrand)
Date: Thu, 31 Dec 2009 16:32:15 -0600
Subject: inline C code and post data
Message-ID: <6d0f643a0912311432v1e594e5cl601f1a5b19b59956@mail.gmail.com>

Hi everyone,

I just started experimenting with the coolness of using inline C in VCL,
and I've run into a bit of a hurdle -- I can't find any VRT functions that
allow me to dig into the request body where the post data is.

For some background, I'm working on using varnish for dynamic pages that
hit our database but aren't updated frequently. I have some inline code in
vcl_hash that adds a user ID to the hash from the GET parameters, and also
adds a "skip list" to ignore variables.
(For stuff like timestamps being passed in that don't affect the output of
the page.)

I attempted hashing on our per-user cookie value, but I have a lot of
one-way hashes in there and it makes it difficult to purge an individual
page for a particular user. Passing a user ID via a parameter also makes it
easier to "turn on" varnish for a small set of high-volume pages, and
pattern match on that in our load balancer to direct traffic to our varnish
cluster.

I have all of this working, but much of our content uses POST requests due
to a dumb limitation in MSIE that caps the length of a GET request at 2048
characters.

If anyone is interested in this functionality, let me know. After it's
finished I could wrap this up into a plugin package and publish it
somewhere.

Thanks for any help!

--Cal

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jhayter at manta.com  Wed Dec 16 19:24:13 2009
From: jhayter at manta.com (Jim Hayter)
Date: Wed, 16 Dec 2009 19:24:13 -0000
Subject: make check errors for varnish 2.0.6 on Solaris 10
Message-ID: 

System is Solaris 10 on intel hardware.

root # uname -a
SunOS ecnext39 5.10 Generic_138889-08 i86pc i386 i86pc

'make check' reports 10 of 145 tests failed. The output for the failed
tests is below. I'd appreciate any feedback on this. We want to put varnish
into production but are worried due to these test failures.
Thanks, Jim # top TEST ././tests/b00020.vtc starting # TEST Check the between_bytes_timeout behaves from parameters ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## s1 Started on 127.0.0.1:9080 ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c1 rxresp ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 delaying 1.5 second(s) ### s1 shutting fd 8 ## s1 Ending ---- c1 EXPECT resp.status (200) == 503 (503) failed ---- TEST FILE: ././tests/b00020.vtc ---- TEST DESCRIPTION: Check the between_bytes_timeout behaves from parameters FAIL: ./tests/b00020.vtc # top TEST ././tests/b00021.vtc starting # TEST Check the between_bytes_timeout behaves from vcl ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## s1 Started on 127.0.0.1:9080 ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c1 rxresp ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 delaying 1.5 second(s) ### s1 shutting fd 8 ## s1 Ending ---- c1 EXPECT resp.status (200) == 503 (503) failed ---- TEST FILE: ././tests/b00021.vtc ---- TEST DESCRIPTION: Check the between_bytes_timeout behaves from vcl FAIL: ./tests/b00021.vtc # top TEST ././tests/b00022.vtc starting # TEST Check the 
between_bytes_timeout behaves from backend definition ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## s1 Started on 127.0.0.1:9080 ### v1 opening CLI connection ### v1 CLI connection fd = 7 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c1 rxresp ### s1 Accepted socket fd is 4 ### s1 rxreq ### s1 delaying 1.5 second(s) ### s1 shutting fd 4 ---- c1 EXPECT resp.status (200) == 503 (503) failed ---- TEST FILE: ././tests/b00022.vtc ---- TEST DESCRIPTION: Check the between_bytes_timeout behaves from backend definition FAIL: ./tests/b00022.vtc # top TEST ././tests/b00023.vtc starting # TEST Check that the first_byte_timeout works from parameters ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## s1 Started on 127.0.0.1:9080 ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c1 rxresp ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 delaying 1.5 second(s) ### s1 shutting fd 8 ## s1 Ending ---- c1 EXPECT resp.status (200) == 503 (503) failed ---- TEST FILE: ././tests/b00023.vtc ---- TEST DESCRIPTION: Check that the first_byte_timeout works from parameters FAIL: ./tests/b00023.vtc # top TEST ././tests/b00024.vtc starting # TEST Check that the first_byte_timeout works from vcl ## s1 Starting server ### s1 
listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## s1 Started on 127.0.0.1:9080 ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c1 rxresp ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 delaying 1.5 second(s) ### s1 shutting fd 8 ## s1 Ending ---- c1 EXPECT resp.status (200) == 503 (503) failed ---- TEST FILE: ././tests/b00024.vtc ---- TEST DESCRIPTION: Check that the first_byte_timeout works from vcl FAIL: ./tests/b00024.vtc # top TEST ././tests/b00025.vtc starting # TEST Check that the first_byte_timeout works from backend definition ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c1 rxresp ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 delaying 1.5 second(s) ### s1 shutting fd 8 ## s1 Ending ---- c1 EXPECT resp.status (200) == 503 (503) failed ---- TEST FILE: ././tests/b00025.vtc ---- TEST DESCRIPTION: Check that the first_byte_timeout works from backend definition FAIL: ./tests/b00025.vtc ... 
# top TEST ././tests/r00345.vtc starting # TEST #345, ESI waitinglist trouble ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -p diag_bitmap=0x20 ### v1 opening CLI connection ### v1 CLI connection fd = 5 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c2 Starting client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ## c2 Waiting for client ## c2 Started ### c2 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c1 rxresp ### c2 Connected to 127.0.0.1:9081 fd is 10 ### s1 Accepted socket fd is 4 ### s1 rxreq ### s1 rxreq ### c2 rxresp ### s1 delaying 1 second(s) ### s1 shutting fd 4 ## s1 Ending ### c2 Closing fd 10 ## c2 Ending ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 4 ### c1 rxresp ### c1 Closing fd 4 ## c1 Ending # top RESETTING after ././tests/r00345.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ---- ? HTTP rx failed (Error 0) ---- TEST FILE: ././tests/r00345.vtc ---- TEST DESCRIPTION: #345, ESI waitinglist trouble FAIL: ./tests/r00345.vtc ... 
# top TEST ././tests/v00009.vtc starting # TEST Test round robin director ## s1 Starting server ### s1 listen on 127.0.0.1:2000 (fd 3) ## s2 Starting server ## s1 Started on 127.0.0.1:2000 ### s2 listen on 127.0.0.1:3000 (fd 4) ## s3 Starting server ## s2 Started on 127.0.0.1:3000 ### s3 listen on 127.0.0.1:4000 (fd 7) ## s4 Starting server ## s3 Started on 127.0.0.1:4000 ### s4 listen on 127.0.0.1:5000 (fd 9) ## v1 Launch ## s4 Started on 127.0.0.1:5000 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 11 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 15 ### c1 rxresp ### s1 Accepted socket fd is 5 ### s1 rxreq ### s1 shutting fd 5 ## s1 Ending ### c1 rxresp ### s2 Accepted socket fd is 6 ### s2 rxreq ### s2 shutting fd 6 ## s2 Ending ### c1 rxresp ### s3 Accepted socket fd is 8 ### s3 rxreq ### s3 shutting fd 8 ## s3 Ending ### c1 rxresp ### s4 Accepted socket fd is 10 ### s4 rxreq ### s4 shutting fd 10 ## s4 Ending ### c1 Closing fd 15 ## c1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:2000 (fd 3) ## s2 Starting server ### s2 listen on 127.0.0.1:3000 (fd 4) ## s1 Started on 127.0.0.1:2000 ## c2 Starting client ## s2 Started on 127.0.0.1:3000 ## c2 Waiting for client ## c2 Started ### c2 Connect to 127.0.0.1:9081 ### c2 Connected to 127.0.0.1:9081 fd is 8 ### c2 rxresp ### s2 Accepted socket fd is 6 ### s2 rxreq ### s2 shutting fd 6 ## s2 Ending ---- c2 EXPECT resp.http.content-length (2) == 1 (1) failed ---- TEST FILE: ././tests/v00009.vtc ---- TEST DESCRIPTION: Test round robin director FAIL: ./tests/v00009.vtc ... 
===============================================
8 of 145 tests failed
Please report to varnish-dev at projects.linpro.no
===============================================

From jhayter at manta.com  Wed Dec 16 22:13:33 2009
From: jhayter at manta.com (Jim Hayter)
Date: Wed, 16 Dec 2009 22:13:33 -0000
Subject: make check errors for varnish 2.0.6 on Solaris 10
In-Reply-To: 
References: 
Message-ID: 

There is a typo below: 'make check' reports *8* of 145 tests failed, not 10.

-----Original Message-----
From: Jim Hayter
Sent: Wednesday, December 16, 2009 2:24 PM
To: 'varnish-dev at projects.linpro.no'
Subject: make check errors for varnish 2.0.6 on Solaris 10

System is Solaris 10 on intel hardware.

root # uname -a
SunOS ecnext39 5.10 Generic_138889-08 i86pc i386 i86pc

'make check' reports 10 of 145 tests failed. The output for the failed
tests is below. I'd appreciate any feedback on this. We want to put varnish
into production but are worried due to these test failures.

Thanks,
Jim

...

===============================================
8 of 145 tests failed
Please report to varnish-dev at projects.linpro.no
===============================================