From slink at schokola.de Sun Mar 1 16:49:25 2009
From: slink at schokola.de (Nils Goroll)
Date: Sun, 01 Mar 2009 17:49:25 +0100
Subject: Timeouts
In-Reply-To: <7D9874FD-F4F9-4096-968B-E8B51418D692@omniti.com>
References: <49A71CB3.5060905@schokola.de> <2202B2DC-E228-44CF-A7CF-E853AB7B7ECC@omniti.com> <49A9937A.80105@schokola.de> <7D9874FD-F4F9-4096-968B-E8B51418D692@omniti.com>
Message-ID: <49AABC95.8090409@schokola.de>

Theo,

> http://bugs.opensolaris.org/view_bug.do?bug_id=4641715
>
> Either that or application level support for timeouts. Given Varnish's
> design, this wouldn't be that hard, but still a bit of work.

Thank you for your answer. I was hoping that someone might have started
work on this, but I understand that this won't be too easy to implement.

Nils

From slink at schokola.de Sun Mar 1 16:53:35 2009
From: slink at schokola.de (Nils Goroll)
Date: Sun, 01 Mar 2009 17:53:35 +0100
Subject: Is anyone using ESI with a lot of traffic?
In-Reply-To: <5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com>
References: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com> <4a05e1020902271333xe4d56c3g81b8e7c1b3a59946@mail.gmail.com> <5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com>
Message-ID: <49AABD8F.2080305@schokola.de>

John,

Thank you for sharing your experience and your settings with the community!

Nils

From jesus at omniti.com Sun Mar 1 17:16:25 2009
From: jesus at omniti.com (Theo Schlossnagle)
Date: Sun, 1 Mar 2009 12:16:25 -0500
Subject: Timeouts
In-Reply-To: <49AABC95.8090409@schokola.de>
References: <49A71CB3.5060905@schokola.de> <2202B2DC-E228-44CF-A7CF-E853AB7B7ECC@omniti.com> <49A9937A.80105@schokola.de> <7D9874FD-F4F9-4096-968B-E8B51418D692@omniti.com> <49AABC95.8090409@schokola.de>
Message-ID: <140D075E-8BC0-49F1-8322-581D4CF3CE2A@omniti.com>

On Mar 1, 2009, at 11:49 AM, Nils Goroll wrote:
> Theo,
>
>> http://bugs.opensolaris.org/view_bug.do?bug_id=4641715
>> Either that or application level support for timeouts. Given
>> Varnish's design, this wouldn't be that hard, but still a bit of work.
>
> Thank you for your answer. I was hoping that someone might have
> started work on this, but I understand that this won't be too easy
> to implement.

I see two approaches:

(1) The traditional: change all the read/write/readv/writev/send/recv/sendfile
calls to non-blocking counterparts and wrap them in a poll loop with the
timeout management.

(2) The contemporary: create a timeout management thread that orchestrates
interrupting the system calls in the threads. It's kinda magical. It's
basically an implementation of alarm() in each thread, where the alarms are
actually managed in a watching thread and it raises a signal in the requested
thread by using pthread_kill explicitly.

I've implemented this before and it works. But it is a bit painful, and given
that this implementation would exist to work around a shortcoming _only_ of
the Solaris kernel, it seems crazy. I think I'll go with the general Varnish
developer attitude here: "we expect the OS to support the advanced features
we need." The upside is that it would work well with sendfile.

I still wish all network I/O was done in an event system... and we just had a
lot of concurrently operating event loops all consuming events full-tilt. I've
had better success with that. Que sera sera. Varnish is still the fastest
thing around.

--
Theo Schlossnagle
Principal/CEO OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
w: http://omniti.com
p: +1.443.325.1357 x201
f: +1.410.872.4911

From phk at phk.freebsd.dk Sun Mar 1 18:23:08 2009
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Sun, 01 Mar 2009 18:23:08 +0000
Subject: Timeouts
In-Reply-To: Your message of "Sun, 01 Mar 2009 17:49:25 +0100." <49AABC95.8090409@schokola.de>
Message-ID: <3318.1235931788@critter.freebsd.dk>

In message <49AABC95.8090409 at schokola.de>, Nils Goroll writes:
>Theo,
>
>> http://bugs.opensolaris.org/view_bug.do?bug_id=4641715
>>
>> Either that or application level support for timeouts. Given Varnish's
>> design, this wouldn't be that hard, but still a bit of work.
>
>Thank you for your answer. I was hoping that someone might have started
>work on this, but I understand that this won't be too easy to implement.

They can all be implemented in userland, but at a considerable cost in
system calls required.

--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From slink at schokola.de Mon Mar 2 15:56:10 2009
From: slink at schokola.de (Nils Goroll)
Date: Mon, 02 Mar 2009 16:56:10 +0100
Subject: Timeouts
In-Reply-To: <140D075E-8BC0-49F1-8322-581D4CF3CE2A@omniti.com>
References: <49A71CB3.5060905@schokola.de> <2202B2DC-E228-44CF-A7CF-E853AB7B7ECC@omniti.com> <49A9937A.80105@schokola.de> <7D9874FD-F4F9-4096-968B-E8B51418D692@omniti.com> <49AABC95.8090409@schokola.de> <140D075E-8BC0-49F1-8322-581D4CF3CE2A@omniti.com>
Message-ID: <49AC019A.8000308@schokola.de>

Theo,

> I see two approaches:

Thank you for your thoughts. To me, this sounds like it might actually be
more appropriate to spend the effort on the Solaris source instead.

Cheers, Nils

From sky at crucially.net Mon Mar 2 21:10:16 2009
From: sky at crucially.net (Artur Bergman)
Date: Mon, 2 Mar 2009 13:10:16 -0800
Subject: Is anyone using ESI with a lot of traffic?
In-Reply-To: <4a05e1020902272102x777bfbc1t2d4ea117fc96cd7a@mail.gmail.com>
References: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com> <4a05e1020902271333xe4d56c3g81b8e7c1b3a59946@mail.gmail.com> <5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com> <4a05e1020902272102x777bfbc1t2d4ea117fc96cd7a@mail.gmail.com>
Message-ID: <3B401FE3-2BD8-4930-B94A-0819D14476A7@crucially.net>

HAProxy doesn't do keep-alive, so it makes everything slower.

Artur

On Feb 27, 2009, at 9:02 PM, Cloude Porteus wrote:
> John,
> Thanks so much for the info, that's a huge help for us!!!
>
> I love HAProxy and Willy has been awesome to us. We run everything
> through it, since it's really easy to monitor and also easy to debug
> where the lag is when something in the chain is not responding fast
> enough. It's been rock solid for us.
>
> The nice part for us is that we can use it as a content switcher to
> send all /xxx traffic or certain user-agent traffic to different
> backends.
>
> best,
> cloude
>
> On Fri, Feb 27, 2009 at 2:24 PM, John Adams wrote:
>> cc'ing the varnish dev list for comments...
>>
>> On Feb 27, 2009, at 1:33 PM, Cloude Porteus wrote:
>>
>> John,
>> Good to hear from you. You must be slammed at Twitter. I'm happy to
>> hear that ESI is holding up for you. It's been in my backlog since you
>> mentioned it to me pre-Twitter.
>>
>> Any performance info would be great.
>>
>> Any comments on our setup are welcome. You may also choose to call us
>> crazypants. Many, many thanks to Artur Bergman of Wikia for helping us get
>> this configuration straightened out.
>>
>> Right now, we're running varnish (on search) in a bit of a non-standard way.
>> We plan to use it in the normal fashion (varnish to Internet, nothing
>> in between) on our API at some point. We're running version 2.0.2, no
>> patches. Cache hit rates range from 10% to 30%, or higher when a real-time
>> event is flooding search.
>>
>> 2.0.2 is quite stable for us, with the occasional child death here and there
>> when we get massive headers coming in that flood sess_workspace. I hear this
>> is fixed in 2.0.3, but haven't had time to try it yet.
>>
>> We have a number of search boxes, and each search box has an apache instance
>> on it, and a varnish instance. We plan to merge the varnish instances at some
>> point, but we use very low TTLs (Twitter is the real-time web!) and don't
>> see much of a savings by running fewer of them.
>>
>> We do:
>>
>> Apache --> Varnish --> Apache --> Mongrels
>>
>> Apaches are using mod_proxy_balancer. The front end apache is there because
>> we've long had a fear that Varnish would crash on us, which it did many
>> times prior to our figuring out the proper parameters for startup. We have
>> two entries in that balancer: either the request goes to varnish, or, if
>> varnish bombs out, it goes directly to the mongrel.
>>
>> We do this because we need a load balancing algorithm that varnish doesn't
>> support, called bybusiness. Without bybusiness, varnish tries to direct
>> requests to Mongrels that are busy, and requests end up in the listen queue.
>> That adds ~100-150 ms to load times, and that's no good for our desired
>> service times of 200-250 ms (or less).
>>
>> We'd be so happy if someone put bybusiness into Varnish's backend load
>> balancing, but it's not there yet.
>>
>> We also know that taking the extra hop through localhost costs us next to
>> nothing in service time, so it's good to have Apache there in case we need to
>> yank out Varnish. In the future, we might get rid of Apache and use HAProxy
>> (its load balancing and backend monitoring are much richer than Apache's,
>> and it has a beautiful HTTP interface to look at).
>>
>> Some variables and our decisions:
>>
>>   -p obj_workspace=4096 \
>>   -p sess_workspace=262144 \
>>
>> Absolutely vital! Varnish does not allocate enough space by default for
>> headers, regexps on cookies, and otherwise. It was increased in 2.0.3, but
>> really, not increased enough. Without this we were panicking every 20-30
>> requests and overflowing the sess hash.
>>
>>   -p listen_depth=8192 \
>>
>> 8192 is probably excessive for now. If we're queuing 8k conns, something is
>> really broken!
>>
>>   -p log_hashstring=off \
>>
>> Who cares about this - we don't need it.
>>
>>   -p lru_interval=60 \
>>
>> We have many small objects in the search cache. Run LRU more often.
>>
>>   -p sess_timeout=10 \
>>
>> If you keep session data around for too long, you waste memory.
>>
>>   -p shm_workspace=32768 \
>>
>> Give us a bit more room in shm.
>>
>>   -p ping_interval=1 \
>>
>> Frequent pings in case the child dies on us.
>>
>>   -p thread_pools=4 \
>>   -p thread_pool_min=100 \
>>
>> This must match up with VARNISH_MIN_THREADS. We use four pools
>> (pools * thread_pool_min == VARNISH_MIN_THREADS).
>>
>>   -p srcaddr_ttl=0 \
>>
>> Disable the (effectively unused) per source-IP statistics.
>>
>>   -p esi_syntax=1
>>
>> Disable ESI syntax verification so we can use it to process JSON requests.
>>
>> If you have more than 2.1M objects, you should also add:
>>
>> # -h classic,250007 = recommended value for 2.1M objects
>> #    number should be 1/10 expected working set.
>>
>> In our VCL, we have a few fancy tricks that we use. We label the cache
>> server and cache hit/miss rate in vcl_deliver with this code:
>>
>> Top of VCL:
>>
>> C{
>> #include
>> #include
>> char myhostname[255] = "";
>> }C
>>
>> vcl_deliver:
>>
>> C{
>>   VRT_SetHdr(sp, HDR_RESP, "\014X-Cache-Svr:", myhostname,
>>       vrt_magic_string_end);
>> }C
>>   /* mark hit/miss on the request */
>>   if (obj.hits > 0) {
>>     set resp.http.X-Cache = "HIT";
>>     set resp.http.X-Cache-Hits = obj.hits;
>>   } else {
>>     set resp.http.X-Cache = "MISS";
>>   }
>>
>> vcl_recv:
>>
>> C{
>>   if (myhostname[0] == '\0') {
>>     /* only get hostname once - restart required if hostname changes */
>>     gethostname(myhostname, 255);
>>   }
>> }C
>>
>> Portions of /etc/sysconfig/varnish follow...
>>
>> # The minimum number of worker threads to start
>> VARNISH_MIN_THREADS=400
>> # The maximum number of worker threads to start
>> VARNISH_MAX_THREADS=1000
>> # Idle timeout for worker threads
>> VARNISH_THREAD_TIMEOUT=60
>> # Cache file location
>> VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
>> # Cache file size: in bytes, optionally using k / M / G / T suffix,
>> # or in percentage of available disk space using the % suffix.
>> VARNISH_STORAGE_SIZE="8G"
>> #
>> # Backend storage specification
>> VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}"
>> # Default TTL used when the backend does not specify one
>> VARNISH_TTL=5
>> # the working directory
>> DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
>>   -f ${VARNISH_VCL_CONF} \
>>   -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
>>   -t ${VARNISH_TTL} \
>>   -n ${VARNISH_WORKDIR} \
>>   -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
>>   -u varnish -g varnish \
>>   -p obj_workspace=4096 \
>>   -p sess_workspace=262144 \
>>   -p listen_depth=8192 \
>>   -p log_hashstring=off \
>>   -p lru_interval=60 \
>>   -p sess_timeout=10 \
>>   -p shm_workspace=32768 \
>>   -p ping_interval=1 \
>>   -p thread_pools=4 \
>>   -p thread_pool_min=100 \
>>   -p srcaddr_ttl=0 \
>>   -p esi_syntax=1 \
>>   -s ${VARNISH_STORAGE}"
>>
>> ---
>> John Adams
>> Twitter Operations
>> jna at twitter.com
>> http://twitter.com/netik
>
> --
> VP of Product Development
> Instructables.com
>
> http://www.instructables.com/member/lebowski
> _______________________________________________
> varnish-dev mailing list
> varnish-dev at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-dev

From cloude at instructables.com Mon Mar 2 21:33:01 2009
From: cloude at instructables.com (Cloude Porteus)
Date: Mon, 2 Mar 2009 13:33:01 -0800
Subject: Is anyone using ESI with a lot of traffic?
In-Reply-To: <3B401FE3-2BD8-4930-B94A-0819D14476A7@crucially.net>
References: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com> <4a05e1020902271333xe4d56c3g81b8e7c1b3a59946@mail.gmail.com> <5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com> <4a05e1020902272102x777bfbc1t2d4ea117fc96cd7a@mail.gmail.com> <3B401FE3-2BD8-4930-B94A-0819D14476A7@crucially.net>
Message-ID: <4a05e1020903021333j5e862bc3ybb32f28f61eb8690@mail.gmail.com>

I believe TCP Keep-alive has been supported in HAProxy since version 1.2.
We've been using 1.3.x for at least a year.

-cloude

On Mon, Mar 2, 2009 at 1:10 PM, Artur Bergman wrote:
> HAProxy doesn't do keep-alive, so it makes everything slower.
>
> Artur
>
> [quoted copy of the earlier exchange trimmed; see John Adams' message above]

--
VP of Product Development
Instructables.com

http://www.instructables.com/member/lebowski

From sky at crucially.net Mon Mar 2 21:40:16 2009
From: sky at crucially.net (Artur Bergman)
Date: Mon, 2 Mar 2009 13:40:16 -0800
Subject: Is anyone using ESI with a lot of traffic?
In-Reply-To: <5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com>
References: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com> <4a05e1020902271333xe4d56c3g81b8e7c1b3a59946@mail.gmail.com> <5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com>
Message-ID: <9DB31CBD-EE39-4621-A551-E24B5688DAFC@crucially.net>

On Feb 27, 2009, at 2:24 PM, John Adams wrote:
> cc'ing the varnish dev list for comments...
>
> On Feb 27, 2009, at 1:33 PM, Cloude Porteus wrote:
>
>> John,
>> Good to hear from you. You must be slammed at Twitter. I'm happy to
>> hear that ESI is holding up for you. It's been in my backlog since you
>> mentioned it to me pre-Twitter.
>>
>> Any performance info would be great.
>
> Any comments on our setup are welcome. You may also choose to call us
> crazypants. Many, many thanks to Artur Bergman of Wikia for helping us
> get this configuration straightened out.

Thanks John :)

I'll describe the settings we use. (We don't use ESI because of gzip.)

The first important step is that we put the shmlog on tmpfs:

tmpfs /usr/var/varnish/ tmpfs noatime,defaults,size=150M 0 0
/dev/md0 /var/lib/varnish ext2 noatime,nodiratime,norelatime 0 0

Notice also ext2; we don't care about journaling.
(Ignore the broken paths.) This is because Linux will asynchronously write
the log to disk, which puts a large I/O pressure on the system (interfering
with your normal reads if you use the same disks). It also scales the I/O
load with traffic rather than with working set.

# Maximum number of open files (for ulimit -n)
NFILES=131072
# Locked shared memory (for ulimit -l)
# Default log size is 82MB + header
MEMLOCK=90000
DAEMON_COREFILE_LIMIT="unlimited"
DAEMON_OPTS="-a :80 \
  -T localhost:6082 \
  -f /etc/varnish/wikia.vcl \
  -p obj_workspace=4096 \               # We have lots of objects
  -p sess_workspace=32768 \             # Need lots of session space
  -p listen_depth=8192 \
  -p ping_interval=1 \
  -s file,/var/lib/varnish/mmap,120G \  # lots of mmap
  -p log_hashstring=off \
  -h classic,250007 \                   # 2.5 million objects
  -p thread_pool_max=4000 \
  -p lru_interval=60 \
  -p esi_syntax=0x00000003 \
  -p sess_timeout=10 \
  -p thread_pools=4 \
  -p thread_pool_min=500 \              # we force 4000 threads pre-created
                                        # otherwise we run into overflows
  -p shm_workspace=32768 \              # avoid shm_mtx
  -p srcaddr_ttl=0"                     # avoid hash lookup

# we link geoip into the vcl
CC_COMMAND='cc_command=exec cc -fpic -shared -Wl,-x -L/usr/local/lib/ -lGeoIP -o %o %s'

#### VCL

# declare the function signatures so we can use them
C{
#include
double TIM_real(void);
void TIM_format(double t, char *p);
}C

# init GeoIP code
C{
#include
#include
#include
#include
#include
#include

pthread_mutex_t geoip_mutex = PTHREAD_MUTEX_INITIALIZER;
GeoIP* gi;

void geo_init () {
    if(!gi) {
        gi = GeoIP_open_type(GEOIP_CITY_EDITION_REV1,GEOIP_MEMORY_CACHE);
    }
}
}C

vcl_recv {
    # will normalize proxied requests, specifically curl -x foo:80
    set req.url = regsub(req.url, "http://[^/]*","");

    # our error handler for geoiplookup
    if(req.http.host == "geoiplookup.wikia.com") {
        error 200 "Ok";
    }

    # lvs check
    if (req.url == "/lvscheck.html") {
        error 200 "Ok";
    }

    # normalize Accept-Encoding to reduce vary
    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            unset req.http.Accept-Encoding;
        }
    }

    # Yahoo uses this to check for 404
    if (req.url ~ "^/SlurpConfirm404") {
        error 404 "Not found";
    }

    set req.grace = 360000s;  # if the backend is down, just serve

    # check for specific cookies, otherwise nuke them
    # save them so we can re-inject them later in pipe or miss
    set req.http.X-Orig-Cookie = req.http.Cookie;
    if(req.http.Cookie ~ "(session|UserID|UserName|Token|LoggedOut)") {
        # dont do anything, the user is logged in
    } else {
        # dont care about any other cookies
        unset req.http.Cookie;
    }
}

# varnish XFF is broken, it doesn't chain them
# if you have chained varnishes, or trust AOL, you need to append them
sub vcl_pipe {
    # do the right XFF processing
    set bereq.http.X-Forwarded-For = req.http.X-Forwarded-For;
    set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded-For, "$", ", ");
    set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded-For, "$", client.ip);
    set bereq.http.Cookie = req.http.X-Orig-Cookie;
}

# this implements purging (we purge all 3 versions of the accept-encoding: none, gzip, deflate)
sub vcl_hit {
    if (req.request == "PURGE") {
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        error 404 "Not purged";
    }
    set bereq.http.X-Forwarded-For = req.http.X-Forwarded-For;
    set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded-For, "$", ", ");
    set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded-For, "$", client.ip);
}

# this marks whether something is cacheable or not; if it isn't, say why
vcl_fetch {
    # so we have access to this in deliver
    set obj.http.X-Orighost = req.http.host;
    set obj.http.X-Origurl = req.url;

    if (!obj.cacheable) {
        set obj.http.X-Cacheable = "NO:Not-Cacheable";
        pass;
    }
    if (obj.http.Cache-Control ~ "private") {
        if(req.http.Cookie ~"(UserID|_session)") {
            set obj.http.X-Cacheable = "NO:Got Session";
        } else {
            set obj.http.X-Cacheable = "NO:Cache-Control=private";
        }
        pass;
    }
    if (obj.http.Set-Cookie ~ "(UserID|_session)") {
        set obj.http.X-Cacheable = "NO:Set-Cookie";
        pass;
    }
    set obj.http.X-Cacheable = "YES";
    set obj.grace = 360000s;
}

# The following sets X-Served-By; if it is already set, it appends to it.
# It also says whether it is a HIT, and how many hits.
sub vcl_deliver {
    # add or append Served-By
    if(!resp.http.X-Served-By) {
        set resp.http.X-Served-By = "varnish8";
        if (obj.hits > 0) {
            set resp.http.X-Cache = "HIT";
        } else {
            set resp.http.X-Cache = "MISS";
        }
        set resp.http.X-Cache-Hits = obj.hits;
    } else {
        # append current data
        set resp.http.X-Served-By = regsub(resp.http.X-Served-By, "$", ", varnish8");
        if (obj.hits > 0) {
            set resp.http.X-Cache = regsub(resp.http.X-Cache, "$", ", HIT");
        } else {
            set resp.http.X-Cache = regsub(resp.http.X-Cache, "$", ", MISS");
        }
        set resp.http.X-Cache-Hits = regsub(resp.http.X-Cache-Hits, "$", ", ");
        set resp.http.X-Cache-Hits = regsub(resp.http.X-Cache-Hits, "$", obj.hits);
    }

    # if the client is another DC, just remove stuff and deliver
    if ( client.ip ~ LON || client.ip ~ SJC || client.ip ~ IOWA ) {
        unset resp.http.X-CPU-Time;
        unset resp.http.X-Real-Time;
        unset resp.http.X-Served-By-Backend;
        unset resp.http.X-User-Id;
        unset resp.http.X-Namespace-Number;
        unset resp.http.X-Orighost;
        unset resp.http.X-Origurl;
        deliver;
    }

    # else do cache-control
    # nuke the headers since they were generally meant for varnish
    # these rules are mostly based on mediawiki rules
    if ( resp.http.X-Pass-Cache-Control ) {
        set resp.http.Cache-Control = resp.http.X-Pass-Cache-Control;
    } elsif ( resp.status == 304 ) {
        # no headers on if-modified-since
    } elsif ( resp.http.X-Origurl ~ ".*/index\.php.*(css|js)" || resp.http.X-Origurl ~ "raw") {
        # dont touch it, let mediawiki decide
    } elsif (resp.http.X-Orighost ~ "images.wikia.com") {
        # lighttpd knows what it is doing
    } elsif (resp.http.X-Orighost ~ "geoiplookup") {
    } else {
        # follow squid content here
        set resp.http.Cache-Control = "private, s-maxage=0, max-age=0, must-revalidate";
    }

    # this will calculate an Expires header which is based on now+max-age
    # if you cache the Expires header, then it won't match max-age since it is static
    if (!resp.status == 304) {
C{
    char *cache = VRT_GetHdr(sp, HDR_REQ, "\016cache-control:");
    char date[40];
    int max_age = 0;
    int want_equals = 0;
    if(cache) {
        while(*cache != '\0') {
            if (want_equals && *cache == '=') {
                cache++;
                max_age = strtoul(cache, 0, 0);
                break;
            }
            if (*cache == 'm' && !memcmp(cache, "max-age", 7)) {
                cache += 7;
                want_equals = 1;
                continue;
            }
            cache++;
        }
        if (max_age) {
            TIM_format(TIM_real() + max_age, date);
            VRT_SetHdr(sp, HDR_RESP, "\010Expires:", date, vrt_magic_string_end);
        }
    }
}C
    #;
    }
}

vcl_error {
    # this implements geoip lookups inside varnish
    # so clients can get the data without hitting the backend
    if(req.http.host == "geoiplookup.wikia.com" || req.url == "/__varnish/geoip") {
        set obj.http.Content-Type = "text/plain";
        set obj.http.cache-control = "private, s-maxage=0, max-age=360";
        set obj.http.X-Orighost = req.http.host;
C{
    char *ip = VRT_IP_string(sp, VRT_r_client_ip(sp));
    char date[40];
    char json[255];
    pthread_mutex_lock(&geoip_mutex);
    if(!gi) {
        geo_init();
    }
    GeoIPRecord *record = GeoIP_record_by_addr(gi, ip);
    if(record) {
        snprintf(json, 255,
            "Geo = {\"city\":\"%s\",\"country\":\"%s\",\"lat\":\"%f\",\"lon\":\"%f\",\"classC\":\"%s\",\"netmask\":\"%d\"}",
            record->city, record->country_code, record->latitude,
            record->longitude, ip, GeoIP_last_netmask(gi) );
        pthread_mutex_unlock(&geoip_mutex);
        VRT_synth_page(sp, 0, json, vrt_magic_string_end);
    } else {
        pthread_mutex_unlock(&geoip_mutex);
        VRT_synth_page(sp, 0, "Geo = {}", vrt_magic_string_end);
    }
    TIM_format(TIM_real(), date);
    VRT_SetHdr(sp, HDR_OBJ, "\016Last-Modified:", date, vrt_magic_string_end);
}C
    }

    # check if site is working
    if(req.url ~ "lvscheck.html") {
        synthetic {"varnish is okay"};
        deliver;
    }
    deliver;
}

############# sysctl

net.ipv4.ip_local_port_range = 1024 65536
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
net.ipv4.tcp_fin_timeout = 3
net.ipv4.tcp_tw_recycle = 1
net.core.netdev_max_backlog = 30000
net.ipv4.tcp_no_metrics_save=1
net.core.somaxconn = 262144
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2

These are mostly cargo-culted from previous emails here.

Cheers
Artur

From sky at crucially.net Mon Mar 2 21:42:39 2009
From: sky at crucially.net (Artur Bergman)
Date: Mon, 2 Mar 2009 13:42:39 -0800
Subject: Is anyone using ESI with a lot of traffic?
In-Reply-To: <4a05e1020903021333j5e862bc3ybb32f28f61eb8690@mail.gmail.com>
References: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com> <4a05e1020902271333xe4d56c3g81b8e7c1b3a59946@mail.gmail.com> <5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com> <4a05e1020902272102x777bfbc1t2d4ea117fc96cd7a@mail.gmail.com> <3B401FE3-2BD8-4930-B94A-0819D14476A7@crucially.net> <4a05e1020903021333j5e862bc3ybb32f28f61eb8690@mail.gmail.com>
Message-ID: <235F75D0-DCD3-4DB9-8CE9-5874839BC9B2@crucially.net>

Then the page is wrong:

"Keep-alive was invented to reduce CPU usage on servers when CPUs were 100
times slower. But what is not said is that persistent connections consume a
lot of memory while not being usable by anybody except the client who opened
them. Today in 2006, CPUs are very cheap and memory is still limited to a few
gigabytes by the architecture or the price. If a site needs keep-alive, there
is a real problem. Highly loaded sites often disable keep-alive to support
the maximum number of simultaneous clients. The real downside of not having
keep-alive is a slightly increased latency to fetch objects.
Browsers double the number of concurrent connections on non-keepalive
sites to compensate for this"

(and also widely incorrect) :)

Artur

On Mar 2, 2009, at 1:33 PM, Cloude Porteus wrote:

> I believe TCP Keep-alive has been supported in HAProxy since version
> 1.2. We've been using 1.3.x for at least a year.
>
> -cloude
>
> On Mon, Mar 2, 2009 at 1:10 PM, Artur Bergman wrote:
>> HAProxy doesn't do keep-alive, so it makes everything slower.
>>
>> Artur
>>
>> On Feb 27, 2009, at 9:02 PM, Cloude Porteus wrote:
>>
>>> John,
>>> Thanks so much for the info, that's a huge help for us!!!
>>>
>>> I love HAProxy and Willy has been awesome to us. We run everything
>>> through it, since it's really easy to monitor and also easy to debug
>>> where the lag is when something in the chain is not responding fast
>>> enough. It's been rock solid for us.
>>>
>>> The nice part for us is that we can use it as a content switcher to
>>> send all /xxx traffic or certain user-agent traffic to different
>>> backends.
>>>
>>> best,
>>> cloude
>>>
>>> On Fri, Feb 27, 2009 at 2:24 PM, John Adams wrote:
>>>>
>>>> cc'ing the varnish dev list for comments...
>>>> On Feb 27, 2009, at 1:33 PM, Cloude Porteus wrote:
>>>>
>>>> John,
>>>> Good to hear from you. You must be slammed at Twitter. I'm happy to
>>>> hear that ESI is holding up for you. It's been in my backlog since you
>>>> mentioned it to me pre-Twitter.
>>>>
>>>> Any performance info would be great.
>>>>
>>>> Any comments on our setup are welcome. You may also choose to call us
>>>> crazypants. Many, many thanks to Artur Bergman of Wikia for helping us
>>>> get this configuration straightened out.
>>>> Right now, we're running varnish (on search) in a bit of a
>>>> non-standard way.
>>>> We plan to use it in the normal fashion (varnish to Internet, nothing
>>>> in between) on our API at some point. We're running version 2.0.2,
>>>> no patches.
>>>> Cache hit rates range from 10% to 30%, or higher when a real-time
>>>> event is flooding search.
>>>> 2.0.2 is quite stable for us, with the occasional child death here
>>>> and there when we get massive headers coming in that flood
>>>> sess_workspace. I hear this is fixed in 2.0.3, but haven't had time
>>>> to try it yet.
>>>> We have a number of search boxes, and each search box has an apache
>>>> instance on it, and a varnish instance. We plan to merge the varnish
>>>> instances at some point, but we use very low TTLs (Twitter is the
>>>> real time web!) and don't see much of a savings by running fewer of
>>>> them.
>>>> We do:
>>>> Apache --> Varnish --> Apache --> Mongrels
>>>> Apaches are using mod_proxy_balancer. The front end apache is there
>>>> because we've long had a fear that Varnish would crash on us, which
>>>> it did many times prior to our figuring out the proper parameters
>>>> for startup. We have two entries in that balancer. Either the
>>>> request goes to varnish, or, if varnish bombs out, it goes directly
>>>> to the mongrel.
>>>> We do this because we need a load balancing algorithm that varnish
>>>> doesn't support, called bybusyness. Without bybusyness, varnish
>>>> tries to direct requests to Mongrels that are busy, and requests end
>>>> up in the listen queue. That adds ~100-150 ms to load times, and
>>>> that's no good for our desired service times of 200-250 ms (or
>>>> less).
>>>> We'd be so happy if someone put bybusyness into Varnish's backend
>>>> load balancing, but it's not there yet.
>>>> We also know that taking the extra hop through localhost costs us
>>>> next to nothing in service time, so it's good to have Apache there
>>>> in case we need to yank out Varnish.
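The least-busy ("bybusiness", properly Apache's bybusyness) selection John describes can be sketched in a few lines of C. This is purely illustrative: the struct and function names are hypothetical, not Varnish's director API or mod_proxy_balancer's implementation.

```c
#include <stddef.h>

/* Sketch of least-busy backend selection: always pick the backend with
 * the fewest requests currently in flight, so one slow Mongrel does not
 * quietly accumulate a listen queue while idle workers sit unused. */
struct backend {
    const char *name;
    int busy;          /* requests currently in flight */
};

/* Return the index of the least-busy backend (first one wins on ties). */
size_t pick_least_busy(const struct backend *b, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (b[i].busy < b[best].busy)
            best = i;
    return best;
}
```

The point of the design, versus round-robin, is that request service times on the Mongrels are uneven, so only an in-flight count keeps a stalled worker from receiving more traffic.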
>>>> In the future, we might get rid of Apache and use HAProxy (its load
>>>> balancing and backend monitoring are much richer than Apache's, and
>>>> it has a beautiful HTTP interface to look at.)
>>>> Some variables and our decisions:
>>>> -p obj_workspace=4096 \
>>>> -p sess_workspace=262144 \
>>>> Absolutely vital! Varnish does not allocate enough space by default
>>>> for headers, regexps on cookies, and otherwise. It was increased in
>>>> 2.0.3, but really, not increased enough. Without this we were
>>>> panicking every 20-30 requests and overflowing the sess hash.
>>>> -p listen_depth=8192 \
>>>> 8192 is probably excessive for now. If we're queuing 8k conns,
>>>> something is really broke!
>>>> -p log_hashstring=off \
>>>> Who cares about this - we don't need it.
>>>> -p lru_interval=60 \
>>>> We have many small objects in the search cache. Run LRU more often.
>>>> -p sess_timeout=10 \
>>>> If you keep session data around for too long, you waste memory.
>>>> -p shm_workspace=32768 \
>>>> Give us a bit more room in shm.
>>>> -p ping_interval=1 \
>>>> Frequent pings in case the child dies on us.
>>>> -p thread_pools=4 \
>>>> -p thread_pool_min=100 \
>>>> This must match up with VARNISH_MIN_THREADS. We use four pools
>>>> (pools * thread_pool_min == VARNISH_MIN_THREADS).
>>>> -p srcaddr_ttl=0 \
>>>> Disable the (effectively unused) per source-IP statistics.
>>>> -p esi_syntax=1
>>>> Disable ESI syntax verification so we can use it to process JSON
>>>> requests.
>>>> If you have more than 2.1M objects, you should also add:
>>>> # -h classic,250007 = recommended value for 2.1M objects
>>>> # number should be 1/10 expected working set.
>>>>
>>>> In our VCL, we have a few fancy tricks that we use.
We label the >>>> cache >>>> server and cache hit/miss rate in vcl_deliver with this code: >>>> Top of VCL: >>>> C{ >>>> #include >>>> #include >>>> char myhostname[255] = ""; >>>> >>>> }C >>>> vcl_deliver: >>>> C{ >>>> VRT_SetHdr(sp, HDR_RESP, "\014X-Cache-Svr:", myhostname, >>>> vrt_magic_string_end); >>>> }C >>>> /* mark hit/miss on the request */ >>>> if (obj.hits > 0) { >>>> set resp.http.X-Cache = "HIT"; >>>> set resp.http.X-Cache-Hits = obj.hits; >>>> } else { >>>> set resp.http.X-Cache = "MISS"; >>>> } >>>> >>>> vcl_recv: >>>> C{ >>>> if (myhostname[0] == '\0') { >>>> /* only get hostname once - restart required if hostname >>>> changes */ >>>> gethostname(myhostname, 255); >>>> } >>>> }C >>>> >>>> Portions of /etc/sysconfig/varnish follow... >>>> # The minimum number of worker threads to start >>>> VARNISH_MIN_THREADS=400 >>>> # The Maximum number of worker threads to start >>>> VARNISH_MAX_THREADS=1000 >>>> # Idle timeout for worker threads >>>> VARNISH_THREAD_TIMEOUT=60 >>>> # Cache file location >>>> VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin >>>> # Cache file size: in bytes, optionally using k / M / G / T suffix, >>>> # or in percentage of available disk space using the % suffix. 
>>>> VARNISH_STORAGE_SIZE="8G" >>>> # >>>> # Backend storage specification >>>> VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}" >>>> # Default TTL used when the backend does not specify one >>>> VARNISH_TTL=5 >>>> # the working directory >>>> DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ >>>> -f ${VARNISH_VCL_CONF} \ >>>> -T >>>> ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \ >>>> -t ${VARNISH_TTL} \ >>>> -n ${VARNISH_WORKDIR} \ >>>> -w >>>> ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},$ >>>> {VARNISH_THREAD_TIMEOUT} \ >>>> -u varnish -g varnish \ >>>> -p obj_workspace=4096 \ >>>> -p sess_workspace=262144 \ >>>> -p listen_depth=8192 \ >>>> -p log_hashstring=off \ >>>> -p lru_interval=60 \ >>>> -p sess_timeout=10 \ >>>> -p shm_workspace=32768 \ >>>> -p ping_interval=1 \ >>>> -p thread_pools=4 \ >>>> -p thread_pool_min=100 \ >>>> -p srcaddr_ttl=0 \ >>>> -p esi_syntax=1 \ >>>> -s ${VARNISH_STORAGE}" >>>> >>>> --- >>>> John Adams >>>> Twitter Operations >>>> jna at twitter.com >>>> http://twitter.com/netik >>>> >>>> >>>> >>>> >>> >>> >>> >>> -- >>> VP of Product Development >>> Instructables.com >>> >>> http://www.instructables.com/member/lebowski >>> _______________________________________________ >>> varnish-dev mailing list >>> varnish-dev at projects.linpro.no >>> http://projects.linpro.no/mailman/listinfo/varnish-dev >> >> > > > > -- > VP of Product Development > Instructables.com > > http://www.instructables.com/member/lebowski From sky at crucially.net Mon Mar 2 21:45:10 2009 From: sky at crucially.net (Artur Bergman) Date: Mon, 2 Mar 2009 13:45:10 -0800 Subject: Is anyone using ESI with a lot of traffic? 
In-Reply-To: <4a05e1020903021333j5e862bc3ybb32f28f61eb8690@mail.gmail.com> References: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com> <4a05e1020902271333xe4d56c3g81b8e7c1b3a59946@mail.gmail.com> <5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com> <4a05e1020902272102x777bfbc1t2d4ea117fc96cd7a@mail.gmail.com> <3B401FE3-2BD8-4930-B94A-0819D14476A7@crucially.net> <4a05e1020903021333j5e862bc3ybb32f28f61eb8690@mail.gmail.com> Message-ID: <026A1AAD-AAAF-4420-9C11-E9C79BA43CD7@crucially.net> "Right now, HAProxy only supports the first mode (HTTP close) if it needs to process the request. This means that for each request, there will be one TCP connection. If keep-alive or pipelining are required, HAProxy will still support them, but will only see the first request and the first response of each transaction. While this is generally problematic with regards to logs, content switching or filtering, it most often causes no problem for persistence with cookie insertion." Really not fully supporting keep-alive! More like varnish pipe mode Artur On Mar 2, 2009, at 1:33 PM, Cloude Porteus wrote: > I believe TCP Keep-alive has been supported in HAProxy since version > 1.2. We've been using 1.3.x for at least a year. > > -cloude > > On Mon, Mar 2, 2009 at 1:10 PM, Artur Bergman > wrote: >> HAProxy doesn't do keep-alive, so it makes everything slower. >> >> Artur >> >> On Feb 27, 2009, at 9:02 PM, Cloude Porteus wrote: >> >>> John, >>> Thanks so much for the info, that's a huge help for us!!! >>> >>> I love HAProxy and Willy has been awesome to us. We run everything >>> through it, since it's really easy to monitor and also easy to debug >>> where the lag is when something in the chain is not responding fast >>> enough. It's been rock solid for us. >>> >>> The nice part for us is that we can use it as a content switcher to >>> send all /xxx traffic or certain user-agent traffic to different >>> backends. 
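The latency cost being debated in this keep-alive thread can be put into numbers with a toy model. The assumptions are mine, for illustration only: one round trip to open a TCP connection, one round trip per request/response, objects fetched sequentially on a single connection, TLS and TCP slow start ignored.

```c
/* Toy model of page fetch time with and without HTTP keep-alive.
 * Assumptions (illustrative): 1 RTT for the TCP handshake, 1 RTT per
 * request/response, sequential fetches on one connection; TLS and
 * slow start are ignored. */
double fetch_time_ms(int objects, double rtt_ms, int keepalive)
{
    if (keepalive)
        return rtt_ms + objects * rtt_ms;  /* one handshake, connection reused */
    return objects * 2.0 * rtt_ms;         /* a fresh handshake per object */
}
```

For 10 objects at a 50 ms RTT the model gives 550 ms with keep-alive versus 1000 ms without: the "slightly increased latency" the quoted HAProxy text mentions, and the reason browsers open extra parallel connections to non-keep-alive sites.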
>>>
>>> best,
>>> cloude
>>>
>>> On Fri, Feb 27, 2009 at 2:24 PM, John Adams wrote:
>>>>
>>>> cc'ing the varnish dev list for comments...
>>>> [John's full setup notes and configuration trimmed - they are
>>>> quoted in full earlier in the thread]

From cloude at instructables.com  Mon Mar  2 21:48:23 2009
From: cloude at instructables.com (Cloude Porteus)
Date: Mon, 2 Mar 2009 13:48:23 -0800
Subject: Is anyone using ESI with a lot of traffic?
In-Reply-To: <9DB31CBD-EE39-4621-A551-E24B5688DAFC@crucially.net>
References: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com>
	<4a05e1020902271333xe4d56c3g81b8e7c1b3a59946@mail.gmail.com>
	<5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com>
	<9DB31CBD-EE39-4621-A551-E24B5688DAFC@crucially.net>
Message-ID: <4a05e1020903021348k51887c7cj2ac7fd6c9ab4de7a@mail.gmail.com>

Artur,
What is the issue with ESI & gzip? Does this mean that if we want to
use ESI, we can't gzip the pages that have ESI includes? But we could
still gzip the pages that are included by ESI.

thanks,
cloude

On Mon, Mar 2, 2009 at 1:40 PM, Artur Bergman wrote:
>
> On Feb 27, 2009, at 2:24 PM, John Adams wrote:
>
>> cc'ing the varnish dev list for comments...
>>
>> On Feb 27, 2009, at 1:33 PM, Cloude Porteus wrote:
>>
>>> John,
>>> Good to hear from you. You must be slammed at Twitter. I'm happy to
>>> hear that ESI is holding up for you. It's been in my backlog since you
>>> mentioned it to me pre-Twitter.
>>>
>>> Any performance info would be great.
>>>
>> Any comments on our setup are welcome. You may also choose to call us
>> crazypants. Many, many thanks to Artur Bergman of Wikia for helping us
>> get this configuration straightened out.
>
> Thanks John :)
>
> I'll describe the settings we use. (We don't use ESI because of gzip.)
>
> The first important step is that we put the shmlog on tmpfs:
>
> tmpfs      /usr/var/varnish/  tmpfs  noatime,defaults,size=150M     0 0
> /dev/md0   /var/lib/varnish   ext2   noatime,nodiratime,norelatime  0 0
>
> Notice also ext2 - we don't care about journaling. (Ignore the broken
> paths.)
>
> This is because linux will asynchronously write the log to disk; this
> puts a large io pressure on the system (interfering with your normal
> reads if you use the same disks). It also scales the IO load with
> traffic and not with the working set.
>
> # Maximum number of open files (for ulimit -n)
> NFILES=131072
>
> # Locked shared memory (for ulimit -l)
> # Default log size is 82MB + header
> MEMLOCK=90000
>
> DAEMON_COREFILE_LIMIT="unlimited"
>
> DAEMON_OPTS="-a :80 \
>              -T localhost:6082 \
>              -f /etc/varnish/wikia.vcl \
>              -p obj_workspace=4096 \
> # We have lots of objects
>              -p sess_workspace=32768 \
> # Need lots of session space
>              -p listen_depth=8192 \
>              -p ping_interval=1 \
>              -s file,/var/lib/varnish/mmap,120G \
> # lots of mmap
>              -p log_hashstring=off \
>              -h classic,250007 \
> # 2.5 million objects
>              -p thread_pool_max=4000 \
>              -p lru_interval=60 \
>              -p esi_syntax=0x00000003 \
>              -p sess_timeout=10 \
>              -p thread_pools=4 \
>              -p thread_pool_min=500 \
> # we force 4000 threads pre-created
> # otherwise we run into overflows
>              -p shm_workspace=32768 \
> # avoid shm_mtx
>              -p srcaddr_ttl=0"
> # avoid hash lookup
>
> # we link geoip into the vcl
> CC_COMMAND='cc_command=exec cc -fpic -shared -Wl,-x -L/usr/local/lib/
> -lGeoIP -o %o %s'
>
> #### VCL
>
> # declare the function signatures
> # so we can use them
> C{
>   #include
>   double TIM_real(void);
>   void TIM_format(double t, char *p);
> }C
>
> # init GeoIP code
> C{
>   #include
>   #include
>   #include
>   #include
>   #include
>   #include
>
>   pthread_mutex_t geoip_mutex = PTHREAD_MUTEX_INITIALIZER;
>
>   GeoIP* gi;
>   void geo_init () {
>     if (!gi) {
>       gi = GeoIP_open_type(GEOIP_CITY_EDITION_REV1,GEOIP_MEMORY_CACHE);
>     }
>   }
> }C
>
> sub vcl_recv {
>
>   set req.url = regsub(req.url, "http://[^/]*","");
>   # will normalize proxied requests, specifically curl -x foo:80
>
>   # get our error handler for geoiplookup
>   if (req.http.host == "geoiplookup.wikia.com") {
>     error 200 "Ok";
>   }
>
>   # lvs check
>   if (req.url == "/lvscheck.html") {
>     error 200 "Ok";
>   }
>
>   # normalize Accept-Encoding to reduce vary
>   if (req.http.Accept-Encoding) {
>     if (req.http.Accept-Encoding ~ "gzip") {
>       set req.http.Accept-Encoding = "gzip";
>     } elsif (req.http.Accept-Encoding ~ "deflate") {
>       set req.http.Accept-Encoding = "deflate";
>     } else {
>       unset req.http.Accept-Encoding;
>     }
>   }
>
>   # Yahoo uses this to check for 404
>   if (req.url ~ "^/SlurpConfirm404") {
>     error 404 "Not found";
>   }
>
>   set req.grace = 360000s;  # if the backend is down, just serve
>
>   # check for specific cookies, otherwise nuke them
>   # save them so we can re-inject them later in pipe or miss
>   set req.http.X-Orig-Cookie = req.http.Cookie;
>   if (req.http.Cookie ~ "(session|UserID|UserName|Token|LoggedOut)") {
>     # don't do anything, the user is logged in
>   } else {
>     # don't care about any other cookies
>     unset req.http.Cookie;
>   }
> }
>
> # varnish XFF is broken, it doesn't chain them
> # if you have chained varnishes, or trust AOL, you need to append them
> sub vcl_pipe {
>   # do the right XFF processing
>   set bereq.http.X-Forwarded-For = req.http.X-Forwarded-For;
>   set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded-For, "$", ", ");
>   set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded-For, "$", client.ip);
>   set bereq.http.Cookie = req.http.X-Orig-Cookie;
> }
>
> # this implements purging (we purge all 3 versions of the
> # accept-encoding: none, gzip, deflate)
> sub vcl_hit {
>   if (req.request == "PURGE") {
>     set obj.ttl = 0s;
>     error 200 "Purged.";
>   }
> }
>
> sub vcl_miss {
>   if (req.request == "PURGE") {
>     error 404 "Not purged";
>   }
>
>   set bereq.http.X-Forwarded-For = req.http.X-Forwarded-For;
>   set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded-For, "$", ", ");
>   set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded-For, "$", client.ip);
> }
>
> # this marks whether something is cacheable or not; if it isn't,
> # say why
> sub vcl_fetch {
>   # so we have access to these in deliver
>   set obj.http.X-Orighost = req.http.host;
>   set obj.http.X-Origurl = req.url;
>   if (!obj.cacheable) {
>     set obj.http.X-Cacheable = "NO:Not-Cacheable";
>     pass;
>   }
>   if (obj.http.Cache-Control ~ "private") {
>     if (req.http.Cookie ~ "(UserID|_session)") {
>       set obj.http.X-Cacheable = "NO:Got Session";
>     } else {
>       set obj.http.X-Cacheable = "NO:Cache-Control=private";
>     }
>     pass;
>   }
>   if (obj.http.Set-Cookie ~ "(UserID|_session)") {
>     set obj.http.X-Cacheable = "NO:Set-Cookie";
>     pass;
>   }
>
>   set obj.http.X-Cacheable = "YES";
>   set obj.grace = 360000s;
> }
>
> # The following sets X-Served-By; if it is already set, it appends to it.
> # It also says whether it is a HIT, and how many hits.
> sub vcl_deliver {
>
>   # add or append Served-By
>   if (!resp.http.X-Served-By) {
>     set resp.http.X-Served-By = "varnish8";
>     if (obj.hits > 0) {
>       set resp.http.X-Cache = "HIT";
>     } else {
>       set resp.http.X-Cache = "MISS";
>     }
>     set resp.http.X-Cache-Hits = obj.hits;
>   } else {
>     # append current data
>     set resp.http.X-Served-By = regsub(resp.http.X-Served-By, "$", ", varnish8");
>     if (obj.hits > 0) {
>       set resp.http.X-Cache = regsub(resp.http.X-Cache, "$", ", HIT");
>     } else {
>       set resp.http.X-Cache = regsub(resp.http.X-Cache, "$", ", MISS");
>     }
>     set resp.http.X-Cache-Hits = regsub(resp.http.X-Cache-Hits, "$", ", ");
>     set resp.http.X-Cache-Hits = regsub(resp.http.X-Cache-Hits, "$", obj.hits);
>   }
>
>   # if the client is another DC, just remove stuff and deliver
>   if (client.ip ~ LON
>       || client.ip ~ SJC
>       || client.ip ~ IOWA) {
>     unset resp.http.X-CPU-Time;
>     unset resp.http.X-Real-Time;
>     unset resp.http.X-Served-By-Backend;
>     unset resp.http.X-User-Id;
>     unset resp.http.X-Namespace-Number;
>     unset resp.http.X-Orighost;
>     unset resp.http.X-Origurl;
>     deliver;
>   }
>
>   # else do cache-control
>   # nuke the headers since they were generally meant for varnish
>   # these rules are mostly based on mediawiki rules
>   if (resp.http.X-Pass-Cache-Control) {
>     set resp.http.Cache-Control = resp.http.X-Pass-Cache-Control;
>   } elsif (resp.status == 304) {
>     # no headers on if-modified-since
>   } elsif (resp.http.X-Origurl ~ ".*/index\.php.*(css|js)"
>            || resp.http.X-Origurl ~ "raw") {
>     # don't touch it, let mediawiki decide
>   } elsif (resp.http.X-Orighost ~ "images.wikia.com") {
>     # lighttpd knows what it is doing
>   } elsif (resp.http.X-Orighost ~ "geoiplookup") {
>   } else {
>     # follow squid content here
>     set resp.http.Cache-Control = "private, s-maxage=0, max-age=0,
> must-revalidate";
>   }
>
> [Expires-header calculation, vcl_error geoip handler and sysctl
> settings trimmed - Artur's original message earlier in the archive
> has them in full]
>
> These are mostly cargo culted from previous emails here.
>
> Cheers
> Artur

--
VP of Product Development
Instructables.com

http://www.instructables.com/member/lebowski

From cloude at instructables.com  Mon Mar  2 21:49:37 2009
From: cloude at instructables.com (Cloude Porteus)
Date: Mon, 2 Mar 2009 13:49:37 -0800
Subject: Is anyone using ESI with a lot of traffic?
In-Reply-To: <026A1AAD-AAAF-4420-9C11-E9C79BA43CD7@crucially.net>
References: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com>
	<4a05e1020902271333xe4d56c3g81b8e7c1b3a59946@mail.gmail.com>
	<5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com>
	<4a05e1020902272102x777bfbc1t2d4ea117fc96cd7a@mail.gmail.com>
	<3B401FE3-2BD8-4930-B94A-0819D14476A7@crucially.net>
	<4a05e1020903021333j5e862bc3ybb32f28f61eb8690@mail.gmail.com>
	<026A1AAD-AAAF-4420-9C11-E9C79BA43CD7@crucially.net>
Message-ID: <4a05e1020903021349g2470be5ct1161c5daa4fff6@mail.gmail.com>

Fair enough. We don't use keep-alive, so it hasn't been an issue. What
are you guys using for load balancing?

best,
cloude

On Mon, Mar 2, 2009 at 1:45 PM, Artur Bergman wrote:
> "Right now, HAProxy only supports the first mode (HTTP close) if it needs to
> process the request. This means that for each request, there will be one TCP
> connection.
If keep-alive or pipelining are required, HAProxy will still
> support them, but will only see the first request and the first response of
> each transaction. While this is generally problematic with regards to logs,
> content switching or filtering, it most often causes no problem for
> persistence with cookie insertion."
>
> Really not fully supporting keep-alive! More like varnish pipe mode.
>
> Artur
>
> On Mar 2, 2009, at 1:33 PM, Cloude Porteus wrote:
>
>> I believe TCP Keep-alive has been supported in HAProxy since version
>> 1.2. We've been using 1.3.x for at least a year.
>>
>> -cloude
>>
>> On Mon, Mar 2, 2009 at 1:10 PM, Artur Bergman wrote:
>>>
>>> HAProxy doesn't do keep-alive, so it makes everything slower.
>>>
>>> Artur
>>>
>>> On Feb 27, 2009, at 9:02 PM, Cloude Porteus wrote:
>>>
>>>> John,
>>>> Thanks so much for the info, that's a huge help for us!!!
>>>>
>>>> I love HAProxy and Willy has been awesome to us. We run everything
>>>> through it, since it's really easy to monitor and also easy to debug
>>>> where the lag is when something in the chain is not responding fast
>>>> enough. It's been rock solid for us.
>>>>
>>>> The nice part for us is that we can use it as a content switcher to
>>>> send all /xxx traffic or certain user-agent traffic to different
>>>> backends.
>>>>
>>>> best,
>>>> cloude
>>>>
>>>> On Fri, Feb 27, 2009 at 2:24 PM, John Adams wrote:
>>>>>
>>>>> cc'ing the varnish dev list for comments...
>>>>> On Feb 27, 2009, at 1:33 PM, Cloude Porteus wrote:
>>>>>
>>>>> John,
>>>>> Good to hear from you. You must be slammed at Twitter. I'm happy to
>>>>> hear that ESI is holding up for you. It's been in my backlog since you
>>>>> mentioned it to me pre-Twitter.
>>>>>
>>>>> Any performance info would be great.
>>>>>
>>>>> Any comments on our setup are welcome. You may also choose to call us
>>>>> crazypants. Many, many thanks to Artur Bergman of Wikia for helping us
>>>>> get this configuration straightened out.
>>>>> Right now, we're running varnish (on search) in a bit of a non-standard >>>>> way. >>>>> We plan to use it in the normal fashion (varnish to Internet, nothing >>>>> inbetween) on our API at some point. We're running version 2.0.2, no >>>>> patches. Cache hit rates range from 10% to 30%, or higher when a >>>>> real-time >>>>> event is flooding search. >>>>> 2.0.2 is quite stable for us, with the occasional child death here and >>>>> there >>>>> when we get massive headers coming in that flood sess_workspace. I hear >>>>> this >>>>> is fixed in 2.0.3, but haven't had time to try it yet. >>>>> We have a number of search boxes, and each search box has an apache >>>>> instance >>>>> on it, and varnish instance. We plan to merge the varnish instances at >>>>> some >>>>> point, but we use very low TTLs (Twitter is the real time web!) and >>>>> don't >>>>> see much of a savings by running less of them. >>>>> We do: >>>>> Apache --> Varnish --> Apache -> Mongrels >>>>> Apaches are using mod_proxy_balancer. The front end apache is there >>>>> because >>>>> we've long had a fear that Varnish would crash on us, which it did many >>>>> times prior to our figuring out the proper parameters for startup. We >>>>> have >>>>> two entries in that balancer. Either the request goes to varnish, or, >>>>> if >>>>> varnish bombs out, it goes directly to the mongrel. >>>>> We do this, because we need a load balancing algorithm that varnish >>>>> doesn't >>>>> support, called bybusiness. Without bybusiness, varnish tries to direct >>>>> requests to Mongrels that are busy, and requests end up in the listen >>>>> queue. >>>>> that adds ~100-150mS to load times, and that's no good for our desired >>>>> service times of 200-250mS (or less.) >>>>> We'd be so happy if someone put bybusiness into Varnish's backend load >>>>> balancing, but it's not there yet. 
>>>>> We also know that taking the extra hop through localhost costs us next >>>>> to >>>>> nothing in service time, so it's good to have Apache there incase we >>>>> need >>>>> to >>>>> yank out Varnish. In the future, we might get rid of Apache and use >>>>> HAProxy >>>>> (it's load balancing and backend monitoring is much richer than Apache, >>>>> and, >>>>> it has a beautiful HTTP interface to look at.) >>>>> Some variables and our decisions: >>>>> ? ? ? ? ? ?-p obj_workspace=4096 \ >>>>> ? ?-p sess_workspace=262144 \ >>>>> Absolutely vital! ?Varnish does not allocate enough space by default >>>>> for >>>>> headers, regexps on cookies, and otherwise. It was increased in 2.0.3, >>>>> but >>>>> really, not increased enough. Without this we were panicing every 20-30 >>>>> requests and overflowing the sess hash. >>>>> ? ? ? ? ? ?-p listen_depth=8192 \ >>>>> 8192 is probably excessive for now. If we're queuing 8k conns, >>>>> something >>>>> is >>>>> really broke! >>>>> ? ? ? ? ? ?-p log_hashstring=off \ >>>>> Who cares about this - we don't need it. >>>>> ? ?-p lru_interval=60 \ >>>>> We have many small objects in the search cache. Run LRU more often. >>>>> ? ? ? ? ? ?-p sess_timeout=10 \ >>>>> If you keep session data around for too long, you waste memory. >>>>> ? ?-p shm_workspace=32768 \ >>>>> Give us a bit more room in shm >>>>> ? ? ? ? ? ?-p ping_interval=1 \ >>>>> Frequent pings in case the child dies on us. >>>>> ? ? ? ? ? ?-p thread_pools=4 \ >>>>> ? ? ? ? ? ?-p thread_pool_min=100 \ >>>>> This must match up with VARNISH_MIN_THREADS. We use four pools, (pools >>>>> * >>>>> thread_pool_min == VARNISH_MIN_THREADS) >>>>> ? ?-p srcaddr_ttl=0 \ >>>>> Disable the (effectively unused) per source-IP statistics >>>>> ? ?-p esi_syntax=1 >>>>> Disable ESI syntax verification so we can use it to process JSON >>>>> requests. >>>>> If you have more than 2.1M objects, you should also add: >>>>> # -h classic,250007 = recommeded value for 2.1M objects >>>>> # ? ? 
number should be 1/10 expected working set. >>>>> >>>>> In our VCL, we have a few fancy tricks that we use. We label the cache >>>>> server and cache hit/miss rate in vcl_deliver with this code: >>>>> Top of VCL: >>>>> C{ >>>>> #include >>>>> #include >>>>> char myhostname[255] = ""; >>>>> >>>>> }C >>>>> vcl_deliver: >>>>> C{ >>>>> ? VRT_SetHdr(sp, HDR_RESP, "\014X-Cache-Svr:", myhostname, >>>>> vrt_magic_string_end); >>>>> }C >>>>> ? /* mark hit/miss on the request */ >>>>> ? if (obj.hits > 0) { >>>>> ? ? set resp.http.X-Cache = "HIT"; >>>>> ? ? set resp.http.X-Cache-Hits = obj.hits; >>>>> ? } else { >>>>> ? ? set resp.http.X-Cache = "MISS"; >>>>> ? } >>>>> >>>>> vcl_recv: >>>>> C{ >>>>> ?if (myhostname[0] == '\0') { >>>>> ? ?/* only get hostname once - restart required if hostname changes */ >>>>> ? ?gethostname(myhostname, 255); >>>>> ?} >>>>> }C >>>>> >>>>> Portions of /etc/sysconfig/varnish follow... >>>>> # The minimum number of worker threads to start >>>>> VARNISH_MIN_THREADS=400 >>>>> # The Maximum number of worker threads to start >>>>> VARNISH_MAX_THREADS=1000 >>>>> # Idle timeout for worker threads >>>>> VARNISH_THREAD_TIMEOUT=60 >>>>> # Cache file location >>>>> VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin >>>>> # Cache file size: in bytes, optionally using k / M / G / T suffix, >>>>> # or in percentage of available disk space using the % suffix. >>>>> VARNISH_STORAGE_SIZE="8G" >>>>> # >>>>> # Backend storage specification >>>>> VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}" >>>>> # Default TTL used when the backend does not specify one >>>>> VARNISH_TTL=5 >>>>> # the working directory >>>>> DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ >>>>> ? ? ? ? ? ?-f ${VARNISH_VCL_CONF} \ >>>>> ? ? ? ? ? ?-T >>>>> ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \ >>>>> ? ? ? ? ? ?-t ${VARNISH_TTL} \ >>>>> ? -n ${VARNISH_WORKDIR} \ >>>>> ? ? ? ? ? 
?-w >>>>> ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} >>>>> \ >>>>> ? ? ? ? ? ?-u varnish -g varnish \ >>>>> ? ? ? ? ? ?-p obj_workspace=4096 \ >>>>> ? -p sess_workspace=262144 \ >>>>> ? ? ? ? ? ?-p listen_depth=8192 \ >>>>> ? ? ? ? ? ?-p log_hashstring=off \ >>>>> ? -p lru_interval=60 \ >>>>> ? ? ? ? ? ?-p sess_timeout=10 \ >>>>> ? -p shm_workspace=32768 \ >>>>> ? ? ? ? ? ?-p ping_interval=1 \ >>>>> ? ? ? ? ? ?-p thread_pools=4 \ >>>>> ? ? ? ? ? ?-p thread_pool_min=100 \ >>>>> ? -p srcaddr_ttl=0 \ >>>>> ? -p esi_syntax=1 \ >>>>> ? ? ? ? ? ?-s ${VARNISH_STORAGE}" >>>>> >>>>> --- >>>>> John Adams >>>>> Twitter Operations >>>>> jna at twitter.com >>>>> http://twitter.com/netik >>>>> >>>>> >>>>> >>>>> >>>> >>>> >>>> >>>> -- >>>> VP of Product Development >>>> Instructables.com >>>> >>>> http://www.instructables.com/member/lebowski >>>> _______________________________________________ >>>> varnish-dev mailing list >>>> varnish-dev at projects.linpro.no >>>> http://projects.linpro.no/mailman/listinfo/varnish-dev >>> >>> >> >> >> >> -- >> VP of Product Development >> Instructables.com >> >> http://www.instructables.com/member/lebowski > > -- VP of Product Development Instructables.com http://www.instructables.com/member/lebowski From jesus at omniti.com Mon Mar 2 21:51:05 2009 From: jesus at omniti.com (Theo Schlossnagle) Date: Mon, 2 Mar 2009 16:51:05 -0500 Subject: Timeouts In-Reply-To: <49AC019A.8000308@schokola.de> References: <49A71CB3.5060905@schokola.de> <2202B2DC-E228-44CF-A7CF-E853AB7B7ECC@omniti.com> <49A9937A.80105@schokola.de> <7D9874FD-F4F9-4096-968B-E8B51418D692@omniti.com> <49AABC95.8090409@schokola.de> <140D075E-8BC0-49F1-8322-581D4CF3CE2A@omniti.com> <49AC019A.8000308@schokola.de> Message-ID: <2528B678-B654-40F2-A9E9-2D7DA7890E7F@omniti.com> Indeed! On Mar 2, 2009, at 10:56 AM, Nils Goroll wrote: > Theo, > >> I see two approaches: > > Thank you for your thoughts. 
To me, this sounds like it might > actually be more appropriate to spend the effort on the Solaris > source instead. > > Cheers, > > Nils -- Theo Schlossnagle Principal/CEO OmniTI Computer Consulting, Inc. Web Applications & Internet Architectures w: http://omniti.com p: +1.443.325.1357 x201 f: +1.410.872.4911 From sky at crucially.net Mon Mar 2 22:22:55 2009 From: sky at crucially.net (sky at crucially.net) Date: Mon, 2 Mar 2009 22:22:55 +0000 Subject: Is anyone using ESI with a lot of traffic? In-Reply-To: <4a05e1020903021348k51887c7cj2ac7fd6c9ab4de7a@mail.gmail.com> References: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com> <4a05e1020902271333xe4d56c3g81b8e7c1b3a59946@mail.gmail.com> <5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com> <9DB31CBD-EE39-4621-A551-E24B5688DAFC@crucially.net><4a05e1020903021348k51887c7cj2ac7fd6c9ab4de7a@mail.gmail.com> Message-ID: <781961842-1236032586-cardhu_decombobulator_blackberry.rim.net-1794463899-@bxe1018.bisx.prod.on.blackberry> Unless you have something in front of varnish you can gzip. Varnish doesn't understand it. Unless I guess you split it up from the server and not the esi statement Sent via BlackBerry by AT&T -----Original Message----- From: Cloude Porteus Date: Mon, 2 Mar 2009 13:48:23 To: Artur Bergman Cc: John Adams; Subject: Re: Is anyone using ESI with a lot of traffic? Artur, What is the issue with ESI & gzip? Does this mean that if we want to use ESI, we can't gzip the pages that have ESI includes? But we could still gzip the pages that are included by ESI. thanks, cloude On Mon, Mar 2, 2009 at 1:40 PM, Artur Bergman wrote: > > On Feb 27, 2009, at 2:24 PM, John Adams wrote: > >> cc'ing the varnish dev list for comments... >> >> On Feb 27, 2009, at 1:33 PM, Cloude Porteus wrote: >> >>> John, >>> Good to hear from you. You must be slammed at Twitter. I'm happy to >>> hear that ESI is holding up for you. It's been in my backlog since you >>> mentioned it to me pre-Twitter.
>>> >>> Any performance info would be great. >>> >> >> Any comments on our setup are welcome. You may also choose to call us >> crazypants. Many, many thanks to Artur Bergman of Wikia for helping us get >> this configuration straightened out. >> > > Thanks John :) > > I'll describe the settings we use. (We don't use ESI because of gzip) > > The first important step is that we put the shmlog on tmpfs > > tmpfs ? ? ? ? ? /usr/var/varnish/ tmpfs noatime,defaults,size=150M ?0 0 > /dev/md0 ? ? ? ?/var/lib/varnish ? ? ? ?ext2 noatime,nodiratime,norelatime 0 > 0 > > Notice also ext2 we don't care about journaling. (Ignore the broken paths) > > This is because linux will asynchronously write the log to disk, this puts a > large io pressure on the system (interfering with your normal reads if you > use the same disks) It also scales the IO load with traffic and not working > set. > > # Maximum number of open files (for ulimit -n) > NFILES=131072 > > # Locked shared memory (for ulimit -l) > # Default log size is 82MB + header > MEMLOCK=90000 > > DAEMON_COREFILE_LIMIT="unlimited" > > > DAEMON_OPTS="-a :80 \ > ? ? ? ? ? ? ? -T localhost:6082 \ > ? ? ? ? ? ? ? -f /etc/varnish/wikia.vcl \ > ? ? ? ? ? ? ? -p obj_workspace=4096 \ > # We have lots of objects > ? ? ? ? ? ? ? -p sess_workspace=32768 \ > # Need lots of sessoin space > ? ? ? ? ? ? ? -p listen_depth=8192 \ > ? ? ? ? ? ? ? -p ping_interval=1 \ > ? ? ? ? ? ? ? -s file,/var/lib/varnish/mmap,120G \ > # lots of mmap > ? ? ? ? ? ? ? -p log_hashstring=off \ > ? ? ? ? ? ? ? -h classic,250007 \ > # 2.5 mmilion objects > ? ? ? ? ? ? ? -p thread_pool_max=4000 \ > ? ? ? ? ? ? ? -p lru_interval=60 \ > ? ? ? ? ? ? ? -p esi_syntax=0x00000003 \ > ? ? ? ? ? ? ? -p sess_timeout=10 \ > ? ? ? ? ? ? ? -p thread_pools=4 \ > ? ? ? ? ? ? ? -p thread_pool_min=500 \ > # we force 4000 threads pre-created > # otherwise we run into overflows > ? ? ? ? ? ? ? -p shm_workspace=32768 \ > # avoid shm_mtx > ? ? ? ? ? ? ? 
-p srcaddr_ttl=0" > # avoid hash lookup > > # we link geoip into the vcl > CC_COMMAND='cc_command=exec cc -fpic -shared -Wl,-x -L/usr/local/lib/ > -lGeoIP -o %o %s' > > #### VCL > > # declare the function signature > # so we can use them > C{ > #include > ?double TIM_real(void); > ?void TIM_format(double t, char *p); > }C > > > > # init GeoIP code > C{ > ?#include > ?#include > ?#include > ?#include > ?#include > ?#include > > ?pthread_mutex_t geoip_mutex = PTHREAD_MUTEX_INITIALIZER; > > ?GeoIP* gi; > ?void geo_init () { > ? ?if(!gi) { > ? ? ?gi = GeoIP_open_type(GEOIP_CITY_EDITION_REV1,GEOIP_MEMORY_CACHE); > ? ?} > ?} > }C > > vcl_recv { > > set req.url = regsub(req.url, "http://[^/]*",""); > #will normalize proxied requests, specificl curl -x foo:80 > > ?# get out error handler for geoiplookup > ?if(req.http.host == "geoiplookup.wikia.com") { > ? ?error 200 "Ok"; > ?} > > ?# lvs check > ?if (req.url == "/lvscheck.html") { > ? ?error 200 "Ok"; > ?} > > ?# normalize Accept-Encoding to reduce vary > ?if (req.http.Accept-Encoding) { > ? ?if (req.http.Accept-Encoding ~ "gzip") { > ? ? ?set req.http.Accept-Encoding = "gzip"; > ? ?} elsif (req.http.Accept-Encoding ~ "deflate") { > ? ? ?set req.http.Accept-Encoding = "deflate"; > ? ?} else { > ? ? ?unset req.http.Accept-Encoding; > ? ?} > ?} > > > ?# Yahoo uses this to check for 404 > ?if (req.url ~ "^/SlurpConfirm404") { > ? ?error 404 "Not found"; > ?} > > set req.grace = 360000s; ?#if the backend is down, just serve > > > # check for specific cookies, otherwise nuke them > # save them so we can re-inject them later in pipe or miss > ?set req.http.X-Orig-Cookie = req.http.Cookie; > ?if(req.http.Cookie ~ "(session|UserID|UserName|Token|LoggedOut)") { > ? ?# dont do anything, the user is logged in > ?} else { > ? ?# dont care about any other cookies > ? 
?unset req.http.Cookie; > ?} > > > } > > # varnish XFF is broken, it doesn't chain them > # if you have chained varnishes, or trust AOL, you need to append them > sub vcl_pipe { > ?# do the right XFF processing > ?set bereq.http.X-Forwarded-For = req.http.X-Forwarded-For; > ?set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded-For, "$", ", > "); > ?set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded-For, "$", > client.ip); > ?set bereq.http.Cookie = req.http.X-Orig-Cookie; > } > > > # this implements purging (we purge all 3 versions of the accept-encoding, > none,gzip,deflate) > sub vcl_hit { > ?if (req.request == "PURGE") { > ? ?set obj.ttl = 0s; > ? ?error 200 "Purged."; > ?} > } > > sub vcl_miss { > > ?if (req.request == "PURGE") { > ? ?error 404 "Not purged"; > ?} > > ?set bereq.http.X-Forwarded-For = req.http.X-Forwarded-For; > ?set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded-For, "$", ", > "); > ?set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded-For, "$", > client.ip); > } > > > # this marks if something is cacheable or not, if it isn't > # say why > vcl_fetch { > # so we have access to this in deliver > ? ? ? ?set obj.http.X-Orighost = req.http.host; > ? ? ? ?set obj.http.X-Origurl = req.url; > ? ? ? ?if (!obj.cacheable) { > ? ? ? ? ? ? ? ?set obj.http.X-Cacheable = "NO:Not-Cacheable"; > ? ? ? ? ? ? ? ?pass; > ? ? ? ?} > ? ? ? ?if (obj.http.Cache-Control ~ "private") { > ? ? ? ? ? ? ? ?if(req.http.Cookie ~"(UserID|_session)") { > ? ? ? ? ? ? ? ? ? ? ? ?set obj.http.X-Cacheable = "NO:Got Session"; > ? ? ? ? ? ? ? ?} else { > ? ? ? ? ? ? ? ? ? ? ? ?set obj.http.X-Cacheable = > "NO:Cache-Control=private"; > ? ? ? ? ? ? ? ?} > ? ? ? ? ? ? ? ?pass; > ? ? ? ?} > ? ? ? ?if (obj.http.Set-Cookie ~ "(UserID|_session)") { > ? ? ? ? ? ? ? ?set obj.http.X-Cacheable = "NO:Set-Cookie"; > ? ? ? ? ? ? ? ?pass; > ? ? ? ?} > > ? ? ? 
?set obj.http.X-Cacheable = "YES"; > ?set obj.grace = 360000s; > > > } > > > #Following sets X-Served-By, if it is already set it appends it > # it also says if it is a HIT, and how many hits > > sub vcl_deliver { > > ?#add or append Served By > ?if(!resp.http.X-Served-By) { > ? ?set resp.http.X-Served-By ?= "varnish8"; > ? ?if (obj.hits > 0) { > ? ? ?set resp.http.X-Cache = "HIT"; > ? ?} else { > ? ? ?set resp.http.X-Cache = "MISS"; > ? ?} > ? ?set resp.http.X-Cache-Hits = obj.hits; > ?} else { > # append current data > ? ?set resp.http.X-Served-By = regsub(resp.http.X-Served-By, "$", ", > varnish8"); > ? ?if (obj.hits > 0) { > ? ? ?set resp.http.X-Cache = regsub(resp.http.X-Cache, "$", ", HIT"); > ? ?} else { > ? ? ?set resp.http.X-Cache = regsub(resp.http.X-Cache, "$" , ", MISS"); > ? ?} > ? ?set resp.http.X-Cache-Hits = regsub(resp.http.X-Cache-Hits, "$", ", "); > ? ?set resp.http.X-Cache-Hits = regsub(resp.http.X-Cache-Hits, "$", > obj.hits); > ?} > > # > > # if the client is another DC, just remove stuff and deliver > ? ?if ( client.ip ~ LON > ? ? ?|| client.ip ~ SJC > ? ? ?|| client.ip ~ IOWA > ? ? ? ? ) { > ? ?unset resp.http.X-CPU-Time; > ? ?unset resp.http.X-Real-Time; > ? ?unset resp.http.X-Served-By-Backend; > ? ?unset resp.http.X-User-Id; > ? ?unset resp.http.X-Namespace-Number; > ? ?unset resp.http.X-Orighost; > ? ?unset resp.http.X-Origurl; > ? ?deliver; > ?} > # else do cache-control > # nuke the headers since they were generally meant for varnish > # these rules are mostly based on mediawiki rules > ?if ( resp.http.X-Pass-Cache-Control ) { > ? ?set resp.http.Cache-Control = resp.http.X-Pass-Cache-Control; > ?} elsif ( resp.status == 304 ) { > ? ?# no headers on if-modified since > ?} elsif ( resp.http.X-Origurl ~ ".*/index\.php.*(css|js)" > ? ? ? ? ? ?|| resp.http.X-Origurl ~ "raw") { > ? ?# dont touch it let mediawiki decide > ?} elsif (resp.http.X-Orighost ~ "images.wikia.com") { > ? 
?# lighttpd knows what it is doing > ?} elsif (resp.http.X-Orighost ~ "geoiplookup") { > ?} else { > ? ?#follow squid content here > ? ?set resp.http.Cache-Control = "private, s-maxage=0, max-age=0, > must-revalidate"; > ?} > > # this will calculate an Expire headers which is based on now+max-age > # if you cache the Expire header, then it won't match max-age since it is > static > ?if (!resp.status == 304) { > ? ?C{ > ? ? ?char *cache = VRT_GetHdr(sp, HDR_REQ, "\016cache-control:"); > ? ? ?char date[40]; > ? ? ?int max_age; > ? ? ?int want_equals = 0; > ? ? ?if(cache) { > ? ? ? ?while(*cache != '\0') { > ? ? ? ? ?if (want_equals && *cache == '=') { > ? ? ? ? ? ?cache++; > ? ? ? ? ? ?max_age = strtoul(cache, 0, 0); > ? ? ? ? ? ?break; > ? ? ? ? ?} > > ? ? ? ? ?if (*cache == 'm' && !memcmp(cache, "max-age", 7)) { > ? ? ? ? ? ?cache += 7; > ? ? ? ? ? ?want_equals = 1; > ? ? ? ? ? ?continue; > ? ? ? ? ?} > ? ? ? ? ?cache++; > ? ? ? ?} > ? ? ? ?if (max_age) { > ? ? ? ? ?TIM_format(TIM_real() + max_age, date); > ? ? ? ? ?VRT_SetHdr(sp, HDR_RESP, "\010Expires:", date, > vrt_magic_string_end); > ? ? ? ?} > ? ? ?} > ? ?}C > ? ? ? #; > ?} > > } > > > vcl_error { > # this implements geoip lookups inside varnish > # so clients can get the data without hitting the backend > ?if(req.http.host == "geoiplookup.wikia.com" || req.url == > "/__varnish/geoip") { > ? ?set obj.http.Content-Type = "text/plain"; > ? ?set obj.http.cache-control = "private, s-maxage=0, max-age=360"; > ? ?set obj.http.X-Orighost = req.http.host; > ? ?C{ > ? ? ?char *ip = VRT_IP_string(sp, VRT_r_client_ip(sp)); > ? ? ?char date[40]; > ? ? ?char json[255]; > > ? ? ?pthread_mutex_lock(&geoip_mutex); > > ? ? ?if(!gi) { geo_init(); } > > ? ? ?GeoIPRecord *record = GeoIP_record_by_addr(gi, ip); > ? ? ?if(record) { > ? ? ? ?snprintf(json, 255, "Geo = > {\"city\":\"%s\",\"country\":\"%s\",\"lat\":\"%f\",\"lon\":\"%f\",\"classC\":\"%s\",\"netmask\":\"%d\"}", > ? ? ? ? ? ? ? ? record->city, > ? ? ? ? ? ? ? ? 
record->country_code, > ? ? ? ? ? ? ? ? record->latitude, > ? ? ? ? ? ? ? ? record->longitude, > ? ? ? ? ? ? ? ? ip, > ? ? ? ? ? ? ? ? GeoIP_last_netmask(gi) > ? ? ? ? ? ? ? ? ); > ? ? ? ?pthread_mutex_unlock(&geoip_mutex); > ? ? ? ?VRT_synth_page(sp, 0, json, ?vrt_magic_string_end); > ? ? ?} else { > ? ? ? ?pthread_mutex_unlock(&geoip_mutex); > ? ? ? ?VRT_synth_page(sp, 0, "Geo = {}", ?vrt_magic_string_end); > ? ? ?} > > > ? ? ?TIM_format(TIM_real(), date); > ? ? ?VRT_SetHdr(sp, HDR_OBJ, "\016Last-Modified:", date, > vrt_magic_string_end); > ? ?}C > ?# check if site is working > ?if(req.url ~ "lvscheck.html") { > ? ?synthetic {"varnish is okay"}; > ? ?deliver; > ?} > > ?deliver; > > } > > > ############# > > sysctl > > net.ipv4.ip_local_port_range = 1024 65536 > net.core.rmem_max=16777216 > net.core.wmem_max=16777216 > net.ipv4.tcp_rmem=4096 87380 16777216 > net.ipv4.tcp_wmem=4096 65536 16777216 > net.ipv4.tcp_fin_timeout = 3 > net.ipv4.tcp_tw_recycle = 1 > net.core.netdev_max_backlog = 30000 > net.ipv4.tcp_no_metrics_save=1 > net.core.somaxconn = 262144 > net.ipv4.tcp_syncookies = 0 > net.ipv4.tcp_max_orphans = 262144 > net.ipv4.tcp_max_syn_backlog = 262144 > net.ipv4.tcp_synack_retries = 2 > net.ipv4.tcp_syn_retries = 2 > > These are mostly cargo culted from previous emails here. > > Cheers > Artur > -- VP of Product Development Instructables.com http://www.instructables.com/member/lebowski From jna at twitter.com Mon Mar 2 22:39:42 2009 From: jna at twitter.com (John Adams) Date: Mon, 2 Mar 2009 14:39:42 -0800 Subject: Is anyone using ESI with a lot of traffic? 
In-Reply-To: <4a05e1020903021348k51887c7cj2ac7fd6c9ab4de7a@mail.gmail.com> References: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com> <4a05e1020902271333xe4d56c3g81b8e7c1b3a59946@mail.gmail.com> <5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com> <9DB31CBD-EE39-4621-A551-E24B5688DAFC@crucially.net> <4a05e1020903021348k51887c7cj2ac7fd6c9ab4de7a@mail.gmail.com> Message-ID: We fix this by front-ending varnish with apache. Not the best solution but we still get to compress. -j On Mar 2, 2009, at 1:48 PM, Cloude Porteus wrote: > Artur, > What is the issue with ESI & gzip? > > Does this mean that if we want to use ESI, we can't gzip the pages > that have ESI includes? But we could still gzip the pages that are > included by ESI. > > thanks, > cloude > > On Mon, Mar 2, 2009 at 1:40 PM, Artur Bergman > wrote: >> >> On Feb 27, 2009, at 2:24 PM, John Adams wrote: >> >>> cc'ing the varnish dev list for comments... >>> >>> On Feb 27, 2009, at 1:33 PM, Cloude Porteus wrote: >>> >>>> John, >>>> Goodto hear from you. You must be slammed at Twitter. I'm happy to >>>> hear that ESI is holding up for you. It's been in my backlog >>>> since you >>>> mentioned it to me pre-Twitter. >>>> >>>> Any performance info would be great. >>>> >>> >>> Any comments on our setup are welcome. You may also choose to call >>> us >>> crazypants. Many, many thanks to Artur Bergman of Wikia for >>> helping us get >>> this configuration straightened out. >>> >> >> Thanks John :) >> >> I'll describe the settings we use. (We don't use ESI because of gzip) >> >> The first important step is that we put the shmlog on tmpfs >> >> tmpfs /usr/var/varnish/ tmpfs noatime,defaults,size=150M >> 0 0 >> /dev/md0 /var/lib/varnish ext2 >> noatime,nodiratime,norelatime 0 >> 0 >> >> Notice also ext2 we don't care about journaling. 
(Ignore the broken >> paths) >> >> This is because linux will asynchronously write the log to disk, >> this puts a >> large io pressure on the system (interfering with your normal reads >> if you >> use the same disks) It also scales the IO load with traffic and not >> working >> set. >> >> # Maximum number of open files (for ulimit -n) >> NFILES=131072 >> >> # Locked shared memory (for ulimit -l) >> # Default log size is 82MB + header >> MEMLOCK=90000 >> >> DAEMON_COREFILE_LIMIT="unlimited" >> >> >> DAEMON_OPTS="-a :80 \ >> -T localhost:6082 \ >> -f /etc/varnish/wikia.vcl \ >> -p obj_workspace=4096 \ >> # We have lots of objects >> -p sess_workspace=32768 \ >> # Need lots of sessoin space >> -p listen_depth=8192 \ >> -p ping_interval=1 \ >> -s file,/var/lib/varnish/mmap,120G \ >> # lots of mmap >> -p log_hashstring=off \ >> -h classic,250007 \ >> # 2.5 mmilion objects >> -p thread_pool_max=4000 \ >> -p lru_interval=60 \ >> -p esi_syntax=0x00000003 \ >> -p sess_timeout=10 \ >> -p thread_pools=4 \ >> -p thread_pool_min=500 \ >> # we force 4000 threads pre-created >> # otherwise we run into overflows >> -p shm_workspace=32768 \ >> # avoid shm_mtx >> -p srcaddr_ttl=0" >> # avoid hash lookup >> >> # we link geoip into the vcl >> CC_COMMAND='cc_command=exec cc -fpic -shared -Wl,-x -L/usr/local/lib/ >> -lGeoIP -o %o %s' >> >> #### VCL >> >> # declare the function signature >> # so we can use them >> C{ >> #include >> double TIM_real(void); >> void TIM_format(double t, char *p); >> }C >> >> >> >> # init GeoIP code >> C{ >> #include >> #include >> #include >> #include >> #include >> #include >> >> pthread_mutex_t geoip_mutex = PTHREAD_MUTEX_INITIALIZER; >> >> GeoIP* gi; >> void geo_init () { >> if(!gi) { >> gi = >> GeoIP_open_type(GEOIP_CITY_EDITION_REV1,GEOIP_MEMORY_CACHE); >> } >> } >> }C >> >> vcl_recv { >> >> set req.url = regsub(req.url, "http://[^/]*",""); >> #will normalize proxied requests, specificl curl -x foo:80 >> >> # get out error handler for geoiplookup >> 
if(req.http.host == "geoiplookup.wikia.com") { >> error 200 "Ok"; >> } >> >> # lvs check >> if (req.url == "/lvscheck.html") { >> error 200 "Ok"; >> } >> >> # normalize Accept-Encoding to reduce vary >> if (req.http.Accept-Encoding) { >> if (req.http.Accept-Encoding ~ "gzip") { >> set req.http.Accept-Encoding = "gzip"; >> } elsif (req.http.Accept-Encoding ~ "deflate") { >> set req.http.Accept-Encoding = "deflate"; >> } else { >> unset req.http.Accept-Encoding; >> } >> } >> >> >> # Yahoo uses this to check for 404 >> if (req.url ~ "^/SlurpConfirm404") { >> error 404 "Not found"; >> } >> >> set req.grace = 360000s; #if the backend is down, just serve >> >> >> # check for specific cookies, otherwise nuke them >> # save them so we can re-inject them later in pipe or miss >> set req.http.X-Orig-Cookie = req.http.Cookie; >> if(req.http.Cookie ~ "(session|UserID|UserName|Token|LoggedOut)") { >> # dont do anything, the user is logged in >> } else { >> # dont care about any other cookies >> unset req.http.Cookie; >> } >> >> >> } >> >> # varnish XFF is broken, it doesn't chain them >> # if you have chained varnishes, or trust AOL, you need to append >> them >> sub vcl_pipe { >> # do the right XFF processing >> set bereq.http.X-Forwarded-For = req.http.X-Forwarded-For; >> set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded- >> For, "$", ", >> "); >> set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded- >> For, "$", >> client.ip); >> set bereq.http.Cookie = req.http.X-Orig-Cookie; >> } >> >> >> # this implements purging (we purge all 3 versions of the accept- >> encoding, >> none,gzip,deflate) >> sub vcl_hit { >> if (req.request == "PURGE") { >> set obj.ttl = 0s; >> error 200 "Purged."; >> } >> } >> >> sub vcl_miss { >> >> if (req.request == "PURGE") { >> error 404 "Not purged"; >> } >> >> set bereq.http.X-Forwarded-For = req.http.X-Forwarded-For; >> set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded- >> For, "$", ", >> "); >> set 
bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded- >> For, "$", >> client.ip); >> } >> >> >> # this marks if something is cacheable or not, if it isn't >> # say why >> vcl_fetch { >> # so we have access to this in deliver >> set obj.http.X-Orighost = req.http.host; >> set obj.http.X-Origurl = req.url; >> if (!obj.cacheable) { >> set obj.http.X-Cacheable = "NO:Not-Cacheable"; >> pass; >> } >> if (obj.http.Cache-Control ~ "private") { >> if(req.http.Cookie ~"(UserID|_session)") { >> set obj.http.X-Cacheable = "NO:Got Session"; >> } else { >> set obj.http.X-Cacheable = >> "NO:Cache-Control=private"; >> } >> pass; >> } >> if (obj.http.Set-Cookie ~ "(UserID|_session)") { >> set obj.http.X-Cacheable = "NO:Set-Cookie"; >> pass; >> } >> >> set obj.http.X-Cacheable = "YES"; >> set obj.grace = 360000s; >> >> >> } >> >> >> #Following sets X-Served-By, if it is already set it appends it >> # it also says if it is a HIT, and how many hits >> >> sub vcl_deliver { >> >> #add or append Served By >> if(!resp.http.X-Served-By) { >> set resp.http.X-Served-By = "varnish8"; >> if (obj.hits > 0) { >> set resp.http.X-Cache = "HIT"; >> } else { >> set resp.http.X-Cache = "MISS"; >> } >> set resp.http.X-Cache-Hits = obj.hits; >> } else { >> # append current data >> set resp.http.X-Served-By = regsub(resp.http.X-Served-By, "$", ", >> varnish8"); >> if (obj.hits > 0) { >> set resp.http.X-Cache = regsub(resp.http.X-Cache, "$", ", HIT"); >> } else { >> set resp.http.X-Cache = regsub(resp.http.X-Cache, "$" , ", >> MISS"); >> } >> set resp.http.X-Cache-Hits = regsub(resp.http.X-Cache-Hits, "$", >> ", "); >> set resp.http.X-Cache-Hits = regsub(resp.http.X-Cache-Hits, "$", >> obj.hits); >> } >> >> # >> >> # if the client is another DC, just remove stuff and deliver >> if ( client.ip ~ LON >> || client.ip ~ SJC >> || client.ip ~ IOWA >> ) { >> unset resp.http.X-CPU-Time; >> unset resp.http.X-Real-Time; >> unset resp.http.X-Served-By-Backend; >> unset resp.http.X-User-Id; >> unset 
resp.http.X-Namespace-Number; >> unset resp.http.X-Orighost; >> unset resp.http.X-Origurl; >> deliver; >> } >> # else do cache-control >> # nuke the headers since they were generally meant for varnish >> # these rules are mostly based on mediawiki rules >> if ( resp.http.X-Pass-Cache-Control ) { >> set resp.http.Cache-Control = resp.http.X-Pass-Cache-Control; >> } elsif ( resp.status == 304 ) { >> # no headers on if-modified since >> } elsif ( resp.http.X-Origurl ~ ".*/index\.php.*(css|js)" >> || resp.http.X-Origurl ~ "raw") { >> # dont touch it let mediawiki decide >> } elsif (resp.http.X-Orighost ~ "images.wikia.com") { >> # lighttpd knows what it is doing >> } elsif (resp.http.X-Orighost ~ "geoiplookup") { >> } else { >> #follow squid content here >> set resp.http.Cache-Control = "private, s-maxage=0, max-age=0, >> must-revalidate"; >> } >> >> # this will calculate an Expire headers which is based on now+max-age >> # if you cache the Expire header, then it won't match max-age since >> it is >> static >> if (!resp.status == 304) { >> C{ >> char *cache = VRT_GetHdr(sp, HDR_REQ, "\016cache-control:"); >> char date[40]; >> int max_age; >> int want_equals = 0; >> if(cache) { >> while(*cache != '\0') { >> if (want_equals && *cache == '=') { >> cache++; >> max_age = strtoul(cache, 0, 0); >> break; >> } >> >> if (*cache == 'm' && !memcmp(cache, "max-age", 7)) { >> cache += 7; >> want_equals = 1; >> continue; >> } >> cache++; >> } >> if (max_age) { >> TIM_format(TIM_real() + max_age, date); >> VRT_SetHdr(sp, HDR_RESP, "\010Expires:", date, >> vrt_magic_string_end); >> } >> } >> }C >> #; >> } >> >> } >> >> >> vcl_error { >> # this implements geoip lookups inside varnish >> # so clients can get the data without hitting the backend >> if(req.http.host == "geoiplookup.wikia.com" || req.url == >> "/__varnish/geoip") { >> set obj.http.Content-Type = "text/plain"; >> set obj.http.cache-control = "private, s-maxage=0, max-age=360"; >> set obj.http.X-Orighost = req.http.host; >> 
C{ >> char *ip = VRT_IP_string(sp, VRT_r_client_ip(sp)); >> char date[40]; >> char json[255]; >> >> pthread_mutex_lock(&geoip_mutex); >> >> if(!gi) { geo_init(); } >> >> GeoIPRecord *record = GeoIP_record_by_addr(gi, ip); >> if(record) { >> snprintf(json, 255, "Geo = >> {\"city\":\"%s\",\"country\":\"%s\",\"lat\":\"%f\",\"lon\":\"%f\", >> \"classC\":\"%s\",\"netmask\":\"%d\"}", >> record->city, >> record->country_code, >> record->latitude, >> record->longitude, >> ip, >> GeoIP_last_netmask(gi) >> ); >> pthread_mutex_unlock(&geoip_mutex); >> VRT_synth_page(sp, 0, json, vrt_magic_string_end); >> } else { >> pthread_mutex_unlock(&geoip_mutex); >> VRT_synth_page(sp, 0, "Geo = {}", vrt_magic_string_end); >> } >> >> >> TIM_format(TIM_real(), date); >> VRT_SetHdr(sp, HDR_OBJ, "\016Last-Modified:", date, >> vrt_magic_string_end); >> }C >> # check if site is working >> if(req.url ~ "lvscheck.html") { >> synthetic {"varnish is okay"}; >> deliver; >> } >> >> deliver; >> >> } >> >> >> ############# >> >> sysctl >> >> net.ipv4.ip_local_port_range = 1024 65536 >> net.core.rmem_max=16777216 >> net.core.wmem_max=16777216 >> net.ipv4.tcp_rmem=4096 87380 16777216 >> net.ipv4.tcp_wmem=4096 65536 16777216 >> net.ipv4.tcp_fin_timeout = 3 >> net.ipv4.tcp_tw_recycle = 1 >> net.core.netdev_max_backlog = 30000 >> net.ipv4.tcp_no_metrics_save=1 >> net.core.somaxconn = 262144 >> net.ipv4.tcp_syncookies = 0 >> net.ipv4.tcp_max_orphans = 262144 >> net.ipv4.tcp_max_syn_backlog = 262144 >> net.ipv4.tcp_synack_retries = 2 >> net.ipv4.tcp_syn_retries = 2 >> >> These are mostly cargo culted from previous emails here. 
>> >> Cheers >> Artur >> > > > > -- > VP of Product Development > Instructables.com > > http://www.instructables.com/member/lebowski --- John Adams Twitter Operations jna at twitter.com http://twitter.com/netik From sky at crucially.net Mon Mar 2 22:49:29 2009 From: sky at crucially.net (Artur Bergman) Date: Mon, 2 Mar 2009 14:49:29 -0800 Subject: Is anyone using ESI with a lot of traffic? In-Reply-To: References: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com> <4a05e1020902271333xe4d56c3g81b8e7c1b3a59946@mail.gmail.com> <5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com> <9DB31CBD-EE39-4621-A551-E24B5688DAFC@crucially.net> <4a05e1020903021348k51887c7cj2ac7fd6c9ab4de7a@mail.gmail.com> Message-ID: <04AAB3C1-AD0E-4438-AFA3-615BB729E420@crucially.net> We considered that, but it is important our backend traffic is gziped. So we would end up with apache->varnish->apache->apache->varnish- >apache which is suboptimal at best! On Mar 2, 2009, at 2:39 PM, John Adams wrote: > We fix this by front-ending varnish with apache. Not the best > solution but we still get to compress. > > -j > > On Mar 2, 2009, at 1:48 PM, Cloude Porteus wrote: > >> Artur, >> What is the issue with ESI & gzip? >> >> Does this mean that if we want to use ESI, we can't gzip the pages >> that have ESI includes? But we could still gzip the pages that are >> included by ESI. >> >> thanks, >> cloude >> >> On Mon, Mar 2, 2009 at 1:40 PM, Artur Bergman >> wrote: >>> >>> On Feb 27, 2009, at 2:24 PM, John Adams wrote: >>> >>>> cc'ing the varnish dev list for comments... >>>> >>>> On Feb 27, 2009, at 1:33 PM, Cloude Porteus wrote: >>>> >>>>> John, >>>>> Goodto hear from you. You must be slammed at Twitter. I'm happy to >>>>> hear that ESI is holding up for you. It's been in my backlog >>>>> since you >>>>> mentioned it to me pre-Twitter. >>>>> >>>>> Any performance info would be great. >>>>> >>>> >>>> Any comments on our setup are welcome. You may also choose to >>>> call us >>>> crazypants. 
Many, many thanks to Artur Bergman of Wikia for >>>> helping us get >>>> this configuration straightened out. >>>> >>> >>> Thanks John :) >>> >>> I'll describe the settings we use. (We don't use ESI because of >>> gzip) >>> >>> The first important step is that we put the shmlog on tmpfs >>> >>> tmpfs /usr/var/varnish/ tmpfs >>> noatime,defaults,size=150M 0 0 >>> /dev/md0 /var/lib/varnish ext2 >>> noatime,nodiratime,norelatime 0 >>> 0 >>> >>> Notice also ext2 we don't care about journaling. (Ignore the >>> broken paths) >>> >>> This is because linux will asynchronously write the log to disk, >>> this puts a >>> large io pressure on the system (interfering with your normal >>> reads if you >>> use the same disks) It also scales the IO load with traffic and >>> not working >>> set. >>> >>> # Maximum number of open files (for ulimit -n) >>> NFILES=131072 >>> >>> # Locked shared memory (for ulimit -l) >>> # Default log size is 82MB + header >>> MEMLOCK=90000 >>> >>> DAEMON_COREFILE_LIMIT="unlimited" >>> >>> >>> DAEMON_OPTS="-a :80 \ >>> -T localhost:6082 \ >>> -f /etc/varnish/wikia.vcl \ >>> -p obj_workspace=4096 \ >>> # We have lots of objects >>> -p sess_workspace=32768 \ >>> # Need lots of sessoin space >>> -p listen_depth=8192 \ >>> -p ping_interval=1 \ >>> -s file,/var/lib/varnish/mmap,120G \ >>> # lots of mmap >>> -p log_hashstring=off \ >>> -h classic,250007 \ >>> # 2.5 mmilion objects >>> -p thread_pool_max=4000 \ >>> -p lru_interval=60 \ >>> -p esi_syntax=0x00000003 \ >>> -p sess_timeout=10 \ >>> -p thread_pools=4 \ >>> -p thread_pool_min=500 \ >>> # we force 4000 threads pre-created >>> # otherwise we run into overflows >>> -p shm_workspace=32768 \ >>> # avoid shm_mtx >>> -p srcaddr_ttl=0" >>> # avoid hash lookup >>> >>> # we link geoip into the vcl >>> CC_COMMAND='cc_command=exec cc -fpic -shared -Wl,-x -L/usr/local/ >>> lib/ >>> -lGeoIP -o %o %s' >>> >>> #### VCL >>> >>> # declare the function signature >>> # so we can use them >>> C{ >>> #include >>> double 
TIM_real(void); >>> void TIM_format(double t, char *p); >>> }C >>> >>> >>> >>> # init GeoIP code >>> C{ >>> #include >>> #include >>> #include >>> #include >>> #include >>> #include >>> >>> pthread_mutex_t geoip_mutex = PTHREAD_MUTEX_INITIALIZER; >>> >>> GeoIP* gi; >>> void geo_init () { >>> if(!gi) { >>> gi = >>> GeoIP_open_type(GEOIP_CITY_EDITION_REV1,GEOIP_MEMORY_CACHE); >>> } >>> } >>> }C >>> >>> vcl_recv { >>> >>> set req.url = regsub(req.url, "http://[^/]*",""); >>> #will normalize proxied requests, specificl curl -x foo:80 >>> >>> # get out error handler for geoiplookup >>> if(req.http.host == "geoiplookup.wikia.com") { >>> error 200 "Ok"; >>> } >>> >>> # lvs check >>> if (req.url == "/lvscheck.html") { >>> error 200 "Ok"; >>> } >>> >>> # normalize Accept-Encoding to reduce vary >>> if (req.http.Accept-Encoding) { >>> if (req.http.Accept-Encoding ~ "gzip") { >>> set req.http.Accept-Encoding = "gzip"; >>> } elsif (req.http.Accept-Encoding ~ "deflate") { >>> set req.http.Accept-Encoding = "deflate"; >>> } else { >>> unset req.http.Accept-Encoding; >>> } >>> } >>> >>> >>> # Yahoo uses this to check for 404 >>> if (req.url ~ "^/SlurpConfirm404") { >>> error 404 "Not found"; >>> } >>> >>> set req.grace = 360000s; #if the backend is down, just serve >>> >>> >>> # check for specific cookies, otherwise nuke them >>> # save them so we can re-inject them later in pipe or miss >>> set req.http.X-Orig-Cookie = req.http.Cookie; >>> if(req.http.Cookie ~ "(session|UserID|UserName|Token|LoggedOut)") { >>> # dont do anything, the user is logged in >>> } else { >>> # dont care about any other cookies >>> unset req.http.Cookie; >>> } >>> >>> >>> } >>> >>> # varnish XFF is broken, it doesn't chain them >>> # if you have chained varnishes, or trust AOL, you need to append >>> them >>> sub vcl_pipe { >>> # do the right XFF processing >>> set bereq.http.X-Forwarded-For = req.http.X-Forwarded-For; >>> set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded- >>> For, "$", ", 
>>> "); >>> set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded- >>> For, "$", >>> client.ip); >>> set bereq.http.Cookie = req.http.X-Orig-Cookie; >>> } >>> >>> >>> # this implements purging (we purge all 3 versions of the accept- >>> encoding, >>> none,gzip,deflate) >>> sub vcl_hit { >>> if (req.request == "PURGE") { >>> set obj.ttl = 0s; >>> error 200 "Purged."; >>> } >>> } >>> >>> sub vcl_miss { >>> >>> if (req.request == "PURGE") { >>> error 404 "Not purged"; >>> } >>> >>> set bereq.http.X-Forwarded-For = req.http.X-Forwarded-For; >>> set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded- >>> For, "$", ", >>> "); >>> set bereq.http.X-Forwarded-For = regsub(bereq.http.X-Forwarded- >>> For, "$", >>> client.ip); >>> } >>> >>> >>> # this marks if something is cacheable or not, if it isn't >>> # say why >>> vcl_fetch { >>> # so we have access to this in deliver >>> set obj.http.X-Orighost = req.http.host; >>> set obj.http.X-Origurl = req.url; >>> if (!obj.cacheable) { >>> set obj.http.X-Cacheable = "NO:Not-Cacheable"; >>> pass; >>> } >>> if (obj.http.Cache-Control ~ "private") { >>> if(req.http.Cookie ~"(UserID|_session)") { >>> set obj.http.X-Cacheable = "NO:Got Session"; >>> } else { >>> set obj.http.X-Cacheable = >>> "NO:Cache-Control=private"; >>> } >>> pass; >>> } >>> if (obj.http.Set-Cookie ~ "(UserID|_session)") { >>> set obj.http.X-Cacheable = "NO:Set-Cookie"; >>> pass; >>> } >>> >>> set obj.http.X-Cacheable = "YES"; >>> set obj.grace = 360000s; >>> >>> >>> } >>> >>> >>> #Following sets X-Served-By, if it is already set it appends it >>> # it also says if it is a HIT, and how many hits >>> >>> sub vcl_deliver { >>> >>> #add or append Served By >>> if(!resp.http.X-Served-By) { >>> set resp.http.X-Served-By = "varnish8"; >>> if (obj.hits > 0) { >>> set resp.http.X-Cache = "HIT"; >>> } else { >>> set resp.http.X-Cache = "MISS"; >>> } >>> set resp.http.X-Cache-Hits = obj.hits; >>> } else { >>> # append current data >>> set 
resp.http.X-Served-By = regsub(resp.http.X-Served-By, "$", ", >>> varnish8"); >>> if (obj.hits > 0) { >>> set resp.http.X-Cache = regsub(resp.http.X-Cache, "$", ", HIT"); >>> } else { >>> set resp.http.X-Cache = regsub(resp.http.X-Cache, "$" , ", >>> MISS"); >>> } >>> set resp.http.X-Cache-Hits = regsub(resp.http.X-Cache-Hits, "$", >>> ", "); >>> set resp.http.X-Cache-Hits = regsub(resp.http.X-Cache-Hits, "$", >>> obj.hits); >>> } >>> >>> # >>> >>> # if the client is another DC, just remove stuff and deliver >>> if ( client.ip ~ LON >>> || client.ip ~ SJC >>> || client.ip ~ IOWA >>> ) { >>> unset resp.http.X-CPU-Time; >>> unset resp.http.X-Real-Time; >>> unset resp.http.X-Served-By-Backend; >>> unset resp.http.X-User-Id; >>> unset resp.http.X-Namespace-Number; >>> unset resp.http.X-Orighost; >>> unset resp.http.X-Origurl; >>> deliver; >>> } >>> # else do cache-control >>> # nuke the headers since they were generally meant for varnish >>> # these rules are mostly based on mediawiki rules >>> if ( resp.http.X-Pass-Cache-Control ) { >>> set resp.http.Cache-Control = resp.http.X-Pass-Cache-Control; >>> } elsif ( resp.status == 304 ) { >>> # no headers on if-modified since >>> } elsif ( resp.http.X-Origurl ~ ".*/index\.php.*(css|js)" >>> || resp.http.X-Origurl ~ "raw") { >>> # dont touch it let mediawiki decide >>> } elsif (resp.http.X-Orighost ~ "images.wikia.com") { >>> # lighttpd knows what it is doing >>> } elsif (resp.http.X-Orighost ~ "geoiplookup") { >>> } else { >>> #follow squid content here >>> set resp.http.Cache-Control = "private, s-maxage=0, max-age=0, >>> must-revalidate"; >>> } >>> >>> # this will calculate an Expire headers which is based on now+max- >>> age >>> # if you cache the Expire header, then it won't match max-age >>> since it is >>> static >>> if (!resp.status == 304) { >>> C{ >>> char *cache = VRT_GetHdr(sp, HDR_REQ, "\016cache-control:"); >>> char date[40]; >>> int max_age; >>> int want_equals = 0; >>> if(cache) { >>> while(*cache != '\0') { 
>>> if (want_equals && *cache == '=') { >>> cache++; >>> max_age = strtoul(cache, 0, 0); >>> break; >>> } >>> >>> if (*cache == 'm' && !memcmp(cache, "max-age", 7)) { >>> cache += 7; >>> want_equals = 1; >>> continue; >>> } >>> cache++; >>> } >>> if (max_age) { >>> TIM_format(TIM_real() + max_age, date); >>> VRT_SetHdr(sp, HDR_RESP, "\010Expires:", date, >>> vrt_magic_string_end); >>> } >>> } >>> }C >>> #; >>> } >>> >>> } >>> >>> >>> vcl_error { >>> # this implements geoip lookups inside varnish >>> # so clients can get the data without hitting the backend >>> if(req.http.host == "geoiplookup.wikia.com" || req.url == >>> "/__varnish/geoip") { >>> set obj.http.Content-Type = "text/plain"; >>> set obj.http.cache-control = "private, s-maxage=0, max-age=360"; >>> set obj.http.X-Orighost = req.http.host; >>> C{ >>> char *ip = VRT_IP_string(sp, VRT_r_client_ip(sp)); >>> char date[40]; >>> char json[255]; >>> >>> pthread_mutex_lock(&geoip_mutex); >>> >>> if(!gi) { geo_init(); } >>> >>> GeoIPRecord *record = GeoIP_record_by_addr(gi, ip); >>> if(record) { >>> snprintf(json, 255, "Geo = >>> {\"city\":\"%s\",\"country\":\"%s\",\"lat\":\"%f\",\"lon\":\"%f\", >>> \"classC\":\"%s\",\"netmask\":\"%d\"}", >>> record->city, >>> record->country_code, >>> record->latitude, >>> record->longitude, >>> ip, >>> GeoIP_last_netmask(gi) >>> ); >>> pthread_mutex_unlock(&geoip_mutex); >>> VRT_synth_page(sp, 0, json, vrt_magic_string_end); >>> } else { >>> pthread_mutex_unlock(&geoip_mutex); >>> VRT_synth_page(sp, 0, "Geo = {}", vrt_magic_string_end); >>> } >>> >>> >>> TIM_format(TIM_real(), date); >>> VRT_SetHdr(sp, HDR_OBJ, "\016Last-Modified:", date, >>> vrt_magic_string_end); >>> }C >>> # check if site is working >>> if(req.url ~ "lvscheck.html") { >>> synthetic {"varnish is okay"}; >>> deliver; >>> } >>> >>> deliver; >>> >>> } >>> >>> >>> ############# >>> >>> sysctl >>> >>> net.ipv4.ip_local_port_range = 1024 65536 >>> net.core.rmem_max=16777216 >>> net.core.wmem_max=16777216 >>> 
net.ipv4.tcp_rmem=4096 87380 16777216 >>> net.ipv4.tcp_wmem=4096 65536 16777216 >>> net.ipv4.tcp_fin_timeout = 3 >>> net.ipv4.tcp_tw_recycle = 1 >>> net.core.netdev_max_backlog = 30000 >>> net.ipv4.tcp_no_metrics_save=1 >>> net.core.somaxconn = 262144 >>> net.ipv4.tcp_syncookies = 0 >>> net.ipv4.tcp_max_orphans = 262144 >>> net.ipv4.tcp_max_syn_backlog = 262144 >>> net.ipv4.tcp_synack_retries = 2 >>> net.ipv4.tcp_syn_retries = 2 >>> >>> These are mostly cargo culted from previous emails here. >>> >>> Cheers >>> Artur >>> >> >> >> >> -- >> VP of Product Development >> Instructables.com >> >> http://www.instructables.com/member/lebowski > > --- > John Adams > Twitter Operations > jna at twitter.com > http://twitter.com/netik > > > > From cloude at instructables.com Tue Mar 3 22:41:19 2009 From: cloude at instructables.com (Cloude Porteus) Date: Tue, 3 Mar 2009 14:41:19 -0800 Subject: ESI & Cookies questions Message-ID: <4a05e1020903031441t40234286tc550db748295e101@mail.gmail.com> When an ESI request is made/fetched, will the page get passed all the request headers and cookies? For example, would I be able to access our domain cookies in "/cgi-bin/date.cgi" for the example below. sub vcl_fetch { if (req.url == "/test.html") { esi; /* Do ESI processing */ set obj.ttl = 24 h; } elseif (req.url == "/cgi-bin/date.cgi") { set obj.ttl = 1m; } } Do I have access to the cookies when evaluating my ESI processing, so we could give users a cached version or not based on their cookie? Something like this: sub vcl_fetch { if (req.url == "/test.html") { esi; /* Do ESI processing */ set obj.ttl = 24 h; } elseif (req.url == "/cgi-bin/date.cgi") { if(req.http.Cookie ~"(APPSERVER)") { set obj.ttl = 30m; } else { pass; } } } It also looks like GZIP support is on the list of post 2.0 features. This is critical as we only have a load balancer in front of Varnish and would rather not put the load of gzip'ing all of our pages that require ESI. Thanks for any information on the above. 
best, cloude From des at des.no Wed Mar 4 11:58:43 2009 From: des at des.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Wed, 04 Mar 2009 12:58:43 +0100 Subject: Any known issues with Solaris event ports? In-Reply-To: <2202B2DC-E228-44CF-A7CF-E853AB7B7ECC@omniti.com> (Theo Schlossnagle's message of "Thu, 26 Feb 2009 22:08:17 -0500") References: <49A71CB3.5060905@schokola.de> <2202B2DC-E228-44CF-A7CF-E853AB7B7ECC@omniti.com> Message-ID: <86prgxa498.fsf@ds4.des.no> Theo Schlossnagle writes: > Also, as a note, the configure.in with 2.0.3 managed to disable > sendfile (which works on Solaris). 1) Are you absolutely sure that Solaris's sendfile() works, i.e. that it does not return until all the data has been transmitted? 2) I disabled the sendfile() test for Solaris in r3338 (between 2.0.1 and 2.0.2), but even before that, it didn't work, because the #ifdefs in *your* code were incorrect. 3) Experience has shown that even when it works, sendfile() does not provide any significant performance improvement unless your working set consists mostly of large (multi-megabyte) objects. DES -- Dag-Erling Smørgrav - des at des.no From slink at schokola.de Wed Mar 4 18:50:41 2009 From: slink at schokola.de (Nils Goroll) Date: Wed, 04 Mar 2009 19:50:41 +0100 Subject: sendfilev vs writev In-Reply-To: <86prgxa498.fsf@ds4.des.no> References: <49A71CB3.5060905@schokola.de> <2202B2DC-E228-44CF-A7CF-E853AB7B7ECC@omniti.com> <86prgxa498.fsf@ds4.des.no> Message-ID: <49AECD81.3090900@schokola.de> Hi Dag, > 3) Experience has shown that even when it works, sendfile() does not > provide any significant performance improvement unless your working > set consists mostly of large (multi-megabyte) objects. Which wouldn't surprise me as (at least on Solaris) sendfilev and writev both do scatter-gather writes when used on memory buffers. My understanding is that sendfilev would only be advantageous if Varnish actually read from file descriptors rather than from cache buffers.
IMHO, (at least on Solaris) only the cache_pipe code would benefit from sendfile[v], as at first sight that piece of code seems to loop over read/write. Nils From slink at schokola.de Wed Mar 4 18:53:06 2009 From: slink at schokola.de (Nils Goroll) Date: Wed, 04 Mar 2009 19:53:06 +0100 Subject: sendfilev vs writev In-Reply-To: <49AECD81.3090900@schokola.de> References: <49A71CB3.5060905@schokola.de> <2202B2DC-E228-44CF-A7CF-E853AB7B7ECC@omniti.com> <86prgxa498.fsf@ds4.des.no> <49AECD81.3090900@schokola.de> Message-ID: <49AECE12.20102@schokola.de> Nils Goroll wrote: > if Varnish actually read from file > descriptors rather than from cache buffers. ... or a mix of both. Sorry for over-simplifying.. ;-) Nils From des at des.no Wed Mar 4 21:37:35 2009 From: des at des.no (=?utf-8?Q?Dag-Erling_Sm=C3=B8rgrav?=) Date: Wed, 04 Mar 2009 22:37:35 +0100 Subject: sendfilev vs writev In-Reply-To: <49AECD81.3090900@schokola.de> (Nils Goroll's message of "Wed, 04 Mar 2009 19:50:41 +0100") References: <49A71CB3.5060905@schokola.de> <2202B2DC-E228-44CF-A7CF-E853AB7B7ECC@omniti.com> <86prgxa498.fsf@ds4.des.no> <49AECD81.3090900@schokola.de> Message-ID: <86ab819dgg.fsf@ds4.des.no> Nils Goroll writes: > Which wouldn't surprise me as (at least on Solaris) sendfilev and > writev both do scatter-gather writes when used on memory buffers. My > understanding is that sendfilev would only be advantageous if Varnish > actually read from file descriptors rather than from cache buffers. It does. That's the whole point. But empirical testing shows that sendfile() does not improve performance, most likely because of the overhead of setting up the VM structures for the transfer. DES -- Dag-Erling Smørgrav - des at des.no From phk at phk.freebsd.dk Wed Mar 4 22:26:22 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 04 Mar 2009 22:26:22 +0000 Subject: sendfilev vs writev In-Reply-To: Your message of "Wed, 04 Mar 2009 22:37:35 +0100." 
<86ab819dgg.fsf@ds4.des.no> Message-ID: <52028.1236205582@critter.freebsd.dk> The subject of sendfile is compounded by the lack of usable implementations, but the experience data I have gathered so far is that you need to have significant paging activity to backing store, and you still should not enable it for objects smaller than 8-16 VM pages. sendfile really shines with huge files on vastly overcommitted VM, such as ftp.cdrom.com where it was developed. For lots of small files, not so much. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From 191919 at gmail.com Fri Mar 6 07:42:11 2009 From: 191919 at gmail.com (191919) Date: Fri, 6 Mar 2009 15:42:11 +0800 Subject: vct.c trunk compilation failure Message-ID: I encountered compilation errors when making trunk version: Making all in varnishtest gcc -std=gnu99 -DHAVE_CONFIG_H -I. -I../.. -I../../include -g -O2 -MT vtc.o -MD -MP -MF .deps/vtc.Tpo -c -o vtc.o vtc.c vtc.c: In function 'cmd_shell': vtc.c:250: error: invalid lvalue in unary '&' make[3]: *** [vtc.o] Error 1 make[2]: *** [all-recursive] Error 1 make[1]: *** [all-recursive] Error 1 make: *** [all] Error 2 ~/box/varnish/varnish-cache# gcc -v Using built-in specs. Target: i686-apple-darwin9 Configured with: /var/tmp/gcc/gcc-5490~1/src/configure --disable-checking -enable-werror --prefix=/usr --mandir=/share/man --enable-languages=c,objc,c++,obj-c++ --program-transform-name=/^[cg][^.-]*$/s/$/-4.0/ --with-gxx-include-dir=/include/c++/4.0.0 --with-slibdir=/usr/lib --build=i686-apple-darwin9 --with-arch=apple --with-tune=generic --host=i686-apple-darwin9 --target=i686-apple-darwin9 Thread model: posix gcc version 4.0.1 (Apple Inc. build 5490) It's been there for several days, perhaps it's a compiler-related problem? Regards, 191919 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From phk at phk.freebsd.dk Fri Mar 6 09:35:26 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 06 Mar 2009 09:35:26 +0000 Subject: vct.c trunk compilation failure In-Reply-To: Your message of "Fri, 06 Mar 2009 15:42:11 +0800." Message-ID: <73119.1236332126@critter.freebsd.dk> In message , 19191 9 writes: >Making all in varnishtest >gcc -std=gnu99 -DHAVE_CONFIG_H -I. -I../.. -I../../include -g -O2 -MT >vtc.o -MD -MP -MF .deps/vtc.Tpo -c -o vtc.o vtc.c >vtc.c: In function 'cmd_shell': >vtc.c:250: error: invalid lvalue in unary '&' critter phk> sed -n 250p vtc.c assert(WEXITSTATUS(system(av[1])) == 0); I have no idea what the trouble is. As a first sanity-check, remove the vtc.c file and run "svn update" to make sure your local copy is not corrupt. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From 191919 at gmail.com Fri Mar 6 14:34:18 2009 From: 191919 at gmail.com (191919) Date: Fri, 6 Mar 2009 22:34:18 +0800 Subject: vct.c trunk compilation failure In-Reply-To: <73119.1236332126@critter.freebsd.dk> References: <73119.1236332126@critter.freebsd.dk> Message-ID: I found that perhaps it is a MacOS-specific problem (I am not sure). In Leopard: #if __DARWIN_UNIX03 #define WEXITSTATUS(x) ((_W_INT(x) >> 8) & 0x000000ff) #else /* !__DARWIN_UNIX03 */ #define WEXITSTATUS(x) (_W_INT(x) >> 8) #endif /* !__DARWIN_UNIX03 */ and _W_INT is defined as: #if defined(_POSIX_C_SOURCE) && !defined(_DARWIN_C_SOURCE) #define _W_INT(i) (i) #else #define _W_INT(w) (*(int *)&(w)) /* convert union wait to int */ ... 
#endif (In Linux: #define WEXITSTATUS(status) __WEXITSTATUS(__WAIT_INT(status)) where __WAIT_INT is defined as: # if defined __GNUC__ && !defined __cplusplus # define __WAIT_INT(status) \ (__extension__ ({ union { __typeof(status) __in; int __i; } __u; \ __u.__in = (status); __u.__i; })) # else # define __WAIT_INT(status) (*(int *) &(status)) # endif ) So the 250th line of vtc.c assert(WEXITSTATUS(system(av[1])) == 0); will be expanded as assert((*(int *)&(system(av[1]))) == 0); and gcc reported that error. Rewriting cmd_shell as follows solved the problem. 238 static void 239 cmd_shell(CMD_ARGS) 240 { 241 int sr; 242 (void)priv; 243 (void)cmd; 244 245 if (av == NULL) 246 return; 247 AN(av[1]); 248 AZ(av[2]); 249 vtc_dump(vl, 4, "shell", av[1]); 250 251 sr = system(av[1]); 252 assert(WEXITSTATUS(sr) == 0); 253 } 2009/3/6 Poul-Henning Kamp > In message , > 19191 > 9 writes: > > >Making all in varnishtest > >gcc -std=gnu99 -DHAVE_CONFIG_H -I. -I../.. -I../../include -g -O2 -MT > >vtc.o -MD -MP -MF .deps/vtc.Tpo -c -o vtc.o vtc.c > >vtc.c: In function 'cmd_shell': > >vtc.c:250: error: invalid lvalue in unary '&' > > critter phk> sed -n 250p vtc.c > assert(WEXITSTATUS(system(av[1])) == 0); > > I have no idea what the trouble is. > > As a first sanity-check, remove the vtc.c file and run "svn update" > to make sure your local copy is not corrupt. > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From erich.ocean at me.com Fri Mar 6 03:22:56 2009 From: erich.ocean at me.com (erich.ocean at me.com) Date: Thu, 05 Mar 2009 19:22:56 -0800 Subject: failed test with varnish 2.0.3, Mac OS X 10.5.6 Message-ID: <992F5814-9BBC-425B-BBEB-0019B0C31D49@me.com> troy:varnish-2.0.3 onitunes$ sudo make check Password: Making check in include make[1]: Nothing to be done for `check'. Making check in lib Making check in libvarnish make check-TESTS ================== All 0 tests passed ================== Making check in libvarnishapi make[2]: Nothing to be done for `check'. Making check in libvarnishcompat make[2]: Nothing to be done for `check'. Making check in libvcl make[2]: Nothing to be done for `check'. make[2]: Nothing to be done for `check-am'. Making check in bin Making check in varnishadm make[2]: Nothing to be done for `check'. Making check in varnishd make[2]: Nothing to be done for `check'. Making check in varnishlog make[2]: Nothing to be done for `check'. Making check in varnishncsa make[2]: Nothing to be done for `check'. Making check in varnishreplay make[2]: Nothing to be done for `check'. 
Making check in varnishtest make check-TESTS # top TEST ././tests/a00000.vtc starting # TEST basic default HTTP transactions ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## c1 Starting client ## s1 Started on 127.0.0.1:9080 ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9080 ### c1 Connected to 127.0.0.1:9080 fd is 4 ### s1 Accepted socket fd is 5 ### c1 rxresp ### s1 rxreq ### s1 shutting fd 5 ## s1 Ending ### c1 Closing fd 4 ## c1 Ending ## s1 Waiting for server # top RESETTING after ././tests/a00000.vtc # top TEST ././tests/a00000.vtc completed PASS: ./tests/a00000.vtc # top TEST ././tests/a00001.vtc starting # TEST basic default HTTP transactions with expect ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## c1 Starting client ## c1 Waiting for client ## s1 Started on 127.0.0.1:9080 ## c1 Started ### c1 Connect to 127.0.0.1:9080 ### c1 Connected to 127.0.0.1:9080 fd is 4 ### s1 Accepted socket fd is 5 ### c1 rxresp ### s1 rxreq ### s1 shutting fd 5 ## s1 Ending ### c1 Closing fd 4 ## c1 Ending ## s1 Waiting for server # top RESETTING after ././tests/a00001.vtc # top TEST ././tests/a00001.vtc completed PASS: ./tests/a00001.vtc # top TEST ././tests/a00002.vtc starting # TEST basic default HTTP transactions with expect and options ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s1 Started on 127.0.0.1:9080 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9080 ### c1 Connected to 127.0.0.1:9080 fd is 4 ### s1 Accepted socket fd is 5 ### c1 rxresp ### s1 rxreq ### s1 shutting fd 5 ## s1 Ending ### c1 Closing fd 4 ## c1 Ending ## s1 Waiting for server # top RESETTING after ././tests/a00002.vtc # top TEST ././tests/a00002.vtc completed PASS: ./tests/a00002.vtc # top TEST ././tests/a00003.vtc starting # TEST dual independent HTTP transactions ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s1 Started on 127.0.0.1:9080 ## s2 Starting server ### s2 
listen on 127.0.0.1:9081 (fd 4) ## s2 Started on 127.0.0.1:9081 ## c1 Starting client ## c2 Starting client ## c1 Waiting for client ## c1 Started ## c2 Started ### c1 Connect to 127.0.0.1:9080 ### c2 Connect to 127.0.0.1:9081 ### s1 Accepted socket fd is 7 ### c1 Connected to 127.0.0.1:9080 fd is 5 ### s1 rxreq ### c2 Connected to 127.0.0.1:9081 fd is 6 ### s2 Accepted socket fd is 8 ### c1 rxresp ### c2 rxresp ### s2 rxreq ### s1 shutting fd 7 ## s1 Ending ### s2 shutting fd 8 ## s2 Ending ### c2 Closing fd 6 ### c1 Closing fd 5 ## c2 Ending ## c1 Ending ## c2 Waiting for client ## s1 Waiting for server ## s2 Waiting for server # top RESETTING after ././tests/a00003.vtc # top TEST ././tests/a00003.vtc completed PASS: ./tests/a00003.vtc # top TEST ././tests/a00004.vtc starting # TEST dual shared server HTTP transactions ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## c1 Starting client ## c2 Starting client ## c1 Waiting for client ## s1 Started on 127.0.0.1:9080 ### s1 Iteration 0 ## c1 Started ### c1 Connect to 127.0.0.1:9080 ## c2 Started ### c2 Connect to 127.0.0.1:9080 ### c1 Connected to 127.0.0.1:9080 fd is 4 ### s1 Accepted socket fd is 6 ### s1 rxreq ### c1 rxresp ### s1 shutting fd 6 ### s1 Iteration 1 ### c1 Closing fd 4 ## c1 Ending ## c2 Waiting for client ### c2 Connected to 127.0.0.1:9080 fd is 5 ### c2 rxresp ### s1 Accepted socket fd is 4 ### s1 rxreq ### s1 shutting fd 4 ## s1 Ending ### c2 Closing fd 5 ## c2 Ending ## s1 Waiting for server # top RESETTING after ././tests/a00004.vtc # top TEST ././tests/a00004.vtc completed PASS: ./tests/a00004.vtc # top TEST ././tests/a00005.vtc starting # TEST dual shared client HTTP transactions ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s2 Starting server ### s2 listen on 127.0.0.1:9081 (fd 4) ## c1 Starting client ## c1 Waiting for client ## s1 Started on 127.0.0.1:9080 ## s2 Started on 127.0.0.1:9081 ## c1 Started ### c1 Connect to 127.0.0.1:9080 ### s1 Accepted socket 
fd is 6 ### c1 Connected to 127.0.0.1:9080 fd is 5 ### s1 rxreq ### c1 rxresp ### s1 shutting fd 6 ## s1 Ending ### c1 Closing fd 5 ## c1 Ending ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 5 ### c1 rxresp ### s2 Accepted socket fd is 6 ### s2 rxreq ### s2 shutting fd 6 ## s2 Ending ### c1 Closing fd 5 ## c1 Ending ## s1 Waiting for server ## s2 Waiting for server # top RESETTING after ././tests/a00005.vtc # top TEST ././tests/a00005.vtc completed PASS: ./tests/a00005.vtc # top TEST ././tests/a00006.vtc starting # TEST bidirectional message bodies ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s1 Started on 127.0.0.1:9080 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9080 ### c1 Connected to 127.0.0.1:9080 fd is 4 ### s1 Accepted socket fd is 5 ### c1 rxresp ### s1 rxreq ### s1 shutting fd 5 ## s1 Ending ### c1 Closing fd 4 ## c1 Ending ## s1 Waiting for server # top RESETTING after ././tests/a00006.vtc # top TEST ././tests/a00006.vtc completed PASS: ./tests/a00006.vtc # top TEST ././tests/a00007.vtc starting # TEST TCP reuse ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## c1 Starting client ## c1 Waiting for client ## s1 Started on 127.0.0.1:9080 ## c1 Started ### c1 Connect to 127.0.0.1:9080 ### c1 Connected to 127.0.0.1:9080 fd is 4 ### c1 rxresp ### s1 Accepted socket fd is 5 ### s1 rxreq ### s1 rxreq ### c1 rxresp ### s1 shutting fd 5 ## s1 Ending ### c1 Closing fd 4 ## c1 Ending ## s1 Waiting for server # top RESETTING after ././tests/a00007.vtc # top TEST ././tests/a00007.vtc completed PASS: ./tests/a00007.vtc # top TEST ././tests/a00008.vtc starting # TEST Sema operations ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s1 Started on 127.0.0.1:9080 ## s2 Starting server ### s2 listen on 127.0.0.1:9081 (fd 4) ## s2 Started on 127.0.0.1:9081 ## s3 Starting server ### s3 listen on 
127.0.0.1:9082 (fd 5) ## s3 Started on 127.0.0.1:9082 ## c1 Starting client ## c2 Starting client ## c3 Starting client ## c1 Started ### c1 Connect to 127.0.0.1:9080 ## c2 Started ### c1 Connected to 127.0.0.1:9080 fd is 6 ### c2 Connect to 127.0.0.1:9081 ## c3 Started ### c1 delaying 0.2 second(s) ### s1 Accepted socket fd is 7 ### c3 Connect to 127.0.0.1:9082 ### c2 Connected to 127.0.0.1:9081 fd is 8 ### s1 rxreq ### s2 Accepted socket fd is 9 ### c2 delaying 0.6 second(s) ### s2 rxreq ### s3 Accepted socket fd is 11 ### c3 Connected to 127.0.0.1:9082 fd is 10 ### s3 rxreq ### c3 delaying 0.9 second(s) ### c1 rxresp ### c2 rxresp ### c3 rxresp ### s3 delaying 0.2 second(s) ### s1 delaying 0.9 second(s) ### s2 delaying 0.6 second(s) ### s3 shutting fd 11 ## s3 Ending ### s2 shutting fd 9 ## s2 Ending ### s1 shutting fd 7 ## s1 Ending ### c1 Closing fd 6 # top RESETTING after ././tests/a00008.vtc ### c3 Closing fd 10 ### c2 Closing fd 8 ## c1 Ending ## s1 Waiting for server ## c3 Ending ## c2 Ending ## s2 Waiting for server ## s3 Waiting for server ## c1 Waiting for client ## c2 Waiting for client ## c3 Waiting for client # top TEST ././tests/a00008.vtc completed PASS: ./tests/a00008.vtc # top TEST ././tests/a00009.vtc starting # TEST See that the VCL compiler works Cannot create working directory '/usr/local/var/varnish/troy.local': No such file or directory # top RESETTING after ././tests/a00009.vtc # top TEST ././tests/a00009.vtc completed PASS: ./tests/a00009.vtc # top TEST ././tests/b00000.vtc starting # TEST Does anything get through at all ? 
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## s1 Started on 127.0.0.1:9080
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 as expected: n_object (0) == 0
## v1 as expected: client_conn (0) == 0
## v1 as expected: client_req (0) == 0
## v1 as expected: cache_miss (0) == 0
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
## v1 as expected: n_object (1) == 1
## v1 as expected: client_conn (1) == 1
## v1 as expected: client_req (1) == 1
## v1 as expected: cache_miss (1) == 1
## v1 as expected: s_sess (1) == 1
## v1 as expected: s_req (1) == 1
# top RESETTING after ././tests/b00000.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34157 Status: 0200
# top TEST ././tests/b00000.vtc completed
PASS: ./tests/b00000.vtc
# top TEST ././tests/b00001.vtc starting
# TEST Check that a pipe transaction works
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
## s1 Started on 127.0.0.1:9080
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
## v1 as expected: n_object (0) == 0
## v1 as expected: client_conn (1) == 1
## v1 as expected: client_req (1) == 1
## v1 as expected: s_sess (1) == 1
## v1 as expected: s_req (1) == 1
## v1 as expected: s_pipe (1) == 1
# top RESETTING after ././tests/b00001.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34201 Status: 0200
# top TEST ././tests/b00001.vtc completed
PASS: ./tests/b00001.vtc
# top TEST ././tests/b00002.vtc starting
# TEST Check that a pass transaction works
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
## s1 Started on 127.0.0.1:9080
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
### top delaying 0.1 second(s)
## v1 as expected: n_object (0) == 0
## v1 as expected: client_conn (1) == 1
## v1 as expected: client_req (1) == 1
## v1 as expected: s_sess (1) == 1
## v1 as expected: s_req (1) == 1
## v1 as expected: s_pass (1) == 1
# top RESETTING after ././tests/b00002.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34245 Status: 0200
# top TEST ././tests/b00002.vtc completed
PASS: ./tests/b00002.vtc
# top TEST ././tests/b00003.vtc starting
# TEST Check that a cache fetch + hit transaction works
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
## s1 Started on 127.0.0.1:9080
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Started
### c1 Connect to 127.0.0.1:9081
## c1 Waiting for client
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
## c2 Starting client
## c2 Waiting for client
## c2 Started
### c2 Connect to 127.0.0.1:9081
### c2 Connected to 127.0.0.1:9081 fd is 8
### c2 rxresp
### c2 Closing fd 8
## c2 Ending
### top delaying 0.1 second(s)
## v1 as expected: client_conn (2) == 2
## v1 as expected: cache_hit (1) == 1
## v1 as expected: cache_miss (1) == 1
## v1 as expected: client_req (2) == 2
## v1 as expected: s_sess (2) == 2
## v1 as expected: s_req (2) == 2
## v1 as expected: s_fetch (1) == 1
# top RESETTING after ././tests/b00003.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34289 Status: 0200
# top TEST ././tests/b00003.vtc completed
PASS: ./tests/b00003.vtc
# top TEST ././tests/b00004.vtc starting
# TEST Torture Varnish with start/stop commands
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
### v1 CLI connection fd = 3
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 CLI 200
### v1 CLI STATUS 300
## v1 CLI 300
### v1 CLI STATUS 300
## v1 CLI 300
### v1 CLI STATUS 200
## v1 CLI 200
### v1 CLI STATUS 300
## v1 CLI 300
### v1 CLI STATUS 300
## v1 CLI 300
## v1 Stop
### v1 CLI STATUS 300
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34333 Status: 0200
# top RESETTING after ././tests/b00004.vtc
# top TEST ././tests/b00004.vtc completed
PASS: ./tests/b00004.vtc
# top TEST ././tests/b00005.vtc starting
# TEST Check that -s works
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
## s1 Started on 127.0.0.1:9080
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -s file,varnishtest_backing,10M
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
# top RESETTING after ././tests/b00005.vtc
## v1 Stop
### v1 CLI STATUS 300
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34380 Status: 0200
# top TEST ././tests/b00005.vtc completed
PASS: ./tests/b00005.vtc
# top TEST ././tests/b00006.vtc starting
# TEST Check that -s malloc works
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
## s1 Started on 127.0.0.1:9080
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -s malloc
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00006.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34425 Status: 0200
# top TEST ././tests/b00006.vtc completed
PASS: ./tests/b00006.vtc
# top TEST ././tests/b00007.vtc starting
# TEST Check chunked encoding from backend works
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
## s1 Started on 127.0.0.1:9080
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 rxreq
### c1 rxresp
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00007.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34469 Status: 0200
# top TEST ././tests/b00007.vtc completed
PASS: ./tests/b00007.vtc
# top TEST ././tests/b00008.vtc starting
# TEST Test CLI help and parameter functions
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -b 127.0.0.2:9080
### v1 opening CLI connection
### v1 CLI connection fd = 3
### v1 CLI STATUS 200
## v1 CLI 200
### v1 CLI STATUS 200
## v1 CLI 200
### v1 CLI STATUS 200
## v1 CLI 200
### v1 CLI STATUS 200
## v1 CLI 200
### v1 CLI STATUS 200
## v1 CLI 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 CLI 200
### v1 CLI STATUS 200
## v1 CLI 200
# top RESETTING after ././tests/b00008.vtc
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34513 Status: 0200
# top TEST ././tests/b00008.vtc completed
PASS: ./tests/b00008.vtc
# top TEST ././tests/b00009.vtc starting
# TEST Check poll acceptor
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
## s1 Started on 127.0.0.1:9080
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -p acceptor=poll
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00009.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34557 Status: 0200
# top TEST ././tests/b00009.vtc completed
PASS: ./tests/b00009.vtc
# top TEST ././tests/b00010.vtc starting
# TEST Check simple list hasher
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -h simple_list
## s1 Started on 127.0.0.1:9080
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 rxresp
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00010.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34601 Status: 0200
# top TEST ././tests/b00010.vtc completed
PASS: ./tests/b00010.vtc
# top TEST ././tests/b00011.vtc starting
# TEST Check HTTP/1.0 EOF transmission
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
## s1 Started on 127.0.0.1:9080
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00011.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34645 Status: 0200
# top TEST ././tests/b00011.vtc completed
PASS: ./tests/b00011.vtc
# top TEST ././tests/b00012.vtc starting
# TEST Check pipelining
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## s1 Started on 127.0.0.1:9080
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 rxresp
### c1 rxresp
### c1 Closing fd 8
## c1 Ending
## v1 as expected: sess_pipeline (2) == 2
# top RESETTING after ././tests/b00012.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34692 Status: 0200
# top TEST ././tests/b00012.vtc completed
PASS: ./tests/b00012.vtc
# top TEST ././tests/b00013.vtc starting
# TEST Check read-head / partial pipelining
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
## s1 Started on 127.0.0.1:9080
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Started
### c1 Connect to 127.0.0.1:9081
## c1 Waiting for client
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 rxreq
### c1 rxresp
### s1 shutting fd 9
## s1 Ending
### c1 rxresp
### c1 Closing fd 8
## c1 Ending
## v1 as expected: sess_readahead (2) == 2
# top RESETTING after ././tests/b00013.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34736 Status: 0200
# top TEST ././tests/b00013.vtc completed
PASS: ./tests/b00013.vtc
# top TEST ././tests/b00014.vtc starting
# TEST Check -f command line arg
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -f /tmp/_b00014.vcl
### v1 opening CLI connection
### v1 CLI connection fd = 3
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 7)
## c1 Starting client
## c1 Waiting for client
## s1 Started on 127.0.0.1:9080
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 rxreq
### c1 Closing fd 8
## c1 Ending
### v1 CLI STATUS 200
## v1 CLI 200
### v1 CLI STATUS 200
## v1 CLI 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00014.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34781 Status: 0200
# top TEST ././tests/b00014.vtc completed
PASS: ./tests/b00014.vtc
# top TEST ././tests/b00015.vtc starting
# TEST Check synthetic error page caching
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
### v1 CLI connection fd = 3
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 7
### c1 rxresp
### c1 Closing fd 7
## c1 Ending
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 7
### c1 rxresp
### c1 Closing fd 7
## c1 Ending
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 7)
## s1 Started on 127.0.0.1:9080
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### c1 Closing fd 8
## c1 Ending
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 7)
## s1 Started on 127.0.0.1:9080
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00015.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34834 Status: 0200
# top TEST ././tests/b00015.vtc completed
PASS: ./tests/b00015.vtc
# top TEST ././tests/b00016.vtc starting
# TEST Check naming of backends
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
### v1 CLI connection fd = 3
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 7
### c1 rxresp
### c1 Closing fd 7
## c1 Ending
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 7
### c1 rxresp
### c1 Closing fd 7
## c1 Ending
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 7
### c1 rxresp
### c1 Closing fd 7
## c1 Ending
# top RESETTING after ././tests/b00016.vtc
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34894 Status: 0200
# top TEST ././tests/b00016.vtc completed
PASS: ./tests/b00016.vtc
# top TEST ././tests/b00017.vtc starting
# TEST Check that we close one error
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
### v1 CLI connection fd = 3
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 7
### c1 rxresp
### c1 Closing fd 7
## c1 Ending
# top RESETTING after ././tests/b00017.vtc
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34954 Status: 0200
# top TEST ././tests/b00017.vtc completed
PASS: ./tests/b00017.vtc
# top TEST ././tests/b00018.vtc starting
# TEST Check that error in vcl_fetch works
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## s1 Started on 127.0.0.1:9080
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
## v1 as expected: n_object (0) == 0
# top RESETTING after ././tests/b00018.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 34998 Status: 0200
# top TEST ././tests/b00018.vtc completed
PASS: ./tests/b00018.vtc
# top TEST ././tests/b00019.vtc starting
# TEST Check that max_restarts works and that we don't fall over
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
## s1 Started on 127.0.0.1:9080
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 CLI 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 rxreq
### s1 rxreq
### c1 Closing fd 8
## c1 Ending
### v1 CLI STATUS 200
## v1 CLI 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 rxreq
### s1 rxreq
### s1 rxreq
### c1 Closing fd 8
## c1 Ending
### v1 CLI STATUS 200
## v1 CLI 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00019.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 35042 Status: 0200
# top TEST ././tests/b00019.vtc completed
PASS: ./tests/b00019.vtc
# top TEST ././tests/b00020.vtc starting
# TEST Check the between_bytes_timeout behaves from parameters
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
## s1 Started on 127.0.0.1:9080
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 CLI 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 delaying 1.5 second(s)
### c1 Closing fd 8
## c1 Ending
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## c1 Starting client
## c1 Waiting for client
## s1 Started on 127.0.0.1:9080
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 10
### s1 rxreq
### s1 delaying 0.5 second(s)
### s1 shutting fd 9
## s1 Ending
### s1 delaying 0.5 second(s)
### s1 shutting fd 10
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00020.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 35086 Status: 0200
# top TEST ././tests/b00020.vtc completed
PASS: ./tests/b00020.vtc
# top TEST ././tests/b00021.vtc starting
# TEST Check the between_bytes_timeout behaves from vcl
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## s1 Started on 127.0.0.1:9080
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 delaying 1.5 second(s)
### c1 Closing fd 8
## c1 Ending
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## c1 Starting client
## c1 Waiting for client
## s1 Started on 127.0.0.1:9080
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 10
### s1 rxreq
### s1 delaying 0.5 second(s)
### s1 shutting fd 9
## s1 Ending
### s1 delaying 0.5 second(s)
### s1 shutting fd 10
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00021.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 35130 Status: 0200
# top TEST ././tests/b00021.vtc completed
PASS: ./tests/b00021.vtc
# top TEST ././tests/b00022.vtc starting
# TEST Check the between_bytes_timeout behaves from backend definition
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
## s1 Started on 127.0.0.1:9080
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 delaying 1.5 second(s)
### c1 Closing fd 8
## c1 Ending
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## c1 Starting client
## c1 Waiting for client
## s1 Started on 127.0.0.1:9080
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 10
### s1 rxreq
### s1 delaying 0.5 second(s)
### s1 shutting fd 9
## s1 Ending
### s1 delaying 0.5 second(s)
### s1 shutting fd 10
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00022.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 35174 Status: 0200
# top TEST ././tests/b00022.vtc completed
PASS: ./tests/b00022.vtc
# top TEST ././tests/b00023.vtc starting
# TEST Check that the first_byte_timeout works from parameters
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
## s1 Started on 127.0.0.1:9080
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 CLI 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 delaying 1.5 second(s)
### c1 Closing fd 8
## c1 Ending
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## c1 Starting client
## c1 Waiting for client
## s1 Started on 127.0.0.1:9080
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 10
### s1 rxreq
### s1 delaying 0.5 second(s)
### s1 shutting fd 9
## s1 Ending
### s1 shutting fd 10
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00023.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 35218 Status: 0200
# top TEST ././tests/b00023.vtc completed
PASS: ./tests/b00023.vtc
# top TEST ././tests/b00024.vtc starting
# TEST Check that the first_byte_timeout works from vcl
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
## s1 Started on 127.0.0.1:9080
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 delaying 1.5 second(s)
### c1 Closing fd 8
## c1 Ending
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## c1 Starting client
## c1 Waiting for client
## s1 Started on 127.0.0.1:9080
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 10
### s1 rxreq
### s1 delaying 0.5 second(s)
### s1 shutting fd 9
## s1 Ending
### s1 shutting fd 10
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00024.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 35262 Status: 0200
# top TEST ././tests/b00024.vtc completed
PASS: ./tests/b00024.vtc
# top TEST ././tests/b00025.vtc starting
# TEST Check that the first_byte_timeout works from backend definition
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## s1 Started on 127.0.0.1:9080
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 delaying 1.5 second(s)
### c1 Closing fd 8
## c1 Ending
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## c1 Starting client
## c1 Waiting for client
## s1 Started on 127.0.0.1:9080
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 10
### s1 rxreq
### s1 delaying 0.5 second(s)
### s1 shutting fd 9
## s1 Ending
### s1 shutting fd 10
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00025.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 35306 Status: 0200
# top TEST ././tests/b00025.vtc completed
PASS: ./tests/b00025.vtc
# top TEST ././tests/b00026.vtc starting
# TEST Check the precedence for timeouts
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## s2 Starting server
## s1 Started on 127.0.0.1:9080
### s2 listen on 127.0.0.1:9180 (fd 4)
## s2 Started on 127.0.0.1:9180
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
### v1 CLI connection fd = 5
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 CLI 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 9
### c1 rxresp
### s1 Accepted socket fd is 10
### s1 rxreq
### s1 delaying 1 second(s)
### s1 shutting fd 10
## s1 Ending
### c1 rxresp
### s2 Accepted socket fd is 10
### s2 rxreq
### s2 delaying 1.5 second(s)
### s2 shutting fd 10
## s2 Ending
### c1 Closing fd 9
## c1 Ending
# top RESETTING after ././tests/b00026.vtc
## s1 Waiting for server
## s2 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 35350 Status: 0200
# top TEST ././tests/b00026.vtc completed
PASS: ./tests/b00026.vtc
# top TEST ././tests/b00027.vtc starting
# TEST test backend transmission corner cases
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
## s1 Started on 127.0.0.1:9080
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 rxreq
### c1 rxresp
### s1 rxreq
### c1 rxresp
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00027.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 35394 Status: 0200
# top TEST ././tests/b00027.vtc completed
PASS: ./tests/b00027.vtc
# top TEST ././tests/b00028.vtc starting
# TEST regexp match and no-match
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
## s1 Started on 127.0.0.1:9080
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/b00028.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 35438 Status: 0200
# top TEST ././tests/b00028.vtc completed
PASS: ./tests/b00028.vtc
# top TEST ././tests/c00001.vtc starting
# TEST Test VCL regsub()
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
## s1 Started on 127.0.0.1:9080
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/c00001.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 35482 Status: 0200
# top TEST ././tests/c00001.vtc completed
PASS: ./tests/c00001.vtc
# top TEST ././tests/c00002.vtc starting
# TEST Check that all thread pools all get started and get minimum threads
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -p thread_pool_min=2 -p thread_pools=4
## s1 Started on 127.0.0.1:9080
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Start
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### top delaying 1 second(s)
## v1 as expected: n_wrk_create (8) == 8
## c1 Starting client
## c1 Waiting for client
## c1 Started
### c1 Connect to 127.0.0.1:9081
### c1 Connected to 127.0.0.1:9081 fd is 8
### c1 rxresp
### s1 Accepted socket fd is 9
### s1 rxreq
### s1 shutting fd 9
## s1 Ending
### c1 Closing fd 8
## c1 Ending
# top RESETTING after ././tests/c00002.vtc
## s1 Waiting for server
## v1 Stop
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
## v1 Wait
## v1 R 35526 Status: 0200
# top TEST ././tests/c00002.vtc completed
PASS: ./tests/c00002.vtc
# top TEST ././tests/c00003.vtc starting
# TEST Check that we start if at least one listen address works
## s1 Starting server
### s1 listen on 127.0.0.1:9080 (fd 3)
## s1 Started on 127.0.0.1:9080
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
### v1 CLI connection fd = 4
### v1 CLI STATUS 200
## v1 CLI 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 300
## v1 CLI 300
### v1 CLI STATUS
200 ## v1 CLI 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/c00003.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 35570 Status: 0200 # top TEST ././tests/c00003.vtc completed PASS: ./tests/c00003.vtc # top TEST ././tests/c00004.vtc starting # TEST Test Vary functionality ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -b 127.0.0.1:9080 ### v1 opening CLI connection ### v1 CLI connection fd = 3 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 7) ## c1 Starting client ## c1 Waiting for client ## s1 Started on 127.0.0.1:9080 ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 rxresp ### s1 rxreq ### c1 rxresp ### s1 rxreq ### c1 rxresp ### s1 shutting fd 9 ## s1 Ending ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/c00004.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 35614 Status: 0200 # top TEST ././tests/c00004.vtc completed PASS: ./tests/c00004.vtc # top TEST ././tests/c00005.vtc starting # TEST Test simple ACL ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s1 Started on 127.0.0.1:9080 ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -p vcl_trace=on ### v1 opening CLI connection ### v1 CLI connection fd = 4 
### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/c00005.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 35658 Status: 0200 # top TEST ././tests/c00005.vtc completed PASS: ./tests/c00005.vtc # top TEST ././tests/c00006.vtc starting # TEST Test banning a url ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/c00006.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 35710 Status: 0200 # top TEST 
././tests/c00006.vtc completed PASS: ./tests/c00006.vtc # top TEST ././tests/c00007.vtc starting # TEST Test banning a hash ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -p purge_hash=off ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 300 ## v1 CLI 300 ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/c00007.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 35754 Status: 0200 # top TEST ././tests/c00007.vtc completed PASS: ./tests/c00007.vtc # top TEST ././tests/c00008.vtc starting # TEST Test If-Modified-Since ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s1 Started on 127.0.0.1:9080 ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1
Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 shutting fd 9 ## s1 Ending ### c1 rxresp ### c1 rxresp ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### c1 rxresp ### c1 rxresp ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/c00008.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 35799 Status: 0200 # top TEST ././tests/c00008.vtc completed PASS: ./tests/c00008.vtc # top TEST ././tests/c00009.vtc starting # TEST Test restarts ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s2 Starting server ## s1 Started on 127.0.0.1:9080 ### s2 listen on 127.0.0.1:9180 (fd 4) ## v1 Launch ## s2 Started on 127.0.0.1:9180 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 5 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c1 rxresp ### s1 Accepted socket fd is 10 ### s1 rxreq ### s1 shutting fd 10 ## s1 Ending ### s2 Accepted socket fd is 10 ### s2 rxreq ### s2 shutting fd 10 ## s2 Ending ### c1 Closing fd 9 ## c1 Ending # top RESETTING after ././tests/c00009.vtc ## s1 Waiting for server ## s2 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 35843 Status: 0200 # top TEST ././tests/c00009.vtc completed PASS: ./tests/c00009.vtc # top TEST ././tests/c00010.vtc starting # TEST Test pass from hit ## s1 Starting server ### s1 listen on 
127.0.0.1:9080 (fd 3) ## s1 Started on 127.0.0.1:9080 ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 rxresp ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/c00010.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 35887 Status: 0200 # top TEST ././tests/c00010.vtc completed PASS: ./tests/c00010.vtc # top TEST ././tests/c00011.vtc starting # TEST Test hit for pass (pass from fetch) ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## s1 Started on 127.0.0.1:9080 ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 rxresp ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/c00011.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 35931 Status: 0200 # top TEST ././tests/c00011.vtc completed PASS: ./tests/c00011.vtc # top TEST ././tests/c00012.vtc starting # TEST Test pass from miss ## s1 Starting server ### s1 
listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## s1 Started on 127.0.0.1:9080 ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 rxresp ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/c00012.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 35975 Status: 0200 # top TEST ././tests/c00012.vtc completed PASS: ./tests/c00012.vtc # top TEST ././tests/c00013.vtc starting # TEST Test parking second request on backend delay ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ### top delaying 0.2 second(s) ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 delaying 0.5 second(s) ## c2 Starting client # top RESETTING after ././tests/c00013.vtc ## s1 Waiting for server ## c2 Started ### c2 Connect to 127.0.0.1:9081 ### c2 Connected to 127.0.0.1:9081 fd is 10 ### c2 rxresp ### s1 delaying 0.5 second(s) ### s1 shutting fd 9 ## s1 Ending ## c1 Waiting for client ### c1 Closing fd 8 ## c1 Ending ## c2 Waiting for client ### c2 Closing fd 10 ## c2 Ending ## v1 Stop 
### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 36019 Status: 0200 # top TEST ././tests/c00013.vtc completed PASS: ./tests/c00013.vtc # top TEST ././tests/c00014.vtc starting # TEST Test parking second request on backend delay, then pass ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ## s1 Started on 127.0.0.1:9080 ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ### top delaying 0.2 second(s) ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 delaying 0.5 second(s) ## c2 Starting client # top RESETTING after ././tests/c00014.vtc ## s1 Waiting for server ## c2 Started ### c2 Connect to 127.0.0.1:9081 ### c2 Connected to 127.0.0.1:9081 fd is 10 ### c2 rxresp ### s1 delaying 0.5 second(s) ### c1 Closing fd 8 ## c1 Ending ### s1 rxreq ### s1 shutting fd 9 ## s1 Ending ## c1 Waiting for client ## c2 Waiting for client ### c2 Closing fd 10 ## c2 Ending ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 36063 Status: 0200 # top TEST ././tests/c00014.vtc completed PASS: ./tests/c00014.vtc # top TEST ././tests/c00015.vtc starting # TEST Test switching VCLs ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s1 Started on 127.0.0.1:9080 ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 CLI 
200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 200 ## v1 CLI 200 ## c2 Starting client ## c2 Waiting for client ## c2 Started ### c2 Connect to 127.0.0.1:9081 ### c2 Connected to 127.0.0.1:9081 fd is 8 ### c2 rxresp ### s1 shutting fd 9 ## s1 Ending ### c2 Closing fd 8 ## c2 Ending ### v1 CLI STATUS 200 ## v1 CLI 200 ## c3 Starting client ## c3 Waiting for client ## c3 Started ### c3 Connect to 127.0.0.1:9081 ### c3 Connected to 127.0.0.1:9081 fd is 8 ### c3 rxresp ### c3 Closing fd 8 ## c3 Ending ### v1 CLI STATUS 200 ## v1 CLI 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ### v1 CLI STATUS 200 ## v1 CLI 200 # top RESETTING after ././tests/c00015.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 36107 Status: 0200 # top TEST ././tests/c00015.vtc completed PASS: ./tests/c00015.vtc # top TEST ././tests/c00016.vtc starting # TEST Test Connection header handling ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client # top RESETTING after ././tests/c00016.vtc ## s1 Waiting for server ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 rxresp ### s1 shutting fd 9 ## s1 Ending ## c1 Waiting for client ### c1 Closing fd 8 ## c1 Ending ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### 
v1 CLI STATUS 200 ## v1 Wait ## v1 R 36159 Status: 0200 # top TEST ././tests/c00016.vtc completed PASS: ./tests/c00016.vtc # top TEST ././tests/c00017.vtc starting # TEST Test Backend Polling ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 3 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 7) ## s1 Waiting for server ## s1 Started on 127.0.0.1:9080 ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 shutting fd 8 ## s1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 7) ## s1 Waiting for server ## s1 Started on 127.0.0.1:9080 ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 shutting fd 8 ## s1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 7) ## s1 Waiting for server ## s1 Started on 127.0.0.1:9080 ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 shutting fd 8 ## s1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 7) ## s1 Waiting for server ## s1 Started on 127.0.0.1:9080 ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 shutting fd 8 ## s1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 7) ## s1 Waiting for server ## s1 Started on 127.0.0.1:9080 ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 shutting fd 8 ## s1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 7) ## s1 Waiting for server ## s1 Started on 127.0.0.1:9080 ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 shutting fd 8 ## s1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 7) ## s1 Waiting for server ## s1 Started on 127.0.0.1:9080 ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 shutting fd 8 ## s1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 7) ## s1 Waiting for server ## s1 Started on 127.0.0.1:9080 ### s1 Accepted 
socket fd is 8 ### s1 rxreq ### s1 shutting fd 8 ## s1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 7) ## s1 Started on 127.0.0.1:9080 ## s1 Waiting for server ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 shutting fd 8 ## s1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 7) ## s1 Waiting for server ## s1 Started on 127.0.0.1:9080 ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 shutting fd 8 ## s1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 7) ## s1 Started on 127.0.0.1:9080 ## s1 Waiting for server ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 shutting fd 8 ## s1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 7) ## s1 Waiting for server ## s1 Started on 127.0.0.1:9080 ### s1 Accepted socket fd is 8 ### s1 shutting fd 8 ## s1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 7) ## s1 Waiting for server ## s1 Started on 127.0.0.1:9080 ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 delaying 2 second(s) ### s1 shutting fd 8 ## s1 Ending ### top delaying 2 second(s) ### v1 CLI STATUS 200 ## v1 CLI 200 # top RESETTING after ././tests/c00017.vtc ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 36203 Status: 0200 # top TEST ././tests/c00017.vtc completed PASS: ./tests/c00017.vtc # top TEST ././tests/c00018.vtc starting # TEST Check Expect headers ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### c1 
rxresp ### s1 rxreq ### s1 shutting fd 9 ## s1 Ending ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/c00018.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 36247 Status: 0200 # top TEST ././tests/c00018.vtc completed PASS: ./tests/c00018.vtc # top TEST ././tests/c00019.vtc starting # TEST Check purge counters and duplicate purge elimination ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -p purge_hash=on ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## v1 as expected: n_purge_add (2) == 2 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 rxresp ### s1 rxreq ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 200 ## v1 CLI 200 ## v1 as expected: n_purge_obj_test (0) == 0 ## v1 as expected: n_purge_re_test (0) == 0 ### v1 CLI STATUS 200 ## v1 CLI 200 ## v1 as expected: n_purge_add (3) == 3 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 rxreq ### c1 Closing fd 8 ## c1 Ending ## v1 as expected: n_purge_obj_test (1) == 1 ## v1 as expected: n_purge_re_test (1) == 1 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 
200 ## v1 CLI 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## v1 as expected: n_purge_add (5) == 5 ### v1 CLI STATUS 200 ## v1 CLI 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## v1 as expected: n_purge_add (6) == 6 ## v1 as expected: n_purge_dups (3) == 3 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending ## v1 as expected: n_purge_obj_test (2) == 2 ## v1 as expected: n_purge_re_test (2) == 2 ### v1 CLI STATUS 200 ## v1 CLI 200 ### v1 CLI STATUS 106 ## v1 CLI 106 # top RESETTING after ././tests/c00019.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 36291 Status: 0200 # top TEST ././tests/c00019.vtc completed PASS: ./tests/c00019.vtc # top TEST ././tests/c00020.vtc starting # TEST Test -h critbit a bit ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -hcritbit ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending ## c2 Starting client ## c2 Waiting for client ## c2 Started ### c2 Connect to 127.0.0.1:9081 ### c2 Connected to 127.0.0.1:9081 fd is 8 ### c2 rxresp ### c2 Closing fd 8 ## c2 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## c2 Starting client ## s1 Started on 127.0.0.1:9080 ## c2 Waiting for client ## c2 Started ### c2 Connect to 
127.0.0.1:9081 ### c2 Connected to 127.0.0.1:9081 fd is 8 ### c2 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c2 rxresp ### c2 rxresp ### s1 shutting fd 9 ## s1 Ending ### c2 rxresp ### c2 Closing fd 8 ## c2 Ending ## v1 as expected: client_conn (3) == 3 ## v1 as expected: cache_hit (3) == 3 ## v1 as expected: cache_miss (3) == 3 ## v1 as expected: client_req (6) == 6 ## v1 as expected: s_sess (3) == 3 ## v1 as expected: s_req (6) == 6 ## v1 as expected: s_fetch (3) == 3 # top RESETTING after ././tests/c00020.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 36335 Status: 0200 # top TEST ././tests/c00020.vtc completed PASS: ./tests/c00020.vtc # top TEST ././tests/c00021.vtc starting # TEST Test banning a url with cli:purge ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## s1 Started on 127.0.0.1:9080 ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Started ## c1 Waiting for client ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 104 ## v1 CLI 104 ### v1 CLI STATUS 104 ## v1 CLI 104 ### v1 CLI STATUS 104 ## v1 CLI 104 ### v1 CLI STATUS 106 ## v1 CLI 106 ### v1 CLI STATUS 106 ## v1 CLI 106 ### v1 CLI STATUS 106 ## v1 CLI 106 ### v1 CLI STATUS 200 ## v1 CLI 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 200 ## v1 
CLI 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 rxreq ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 200 ## v1 CLI 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 200 ## v1 CLI 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 rxreq ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 200 ## v1 CLI 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 200 ## v1 CLI 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 200 ## v1 CLI 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/c00021.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 36379 Status: 0200 # top TEST ././tests/c00021.vtc completed PASS: ./tests/c00021.vtc # top TEST ././tests/c00022.vtc starting # TEST Test banning a url with VCL purge ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s1 Started on 127.0.0.1:9080 ## v1 
Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 Closing fd 8 ## c1 Ending ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 rxreq ### c1 Closing fd 8 ## c1 Ending ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 
# top TEST ././tests/c00022.vtc completed
PASS: ./tests/c00022.vtc

# top TEST ././tests/e00000.vtc starting
# TEST ESI test with no ESI content
## v1 as expected: esi_errors (0) == 0
PASS: ./tests/e00000.vtc

# top TEST ././tests/e00001.vtc starting
# TEST ESI:remove
## v1 as expected: esi_errors (0) == 0
PASS: ./tests/e00001.vtc

# top TEST ././tests/e00002.vtc starting
# TEST ESI CDATA
## v1 as expected: esi_errors (0) == 0
PASS: ./tests/e00002.vtc

# top TEST ././tests/e00003.vtc starting
# TEST ESI include
## v1 as expected: esi_errors (0) == 0
PASS: ./tests/e00003.vtc

# top TEST ././tests/e00004.vtc starting
# TEST ESI commented include
## v1 as expected: esi_errors (0) == 0
PASS: ./tests/e00004.vtc

# top TEST ././tests/e00005.vtc starting
# TEST ESI relative include
## v1 as expected: esi_errors (0) == 0
PASS: ./tests/e00005.vtc

# top TEST ././tests/e00006.vtc starting
# TEST ESI include with http://
## v1 as expected: esi_errors (0) == 0
PASS: ./tests/e00006.vtc

# top TEST ././tests/e00007.vtc starting
# TEST ESI spanning storage bits
## v1 as expected: esi_errors (0) == 0
PASS: ./tests/e00007.vtc

# top TEST ././tests/e00008.vtc starting
# TEST ESI parsing errors
## v1 as expected: esi_errors (14) == 14
PASS: ./tests/e00008.vtc

# top TEST ././tests/e00009.vtc starting
# TEST ESI binary detector
## v1 as expected: esi_errors (1) == 1
PASS: ./tests/e00009.vtc

# top TEST ././tests/e00010.vtc starting
# TEST Ignoring non esi elements
## v1 as expected: esi_errors (0) == 0
PASS: ./tests/e00010.vtc

# top TEST ././tests/e00011.vtc starting
# TEST Make sure that PASS'ed ESI requests use GET for includes
## v1 as expected: esi_errors (0) == 0
PASS: ./tests/e00011.vtc

# top TEST ././tests/e00012.vtc starting
# TEST ESI includes for pre HTTP/1.1 cannot used chunked encoding
## v1 as expected: esi_errors (0) == 0
PASS: ./tests/e00012.vtc

# top TEST ././tests/e00013.vtc starting
# TEST All white-space object, in multiple storage segments
## v1 as expected: esi_parse (0) == 0
## v1 as expected: esi_errors (0) == 0
PASS: ./tests/e00013.vtc

# top TEST ././tests/e00014.vtc starting
# TEST Check
## v1 as expected: esi_parse (0) == 0
## v1 as expected: esi_errors (0) == 0
PASS: ./tests/e00014.vtc

# top TEST ././tests/r00102.vtc starting
# TEST Test VCL regsub()
PASS: ./tests/r00102.vtc

# top TEST ././tests/r00251.vtc starting
# TEST Regression test for #251: segfault on regsub on missing http header
PASS: ./tests/r00251.vtc

# top TEST ././tests/r00255.vtc starting
# TEST Regression test for #255: Segfault on header token separation
PASS: ./tests/r00255.vtc

# top TEST ././tests/r00262.vtc starting
# TEST Test that inter-request whitespace trimming works
PASS: ./tests/r00262.vtc

# top TEST ././tests/r00263.vtc starting
# TEST Test refcounting backends from director
PASS: ./tests/r00263.vtc

# top TEST ././tests/r00292.vtc starting
# TEST Header deletion test
PASS: ./tests/r00292.vtc

# top TEST ././tests/r00306.vtc starting
# TEST Regression test for ticket #306, random director ignoring good backend
PASS: ./tests/r00306.vtc

# top TEST ././tests/r00318.vtc starting
# TEST ESI with no body in response
PASS: ./tests/r00318.vtc

# top TEST ././tests/r00325.vtc starting
# TEST Check lack of response-string
PASS: ./tests/r00325.vtc

# top TEST ././tests/r00326.vtc starting
# TEST No zerolength verbatim before
PASS: ./tests/r00326.vtc

# top TEST ././tests/r00345.vtc starting
# TEST #345, ESI waitinglist trouble
PASS: ./tests/r00345.vtc

# top TEST ././tests/r00354.vtc starting
# TEST #354 Segfault in strcmp in http_DissectRequest()
PASS: ./tests/r00354.vtc

# top TEST ././tests/r00365.vtc starting
# TEST Test restarts in vcl_hit
PASS: ./tests/r00365.vtc

# top TEST ././tests/r00386.vtc starting
# TEST #386, failure to insert include
PASS: ./tests/r00386.vtc

# top TEST ././tests/r00387.vtc starting
# TEST Regression test for #387: too long chunk header
PASS: ./tests/r00387.vtc

# top TEST ././tests/r00400.vtc starting
# TEST Regression test for ticket 409
PASS: ./tests/r00400.vtc

# top TEST ././tests/r00409.vtc starting
# TEST Regression test for ticket 409
## v1 VCL compilation failed (as expected)
PASS: ./tests/r00409.vtc

# top TEST ././tests/r00412.vtc starting
# TEST Regression test for ticket 412
PASS: ./tests/r00412.vtc

# top TEST ././tests/r00416.vtc starting
# TEST Regression test for #416: a surplus of HTTP headers
PASS: ./tests/r00416.vtc

# top TEST ././tests/r00425.vtc starting
# TEST check late pass stalling
## v1 as expected: cache_hitpass (2) == 2
PASS: ./tests/r00425.vtc

# top TEST ././tests/r00427.vtc starting
# TEST client close in ESI delivery
PASS: ./tests/r00427.vtc

# top TEST ././tests/s00000.vtc starting
# TEST Simple expiry test
Waiting for client ## c2 Started ### c2 Connect to 127.0.0.1:9081 ### c2 Connected to 127.0.0.1:9081 fd is 8 ### c2 rxresp ### s1 shutting fd 9 ## s1 Ending ### c2 Closing fd 8 ## c2 Ending # top RESETTING after ././tests/s00000.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38051 Status: 0200 # top TEST ././tests/s00000.vtc completed PASS: ./tests/s00000.vtc # top TEST ././tests/s00001.vtc starting # TEST Simple expiry test (fully reaped object) ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ## s1 Started on 127.0.0.1:9080 ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 delaying 20 second(s) ### c1 Closing fd 8 ## c1 Ending ### top delaying 20 second(s) ### s1 rxreq ## c2 Starting client ## c2 Waiting for client ## c2 Started ### c2 Connect to 127.0.0.1:9081 ### c2 Connected to 127.0.0.1:9081 fd is 8 ### c2 rxresp ### s1 shutting fd 9 ## s1 Ending ### c2 Closing fd 8 ## c2 Ending # top RESETTING after ././tests/s00001.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38095 Status: 0200 # top TEST ././tests/s00001.vtc completed PASS: ./tests/s00001.vtc # top TEST ././tests/v00000.vtc starting # TEST VCL/VRT: req.grace, obj.ttl and obj.grace ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P 
/tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 # top RESETTING after ././tests/v00000.vtc ## v1 Stop ### v1 CLI STATUS 300 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38141 Status: 0200 # top TEST ././tests/v00000.vtc completed PASS: ./tests/v00000.vtc # top TEST ././tests/v00001.vtc starting # TEST VCL/VRT: url/request/proto/response/status ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## s1 Started on 127.0.0.1:9080 ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 # top RESETTING after ././tests/v00001.vtc ## v1 Stop ### v1 CLI STATUS 300 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38185 Status: 0200 # top TEST ././tests/v00001.vtc completed PASS: ./tests/v00001.vtc # top TEST ././tests/v00002.vtc starting # TEST VCL: test syntax/semantic checks on backend decls. 
(vcc_backend.c)
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
### v1 CLI connection fd = 3
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 200
---- v1 VCL compilation got 200 expected 106
---- TEST FILE: ././tests/v00002.vtc
---- TEST DESCRIPTION: VCL: test syntax/semantic checks on backend decls. (vcc_backend.c)
FAIL: ./tests/v00002.vtc
# top TEST ././tests/v00003.vtc starting
# TEST VCL: test syntax/semantic checks on director decls.
## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 3 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) # top RESETTING after ././tests/v00003.vtc ## v1 Stop ### v1 CLI STATUS 300 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38290 Status: 0200 # top TEST ././tests/v00003.vtc completed PASS: ./tests/v00003.vtc # top TEST ././tests/v00004.vtc starting # TEST VCL: test creation/destruction of backends ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 3 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 as expected: n_backend (1) == 1 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 as expected: n_backend (1) == 1 ## v1 as expected: n_vcl (2) == 2 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 as expected: n_backend (2) == 2 ## v1 as expected: n_vcl (3) == 3 ### v1 CLI STATUS 200 ## v1 CLI 200 ## v1 as expected: n_backend (2) == 2 ## v1 as expected: n_vcl (2) == 2 ### v1 CLI STATUS 200 ## v1 CLI 200 ## v1 as expected: n_backend (1) == 1 ## v1 as expected: n_vcl (1) == 1 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 as expected: n_backend (2) == 2 ## v1 as expected: n_vcl (2) == 2 ### v1 CLI STATUS 200 ## v1 CLI 200 ## v1 as expected: n_backend (2) == 2 ## 
v1 as expected: n_vcl (2) == 2 ### v1 CLI STATUS 200 ## v1 CLI 200 ## v1 as expected: n_backend (1) == 1 ## v1 as expected: n_vcl (1) == 1 # top RESETTING after ././tests/v00004.vtc ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38340 Status: 0200 # top TEST ././tests/v00004.vtc completed PASS: ./tests/v00004.vtc # top TEST ././tests/v00005.vtc starting # TEST VCL: test backend probe syntax ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 3 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) # top RESETTING after ././tests/v00005.vtc ## v1 Stop ### v1 CLI STATUS 300 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38408 Status: 0200 # top TEST ././tests/v00005.vtc completed PASS: ./tests/v00005.vtc # top TEST ././tests/v00006.vtc starting # TEST VCL: Test backend retirement ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -p thread_pools=1 -w1,1,300 ### v1 opening CLI connection ## s1 Started on 127.0.0.1:9080 ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## s1 Waiting for server ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 shutting fd 9 ## s1 Ending ## c1 Waiting for client ### c1 Closing fd 8 ## c1 Ending ## v1 as expected: n_backend (1) == 1 ## v1 as expected: n_vcl_avail (1) == 1 ## v1 as expected: n_vcl_discard (0) == 0 ## s2 Starting server ### s2 listen on 
127.0.0.1:9180 (fd 3) ## s2 Started on 127.0.0.1:9180 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 as expected: n_backend (2) == 2 ## v1 as expected: n_vcl_avail (2) == 2 ## v1 as expected: n_vcl_discard (0) == 0 ### v1 CLI STATUS 200 ## v1 CLI 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## v1 as expected: n_backend (2) == 2 ## v1 as expected: n_vcl_avail (1) == 1 ## v1 as expected: n_vcl_discard (1) == 1 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s2 Accepted socket fd is 9 ### s2 rxreq ### s2 shutting fd 9 ## s2 Ending ### c1 Closing fd 8 ## c1 Ending ### v1 CLI STATUS 400 ## v1 CLI 400 ### v1 CLI STATUS 200 ## v1 CLI 200 ## v1 as expected: n_backend (1) == 1 ## v1 as expected: n_vcl_avail (1) == 1 ## v1 as expected: n_vcl_discard (0) == 0 # top RESETTING after ././tests/v00006.vtc ## s2 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38461 Status: 0200 # top TEST ././tests/v00006.vtc completed PASS: ./tests/v00006.vtc # top TEST ././tests/v00007.vtc starting # TEST Test random director ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s1 Started on 127.0.0.1:9080 ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 rxresp ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/v00007.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 
### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38515 Status: 0200 # top TEST ././tests/v00007.vtc completed PASS: ./tests/v00007.vtc # top TEST ././tests/v00008.vtc starting # TEST Test host header specification ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 rxresp ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending ## s2 Starting server ### s2 listen on 127.0.0.1:9180 (fd 8) ## s2 Started on 127.0.0.1:9180 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 9 ### c1 rxresp ### s2 Accepted socket fd is 10 ### s2 rxreq ### s2 shutting fd 10 ## s2 Ending ### c1 Closing fd 9 ## c1 Ending # top RESETTING after ././tests/v00008.vtc ## s1 Waiting for server ## s2 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38559 Status: 0200 # top TEST ././tests/v00008.vtc completed PASS: ./tests/v00008.vtc # top TEST ././tests/v00009.vtc starting # TEST Test round robin director ## s1 Starting server ### s1 listen on 127.0.0.1:2000 (fd 3) ## s1 Started on 127.0.0.1:2000 ## s2 Starting server ### s2 listen on 127.0.0.1:3000 (fd 4) ## s2 Started on 127.0.0.1:3000 ## s3 Starting server ### s3 listen on 127.0.0.1:4000 (fd 5) ## s4 Starting server ## s3 Started on 127.0.0.1:4000 ### s4 listen on 127.0.0.1:5000 (fd 6) ## 
v1 Launch ## s4 Started on 127.0.0.1:5000 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 7 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 11 ### c1 rxresp ### s1 Accepted socket fd is 12 ### s1 rxreq ### s1 shutting fd 12 ## s1 Ending ### c1 rxresp ### s2 Accepted socket fd is 12 ### s2 rxreq ### s2 shutting fd 12 ## s2 Ending ### c1 rxresp ### s3 Accepted socket fd is 12 ### s3 rxreq ### s3 shutting fd 12 ## s3 Ending ### c1 rxresp ### s4 Accepted socket fd is 12 ### s4 rxreq ### s4 shutting fd 12 ## s4 Ending ### c1 Closing fd 11 ## c1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:2000 (fd 3) ## s2 Starting server ## s1 Started on 127.0.0.1:2000 ### s2 listen on 127.0.0.1:3000 (fd 4) ## c2 Starting client ## s2 Started on 127.0.0.1:3000 ## c2 Waiting for client ## c2 Started ### c2 Connect to 127.0.0.1:9081 ### c2 Connected to 127.0.0.1:9081 fd is 11 ### c2 rxresp ### s1 Accepted socket fd is 12 ### s1 rxreq ### s1 shutting fd 12 ## s1 Ending ### c2 rxresp ### s2 Accepted socket fd is 12 ### s2 rxreq ### s2 shutting fd 12 ## s2 Ending ### c2 Closing fd 11 ## c2 Ending # top RESETTING after ././tests/v00009.vtc ## s1 Waiting for server ## s2 Waiting for server ## s3 Waiting for server ## s4 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38612 Status: 0200 # top TEST ././tests/v00009.vtc completed PASS: ./tests/v00009.vtc # top TEST ././tests/v00010.vtc starting # TEST VCL: check panic and restart ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a 
'127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### s1 shutting fd 9 ## s1 Ending ### c1 delaying 2.5 second(s) ### c1 Closing fd 8 ## c1 Ending ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## c1 Starting client ## c1 Waiting for client ## s1 Started on 127.0.0.1:9080 ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/v00010.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38656 Status: 0200 # top TEST ././tests/v00010.vtc completed PASS: ./tests/v00010.vtc # top TEST ././tests/v00011.vtc starting # TEST Test vcl purging ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ## s1 Started on 127.0.0.1:9080 ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client # top RESETTING after ././tests/v00011.vtc ## s1 Waiting for server ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 rxresp ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending ## c1 Waiting for client ## v1 Stop ### v1 CLI 
STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38701 Status: 0200 # top TEST ././tests/v00011.vtc completed PASS: ./tests/v00011.vtc # top TEST ././tests/v00012.vtc starting # TEST Check backend connection limit ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ## s1 Started on 127.0.0.1:9080 ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c2 Starting client ## c2 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ## c2 Started ### c2 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### c2 Connected to 127.0.0.1:9081 fd is 9 ### s1 Accepted socket fd is 10 ### s1 rxreq ### c2 rxresp ### c2 Closing fd 9 ## c2 Ending ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Waiting for client ### s1 shutting fd 10 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending ## v1 as expected: backend_busy (1) == 1 # top RESETTING after ././tests/v00012.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38745 Status: 0200 # top TEST ././tests/v00012.vtc completed PASS: ./tests/v00012.vtc # top TEST ././tests/v00013.vtc starting # TEST Check obj.hits ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## s1 Started on 127.0.0.1:9080 ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 
rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 rxresp ### c1 rxresp ### s1 shutting fd 9 ## s1 Ending ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/v00013.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38789 Status: 0200 # top TEST ././tests/v00013.vtc completed PASS: ./tests/v00013.vtc # top TEST ././tests/v00014.vtc starting # TEST Check req.backend.healthy ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s1 Started on 127.0.0.1:9080 ## v1 Launch ### s1 Iteration 0 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 shutting fd 8 ### s1 Iteration 1 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### c1 Closing fd 8 ## c1 Ending ### top delaying 1 second(s) ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 shutting fd 8 ### s1 Iteration 2 ## c2 Starting client # top RESETTING after ././tests/v00014.vtc ## s1 Waiting for server ## c2 Started ### c2 Connect to 127.0.0.1:9081 ### c2 Connected to 127.0.0.1:9081 fd is 8 ### c2 rxresp ### c2 Closing fd 8 ## c2 Ending ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 shutting fd 8 ### s1 Iteration 3 ### s1 Accepted socket fd is 8 ### s1 rxreq ### s1 shutting fd 8 ## s1 Ending ## c2 Waiting for client ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38836 Status: 0200 # top TEST ././tests/v00014.vtc completed PASS: ./tests/v00014.vtc # top TEST ././tests/v00015.vtc starting # TEST Check function calls with no action return ## s1 Starting server ### 
s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/v00015.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38880 Status: 0200 # top TEST ././tests/v00015.vtc completed PASS: ./tests/v00015.vtc # top TEST ././tests/v00016.vtc starting # TEST Various VCL compiler coverage tests ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 3 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) # top RESETTING after ././tests/v00016.vtc ## v1 Stop ### v1 CLI STATUS 300 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 38925 Status: 0200 # top TEST 
././tests/v00016.vtc completed
PASS: ./tests/v00016.vtc
# top TEST ././tests/v00017.vtc starting
# TEST VCL compiler coverage test: vcc_acl.c
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
### v1 CLI connection fd = 3
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 200
---- v1 VCL compilation got 200 expected 106
---- TEST FILE: ././tests/v00017.vtc
---- TEST DESCRIPTION: VCL compiler coverage test: vcc_acl.c
FAIL: ./tests/v00017.vtc
# top TEST ././tests/v00018.vtc starting
# TEST VCL compiler coverage test: vcc_action.c
## v1 Launch
### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
### v1 opening CLI connection
### v1 CLI connection fd = 3
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 106
## v1 VCL compilation failed (as expected)
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 200
### v1 CLI STATUS 106
## v1 VCL
compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) # top RESETTING after ././tests/v00018.vtc ## v1 Stop ### v1 CLI STATUS 300 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 39057 Status: 0200 # top TEST ././tests/v00018.vtc completed PASS: ./tests/v00018.vtc # top TEST ././tests/v00019.vtc starting # TEST VCL compiler coverage test: vcc_token.c ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 3 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) # top RESETTING after ././tests/v00019.vtc ## v1 Stop ### v1 CLI STATUS 300 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 39165 Status: 0200 # top TEST ././tests/v00019.vtc completed PASS: ./tests/v00019.vtc # top TEST ././tests/v00020.vtc starting # TEST VCL compiler coverage test: vcc_parse.c ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 
127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 3 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 # top RESETTING after ././tests/v00020.vtc ## v1 Stop ### v1 CLI STATUS 300 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 39232 Status: 0200 # top TEST ././tests/v00020.vtc completed PASS: ./tests/v00020.vtc # top TEST ././tests/v00021.vtc starting # TEST VCL compiler coverage test: vcc_xref.c ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 3 ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) ### v1 CLI STATUS 106 ## v1 VCL compilation failed (as expected) # top RESETTING after ././tests/v00021.vtc ## v1 Stop ### v1 CLI STATUS 300 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 39285 Status: 0200 # top TEST ././tests/v00021.vtc completed PASS: ./tests/v00021.vtc # top TEST ././tests/v00022.vtc starting # TEST Deeper test of random director ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s1 Started on 127.0.0.1:9080 ## s2 Starting server ### s2 listen on 127.0.0.1:9180 (fd 4) ## s3 Starting server ## s2 Started on 127.0.0.1:9180 ### s3 listen on 127.0.0.1:9181 (fd 5) ## s4 Starting server ## s3 Started on 127.0.0.1:9181 ### s4 listen on 127.0.0.1:9182 (fd 6) ## s4 Started on 127.0.0.1:9182 ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a 
'127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 7 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 CLI 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 11 ### c1 rxresp ### s4 Accepted socket fd is 12 ### s4 rxreq ### s4 rxreq ### c1 rxresp ### s2 Accepted socket fd is 13 ### s2 rxreq ### s2 rxreq ### c1 rxresp ### s4 rxreq ### c1 rxresp ### s4 rxreq ### c1 rxresp ### s4 rxreq ### c1 rxresp ### s1 Accepted socket fd is 14 ### s1 rxreq ### s1 shutting fd 14 ## s1 Ending ### c1 rxresp ### s2 rxreq ### c1 rxresp ### s4 shutting fd 12 ## s4 Ending ### c1 rxresp ### s2 shutting fd 13 ## s2 Ending ### c1 rxresp ### s3 Accepted socket fd is 12 ### s3 rxreq ### s3 shutting fd 12 ## s3 Ending ### c1 Closing fd 11 ## c1 Ending # top RESETTING after ././tests/v00022.vtc ## s1 Waiting for server ## s2 Waiting for server ## s3 Waiting for server ## s4 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 39327 Status: 0200 # top TEST ././tests/v00022.vtc completed PASS: ./tests/v00022.vtc # top TEST ././tests/v00023.vtc starting # TEST Test that obj.ttl = 0s prevents subsequent hits ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s1 Started on 127.0.0.1:9080 ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## c1 Starting client ## c1 Waiting for client ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 8 ### c1 rxresp ### s1 Accepted socket fd is 9 ### s1 rxreq ### s1 rxreq ### c1 rxresp 
### s1 shutting fd 9 ## s1 Ending ### c1 Closing fd 8 ## c1 Ending # top RESETTING after ././tests/v00023.vtc ## s1 Waiting for server ## v1 Stop ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Wait ## v1 R 39371 Status: 0200 # top TEST ././tests/v00023.vtc completed PASS: ./tests/v00023.vtc =============================================== 2 of 123 tests failed Please report to varnish-dev at projects.linpro.no =============================================== make[3]: *** [check-TESTS] Error 1 make[2]: *** [check-am] Error 2 make[1]: *** [check-recursive] Error 1 make: *** [check-recursive] Error 1 From joe at pinkpucker.net Sun Mar 8 04:25:56 2009 From: joe at pinkpucker.net (Joe Van Dyk) Date: Sat, 7 Mar 2009 20:25:56 -0800 Subject: Varnish + Webserver on same machine Message-ID: When running Varnish plus other software (mail, web server, etc), I should limit the amount of memory that Varnish uses, right? From jfrias at gmail.com Mon Mar 9 21:17:46 2009 From: jfrias at gmail.com (Javier Frias) Date: Mon, 9 Mar 2009 17:17:46 -0400 Subject: More Detailed Logging (varnishncsa) and Clustering/Cache sharing questions Message-ID: <22964b960903091417u3804c414m56085dac1000dfb8@mail.gmail.com> First, congrats on a great product. So far in my testing, it has met and exceed my expectations. A fresh change from the squid days :) I'm still on the testing phase of my first varnish deployment, and have come up with a few questions. varnishncsa currently only seems to log on a fix format, %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-agent}i", while the code and man page mention that the availability of a custom format will be forthcoming, has anyone patched it to add things like bytes served and request time or a custom header? Or can anyone answer if the varnish in memory log has these attributes? Also, there are hints of clustering support in the wiki. I'm specifically interested in cache sharing between varnish installs in a cluster. 
( akin to the Digest feature of Squid, where squid servers constantly share what they are caching with each other ). Is there any initial support for this, or is this planned in the near future? And lastly, how stable is the backend load balancing? I have dedicated hardware load balancers for my application, and have been debating whether to point the varnish caches to it, or list all backends on varnish. The plusses for listing them in varnish, are of course, having more detailed tests, and retry logic, but just curious what other high traffic sites have done. Again, thanks for a great project. -Javier -------------- next part -------------- An HTML attachment was scrubbed... URL: From sky at crucially.net Mon Mar 9 21:24:58 2009 From: sky at crucially.net (Artur Bergman) Date: Mon, 9 Mar 2009 14:24:58 -0700 Subject: More Detailed Logging (varnishncsa) and Clustering/Cache sharing questions In-Reply-To: <22964b960903091417u3804c414m56085dac1000dfb8@mail.gmail.com> References: <22964b960903091417u3804c414m56085dac1000dfb8@mail.gmail.com> Message-ID: <37E34B3A-9814-4818-8805-38ACA8F57D9A@crucially.net> On Mar 9, 2009, at 2:17 PM, Javier Frias wrote: > And lastly, how stable is the backend load balancing? I have > dedicated hardware load balancers for my application, and have been > debating whether to point the varnish caches to it, or list all > backends on varnish. The plusses for listing them in varnish, are of > course, having more detailed tests, and retry logic, but just > curious what other high traffic sites have done. We use a loadbalancer behind the varnishes, it makes it easier to take things in and out of ration. 
Artur From jfrias at gmail.com Mon Mar 9 22:10:02 2009 From: jfrias at gmail.com (Javier Frias) Date: Mon, 9 Mar 2009 18:10:02 -0400 Subject: More Detailed Logging (varnishncsa) and Clustering/Cache sharing questions In-Reply-To: <37E34B3A-9814-4818-8805-38ACA8F57D9A@crucially.net> References: <22964b960903091417u3804c414m56085dac1000dfb8@mail.gmail.com> <37E34B3A-9814-4818-8805-38ACA8F57D9A@crucially.net> Message-ID: <22964b960903091510k6c5b0303x932f08700da644d5@mail.gmail.com> On Mon, Mar 9, 2009 at 5:24 PM, Artur Bergman wrote: > > On Mar 9, 2009, at 2:17 PM, Javier Frias wrote: > > And lastly, how stable is the backend load balancing? I have dedicated >> hardware load balancers for my application, and have been debating whether >> to point the varnish caches to it, or list all backends on varnish. The >> plusses for listing them in varnish, are of course, having more detailed >> tests, and retry logic, but just curious what other high traffic sites have >> done. >> > > We use a loadbalancer behind the varnishes, it makes it easier to take > things in and out of ration. > > My thoughts also, but just checking to see if there is a wonder-full-reason(tm) I should go the other way. Thanks for the reply. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfrias at gmail.com Tue Mar 10 01:53:05 2009 From: jfrias at gmail.com (Javier Frias) Date: Mon, 9 Mar 2009 21:53:05 -0400 Subject: Objects in grace while backend is down Message-ID: <22964b960903091853t6ace2ea1n2de6f9acb69a209a@mail.gmail.com> I'm trying to accomplish having varnish always return a cached object by having a large grace period, and have set: in sub vcl_recv set req.grace = 600m; and sub vcl_fetch { set obj.grace = 600m; if (obj.ttl < 600s) { set obj.ttl = 600s; } } Grace just doesn't seem to work when the backend is down... 
within the ttl, the object gets returned when the backend is down, but as soon as the ttl is over it, it gives me a 503 Service unavailable, even though the grace is of 10hours. Is there a way to server objects out of grace when the backend is down? -Javier -------------- next part -------------- An HTML attachment was scrubbed... URL: From sky at crucially.net Tue Mar 10 02:48:33 2009 From: sky at crucially.net (sky at crucially.net) Date: Tue, 10 Mar 2009 02:48:33 +0000 Subject: Objects in grace while backend is down In-Reply-To: <22964b960903091853t6ace2ea1n2de6f9acb69a209a@mail.gmail.com> References: <22964b960903091853t6ace2ea1n2de6f9acb69a209a@mail.gmail.com> Message-ID: <1726309463-1236653327-cardhu_decombobulator_blackberry.rim.net-457579748-@bxe1018.bisx.prod.on.blackberry> The first request hits the back end, the others will get the graced version. (From my understanding.) Artur Sent via BlackBerry by AT&T -----Original Message----- From: Javier Frias Date: Mon, 9 Mar 2009 21:53:05 To: varnish-dev Subject: Objects in grace while backend is down _______________________________________________ varnish-dev mailing list varnish-dev at projects.linpro.no http://projects.linpro.no/mailman/listinfo/varnish-dev From kristian at redpill-linpro.com Tue Mar 10 09:00:16 2009 From: kristian at redpill-linpro.com (Kristian Lyngstol) Date: Tue, 10 Mar 2009 10:00:16 +0100 Subject: Objects in grace while backend is down In-Reply-To: <22964b960903091853t6ace2ea1n2de6f9acb69a209a@mail.gmail.com> References: <22964b960903091853t6ace2ea1n2de6f9acb69a209a@mail.gmail.com> Message-ID: <20090310090015.GA10476@kjeks.linpro.no> On Mon, Mar 09, 2009 at 09:53:05PM -0400, Javier Frias wrote: > Grace just doesn't seem to work when the backend is down... within the ttl, > the object gets returned when the backend is down, but as soon as the ttl is > over it, it gives me a 503 Service unavailable, even though the grace is of > 10hours. 
> > Is there a way to server objects out of grace when the backend is down? This was implemented in r3886 [1] and is available in trunk only at the moment. Also see #369 [2] regarding 'forced' grace/saint mode/attempted grace. [1] http://varnish.projects.linpro.no/changeset/3886 [2] http://varnish.projects.linpro.no/ticket/369 -- Kristian Lyngst?l Redpill Linpro AS Tlf: +47 21544179 Mob: +47 99014497 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From tfheen at redpill-linpro.com Mon Mar 9 10:33:00 2009 From: tfheen at redpill-linpro.com (Tollef Fog Heen) Date: Mon, 09 Mar 2009 11:33:00 +0100 Subject: vct.c trunk compilation failure In-Reply-To: (191919@gmail.com's message of "Fri, 6 Mar 2009 22:34:18 +0800") References: <73119.1236332126@critter.freebsd.dk> Message-ID: <8763ijhtpf.fsf@qurzaw.linpro.no> ]] 191919 | I found that perhaps it is a MacOS-specific problem (I am not sure). It seems to be; thanks for the analysis, fixed in SVN trunk now. -- Tollef Fog Heen Redpill Linpro -- Changing the game! t: +47 21 54 41 73 From jfrias at gmail.com Tue Mar 10 18:04:19 2009 From: jfrias at gmail.com (Javier Frias) Date: Tue, 10 Mar 2009 14:04:19 -0400 Subject: Objects in grace while backend is down In-Reply-To: <20090310090015.GA10476@kjeks.linpro.no> References: <22964b960903091853t6ace2ea1n2de6f9acb69a209a@mail.gmail.com> <20090310090015.GA10476@kjeks.linpro.no> Message-ID: <22964b960903101104j569c2856x6edc4f348f7368ba@mail.gmail.com> This is awesome, worked perfectly! Thank you guys. -Javier On Tue, Mar 10, 2009 at 5:00 AM, Kristian Lyngstol < kristian at redpill-linpro.com> wrote: > On Mon, Mar 09, 2009 at 09:53:05PM -0400, Javier Frias wrote: > > Grace just doesn't seem to work when the backend is down... 
within the > ttl, > > the object gets returned when the backend is down, but as soon as the ttl > is > > over it, it gives me a 503 Service unavailable, even though the grace is > of > > 10hours. > > > > Is there a way to serve objects out of grace when the backend is down? > > This was implemented in r3886 [1] and is available in trunk only at the > moment. > > Also see #369 [2] regarding 'forced' grace/saint mode/attempted grace. > > [1] http://varnish.projects.linpro.no/changeset/3886 > [2] http://varnish.projects.linpro.no/ticket/369 > > -- > Kristian Lyngstøl > Redpill Linpro AS > Tlf: +47 21544179 > Mob: +47 99014497 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From audun at ytterdal.net Wed Mar 11 09:50:32 2009 From: audun at ytterdal.net (Audun Ytterdal) Date: Wed, 11 Mar 2009 10:50:32 +0100 Subject: Objects in grace while backend is down In-Reply-To: <20090310090015.GA10476@kjeks.linpro.no> References: <22964b960903091853t6ace2ea1n2de6f9acb69a209a@mail.gmail.com> <20090310090015.GA10476@kjeks.linpro.no> Message-ID: <8318f61f0903110250u7a000659n748025d071d3cded@mail.gmail.com> On Tue, Mar 10, 2009 at 10:00 AM, Kristian Lyngstol wrote: > On Mon, Mar 09, 2009 at 09:53:05PM -0400, Javier Frias wrote: >> Grace just doesn't seem to work when the backend is down... within the ttl, >> the object gets returned when the backend is down, but as soon as the ttl is >> over it, it gives me a 503 Service unavailable, even though the grace is of >> 10hours. >> >> Is there a way to serve objects out of grace when the backend is down? >> > > This was implemented in r3886 [1] and is available in trunk only at the moment. > > Also see #369 [2] regarding 'forced' grace/saint mode/attempted grace. > > [1] http://varnish.projects.linpro.no/changeset/3886 > [2] http://varnish.projects.linpro.no/ticket/369 Hm. One of our backend applications does HTTP purge at the moment. 
Would it be possible to have some logic to that could enable you to say "purge this url only if you are not in degraded/supergrace mode" -- Audun Ytterdal http://audun.ytterdal.net From cloude at instructables.com Wed Mar 11 21:17:16 2009 From: cloude at instructables.com (Cloude Porteus) Date: Wed, 11 Mar 2009 14:17:16 -0700 Subject: What's the best way to give feedback for future Varnish features? Message-ID: <4a05e1020903111417w2c175418rd9bcf7a3aef109bc@mail.gmail.com> Our testing with Varnish is going great, but the features around ESI that haven't been implemented are going to make things a little bit harder than I had hoped. I was reading the PostTwoShoppingList and saw a request (please tell us which!) after more ESI features, but I'm not sure how to voice my preference. So here's what would make Varnish shine for our implementation. Both are already on the PostTwoShoppingList: 5. More ESI features Cookies & Conditionals, so we can do this example pulled from the ESI Spec: ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? 6. Gzip support We just need the ability to GZIP outbound text files, since we can't GZIP before the ESI statements are processed. best, cloude -- VP of Product Development Instructables.com http://www.instructables.com/member/lebowski From phk at phk.freebsd.dk Wed Mar 11 21:56:15 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 11 Mar 2009 21:56:15 +0000 Subject: What's the best way to give feedback for future Varnish features? In-Reply-To: Your message of "Wed, 11 Mar 2009 14:17:16 MST." <4a05e1020903111417w2c175418rd9bcf7a3aef109bc@mail.gmail.com> Message-ID: <24973.1236808575@critter.freebsd.dk> In message <4a05e1020903111417w2c175418rd9bcf7a3aef109bc at mail.gmail.com>, Cloud e Porteus writes: >Our testing with Varnish is going great, but the features around ESI >that haven't been implemented are going to make things a little bit >harder than I had hoped. 
I was reading the PostTwoShoppingList and saw >a request (please tell us which!) after more ESI features, but I'm not >sure how to voice my preference. Get a wiki login[1], edit the page. [1] They are free and we happily give them away, but you have to ask so we kan keep spammers away. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From 191919 at gmail.com Thu Mar 12 16:12:43 2009 From: 191919 at gmail.com (191919) Date: Fri, 13 Mar 2009 00:12:43 +0800 Subject: svn compilation failure on MacOS X Message-ID: $ make ... gcc -std=gnu99 -DHAVE_CONFIG_H -I. -I../.. -I../../include -g -O2 -MT vtc_http.o -MD -MP -MF .deps/vtc_http.Tpo -c -o vtc_http.o vtc_http.c In file included from vtc_http.c:47: vtc.h:60: error: syntax error before 'vtc_thread' vtc.h:60: warning: type defaults to 'int' in declaration of 'vtc_thread' vtc.h:60: warning: data definition has no type or storage class make: *** [vtc_http.o] Error 1 Adding #include to vtc_http.c solves the problem. Regards, jh -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristian at redpill-linpro.com Fri Mar 13 07:10:53 2009 From: kristian at redpill-linpro.com (Kristian Lyngstol) Date: Fri, 13 Mar 2009 08:10:53 +0100 Subject: Objects in grace while backend is down In-Reply-To: <22964b960903121613q6b3ed035q5cfae0ad34e3c5be@mail.gmail.com> References: <22964b960903091853t6ace2ea1n2de6f9acb69a209a@mail.gmail.com> <20090310090015.GA10476@kjeks.linpro.no> <22964b960903121613q6b3ed035q5cfae0ad34e3c5be@mail.gmail.com> Message-ID: <20090313071053.GD25044@kjeks.lyngstol.int> (Added varnish-dev back on CC as this might be of general interest) On Thu, Mar 12, 2009 at 07:13:01PM -0400, Javier Frias wrote: > and it would also complain about any use of "obj.*" in vcl_fetch. 
> > So I looked at your included test, and say that you used "beresp.grace" Yes, obj is no longer available in vcl_fetch, instead, beresp has essentially replaced it. > sub vcl_fetch { > set beresp.grace = 600m; > set beresp.ttl = 600s; > set beresp.cacheable = true; > } > > Doesn't complain about syntax, but also doesn't work when the backend is > down. > > Any suggestions? is beresp.grace the right variable to set? beresp.grace will work, yes, but you are still affected by req.grace. Grace-timing works like this: beresp.grace sets the time beyond TTL that an object will exist in cache and req.grace sets the time beyond TTL that an object is acceptable to return. Something like this will set a quasi-dynamic req.grace based on backend health: sub vcl_recv { if (req.backend.healthy) { set req.grace = 30s; } else { set req.grace = 30m; } } sub vcl_fetch { set beresp.grace = 30m; set beresp.ttl = 10m; set beresp.cacheable = true; } This VCL will tell varnish to always wait for a fresh object if the backend is healthy and an object is more than 10 minutes and 30 seconds old. If the backend is sick, it will deliver the cached object for 10m+30m. Also keep in mind that this depends on backend health polling. Hope this clarified things. -- Kristian Lyngst?l Redpill Linpro AS Tlf: +47 21544179 Mob: +47 99014497 -------------- next part -------------- A non-text attachment was scrubbed... 
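The VCL in Kristian's message only behaves as described when Varnish actually knows whether the backend is healthy; `req.backend.healthy` is fed by a probe declared on the backend. A minimal sketch in the Varnish 2 trunk-era syntax — the host, port, and every timing value here are illustrative, not taken from the thread:

```vcl
backend default {
    .host = "127.0.0.1";        # illustrative backend address
    .port = "8080";
    .probe = {
        .url = "/";             # URL fetched by the health poller
        .interval = 5s;         # poll every 5 seconds
        .timeout = 1s;          # a poll slower than this counts as failed
        .window = 5;            # look at the last 5 polls...
        .threshold = 3;         # ...and require 3 good ones for "healthy"
    }
}
```

With fewer than 3 of the last 5 polls succeeding, `req.backend.healthy` evaluates false and the long `req.grace` branch in Kristian's example takes effect.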
Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From kristian at redpill-linpro.com Fri Mar 13 09:48:07 2009 From: kristian at redpill-linpro.com (Kristian Lyngstol) Date: Fri, 13 Mar 2009 10:48:07 +0100 Subject: Objects in grace while backend is down In-Reply-To: <8318f61f0903110250u7a000659n748025d071d3cded@mail.gmail.com> References: <22964b960903091853t6ace2ea1n2de6f9acb69a209a@mail.gmail.com> <20090310090015.GA10476@kjeks.linpro.no> <8318f61f0903110250u7a000659n748025d071d3cded@mail.gmail.com> Message-ID: <20090313094806.GA12424@kjeks.linpro.no> On Wed, Mar 11, 2009 at 10:50:32AM +0100, Audun Ytterdal wrote: > On Tue, Mar 10, 2009 at 10:00 AM, Kristian Lyngstol > wrote: > > On Mon, Mar 09, 2009 at 09:53:05PM -0400, Javier Frias wrote: > >> Is there a way to server objects out of grace when the backend is down? > > > > This was implemented in r3886 [1] and is available in trunk only at the moment. > > > > Also see #369 [2] regarding 'forced' grace/saint mode/attempted grace. > > > > [1] http://varnish.projects.linpro.no/changeset/3886 > > [2] http://varnish.projects.linpro.no/ticket/369 > > Hm. One of backend application does HTTP purge at the moment. Would it > be possible to have some logic to that could enable you to say "purge > this url only if you are not in degraded/supergrace mode" This can be done by evaluating req.backend.healthy in VCL with the current implementation. It's going to be slightly trickier with regards to next part of #369 that deals with erroneous responses from a backend though. -- Kristian Lyngst?l Redpill Linpro AS Tlf: +47 21544179 Mob: +47 99014497 -------------- next part -------------- A non-text attachment was scrubbed... 
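Gating purges on backend health, as Kristian suggests, can be sketched directly in VCL. This is illustrative only: the ACL name and addresses are invented, and `purge_url()` is the Varnish 2-era purge helper — check the VCL reference for your version before relying on it:

```vcl
acl purgers {                    # hypothetical ACL for purge clients
    "127.0.0.1";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purgers) {
            error 405 "Not allowed";
        }
        if (req.backend.healthy) {
            purge_url(req.url);  # normal operation: honour the purge
            error 200 "Purged";
        }
        # Degraded mode: refuse the purge so the graced copy survives
        error 503 "Backend sick, purge refused";
    }
}
```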
Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From theo at omniti.com Fri Mar 13 13:57:54 2009 From: theo at omniti.com (Theo Schlossnagle) Date: Fri, 13 Mar 2009 09:57:54 -0400 Subject: Varnish on OpenBSD Message-ID: <3394316B-123B-415E-AAC9-BFB9A4FDAD4D@omniti.com> Does anyone here run Varnish on OpenBSD? I'm interested in either successes or known feature issues that would make it less than ideal as a platform. Thanks! -- Theo Schlossnagle Esoteric Curio -- http://lethargy.org/ OmniTI Computer Consulting, Inc. -- http://omniti.com/ From varnish-dev at projects.linpro.no Fri Mar 13 18:54:22 2009 From: varnish-dev at projects.linpro.no (Dr. french Drahzal) Date: Fri, 13 Mar 2009 19:54:22 +0100 (CET) Subject: Any ideas about printer? Message-ID: <20090313185422.D8DC91F7474@projects.linpro.no> An HTML attachment was scrubbed... URL: From pbruna at it-linux.cl Mon Mar 16 16:06:30 2009 From: pbruna at it-linux.cl (Patricio A. Bruna) Date: Mon, 16 Mar 2009 12:06:30 -0400 (CLT) Subject: Serve only from cache Message-ID: <20706733.110531237219590366.JavaMail.root@lisa.itlinux.cl> Hi, Is possible to serve the content only from cache and no go to look for it on the backend. I mean, if we are doing an update to a web site we want the clients only see what is on the cache, the old site, and when the update is ready start showing the new site. Thanks ------------------------------------ Patricio Bruna V. IT Linux Ltda. http://www.it-linux.cl http://wiki.itlinux.cl Fono : (+56-2) 333 0578 -------------- next part -------------- An HTML attachment was scrubbed... URL: From varnish-dev at projects.linpro.no Tue Mar 17 08:09:31 2009 From: varnish-dev at projects.linpro.no (Eryru KIPP) Date: Tue, 17 Mar 2009 09:09:31 +0100 (CET) Subject: Is this exact time alright? Message-ID: <20090317080931.B04A41F751A@projects.linpro.no> An HTML attachment was scrubbed... 
URL: From kristian at redpill-linpro.com Tue Mar 17 10:29:51 2009 From: kristian at redpill-linpro.com (Kristian Lyngstol) Date: Tue, 17 Mar 2009 11:29:51 +0100 Subject: Serve only from cache In-Reply-To: <20706733.110531237219590366.JavaMail.root@lisa.itlinux.cl> References: <20706733.110531237219590366.JavaMail.root@lisa.itlinux.cl> Message-ID: <20090317102951.GC12929@kjeks.escenic.com> On Mon, Mar 16, 2009 at 12:06:30PM -0400, Patricio A. Bruna wrote: > Hi, > Is possible to serve the content only from cache and no go to look for it on the backend. > I mean, if we are doing an update to a web site we want the clients only > see what is on the cache, the old site, and when the update is ready > start showing the new site. Sure. Mark the backend as bad with a health check. If you're using trunk, that will also trigger grace for expired objects. After the site is updated, you can run a purge to get new content. You'll want to look into "Backend Health Polling". -- Kristian Lyngst?l Redpill Linpro AS Tlf: +47 21544179 Mob: +47 99014497 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available URL: From eden at mojiti.com Fri Mar 20 19:37:05 2009 From: eden at mojiti.com (Eden Li) Date: Fri, 20 Mar 2009 12:37:05 -0700 Subject: accept_fd_holdoff second/millisecond confusion Message-ID: <180e6a10903201237y1e86bc00vdd7c7b473a9969cd@mail.gmail.com> Hi all, We ran into a situation where our backend held connections open for so long that we ran into the open file limit. After clearing up the backend and ensuring, varnish never came back and we had to restart it in order for it to start relaying connections again. Flipping on debug mode shows the error "Too many open files when accept(2)ing. Sleeping." which should sleep for 50 milliseconds (according to param.show). Instead it seems to be sleeping for 50*1000 *seconds* (13 hours). 
Looking at the code, it appears that this is either a doc bug or a code bug. I was able to fix the root issue with this patch: --- a/varnish-2.0.1/bin/varnishd/cache_acceptor.c 2008-10-17 11:59:49.000000000 -0700 +++ b/varnish-2.0.1/bin/varnishd/cache_acceptor.c 2009-03-20 12:16:15.000000000 -0700 @@ -228,7 +228,7 @@ case EMFILE: VSL(SLT_Debug, ls->sock, "Too many open files when accept(2)ing. Sleeping."); - TIM_sleep(params->accept_fd_holdoff * 1000.0); + TIM_sleep(params->accept_fd_holdoff * 0.001); break; default: VSL(SLT_Debug, ls->sock, Is this the right fix? Should I create a ticket in trac for this? We're getting around it now by setting the max open file limit and listen_depth appropriately so that varnish never gets to this point, but it'd be nice if this was fixed in case we ever accidentally get here again. From phk at phk.freebsd.dk Fri Mar 20 19:43:57 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 20 Mar 2009 19:43:57 +0000 Subject: accept_fd_holdoff second/millisecond confusion In-Reply-To: Your message of "Fri, 20 Mar 2009 12:37:05 MST." <180e6a10903201237y1e86bc00vdd7c7b473a9969cd@mail.gmail.com> Message-ID: <35801.1237578237@critter.freebsd.dk> Fixed, thanks! Poul-Henning In message <180e6a10903201237y1e86bc00vdd7c7b473a9969cd at mail.gmail.com>, Eden L i writes: >Hi all, > >We ran into a situation where our backend held connections open for so >long that we ran into the open file limit. After clearing up the >backend and ensuring, varnish never came back and we had to restart it >in order for it to start relaying connections again. > >Flipping on debug mode shows the error "Too many open files when >accept(2)ing. Sleeping." which should sleep for 50 milliseconds >(according to param.show). Instead it seems to be sleeping for >50*1000 *seconds* (13 hours). Looking at the code, it appears that >this is either a doc bug or a code bug. 
I was able to fix the root >issue with this patch: > >--- a/varnish-2.0.1/bin/varnishd/cache_acceptor.c 2008-10-17 >11:59:49.000000000 -0700 >+++ b/varnish-2.0.1/bin/varnishd/cache_acceptor.c 2009-03-20 >12:16:15.000000000 -0700 >@@ -228,7 +228,7 @@ > case EMFILE: > VSL(SLT_Debug, ls->sock, > "Too many open files when >accept(2)ing. Sleeping."); >- >TIM_sleep(params->accept_fd_holdoff * 1000.0); >+ >TIM_sleep(params->accept_fd_holdoff * 0.001); > break; > default: > VSL(SLT_Debug, ls->sock, > >Is this the right fix? Should I create a ticket in trac for this? >We're getting around it now by setting the max open file limit and >listen_depth appropriately so that varnish never gets to this point, >but it'd be nice if this was fixed in case we ever accidentally get >here again. >_______________________________________________ >varnish-dev mailing list >varnish-dev at projects.linpro.no >http://projects.linpro.no/mailman/listinfo/varnish-dev > -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From tfheen at redpill-linpro.com Tue Mar 24 08:46:12 2009 From: tfheen at redpill-linpro.com (Tollef Fog Heen) Date: Tue, 24 Mar 2009 09:46:12 +0100 Subject: Varnish on OpenBSD In-Reply-To: <3394316B-123B-415E-AAC9-BFB9A4FDAD4D@omniti.com> (Theo Schlossnagle's message of "Fri, 13 Mar 2009 09:57:54 -0400") References: <3394316B-123B-415E-AAC9-BFB9A4FDAD4D@omniti.com> Message-ID: <87k56fl357.fsf@qurzaw.linpro.no> ]] Theo Schlossnagle | Does anyone here run Varnish on OpenBSD? I'm interested in either | successes or known feature issues that would make it less than ideal | as a platform. I believe it had some quirks associated with it where something or another worked suboptimally. Grep the archives for what it was, it was discussed here or on -misc about eight months ago, iirc. 
-- Tollef Fog Heen Redpill Linpro -- Changing the game! t: +47 21 54 41 73 From varnish-dev at projects.linpro.no Tue Mar 24 22:26:06 2009 From: varnish-dev at projects.linpro.no (Drugs.com) Date: Tue, 24 Mar 2009 23:26:06 +0100 (CET) Subject: Shocking Amanda Bynes video Message-ID: <20090324222606.EA20128531@projects.linpro.no> An HTML attachment was scrubbed... URL: From cloude at instructables.com Thu Mar 26 03:34:59 2009 From: cloude at instructables.com (Cloude Porteus) Date: Wed, 25 Mar 2009 20:34:59 -0700 Subject: Storage Size & Virtual Memory? Message-ID: <4a05e1020903252034l78cdec9cw5146efd8c3c7c2b2@mail.gmail.com> I thought I could have a cache size that was larger than the amount of RAM. My config looks like this: VARNISH_STORAGE_FILE=/var/spool/squid/varnish/varnish_storage.bin VARNISH_STORAGE_SIZE=50G VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}" But I'm seeing my Varnish process with 55.6g of virtual memory and some usage of swap memory. I'm also seeing a system load average of 1.5-3, where my current squids run well under 1. I'm also trying to figure out how many objects are in the cache, but I can't tell which varnishstat will get me close to that number, is it? - Objects sent with write - SHM records Thanks for any help! best, cloude -- VP of Product Development Instructables.com http://www.instructables.com/member/lebowski -------------- next part -------------- An HTML attachment was scrubbed... URL: From aotto at mosso.com Thu Mar 26 03:55:23 2009 From: aotto at mosso.com (Adrian Otto) Date: Wed, 25 Mar 2009 20:55:23 -0700 Subject: Storage Size & Virtual Memory? In-Reply-To: <4a05e1020903252034l78cdec9cw5146efd8c3c7c2b2@mail.gmail.com> References: <4a05e1020903252034l78cdec9cw5146efd8c3c7c2b2@mail.gmail.com> Message-ID: Cloude, Stop varnish, drop your swap partition with "/sbin/swapoff". Restart Varnish, and comment your swap partition out of fstab. You'll be golden after that. 
Cheers, Adrian On Mar 25, 2009, at 8:34 PM, Cloude Porteus wrote: > I thought I could have a cache size that was larger than the amount > of RAM. My config looks like this: > > VARNISH_STORAGE_FILE=/var/spool/squid/varnish/varnish_storage.bin > VARNISH_STORAGE_SIZE=50G > VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}" > > But I'm seeing my Varnish process with 55.6g of virtual memory and > some usage of swap memory. I'm also seeing a system load average of > 1.5-3, where my current squids run well under 1. > > I'm also trying to figure out how many objects are in the cache, > but I can't tell which varnishstat will get me close to that > number, is it? > > - Objects sent with write > - SHM records > > Thanks for any help! > > best, > cloude > > > -- > VP of Product Development > Instructables.com > > http://www.instructables.com/member/lebowski > _______________________________________________ > varnish-dev mailing list > varnish-dev at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From joel.chornik at elserver.com Sat Mar 28 00:40:17 2009 From: joel.chornik at elserver.com (Joel A. Chornik) Date: Fri, 27 Mar 2009 21:40:17 -0300 Subject: varnish high load average and dropped http session Message-ID: <8269B3BB19AC45CBA016A9643A6D3957@desktop> Hi, When handling a request for a large file (~300mb) varnish just closes the http connection and as a side effect, load average on the Server pikes. Hitting on that file just a few times a minute is enough to get 200LA on a Sun x2250 with 4GB RAM. Varnish is configured with a relatively small file cache of 2.7GB, but anyhow it should not fail this way. On a similar 64bit Server but with 7GB file cache configured, the file gets Server without problem. Tried on varnish 2.0.3 with different file sizes (above 300mb) and extensiones (.tar, .mp3. 
.htm) The log reports: varnishd[28124]: Child (7003) Panic message: Assert error in STV_alloc(), stevedore.c line 71: Condition((st) != NULL) not true. errno = 107 (Transport endpoint is not connected) thread = (cache-worker)sp = 0x7f87a7ff2008 { fd = 1134, id = 1134, xid = 1972308839, client = xxx.xxx.xxx.xxx:1433, step = STP_FETCH, handling = discard, ws = 0x7f87a7ff2078 { id = "sess", {s,f,r,e} = {0x7f87a7ff2808,,+308,(nil),+16384}, }, worker = 0x7724ebd0 { }, vcl = { srcname = { "input", "Default", }, }, obj = 0x7f87ec179000 { refcnt = 1, xid = 1972308839, ws = 0x7f87ec179028 { id = "obj", {s,f,r,e} = {0x7f87ec179358,,+251,(nil),+7336}, }, http = { ws = 0x7f87ec179028 { id = "obj", {s,f,r,e} = {0x7f87ec179358,,+251,(nil),+7336}, }, hd = { "Date: Thu, 26 Mar 2009 23:30:15 GMT", "Server: Apache", "Last-Modified: Thu, 26 Mar 2009 06:57:11 GMT", -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at pbwiki.com Wed Mar 25 23:36:34 2009 From: david at pbwiki.com (David Weekly) Date: Wed, 25 Mar 2009 16:36:34 -0700 Subject: "make check" fails for vernish 2.0.03 on Debian? Message-ID: connect(): Connection refused connect(): Connection refused connect(): Connection refused Assert error in varnish_ask_cli(), vtc_varnish.c line 98: ?Condition(i == 0) not true. ?errno = 111 (Connection refused) /bin/sh: line 4: ?8595 Aborted ? ? ? ? ? ? ? ? ./varnishtest ${dir}$tst connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused Assert error in varnish_ask_cli(), vtc_varnish.c line 98: ?Condition(i == 0) not true. ?errno = 111 (Connection refused) /bin/sh: line 4: ?8650 Aborted ? ? ? ? ? ? ? ? ./varnishtest ${dir}$tst connect(): Connection refused connect(): Connection refused Assert error in varnish_ask_cli(), vtc_varnish.c line 98: ?Condition(i == 0) not true. ?errno = 111 (Connection refused) /bin/sh: line 4: ?8707 Aborted ? ? ? ? ? ? ? ? 
./varnishtest ${dir}$tst [...the same connect()/varnish_ask_cli() assertion failure repeats for each of the remaining tests...]
./varnishtest ${dir}$tst Assert error in varnish_ask_cli(), vtc_varnish.c line 98: Condition(i == 0) not true. /bin/sh: line 4: 14375 Aborted ./varnishtest ${dir}$tst connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused Assert error in varnish_ask_cli(), vtc_varnish.c line 98: Condition(i == 0) not true. errno = 111 (Connection refused) /bin/sh: line 4: 14430 Aborted ./varnishtest ${dir}$tst connect(): Connection refused connect(): Connection refused connect(): Connection refused Assert error in varnish_ask_cli(), vtc_varnish.c line 98: Condition(i == 0) not true. errno = 111 (Connection refused) /bin/sh: line 4: 14486 Aborted ./varnishtest ${dir}$tst connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused connect(): Connection refused Assert error in varnish_ask_cli(), vtc_varnish.c line 98: Condition(i == 0) not true. errno = 111 (Connection refused) /bin/sh: line 4: 14965 Aborted ./varnishtest ${dir}$tst connect(): Connection refused connect(): Connection refused connect(): Connection refused Assert error in varnish_ask_cli(), vtc_varnish.c line 98: Condition(i == 0) not true. errno = 111 (Connection refused) /bin/sh: line 4: 15024 Aborted ./varnishtest ${dir}$tst make[3]: *** [check-TESTS] Error 1 make[2]: *** [check-am] Error 2 make[1]: *** [check-recursive] Error 1 make: *** [check-recursive] Error 1 ting server ### s1 listen on 127.0.0.1:9080 (fd 3) ##
v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## s1 Started on 127.0.0.1:9080 ### v1 opening CLI connection ### v1 CLI connection fd = 7 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start FAIL: ./tests/c00008.vtc # top TEST ././tests/c00009.vtc starting # TEST Test restarts ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s2 Starting server ## s1 Started on 127.0.0.1:9080 ### s2 listen on 127.0.0.1:9180 (fd 4) ## v1 Launch ## s2 Started on 127.0.0.1:9180 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 7 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start FAIL: ./tests/c00009.vtc # top TEST ././tests/c00010.vtc starting # TEST Test pass from hit ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## s1 Started on 127.0.0.1:9080 ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 5 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start FAIL: ./tests/c00010.vtc # top TEST ././tests/c00011.vtc starting # TEST Test hit for pass (pass from fetch) ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 5 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start FAIL: ./tests/c00011.vtc # top TEST ././tests/c00012.vtc starting # TEST Test pass from miss ## s1 Starting server ### s1
listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## s1 Started on 127.0.0.1:9080 ### v1 opening CLI connection ### v1 CLI connection fd = 4 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start FAIL: ./tests/c00012.vtc # top TEST ././tests/c00013.vtc starting # TEST Test parking second request on backend delay ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 5 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start FAIL: ./tests/c00013.vtc # top TEST ././tests/c00014.vtc starting # TEST Test parking second request on backend delay, then pass ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ## s1 Started on 127.0.0.1:9080 ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### v1 opening CLI connection ### v1 CLI connection fd = 5 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start FAIL: ./tests/c00014.vtc # top TEST ././tests/c00015.vtc starting # TEST Test switching VCLs ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## s1 Started on 127.0.0.1:9080 ### v1 opening CLI connection ### v1 CLI connection fd = 7 ### v1 CLI STATUS 200 ### v1 CLI STATUS 200 ## v1 Start FAIL: ./tests/c00015.vtc # top TEST ././tests/c00016.vtc starting # TEST Test Connection header handling ## s1 Starting server ### s1
listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 7 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/c00016.vtc # ? ?top ?TEST ././tests/c00017.vtc starting # ? ?TEST Test Backend Polling ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 3 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/c00017.vtc # ? ?top ?TEST ././tests/c00018.vtc starting # ? ?TEST Check Expect headers ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? s1 ? Started on 127.0.0.1:9080 ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/c00018.vtc # ? ?top ?TEST ././tests/c00019.vtc starting # ? ?TEST Check purge counters and duplicate purge elimination ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -p purge_hash=on ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 7 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/c00019.vtc # ? ?top ?TEST ././tests/c00020.vtc starting # ? ?TEST Test -h critbit a bit ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? 
CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -hcritbit ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/c00020.vtc # ? ?top ?TEST ././tests/c00021.vtc starting # ? ?TEST Test banning a url with cli:purge ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/c00021.vtc # ? ?top ?TEST ././tests/c00022.vtc starting # ? ?TEST Test banning a url with VCL purge ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? CLI connection fd = 7 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/c00022.vtc # ? ?top ?TEST ././tests/e00000.vtc starting # ? ?TEST ESI test with no ESI content ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 7 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/e00000.vtc # ? ?top ?TEST ././tests/e00001.vtc starting # ? ?TEST ESI:remove ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? 
CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 7 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/e00001.vtc # ? ?top ?TEST ././tests/e00002.vtc starting # ? ?TEST ESI CDATA ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 4 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/e00002.vtc # ? ?top ?TEST ././tests/e00003.vtc starting # ? ?TEST ESI include ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/e00003.vtc # ? ?top ?TEST ././tests/e00004.vtc starting # ? ?TEST ESI commented include ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/e00004.vtc # ? ?top ?TEST ././tests/e00005.vtc starting # ? ?TEST ESI relative include ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? 
CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/e00005.vtc # ? ?top ?TEST ././tests/e00006.vtc starting # ? ?TEST ESI include with http:// ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? s2 ? Starting server ## ? s1 ? Started on 127.0.0.1:9080 ### ?s2 ? listen on 127.0.0.1:9180 (fd 4) ## ? s2 ? Started on 127.0.0.1:9180 ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 7 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/e00006.vtc # ? ?top ?TEST ././tests/e00007.vtc starting # ? ?TEST ESI spanning storage bits ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/e00007.vtc # ? ?top ?TEST ././tests/e00008.vtc starting # ? ?TEST ESI parsing errors ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 4 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/e00008.vtc # ? ?top ?TEST ././tests/e00009.vtc starting # ? ?TEST ESI binary detector ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? 
CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 7 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/e00009.vtc # ? ?top ?TEST ././tests/e00010.vtc starting # ? ?TEST Ignoring non esi elements ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 4 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/e00010.vtc # ? ?top ?TEST ././tests/e00011.vtc starting # ? ?TEST Make sure that PASS'ed ESI requests use GET for includes ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 7 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/e00011.vtc # ? ?top ?TEST ././tests/e00012.vtc starting # ? ?TEST ESI includes for pre HTTP/1.1 cannot used chunked encoding ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? s1 ? Started on 127.0.0.1:9080 ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/e00012.vtc # ? ?top ?TEST ././tests/e00013.vtc starting # ? ?TEST All white-space object, in multiple storage segments ## ? s1 ? Starting server ### ?s1 ? 
listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 4 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/e00013.vtc # ? ?top ?TEST ././tests/e00014.vtc starting # ? ?TEST Check ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 4 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/r00326.vtc # ? ?top ?TEST ././tests/r00345.vtc starting # ? ?TEST #345, ESI waitinglist trouble ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -p diag_bitmap=0x20 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/r00345.vtc # ? ?top ?TEST ././tests/r00354.vtc starting # ? ?TEST #354 Segfault in strcmp in http_DissectRequest() ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 3 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/r00354.vtc # ? ?top ?TEST ././tests/r00365.vtc starting # ? ?TEST Test restarts in vcl_hit ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? 
CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 4 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/r00365.vtc # ? ?top ?TEST ././tests/r00386.vtc starting # ? ?TEST #386, failure to insert include ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -p diag_bitmap=0x20 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/r00386.vtc # ? ?top ?TEST ././tests/r00387.vtc starting # ? ?TEST Regression test for #387: too long chunk header ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 7 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/r00387.vtc # ? ?top ?TEST ././tests/r00400.vtc starting # ? ?TEST Regression test for ticket 409 ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/r00400.vtc # ? ?top ?TEST ././tests/r00409.vtc starting # ? ?TEST Regression test for ticket 409 ## ? v1 ? Launch ### ?v1 ? 
CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 3 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) # ? ?top ?RESETTING after ././tests/r00409.vtc ## ? v1 ? Stop ### ?v1 ? CLI STATUS 300 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Wait ## ? v1 ? R 13261 Status: 0200 # ? ?top ?TEST ././tests/r00409.vtc completed PASS: ./tests/r00409.vtc # ? ?top ?TEST ././tests/r00412.vtc starting # ? ?TEST Regression test for ticket 412 ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 4 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/r00412.vtc # ? ?top ?TEST ././tests/r00416.vtc starting # ? ?TEST Regression test for #416: a surplus of HTTP headers ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 4 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/r00416.vtc # ? ?top ?TEST ././tests/r00425.vtc starting # ? ?TEST check late pass stalling ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/r00425.vtc # ? 
?top ?TEST ././tests/r00427.vtc starting # ? ?TEST client close in ESI delivery ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 4 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/r00427.vtc # ? ?top ?TEST ././tests/s00000.vtc starting # ? ?TEST Simple expiry test ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 4 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/s00000.vtc # ? ?top ?TEST ././tests/s00001.vtc starting # ? ?TEST Simple expiry test (fully reaped object) ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 4 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/s00001.vtc # ? ?top ?TEST ././tests/v00000.vtc starting # ? ?TEST VCL/VRT: req.grace, obj.ttl and obj.grace ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/v00000.vtc # ? 
?top ?TEST ././tests/v00001.vtc starting # ? ?TEST VCL/VRT: url/request/proto/response/status ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 4 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/v00001.vtc # ? ?top ?TEST ././tests/v00002.vtc starting # ? ?TEST VCL: test syntax/semantic checks on backend decls. (vcc_backend.c) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 3 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? 
CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) # ? ?top ?RESETTING after ././tests/v00002.vtc ## ? v1 ? Stop ### ?v1 ? CLI STATUS 300 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Wait ## ? v1 ? R 13750 Status: 0200 # ? ?top ?TEST ././tests/v00002.vtc completed PASS: ./tests/v00002.vtc # ? ?top ?TEST ././tests/v00003.vtc starting # ? ?TEST VCL: test syntax/semantic checks on director decls. ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 3 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) # ? ?top ?RESETTING after ././tests/v00003.vtc ## ? v1 ? Stop ### ?v1 ? CLI STATUS 300 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Wait ## ? v1 ? R 13834 Status: 0200 # ? ?top ?TEST ././tests/v00003.vtc completed PASS: ./tests/v00003.vtc # ? 
?top ?TEST ././tests/v00004.vtc starting # ? ?TEST VCL: test creation/destruction of backends ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 3 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? CLI 200 ## ? v1 ? Start FAIL: ./tests/v00004.vtc # ? ?top ?TEST ././tests/v00005.vtc starting # ? ?TEST VCL: test backend probe syntax ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 3 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) # ? ?top ?RESETTING after ././tests/v00005.vtc ## ? v1 ? Stop ### ?v1 ? CLI STATUS 300 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Wait ## ? v1 ? R 13942 Status: 0200 # ? ?top ?TEST ././tests/v00005.vtc completed PASS: ./tests/v00005.vtc # ? ?top ?TEST ././tests/v00006.vtc starting # ? ?TEST VCL: Test backend retirement ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid -p thread_pools=1 -w1,1,300 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/v00006.vtc # ? ?top ?TEST ././tests/v00007.vtc starting # ? ?TEST Test random director ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? 
CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/v00007.vtc # ? ?top ?TEST ././tests/v00008.vtc starting # ? ?TEST Test host header specification ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/v00008.vtc # ? ?top ?TEST ././tests/v00009.vtc starting # ? ?TEST Test round robin director ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:2000 (fd 3) ## ? s2 ? Starting server ## ? s1 ? Started on 127.0.0.1:2000 ### ?s2 ? listen on 127.0.0.1:3000 (fd 4) ## ? s3 ? Starting server ## ? s2 ? Started on 127.0.0.1:3000 ### ?s3 ? listen on 127.0.0.1:4000 (fd 6) ## ? s4 ? Starting server ## ? s3 ? Started on 127.0.0.1:4000 ### ?s4 ? listen on 127.0.0.1:5000 (fd 8) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s4 ? Started on 127.0.0.1:5000 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 10 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/v00009.vtc # ? ?top ?TEST ././tests/v00010.vtc starting # ? ?TEST VCL: check panic and restart ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? 
CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/v00010.vtc # ? ?top ?TEST ././tests/v00011.vtc starting # ? ?TEST Test vcl purging ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 4 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/v00011.vtc # ? ?top ?TEST ././tests/v00012.vtc starting # ? ?TEST Check backend connection limit ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/v00012.vtc # ? ?top ?TEST ././tests/v00013.vtc starting # ? ?TEST Check obj.hits ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/v00013.vtc # ? ?top ?TEST ././tests/v00014.vtc starting # ? ?TEST Check req.backend.healthy ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? s1 ? Started on 127.0.0.1:9080 ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 5 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/v00014.vtc # ? 
?top ?TEST ././tests/v00015.vtc starting # ? ?TEST Check function calls with no action return ## ? s1 ? Starting server ### ?s1 ? listen on 127.0.0.1:9080 (fd 3) ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ## ? s1 ? Started on 127.0.0.1:9080 ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 4 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Start FAIL: ./tests/v00015.vtc # ? ?top ?TEST ././tests/v00016.vtc starting # ? ?TEST Various VCL compiler coverage tests ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 3 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) # ? ?top ?RESETTING after ././tests/v00016.vtc ## ? v1 ? Stop ### ?v1 ? CLI STATUS 300 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Wait ## ? v1 ? R 14560 Status: 0200 # ? ?top ?TEST ././tests/v00016.vtc completed PASS: ./tests/v00016.vtc # ? ?top ?TEST ././tests/v00017.vtc starting # ? ?TEST VCL compiler coverage test: vcc_acl.c ## ? v1 ? Launch ### ?v1 ? 
CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 3 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 # ? ?top ?RESETTING after ././tests/v00017.vtc ## ? v1 ? Stop ### ?v1 ? CLI STATUS 300 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Wait ## ? v1 ? R 14633 Status: 0200 # ? ?top ?TEST ././tests/v00017.vtc completed PASS: ./tests/v00017.vtc # ? ?top ?TEST ././tests/v00018.vtc starting # ? ?TEST VCL compiler coverage test: vcc_action.c ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 3 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? 
CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) # ? ?top ?RESETTING after ././tests/v00018.vtc ## ? v1 ? Stop ### ?v1 ? CLI STATUS 300 ### ?v1 ? CLI STATUS 200 ## ? v1 ? Wait ## ? v1 ? R 14713 Status: 0200 # ? ?top ?TEST ././tests/v00018.vtc completed PASS: ./tests/v00018.vtc # ? ?top ?TEST ././tests/v00019.vtc starting # ? ?TEST VCL compiler coverage test: vcc_token.c ## ? v1 ? Launch ### ?v1 ? CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid ### ?v1 ? opening CLI connection ### ?v1 ? CLI connection fd = 3 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 200 ### ?v1 ? CLI STATUS 106 ## ? v1 ? VCL compilation failed (as expected) ### ?v1 ? CLI STATUS 200 ### ?v1 ? 
CLI STATUS 200
###  v1  CLI STATUS 106
##   v1  VCL compilation failed (as expected)
###  v1  CLI STATUS 106
##   v1  VCL compilation failed (as expected)
###  v1  CLI STATUS 106
##   v1  VCL compilation failed (as expected)
###  v1  CLI STATUS 200
###  v1  CLI STATUS 200
###  v1  CLI STATUS 106
##   v1  VCL compilation failed (as expected)
#    top  RESETTING after ././tests/v00019.vtc
##   v1  Stop
###  v1  CLI STATUS 300
###  v1  CLI STATUS 200
##   v1  Wait
##   v1  R 14812 Status: 0200
#    top  TEST ././tests/v00019.vtc completed
PASS: ./tests/v00019.vtc
#    top  TEST ././tests/v00020.vtc starting
#    TEST VCL compiler coverage test: vcc_parse.c
##   v1  Launch
###  v1  CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
###  v1  opening CLI connection
###  v1  CLI connection fd = 3
###  v1  CLI STATUS 200
###  v1  CLI STATUS 200
###  v1  CLI STATUS 106
##   v1  VCL compilation failed (as expected)
###  v1  CLI STATUS 106
##   v1  VCL compilation failed (as expected)
###  v1  CLI STATUS 200
###  v1  CLI STATUS 200
#    top  RESETTING after ././tests/v00020.vtc
##   v1  Stop
###  v1  CLI STATUS 300
###  v1  CLI STATUS 200
##   v1  Wait
##   v1  R 14878 Status: 0200
#    top  TEST ././tests/v00020.vtc completed
PASS: ./tests/v00020.vtc
#    top  TEST ././tests/v00021.vtc starting
#    TEST VCL compiler coverage test: vcc_xref.c
##   v1  Launch
###  v1  CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
###  v1  opening CLI connection
###  v1  CLI connection fd = 3
###  v1  CLI STATUS 106
##   v1  VCL compilation failed (as expected)
###  v1  CLI STATUS 106
##   v1  VCL compilation failed (as expected)
###  v1  CLI STATUS 106
##   v1  VCL compilation failed (as expected)
###  v1  CLI STATUS 106
##   v1  VCL compilation failed (as expected)
###  v1  CLI STATUS 106
##   v1
VCL compilation failed (as expected)
###  v1  CLI STATUS 106
##   v1  VCL compilation failed (as expected)
###  v1  CLI STATUS 106
##   v1  VCL compilation failed (as expected)
#    top  RESETTING after ././tests/v00021.vtc
##   v1  Stop
###  v1  CLI STATUS 300
###  v1  CLI STATUS 200
##   v1  Wait
##   v1  R 14932 Status: 0200
#    top  TEST ././tests/v00021.vtc completed
PASS: ./tests/v00021.vtc
#    top  TEST ././tests/v00022.vtc starting
#    TEST Deeper test of random director
##   s1  Starting server
###  s1  listen on 127.0.0.1:9080 (fd 3)
##   s1  Started on 127.0.0.1:9080
##   s2  Starting server
###  s2  listen on 127.0.0.1:9180 (fd 5)
##   s3  Starting server
##   s2  Started on 127.0.0.1:9180
###  s3  listen on 127.0.0.1:9181 (fd 6)
##   s4  Starting server
##   s3  Started on 127.0.0.1:9181
###  s4  listen on 127.0.0.1:9182 (fd 8)
##   v1  Launch
##   s4  Started on 127.0.0.1:9182
###  v1  CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
###  v1  opening CLI connection
###  v1  CLI connection fd = 11
###  v1  CLI STATUS 200
###  v1  CLI STATUS 200
##   v1  Start
FAIL: ./tests/v00022.vtc
#    top  TEST ././tests/v00023.vtc starting
#    TEST Test that obj.ttl = 0s prevents subsequent hits
##   s1  Starting server
###  s1  listen on 127.0.0.1:9080 (fd 3)
##   v1  Launch
###  v1  CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -P /tmp/__v1/varnishd.pid
##   s1  Started on 127.0.0.1:9080
###  v1  opening CLI connection
###  v1  CLI connection fd = 4
###  v1  CLI STATUS 200
###  v1  CLI STATUS 200
##   v1
Start
FAIL: ./tests/v00023.vtc
===============================================
101 of 123 tests failed
Please report to varnish-dev at projects.linpro.no
===============================================
make[3]: Leaving directory `/root/varnish-2.0.3/bin/varnishtest'
make[2]: Leaving directory `/root/varnish-2.0.3/bin/varnishtest'
make[1]: Leaving directory `/root/varnish-2.0.3/bin'

--
Follow me on Twitter! http://twitter.com/dweekly

From tfheen at redpill-linpro.com  Fri Mar 27 15:09:02 2009
From: tfheen at redpill-linpro.com (Tollef Fog Heen)
Date: Fri, 27 Mar 2009 16:09:02 +0100
Subject: "make check" fails for vernish 2.0.03 on Debian?
In-Reply-To: (David Weekly's message of "Wed, 25 Mar 2009 16:36:34 -0700")
References:
Message-ID: <87fxgzj94h.fsf@qurzaw.linpro.no>

]] David Weekly

| 101 of 123 tests failed

Check if /tmp/__v1 is owned by another user than the one running make
check?

--
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73

From david at pbwiki.com  Fri Mar 27 15:28:47 2009
From: david at pbwiki.com (David Weekly)
Date: Fri, 27 Mar 2009 08:28:47 -0700
Subject: "make check" fails for vernish 2.0.03 on Debian?
In-Reply-To: <87fxgzj94h.fsf@qurzaw.linpro.no>
References: <87fxgzj94h.fsf@qurzaw.linpro.no>
Message-ID:

2009/3/27 Tollef Fog Heen

> ]] David Weekly
>
> | 101 of 123 tests failed
>
> Check if /tmp/__v1 is owned by another user than the one running make
> check?

The directory is owned by root and I ran 'make check' as root.

From tfheen at redpill-linpro.com  Mon Mar 30 10:48:23 2009
From: tfheen at redpill-linpro.com (Tollef Fog Heen)
Date: Mon, 30 Mar 2009 12:48:23 +0200
Subject: svn compilation failure on MacOS X
In-Reply-To: (191919@gmail.com's message of "Fri, 13 Mar 2009 00:12:43 +0800")
References:
Message-ID: <87ocvjuw08.fsf@qurzaw.linpro.no>

]] 191919

| Adding
| #include
|
| to vtc_http.c solves the problem.

Thanks, added.
| Regards,
| jh

If you'd like to be credited with something else than "191919", it would
be helpful to know your real name. :-)

--
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73

From cloude at instructables.com  Mon Mar 30 17:09:11 2009
From: cloude at instructables.com (Cloude Porteus)
Date: Mon, 30 Mar 2009 10:09:11 -0700
Subject: How many objects in the cache?
Message-ID: <4a05e1020903301009p12b6f3b3lb4989d508ba1e807@mail.gmail.com>

Any idea how to tell how many objects are currently cached? I'm trying
to fine tune our configuration, but I can't figure out if any of the
varnishstat numbers will tell me this.

thanks,
cloude

--
VP of Product Development
Instructables.com

http://www.instructables.com/member/lebowski

From sky at crucially.net  Mon Mar 30 17:15:53 2009
From: sky at crucially.net (Artur Bergman)
Date: Mon, 30 Mar 2009 10:15:53 -0700
Subject: How many objects in the cache?
In-Reply-To: <4a05e1020903301009p12b6f3b3lb4989d508ba1e807@mail.gmail.com>
References: <4a05e1020903301009p12b6f3b3lb4989d508ba1e807@mail.gmail.com>
Message-ID: <418EC7B8-2577-414C-9F99-9501E35BF063@crucially.net>

4268155  .  .  N struct object
3161248  .  .  N struct objecthead

from varnishstat

object is unique objects with vary
objecthead is unique hashed entities

On Mar 30, 2009, at 10:09 AM, Cloude Porteus wrote:

> Any idea how to tell how many objects are currently cached? I'm trying
> to fine tune our configuration, but I can't figure out if any of the
> varnishstat numbers will tell me this.
>
> thanks,
> cloude
>
> --
> VP of Product Development
> Instructables.com
>
> http://www.instructables.com/member/lebowski
> _______________________________________________
> varnish-dev mailing list
> varnish-dev at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-dev

From bobd at mitre.org  Tue Mar 31 18:59:52 2009
From: bobd at mitre.org (Dingwell, Robert A.)
Date: Tue, 31 Mar 2009 14:59:52 -0400
Subject: Cannot get varnish to start
Message-ID:

Hi,

I just built and installed varnish from what is currently available in
trunk. The configure, make, sudo make install cycle went off just fine,
no errors. When I attempt to start up varnishd, however, it just hangs
for a few seconds and then dies. I'm using the following command to
start it up.

sudo /usr/local/sbin/varnishd -a localhost:8090 -f /usr/local/etc/varnish/default.vcl -T localhost:6082 -d -d

And this is the output I get on the screen.

storage_file: filename: ./varnish.0uF7sj size 1065 MB.
Using old SHMFILE
200 193
-----------------------------
Varnish HTTP accelerator CLI.
-----------------------------
Type 'help' for command list.
Type 'quit' to close CLI session.
Type 'start' to launch worker process.

If I don't include the -d -d params I get this.

storage_file: filename: ./varnish.xlJKfV size 801 MB.
Using old SHMFILE
Apple does not want programs to use daemon(3) and suggests using launchd(1).
We don't agree, but their dad is bigger than our dad.

Is there something I'm missing as to how to start up varnish? Is there a
log file that gets generated that I can look through for errors? The main
reason I'm using trunk is so I can test out the grace feature for when a
backend goes down.

Any help would be appreciated.

Thanks,
Rob

From sky at crucially.net  Tue Mar 31 19:45:46 2009
From: sky at crucially.net (Artur Bergman)
Date: Tue, 31 Mar 2009 12:45:46 -0700
Subject: Cannot get varnish to start
In-Reply-To:
References:
Message-ID: <01791E0B-E2C2-41AD-A276-395BD49BA656@crucially.net>

On Mar 31, 2009, at 11:59 AM, Dingwell, Robert A. wrote:

>
> storage_file: filename: ./varnish.xlJKfV size 801 MB.
> Using old SHMFILE
> Apple does not want programs to use daemon(3) and suggests using
> launchd(1).
> We don't agree, but their dad is bigger than our dad.

Start it with -F
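
Artur's suggestion refers to varnishd's -F flag, which keeps the process
in the foreground instead of detaching via daemon(3) (the call Apple
discourages on Mac OS X, per the banner quoted above). A minimal sketch
of how Rob's command line might be adapted follows; the listen addresses
and VCL path are taken from his message, not canonical values, and this
is untested on Mac OS X:

```shell
# Run varnishd in the foreground (-F) so it never calls daemon(3),
# which varnishd itself warns about on Mac OS X.
# Addresses and paths below are the ones from the original report.
sudo /usr/local/sbin/varnishd -F \
    -a localhost:8090 \
    -f /usr/local/etc/varnish/default.vcl \
    -T localhost:6082
```

Running in the foreground also fits launchd or any other supervisor,
which expect to manage a non-daemonizing child themselves.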