From dsave at 163.com Wed Feb 4 02:20:16 2009
From: dsave at 163.com (David)
Date: Wed, 4 Feb 2009 10:20:16 +0800 (CST)
Subject: What's varnish' max cache capacity? 2G?
Message-ID: <23249732.415071233714016199.JavaMail.coremail@bj163app98.163.com>

I noticed the cache's capacity can't exceed 2047M in V2.0.2, but it could be set to 3072M in V1.1.2.

So I wonder what the max is, why, and whether there's any way to increase varnish's capacity for my needs.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ask at develooper.com Wed Feb 4 03:45:07 2009
From: ask at develooper.com (=?ISO-8859-1?Q?Ask_Bj=F8rn_Hansen?=)
Date: Tue, 3 Feb 2009 19:45:07 -0800
Subject: What's varnish' max cache capacity? 2G?
In-Reply-To: <23249732.415071233714016199.JavaMail.coremail@bj163app98.163.com>
References: <23249732.415071233714016199.JavaMail.coremail@bj163app98.163.com>
Message-ID: <3F34D79B-8F97-470E-B1D5-077ADB772638@develooper.com>

On Feb 3, 2009, at 18:20, David wrote:

> I noticed the cache's capacity can't exceed 2047M in V2.0.2, but it
> could be set to 3072M in V1.1.2.
>
> So I wonder what the max is, why, and whether there's any way to
> increase varnish's capacity for my needs.

It sounds like you are running it on a 32bit system -- don't do that. :-)

- ask

-- 
http://develooper.com/ - http://askask.com/

From dsave at 163.com Wed Feb 4 07:59:55 2009
From: dsave at 163.com (David)
Date: Wed, 4 Feb 2009 15:59:55 +0800 (CST)
Subject: What's varnish' max cache capacity? 2G?
Message-ID: <21833152.581501233734395623.JavaMail.coremail@bj163app123.163.com>

> It sounds like you are running it on a 32bit system -- don't do
> that. :-)

Yep, that's right. And I always get the message below:
------------------------------------------------------
NB: Storage size limited to 2GB on 32 bit architecture,
NB: otherwise we could run out of address space.
------------------------------------------------------

Do you mean varnish can't work well on a 32bit system, and that the best environment would be a 64bit system?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ask at develooper.com Wed Feb 4 08:01:47 2009
From: ask at develooper.com (=?ISO-8859-1?Q?Ask_Bj=F8rn_Hansen?=)
Date: Wed, 4 Feb 2009 00:01:47 -0800
Subject: What's varnish' max cache capacity? 2G?
In-Reply-To: <21833152.581501233734395623.JavaMail.coremail@bj163app123.163.com>
References: <21833152.581501233734395623.JavaMail.coremail@bj163app123.163.com>
Message-ID: <42F9677B-6A41-45D7-A7DC-343D5C5782FB@develooper.com>

On Feb 3, 2009, at 23:59, David wrote:

> Do you mean varnish can't work well on a 32bit system, and that the
> best environment would be a 64bit system?

Yes. Varnish keeps the cache in (virtual) memory, so it really needs a system where it can address more than 2GB.

- ask

-- 
http://develooper.com/ - http://askask.com/

From dsave at 163.com Wed Feb 4 09:23:59 2009
From: dsave at 163.com (David)
Date: Wed, 4 Feb 2009 17:23:59 +0800 (CST)
Subject: What's varnish' max cache capacity? 2G?
Message-ID: <14842362.626131233739439470.JavaMail.coremail@bj163app20.163.com>

> Yes. Varnish keeps the cache in (virtual) memory, so it really needs
> a system where it can address more than 2GB.

Okay. By the way, what is the cache file assigned by "-s" used for? For example:

-s file,/usr/local/varnish/varnish_cache.data,2047M

Is there any detailed documentation of varnish's configuration? The docs on http://varnish.projects.linpro.no seem too basic.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ask at develooper.com Wed Feb 4 09:46:46 2009
From: ask at develooper.com (=?ISO-8859-1?Q?Ask_Bj=F8rn_Hansen?=)
Date: Wed, 4 Feb 2009 01:46:46 -0800
Subject: What's varnish' max cache capacity? 2G?
In-Reply-To: <14842362.626131233739439470.JavaMail.coremail@bj163app20.163.com>
References: <14842362.626131233739439470.JavaMail.coremail@bj163app20.163.com>
Message-ID: <9BC3B535-150C-4421-AA3F-F18EC5FC1E6C@develooper.com>

On Feb 4, 2009, at 1:23, David wrote:

> Okay. By the way, what is the cache file assigned by "-s" used for?
> For example:
> -s file,/usr/local/varnish/varnish_cache.data,2047M

Did you try "man varnishd"?

If you want more about How It Works, you might want
http://varnish.projects.linpro.no/wiki/ArchitectNotes

- ask

-- 
http://develooper.com/ - http://askask.com/

From dsave at 163.com Thu Feb 5 02:56:30 2009
From: dsave at 163.com (David)
Date: Thu, 5 Feb 2009 10:56:30 +0800 (CST)
Subject: What's varnish' max cache capacity? 2G?
Message-ID: <17818858.98531233802590868.JavaMail.coremail@bj163app30.163.com>

> Did you try "man varnishd"?
> If you want more about How It Works, you might want
> http://varnish.projects.linpro.no/wiki/ArchitectNotes

Yes, I tried man as my first step, but I think it's not detailed enough. I'll refer to the ArchitectNotes and hope to find something useful. Thanks a lot!

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dsave at 163.com Tue Feb 10 10:15:52 2009
From: dsave at 163.com (David)
Date: Tue, 10 Feb 2009 18:15:52 +0800 (CST)
Subject: How to set "Expires" of HTTP Header with VCL?
Message-ID: <3350624.758121234260952451.JavaMail.coremail@bj163app20.163.com>

I want to set the "Expires" HTTP header in GMT format in vcl_deliver with VCL.
How can I achieve that?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sky at crucially.net Tue Feb 10 22:53:03 2009
From: sky at crucially.net (Artur)
Date: Tue, 10 Feb 2009 14:53:03 -0800
Subject: How to set "Expires" of HTTP Header with VCL?
In-Reply-To: <3350624.758121234260952451.JavaMail.coremail@bj163app20.163.com>
References: <3350624.758121234260952451.JavaMail.coremail@bj163app20.163.com>
Message-ID: <62A3FF3A-8A79-4D5E-82CD-24E1B59800BC@crucially.net>

in vcl_deliver

C{
	char *cache = VRT_GetHdr(sp, HDR_REQ, "\016cache-control:");
	char date[40];
	int max_age = 0;	/* "max-age" may be absent, so initialize */
	int want_equals = 0;

	if (cache) {
		while (*cache != '\0') {
			if (want_equals && *cache == '=') {
				cache++;
				max_age = strtoul(cache, 0, 0);
				break;
			}
			if (*cache == 'm' && !memcmp(cache, "max-age", 7)) {
				cache += 7;
				want_equals = 1;
				continue;
			}
			cache++;
		}
		if (max_age) {
			TIM_format(TIM_real() + max_age, date);
			VRT_SetHdr(sp, HDR_RESP, "\010Expires:", date,
			    vrt_magic_string_end);
		}
	}
}C

On Feb 10, 2009, at 2:15 AM, David wrote:

> I want to set the "Expires" of HTTP Header in GMT format in
> vcl_deliver with VCL.
> How to achieve that?
>
> _______________________________________________
> varnish-dev mailing list
> varnish-dev at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-dev

From slink at schokola.de Thu Feb 26 22:50:27 2009
From: slink at schokola.de (Nils Goroll)
Date: Thu, 26 Feb 2009 23:50:27 +0100
Subject: Any known issues with Solaris event ports?
Message-ID: <49A71CB3.5060905@schokola.de>

Hi,

I have not dug deeply enough into this issue, but I believe to have stepped on an issue surfacing in "hanging" client connections with the Solaris event ports interface.

Using poll seems to avoid the issue.

Is this a known problem?

Cheers,

Nils

From jesus at omniti.com Fri Feb 27 03:08:17 2009
From: jesus at omniti.com (Theo Schlossnagle)
Date: Thu, 26 Feb 2009 22:08:17 -0500
Subject: Any known issues with Solaris event ports?
In-Reply-To: <49A71CB3.5060905@schokola.de>
References: <49A71CB3.5060905@schokola.de>
Message-ID: <2202B2DC-E228-44CF-A7CF-E853AB7B7ECC@omniti.com>

We haven't run into that problem. However, we are running trunk and not 2.0.3.
Varnish 2.0.3 appears to fail the b17 and c22 tests in the suite for me. I haven't had a chance to look deeper yet... it's on my todo list.

Also, as a note, the configure.in with 2.0.3 managed to disable sendfile (which works on Solaris). I have a patch that re-enables that. Once I track down the b17/c22 issues and fix and/or explain them, I'll send in the patch.

On Feb 26, 2009, at 5:50 PM, Nils Goroll wrote:

> Hi,
>
> I have not dug deeply enough into this issue, but I believe to have
> stepped on an issue surfacing in "hanging" client connections with
> the Solaris event ports interface.
>
> Using poll seems to avoid the issue.
>
> Is this a known problem?
>
> Cheers,
>
> Nils
> _______________________________________________
> varnish-dev mailing list
> varnish-dev at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-dev

-- 
Theo Schlossnagle
Principal/CEO OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
w: http://omniti.com p: +1.443.325.1357 x201 f: +1.410.872.4911

From yu at irx.jp Fri Feb 27 18:15:08 2009
From: yu at irx.jp (Yutaro Shimamura)
Date: Sat, 28 Feb 2009 03:15:08 +0900
Subject: backend health check
Message-ID: <095844A3-B3BC-4A77-88BE-A0AF4B557406@irx.jp>

Hi,

I'm trying the backend health check via .probe in a backend{} block in VCL, and I've hit a problem: read() in vbp_poke() gets no data.

-------------------------
Server OS: Debian, libev-based httpd
Client OS: FreeBSD 7.1 amd64, varnish rev 3827
-------------------------

The server works fine over IPv4, but varnishlog shows:

0 Backend_health - api_88 Still sick 4--X-S--- 0 3 10 0.000000 0.000000

The R and H flags never appear. So I checked the response with telnet and a simple C-based client, and both worked fine. This is a dump of the telnet response:

>> yu at yu:/root> telnet 210.135.99.88 12345
Trying 210.135.99.88...
Connected to 210.135.99.88.
Escape character is '^]'.
GET / HTTP/1.1

HTTP/1.1 200 OK
Content-Type: text/javascript; charset=UTF-8
Date: Fri, 27 Feb 2009 16:55:38 GMT
Content-Length: 18

HOGEHOGE
PIYOPIYO
<<

The C-based client is very simple: write(), shutdown(SHUT_WR) and read(). It worked fine, too. So I debugged with gdb, and in vbp_poke() I saw some strange behavior.

[[[[[ 1 ]]]]]
read() returned 0 with errno = 9 (EBADF) and 0 bytes in vt->resp_buf, so varnishlog is empty after "0.000000 0.000000".
(read() did not return -1, so errno is not meaningful here; but when I set errno = 0 just before the read(), it still changed from 0 to 9.)

[[[[ 2 ]]]]
With shutdown(s, SHUT_WR) commented out, varnishlog returns:

>>
0 Backend_health - api_88 Still sick 4--X-S--- 0 3 10 0.000000 0.000000 HTTP/1.1 200 OK Content-Type: text/javascript; charset=UTF-8 Date: Fri, 27 Feb 2009 17:15:40 GMT Content-Length: 18 HOGEHO
<<

With this reply, vbp_poke() returned here:

>>
195             do {
196                     pfd->events = POLLIN;
197                     pfd->revents = 0;
198                     tmo = (int)round((t_end - t_now) * 1e3);
199                     if (tmo > 0)
200                             i = poll(pfd, 1, tmo);
201                     if (i == 0 || tmo <= 0) {
202                             TCP_close(&s);
203                             return;
204                     }
<<

poll() returned 0, so i = 0 -> TCP_close & return. That means sscanf(vt->resp_buf, "HTTP/%*f %u %s", &resp, buf) never runs, and the R and H flags are never set. (Is this working as intended? I don't know whether poll() returning 0 is the expected behavior here.)

Anyway, not using shutdown() works better than using it. I can't understand how shutdown(SHUT_WR) could affect read()...
:vcl: >> backend api_88 { .host = "210.135.99.88"; .port = "12345"; .probe = { .request = "GET / HTTP/1.1"; .timeout = 1 s; .interval = 2 s; .window = 10; .threshold = 3; } } director dic_apis random { { .backend = api_88; .weight = 1;} } sub vcl_recv { set req.backend = dic_apis; set req.http.host = "dic.hoge.com"; lookup; } << :varnishd args: >> /root/sbin/varnishd -T 127.0.0.1:10401 -f /root/etc/test.vcl\ -s malloc -p client_http11=on -p backend_http11=on \ -p ping_interval=200000 -p cc_command="cc -fpic -shared -O3 -Wl,-x - o %o %s" \ -w2,300,300 -a 0.0.0.0:12345 << Thanks for your help. regards, ================= ?? ??? / Yutaro Shimamura yu at irx.jp From cloude at instructables.com Fri Feb 27 21:06:54 2009 From: cloude at instructables.com (Cloude Porteus) Date: Fri, 27 Feb 2009 13:06:54 -0800 Subject: Is anyone using ESI with a lot of traffic? Message-ID: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com> We're going to start implementing some ESI tests within Varnish. I'm curious what the performance hit is going to be. I would imagine we'll have 6-10 ESI statements in our pages. Also, is there anyone who could consult for a couple of hours to make sure our setup is properly tuned correctly? I have used the recommendations on the Varnish Performance page, but that's not the same as having some hands on experience. I spent quite a bit of time getting the squids we have running now and I want to make sure I give Varnish a fair crack at replacing them. We're also very excited about ESI, but also cautious. Thanks for any help/information. Best, Cloude -- VP of Product Development Instructables.com http://www.instructables.com/member/lebowski From jna at twitter.com Fri Feb 27 21:12:51 2009 From: jna at twitter.com (John Adams) Date: Fri, 27 Feb 2009 13:12:51 -0800 Subject: Is anyone using ESI with a lot of traffic? 
In-Reply-To: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com> References: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com> Message-ID: On Feb 27, 2009, at 1:06 PM, Cloude Porteus wrote: > We're going to start implementing some ESI tests within Varnish. I'm > curious what the performance hit is going to be. I would imagine we'll > have 6-10 ESI statements in our pages. We're using ESI on search.twitter.com under fairly high load. ESI works well, and we use it to defeat the cache-busting parameters that jQuery requests send our way. I don't have much time to assist you, but I could certainly post some of our startup parameters (which took a fair amount of time to figure out) and give you reasons why we made the decisions we did. -john --- John Adams Twitter Operations jna at twitter.com http://twitter.com/netik From pbruna at it-linux.cl Fri Feb 27 21:24:56 2009 From: pbruna at it-linux.cl (Patricio A. Bruna) Date: Fri, 27 Feb 2009 18:24:56 -0300 (CLST) Subject: Is anyone using ESI with a lot of traffic? Message-ID: <02d101c99921$cf1218d0$6d364a70$@cl> It would be great to know those parameters and why ? Enviado usando Zimbra BlackBerry Conector www.zbox.cl ----- Mensaje original ----- De: varnish-dev-bounces at projects.linpro.no Para: Cloude Porteus CC: varnish-dev at projects.linpro.no Enviado: Fri Feb 27 18:12:51 2009 Asunto: Re: Is anyone using ESI with a lot of traffic? On Feb 27, 2009, at 1:06 PM, Cloude Porteus wrote: > We're going to start implementing some ESI tests within Varnish. I'm > curious what the performance hit is going to be. I would imagine we'll > have 6-10 ESI statements in our pages. We're using ESI on search.twitter.com under fairly high load. ESI works well, and we use it to defeat the cache-busting parameters that jQuery requests send our way. 
I don't have much time to assist you, but I could certainly post some of our startup parameters (which took a fair amount of time to figure out) and give you reasons why we made the decisions we did. -john --- John Adams Twitter Operations jna at twitter.com http://twitter.com/netik _______________________________________________ varnish-dev mailing list varnish-dev at projects.linpro.no http://projects.linpro.no/mailman/listinfo/varnish-dev From alex at path101.com Fri Feb 27 21:31:57 2009 From: alex at path101.com (Alex Lines) Date: Fri, 27 Feb 2009 16:31:57 -0500 Subject: Is anyone using ESI with a lot of traffic? In-Reply-To: References: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com> Message-ID: John, would love too see an in-depth post on how you're using varnish and esi. On Fri, Feb 27, 2009 at 4:12 PM, John Adams wrote: > On Feb 27, 2009, at 1:06 PM, Cloude Porteus wrote: > >> We're going to start implementing some ESI tests within Varnish. I'm >> curious what the performance hit is going to be. I would imagine we'll >> have 6-10 ESI statements in our pages. > > We're using ESI on search.twitter.com under fairly high load. ?ESI > works well, and we use it to defeat the cache-busting parameters that > jQuery requests send our way. > > I don't have much time to assist you, but I could certainly post some > of our startup parameters (which took a fair amount of time to figure > out) and give you reasons why we made the decisions we did. > > -john > > --- > John Adams > Twitter Operations > jna at twitter.com > http://twitter.com/netik > > > > > _______________________________________________ > varnish-dev mailing list > varnish-dev at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-dev > From jna at twitter.com Fri Feb 27 22:24:34 2009 From: jna at twitter.com (John Adams) Date: Fri, 27 Feb 2009 14:24:34 -0800 Subject: Is anyone using ESI with a lot of traffic? 
In-Reply-To: <4a05e1020902271333xe4d56c3g81b8e7c1b3a59946@mail.gmail.com> References: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com> <4a05e1020902271333xe4d56c3g81b8e7c1b3a59946@mail.gmail.com> Message-ID: <5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com> cc'ing the varnish dev list for comments... On Feb 27, 2009, at 1:33 PM, Cloude Porteus wrote: > John, > Goodto hear from you. You must be slammed at Twitter. I'm happy to > hear that ESI is holding up for you. It's been in my backlog since you > mentioned it to me pre-Twitter. > > Any performance info would be great. > Any comments on our setup are welcome. You may also choose to call us crazypants. Many, many thanks to Artur Bergman of Wikia for helping us get this configuration straightened out. Right now, we're running varnish (on search) in a bit of a non- standard way. We plan to use it in the normal fashion (varnish to Internet, nothing inbetween) on our API at some point. We're running version 2.0.2, no patches. Cache hit rates range from 10% to 30%, or higher when a real-time event is flooding search. 2.0.2 is quite stable for us, with the occasional child death here and there when we get massive headers coming in that flood sess_workspace. I hear this is fixed in 2.0.3, but haven't had time to try it yet. We have a number of search boxes, and each search box has an apache instance on it, and varnish instance. We plan to merge the varnish instances at some point, but we use very low TTLs (Twitter is the real time web!) and don't see much of a savings by running less of them. We do: Apache --> Varnish --> Apache -> Mongrels Apaches are using mod_proxy_balancer. The front end apache is there because we've long had a fear that Varnish would crash on us, which it did many times prior to our figuring out the proper parameters for startup. We have two entries in that balancer. Either the request goes to varnish, or, if varnish bombs out, it goes directly to the mongrel. 
We do this, because we need a load balancing algorithm that varnish doesn't support, called bybusiness. Without bybusiness, varnish tries to direct requests to Mongrels that are busy, and requests end up in the listen queue. that adds ~100-150mS to load times, and that's no good for our desired service times of 200-250mS (or less.) We'd be so happy if someone put bybusiness into Varnish's backend load balancing, but it's not there yet. We also know that taking the extra hop through localhost costs us next to nothing in service time, so it's good to have Apache there incase we need to yank out Varnish. In the future, we might get rid of Apache and use HAProxy (it's load balancing and backend monitoring is much richer than Apache, and, it has a beautiful HTTP interface to look at.) Some variables and our decisions: -p obj_workspace=4096 \ -p sess_workspace=262144 \ Absolutely vital! Varnish does not allocate enough space by default for headers, regexps on cookies, and otherwise. It was increased in 2.0.3, but really, not increased enough. Without this we were panicing every 20-30 requests and overflowing the sess hash. -p listen_depth=8192 \ 8192 is probably excessive for now. If we're queuing 8k conns, something is really broke! -p log_hashstring=off \ Who cares about this - we don't need it. -p lru_interval=60 \ We have many small objects in the search cache. Run LRU more often. -p sess_timeout=10 \ If you keep session data around for too long, you waste memory. -p shm_workspace=32768 \ Give us a bit more room in shm -p ping_interval=1 \ Frequent pings in case the child dies on us. -p thread_pools=4 \ -p thread_pool_min=100 \ This must match up with VARNISH_MIN_THREADS. We use four pools, (pools * thread_pool_min == VARNISH_MIN_THREADS) -p srcaddr_ttl=0 \ Disable the (effectively unused) per source-IP statistics -p esi_syntax=1 Disable ESI syntax verification so we can use it to process JSON requests. 
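[For reference, the mod_proxy_balancer front end described above can be sketched roughly as follows. Note Apache spells the algorithm `lbmethod=bybusyness` (available in later 2.2 releases); the ports here are made up for illustration, with varnish assumed on 8080 and a mongrel on 8081:

```apache
# hypothetical ports: varnish on 8080, mongrel fallback on 8081
<Proxy balancer://search>
    BalancerMember http://127.0.0.1:8080 retry=5
    # hot standby: only used when the primary member is marked down
    BalancerMember http://127.0.0.1:8081 status=+H
    ProxySet lbmethod=bybusyness
</Proxy>
ProxyPass / balancer://search/
```

This matches the two-entry balancer described above: requests go to varnish, or directly to the mongrel if varnish bombs out.]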
If you have more than 2.1M objects, you should also add: # -h classic,250007 = recommeded value for 2.1M objects # number should be 1/10 expected working set. In our VCL, we have a few fancy tricks that we use. We label the cache server and cache hit/miss rate in vcl_deliver with this code: Top of VCL: C{ #include #include char myhostname[255] = ""; }C vcl_deliver: C{ VRT_SetHdr(sp, HDR_RESP, "\014X-Cache-Svr:", myhostname, vrt_magic_string_end); }C /* mark hit/miss on the request */ if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; set resp.http.X-Cache-Hits = obj.hits; } else { set resp.http.X-Cache = "MISS"; } vcl_recv: C{ if (myhostname[0] == '\0') { /* only get hostname once - restart required if hostname changes */ gethostname(myhostname, 255); } }C Portions of /etc/sysconfig/varnish follow... # The minimum number of worker threads to start VARNISH_MIN_THREADS=400 # The Maximum number of worker threads to start VARNISH_MAX_THREADS=1000 # Idle timeout for worker threads VARNISH_THREAD_TIMEOUT=60 # Cache file location VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin # Cache file size: in bytes, optionally using k / M / G / T suffix, # or in percentage of available disk space using the % suffix. 
VARNISH_STORAGE_SIZE="8G" # # Backend storage specification VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}" # Default TTL used when the backend does not specify one VARNISH_TTL=5 # the working directory DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ -f ${VARNISH_VCL_CONF} \ -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:$ {VARNISH_ADMIN_LISTEN_PORT} \ -t ${VARNISH_TTL} \ -n ${VARNISH_WORKDIR} \ -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},$ {VARNISH_THREAD_TIMEOUT} \ -u varnish -g varnish \ -p obj_workspace=4096 \ -p sess_workspace=262144 \ -p listen_depth=8192 \ -p log_hashstring=off \ -p lru_interval=60 \ -p sess_timeout=10 \ -p shm_workspace=32768 \ -p ping_interval=1 \ -p thread_pools=4 \ -p thread_pool_min=100 \ -p srcaddr_ttl=0 \ -p esi_syntax=1 \ -s ${VARNISH_STORAGE}" --- John Adams Twitter Operations jna at twitter.com http://twitter.com/netik -------------- next part -------------- An HTML attachment was scrubbed... URL: From cloude at instructables.com Sat Feb 28 05:02:55 2009 From: cloude at instructables.com (Cloude Porteus) Date: Fri, 27 Feb 2009 21:02:55 -0800 Subject: Is anyone using ESI with a lot of traffic? In-Reply-To: <5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com> References: <4a05e1020902271306l197e5609r622d3f8078a658d2@mail.gmail.com> <4a05e1020902271333xe4d56c3g81b8e7c1b3a59946@mail.gmail.com> <5849C2FA-369A-4EF2-BD0B-57C9826BAB60@twitter.com> Message-ID: <4a05e1020902272102x777bfbc1t2d4ea117fc96cd7a@mail.gmail.com> John, Thanks so much for the info, that's a huge help for us!!! I love HAProxy and Willy has been awesome to us. We run everything through it, since it's really easy to monitor and also easy to debug where the lag is when something in the chain is not responding fast enough. It's been rock solid for us. The nice part for us is that we can use it as a content switcher to send all /xxx traffic or certain user-agent traffic to different backends. 
best, cloude On Fri, Feb 27, 2009 at 2:24 PM, John Adams wrote: > cc'ing the varnish dev list for comments... > On Feb 27, 2009, at 1:33 PM, Cloude Porteus wrote: > > John, > Goodto hear from you. You must be slammed at Twitter. I'm happy to > hear that ESI is holding up for you. It's been in my backlog since you > mentioned it to me pre-Twitter. > > Any performance info would be great. > > > Any comments on our setup are welcome. You may also choose to call us > crazypants. Many, many thanks to Artur Bergman of Wikia for helping us get > this configuration straightened out. > Right now, we're running varnish (on search) in a bit of a non-standard way. > We plan to use it in the normal fashion (varnish to Internet, nothing > inbetween) on our API at some point. We're running version 2.0.2, no > patches. Cache hit rates range from 10% to 30%, or higher when a real-time > event is flooding search. > 2.0.2 is quite stable for us, with the occasional child death here and there > when we get massive headers coming in that flood sess_workspace. I hear this > is fixed in 2.0.3, but haven't had time to try it yet. > We have a number of search boxes, and each search box has an apache instance > on it, and varnish instance. We plan to merge the varnish instances at some > point, but we use very low TTLs (Twitter is the real time web!) and don't > see much of a savings by running less of them. > We do: > Apache --> Varnish --> Apache -> Mongrels > Apaches are using mod_proxy_balancer. The front end apache is there because > we've long had a fear that Varnish would crash on us, which it did many > times prior to our figuring out the proper parameters for startup. We have > two entries in that balancer. Either the request goes to varnish, or, if > varnish bombs out, it goes directly to the mongrel. > We do this, because we need a load balancing algorithm that varnish doesn't > support, called bybusiness. 
Without bybusiness, varnish tries to direct > requests to Mongrels that are busy, and requests end up in the listen queue. > that adds ~100-150mS to load times, and that's no good for our desired > service times of 200-250mS (or less.) > We'd be so happy if someone put bybusiness into Varnish's backend load > balancing, but it's not there yet. > We also know that taking the extra hop through localhost costs us next to > nothing in service time, so it's good to have Apache there incase we need to > yank out Varnish. In the future, we might get rid of Apache and use HAProxy > (it's load balancing and backend monitoring is much richer than Apache, and, > it has a beautiful HTTP interface to look at.) > Some variables and our decisions: > -p obj_workspace=4096 \ > -p sess_workspace=262144 \ > Absolutely vital! Varnish does not allocate enough space by default for > headers, regexps on cookies, and otherwise. It was increased in 2.0.3, but > really, not increased enough. Without this we were panicing every 20-30 > requests and overflowing the sess hash. > -p listen_depth=8192 \ > 8192 is probably excessive for now. If we're queuing 8k conns, something is > really broke! > -p log_hashstring=off \ > Who cares about this - we don't need it. > -p lru_interval=60 \ > We have many small objects in the search cache. Run LRU more often. > -p sess_timeout=10 \ > If you keep session data around for too long, you waste memory. > -p shm_workspace=32768 \ > Give us a bit more room in shm > -p ping_interval=1 \ > Frequent pings in case the child dies on us. > -p thread_pools=4 \ > -p thread_pool_min=100 \ > This must match up with VARNISH_MIN_THREADS. We use four pools, (pools * > thread_pool_min == VARNISH_MIN_THREADS) > -p srcaddr_ttl=0 \ > Disable the (effectively unused) per source-IP statistics > -p esi_syntax=1 > Disable ESI syntax verification so we can use it to process JSON requests. 
> If you have more than 2.1M objects, you should also add: > # -h classic,250007 = recommeded value for 2.1M objects > # number should be 1/10 expected working set. > > In our VCL, we have a few fancy tricks that we use. We label the cache > server and cache hit/miss rate in vcl_deliver with this code: > Top of VCL: > C{ > #include > #include > char myhostname[255] = ""; > > }C > vcl_deliver: > C{ > VRT_SetHdr(sp, HDR_RESP, "\014X-Cache-Svr:", myhostname, > vrt_magic_string_end); > }C > /* mark hit/miss on the request */ > if (obj.hits > 0) { > set resp.http.X-Cache = "HIT"; > set resp.http.X-Cache-Hits = obj.hits; > } else { > set resp.http.X-Cache = "MISS"; > } > > vcl_recv: > C{ > if (myhostname[0] == '\0') { > /* only get hostname once - restart required if hostname changes */ > gethostname(myhostname, 255); > } > }C > > Portions of /etc/sysconfig/varnish follow... > # The minimum number of worker threads to start > VARNISH_MIN_THREADS=400 > # The Maximum number of worker threads to start > VARNISH_MAX_THREADS=1000 > # Idle timeout for worker threads > VARNISH_THREAD_TIMEOUT=60 > # Cache file location > VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin > # Cache file size: in bytes, optionally using k / M / G / T suffix, > # or in percentage of available disk space using the % suffix. 
> VARNISH_STORAGE_SIZE="8G" > # > # Backend storage specification > VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}" > # Default TTL used when the backend does not specify one > VARNISH_TTL=5 > # the working directory > DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ > -f ${VARNISH_VCL_CONF} \ > -T > ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \ > -t ${VARNISH_TTL} \ > -n ${VARNISH_WORKDIR} \ > -w > ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \ > -u varnish -g varnish \ > -p obj_workspace=4096 \ > -p sess_workspace=262144 \ > -p listen_depth=8192 \ > -p log_hashstring=off \ > -p lru_interval=60 \ > -p sess_timeout=10 \ > -p shm_workspace=32768 \ > -p ping_interval=1 \ > -p thread_pools=4 \ > -p thread_pool_min=100 \ > -p srcaddr_ttl=0 \ > -p esi_syntax=1 \ > -s ${VARNISH_STORAGE}" > > --- > John Adams > Twitter Operations > jna at twitter.com > http://twitter.com/netik > > > > -- VP of Product Development Instructables.com http://www.instructables.com/member/lebowski From slink at schokola.de Sat Feb 28 19:41:46 2009 From: slink at schokola.de (Nils Goroll) Date: Sat, 28 Feb 2009 20:41:46 +0100 Subject: Any known issues with Solaris event ports? / tests b17&c22 failing on Solaris In-Reply-To: <2202B2DC-E228-44CF-A7CF-E853AB7B7ECC@omniti.com> References: <49A71CB3.5060905@schokola.de> <2202B2DC-E228-44CF-A7CF-E853AB7B7ECC@omniti.com> Message-ID: <49A9937A.80105@schokola.de> Hi Theo and all, > Varnish 2.0.3 appears to fail b17 and c22 tests in the suite for me. They do because a eventually a null pointer is passed to strlen. Here's a fix: --- varnish-2.0.3_with_sticky_url_director/bin/varnishd/cache_vrt.c Thu Feb 12 12:15:25 2009 +++ varnish-2.0.3_no_sticky_url_director/bin/varnishd/cache_vrt.c Sat Feb 28 20:32:16 2009 @@ -62,7 +62,7 @@ { CHECK_OBJ_NOTNULL(sp, SESS_MAGIC); - WSL(sp->wrk, SLT_Debug, 0, "VCL_error(%u, %s)", code, reason); + WSL(sp->wrk, SLT_Debug, 0, "VCL_error(%u, %s)", code, reason ? 
reason : "NULL");
 	sp->err_code = code ? code : 503;
 	sp->err_reason = reason ? reason : http_StatusMessage(sp->err_code);
 }

I'll open a bug for that one.

By the way, does anyone have an idea yet how to make timeouts work on Solaris (tests b20-b25 failing)?

Thanks a lot,

Nils

From slink at schokola.de Sat Feb 28 19:51:54 2009
From: slink at schokola.de (Nils Goroll)
Date: Sat, 28 Feb 2009 20:51:54 +0100
Subject: tests b17&c22 failing on Solaris : null pointer dereference
In-Reply-To: <49A9937A.80105@schokola.de>
References: <49A71CB3.5060905@schokola.de> <2202B2DC-E228-44CF-A7CF-E853AB7B7ECC@omniti.com> <49A9937A.80105@schokola.de>
Message-ID: <49A995DA.10102@schokola.de>

> I'll open a bug for that one.

http://varnish.projects.linpro.no/ticket/458

Could anyone commit this, please?

Thanks,

Nils

From jesus at omniti.com Sat Feb 28 20:42:53 2009
From: jesus at omniti.com (Theo Schlossnagle)
Date: Sat, 28 Feb 2009 15:42:53 -0500
Subject: Any known issues with Solaris event ports? / tests b17&c22 failing on Solaris
In-Reply-To: <49A9937A.80105@schokola.de>
References: <49A71CB3.5060905@schokola.de> <2202B2DC-E228-44CF-A7CF-E853AB7B7ECC@omniti.com> <49A9937A.80105@schokola.de>
Message-ID: <7D9874FD-F4F9-4096-968B-E8B51418D692@omniti.com>

On Feb 28, 2009, at 2:41 PM, Nils Goroll wrote:
>
> By the way, does anyone have an idea yet how to make timeouts work
> on Solaris (tests b20-b25 failing)?

If it has to do with TCP send and receive timeouts... we're waiting on this:

http://bugs.opensolaris.org/view_bug.do?bug_id=4641715

Either that or application level support for timeouts. Given Varnish's design, this wouldn't be that hard, but still a bit of work.

-- 
Theo Schlossnagle
Principal/CEO OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
w: http://omniti.com p: +1.443.325.1357 x201 f: +1.410.872.4911