From tfheen at redpill-linpro.com Mon Nov 2 08:18:18 2009 From: tfheen at redpill-linpro.com (Tollef Fog Heen) Date: Mon, 02 Nov 2009 09:18:18 +0100 Subject: Trying to use X-Backend to select backend In-Reply-To: <4AEB2113.1010808@frit.net> (Bernardf FRIT's message of "Fri, 30 Oct 2009 18:23:31 +0100") References: <4AE98F5D.2090300@frit.net> <87y6mtw8xt.fsf@qurzaw.linpro.no> <4AEB2113.1010808@frit.net> Message-ID: <87ws29tk51.fsf@qurzaw.linpro.no> ]] Bernardf FRIT Hi, | > | I downgraded varnish to v 2.0.3 in order to use the | > | varnish-bereq-hosts.patch | > | > Which patch is this? | | http://varnish.projects.linpro.no/attachment/ticket/481/ I doubt we'll want to merge this, as it's unneeded in trunk. | > | 1. Store the backend name into X-Backend custom header | > | 2. Force each request with a X-Backend header to be directed to the | > | stored backend name | > | > There is currently no way to look up backends by name, so you would have | > to write this as a series of if statements | Yes it worked but didn't do the job I expected. After first request to | the site I want the browser to be always directed to the same backend | (due to sessionId management). I think I have to use Cookies to | achieve this or just serve static content with varnish and use haproxy | as backend for dynamic content. | | I'm just wondering which solution is best : | - varnish as frontend to haproxy | - haproxy as frontend to varnish Either should work fine, but test. You could just also move all static content to a different host name and let haproxy be the dynamic one and varnish the static one. -- Tollef Fog Heen Redpill Linpro -- Changing the game! 
t: +47 21 54 41 73

From bernard at frit.net Mon Nov 2 12:49:36 2009
From: bernard at frit.net (Bernard FRIT)
Date: Mon, 02 Nov 2009 13:49:36 +0100
Subject: Trying to use X-Backend to select backend
In-Reply-To: <87ws29tk51.fsf@qurzaw.linpro.no>
References: <4AE98F5D.2090300@frit.net> <87y6mtw8xt.fsf@qurzaw.linpro.no> <4AEB2113.1010808@frit.net> <87ws29tk51.fsf@qurzaw.linpro.no>
Message-ID: <4AEED560.4030806@frit.net>

Tollef Fog Heen wrote:

Hi,

> | I'm just wondering which solution is best :
> | - varnish as frontend to haproxy
> | - haproxy as frontend to varnish
>
> Either should work fine, but test. You could just also move all static
> content to a different host name and let haproxy be the dynamic one and
> varnish the static one.

I did the tests. Varnish as frontend to haproxy is much more convenient
to configure -- just a matter of minutes. On the other hand, haproxy as
frontend to varnish is much more tricky and seems to need fine tuning,
as I had some timeout problems.

IMHO Varnish as frontend to haproxy is actually a KISS solution.

-- Bernard FRIT

From vlad.dyachenko at gmail.com Mon Nov 2 13:59:17 2009
From: vlad.dyachenko at gmail.com (Vladimir Dyachenko)
Date: Mon, 2 Nov 2009 14:59:17 +0100
Subject: Virtualhost issue
Message-ID: <9dfd6f7a0911020559r2cf6dfb6ma8abca698cc6f728@mail.gmail.com>

Folks,

I am discovering Varnish, which looks like a gift from the Gods, or from
Poul-Henning.

We have one case where clients get big DDoS attacks and we distribute
HTTP servers between various ISPs. I have set it up alright for one
host, but now I need to do virtual hosts, as we have various sites.

Upon restart varnish does not start (no error and nothing in the logs).
Wondering what obvious thing I did wrong. Here is my configuration:

/// start

# This is a basic vcl.conf file for varnish.
backend default { set backend.host = "localhost"; set backend.port = "80"; } backend mysite1-com { set backend.host = "www.mysite1.com"; set backend.port = "80"; } backend mysite2-com { set backend.host = "www.mysite2.com"; set backend.port = "80"; } acl purge { "localhost"; } sub vcl_recv { # Knock knock, who's there ? if (req.http.host ~ "^(www|www2|www3\.)?mysite1\.com$") { set req.backend = mysite1-com; } elseif (req.http.host ~ "^(www|www2|www3\.)?mysite2\.com$") { set req.backend = mysite2-com; } else { error 404 "Unknown virtual host"; } if (req.request != "GET" && req.request != "HEAD") { # PURGE request if zope asks nicely if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } lookup; } pipe; } if (req.http.Expect) { pipe; } if (req.http.Authenticate || req.http.Authorization) { pass; } # We only care about the "__ac.*" cookies, used for authentication if (req.http.Cookie && req.http.Cookie ~ "__ac(|_(name|password|persistent))=") { pass; } # File type that we will always cache if (req.request == "GET" && req.url ~ "\.(gif|jpg|swf|css|js|png|jpg|jpeg|gif|png|tiff|tif|svg|swf|ico|css|js|vsd|doc|ppt|pps|xls|pdf|mp3|mp4|m4a|ogg|mov|avi|wmv|sxw|zip|gz|bz2|tgz|tar|rar)$") { lookup; lookup; } if (req.request == "POST") { pipe; } # force lookup even when cookies are present if (req.request == "GET" && req.http.cookie) { lookup; } if (req.http.Cache-Control ~ "no-cache") { pass; } lookup; } sub vcl_fetch { # force minimum ttl of 300 seconds if (obj.ttl < 3000s) { set obj.ttl = 3000s; } } # Do the PURGE thing sub vcl_hit { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged"; } } sub vcl_miss { if (req.request == "PURGE") { error 404 "Not in cache"; } } /// end Any help is most welcome. Regards. - Vladimir -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From armdan20 at gmail.com Mon Nov 2 14:09:35 2009
From: armdan20 at gmail.com (andan andan)
Date: Mon, 2 Nov 2009 15:09:35 +0100
Subject: Virtualhost issue
In-Reply-To: <9dfd6f7a0911020559r2cf6dfb6ma8abca698cc6f728@mail.gmail.com>
References: <9dfd6f7a0911020559r2cf6dfb6ma8abca698cc6f728@mail.gmail.com>
Message-ID:

2009/11/2 Vladimir Dyachenko :
> Folks,
>
> I am discovering Varnish, which looks like a gift from the Gods, or from
> Poul-Henning.
>
> We have one case where clients get big DDoS attacks and we distribute
> HTTP servers between various ISPs. I have set it up alright for one
> host, but now I need to do virtual hosts, as we have various sites.
>
> Upon restart varnish does not start (no error and nothing in the logs).
> Wondering what obvious thing I did wrong.
>
> Here is my configuration:
>
> /// start
>
> # This is a basic vcl.conf file for varnish.
>
> backend default {
>         set backend.host = "localhost";
>         set backend.port = "80";
> }

You are using an old syntax; the backend syntax has changed:

backend default {
    .host = "localhost";
    .port = "80";
}

man 7 vcl (you should revise all of the configuration).

BTW: probably your init script is sending errors to /dev/null.

Hope this helps.

From vlad.dyachenko at gmail.com Mon Nov 2 14:32:31 2009
From: vlad.dyachenko at gmail.com (Vladimir Dyachenko)
Date: Mon, 2 Nov 2009 15:32:31 +0100
Subject: Virtualhost issue
In-Reply-To:
References: <9dfd6f7a0911020559r2cf6dfb6ma8abca698cc6f728@mail.gmail.com>
Message-ID: <9dfd6f7a0911020632p51634117odc1e1f1602ea5aed@mail.gmail.com>

Hey Andan,

Thanks for the reply. I will review all my configuration.

2009/11/2 andan andan armdan20 at gmail.com
>
> You are using an old syntax; the backend syntax has changed:
>
> backend default {
>     .host = "localhost";
>     .port = "80";
> }
>

BTW, I was asked what I use to start varnish and what varnishlog
outputs. Well, varnishlog does not give any output, and my init script
is as follows:

#!/bin/sh
#
# varnish       Control the varnish HTTP accelerator
#
# chkconfig: - 90 10
# description: HTTP accelerator
# processname: varnishd
# config: /etc/varnish/vcl.conf
# pidfile: /var/run/varnish/varnishd.pid

# Source function library.
. /etc/init.d/functions

RETVAL=0
PROCNAME=varnishd
PIDFILE=/var/run/varnish.pid
LOCKFILE=/var/lock/subsys/varnish

# Include varnish defaults
. /etc/sysconfig/varnish

DAEMON="/usr/sbin/varnishd"

mkdir -p /var/run/varnish 2>/dev/null

# Open files (usually 1024, which is way too small for varnish)
ulimit -n ${NFILES:-131072}

# See how we were called.
case "$1" in
  start)
    echo -n "Starting varnish HTTP accelerator: "
    # $DAEMON_OPTS is set in /etc/sysconfig/varnish. At least, one
    # has to set up a backend, or /tmp will be used, which is a bad idea.
    if [ "$DAEMON_OPTS" = "" ]; then
      echo "\$DAEMON_OPTS empty."
      echo -n "Please put configuration options in /etc/sysconfig/varnish"
      echo_failure
    else
      daemon ${DAEMON} "$DAEMON_OPTS" -P ${PIDFILE} > /dev/null 2>&1
      sleep 1
      pkill -0 $PROCNAME
      RETVAL=$?
      if [ $RETVAL -eq 0 ]
      then
        echo_success
        touch $LOCKFILE
      else
        echo_failure
      fi
    fi
    echo
    ;;
  stop)
    echo -n "Stopping varnish HTTP accelerator: "
    killproc $DAEMON
    RETVAL=$?
    if [ $RETVAL -eq 0 ]
    then
      echo_success
      rm -f $LOCKFILE $PIDFILE
    else
      echo_failure
    fi
    echo
    ;;
  status)
    status $PROCNAME
    RETVAL=$?
    ;;
  restart|reload)
    $0 stop
    $0 start
    RETVAL=$?
    ;;
  condrestart)
    if [ -f $PIDFILE ]; then
      $0 stop
      $0 start
      RETVAL=$?
    fi
    ;;
  *)
    echo "Usage: $0 {start|stop|status|restart|condrestart}"
    exit 1
esac

Vladimir

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ask at develooper.com Tue Nov 3 00:26:34 2009
From: ask at develooper.com (Ask Bjørn Hansen)
Date: Mon, 2 Nov 2009 16:26:34 -0800
Subject: Yahoo!
Traffic Server
Message-ID: <2E51048E-58CD-4599-B4BE-85CDAC78FC33@develooper.com>

I thought this might be of interest:

http://wiki.apache.org/incubator/TrafficServerProposal

- ask

From michael at dynamine.net Tue Nov 3 00:40:54 2009
From: michael at dynamine.net (Michael S. Fischer)
Date: Mon, 2 Nov 2009 16:40:54 -0800
Subject: Yahoo! Traffic Server
In-Reply-To: <2E51048E-58CD-4599-B4BE-85CDAC78FC33@develooper.com>
References: <2E51048E-58CD-4599-B4BE-85CDAC78FC33@develooper.com>
Message-ID: <6B15DFA1-61B8-40D2-BDF6-1AA29F922B4F@dynamine.net>

If you'd like to examine the source, you can find it at:

http://svn.apache.org/repos/asf/incubator/trafficserver/

(I'm a Yahoo! employee, though I'm not here to represent them in any way.)

--Michael

On Nov 2, 2009, at 4:26 PM, Ask Bjørn Hansen wrote:

> I thought this might be of interest:
>
> http://wiki.apache.org/incubator/TrafficServerProposal
>
> - ask
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc

From phk at phk.freebsd.dk Tue Nov 3 08:27:11 2009
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Tue, 03 Nov 2009 08:27:11 +0000
Subject: Yahoo! Traffic Server
In-Reply-To: Your message of "Mon, 02 Nov 2009 16:26:34 PST." <2E51048E-58CD-4599-B4BE-85CDAC78FC33@develooper.com>
Message-ID: <9457.1257236831@critter.freebsd.dk>

In message <2E51048E-58CD-4599-B4BE-85CDAC78FC33 at develooper.com>, Ask Bjørn Hansen writes:

>I thought this might be of interest:
>
> http://wiki.apache.org/incubator/TrafficServerProposal

It probably is -- for some people.

My impression of Inktomi is that it is a much more comprehensive
solution than Varnish; it does SMTP, NNTP and much else. If I were to
start Yahoo, HotMail or a similar app today, I would seriously consider
starting out on Inktomi, because of the scalability, horizontal and
vertical, it offers.
Varnish, on the other hand, only does HTTP, but aims to do it with
ultimate performance and flexibility, by pushing the technological
envelope as far as it can be pushed.

But for me, personally, the main difference is one of size:

Inktomi:
    *.h      93920
    *.c      50892
    *.hh         0
    *.cc    350199
    --------------
            495011

Varnish:
    *.h       5957
    *.c      38871
    *.hh         0
    *.cc         0
    --------------
             44828

We have a rhyming saying in Denmark that goes:

    "En lille og vågen, er bedre end en stor og doven"

which roughly translates to: "Small and alert is better than big and inert".

But I'm happy to see the Inktomi code out in the free air; I've heard
much good about it, over the years, from my FreeBSD cronies at Yahoo.
And because I like healthy competition, I particularly welcome Inktomi,
because quite frankly: competing with squid is not much fun... :-)

Poul-Henning

--
Poul-Henning Kamp     | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG   | TCP/IP since RFC 956
FreeBSD committer     | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From tfheen at redpill-linpro.com Tue Nov 3 08:45:05 2009
From: tfheen at redpill-linpro.com (Tollef Fog Heen)
Date: Tue, 03 Nov 2009 09:45:05 +0100
Subject: Yahoo! Traffic Server
In-Reply-To: <2E51048E-58CD-4599-B4BE-85CDAC78FC33@develooper.com> (Ask Bjørn Hansen's message of "Mon, 2 Nov 2009 16:26:34 -0800")
References: <2E51048E-58CD-4599-B4BE-85CDAC78FC33@develooper.com>
Message-ID: <87bpjkxai6.fsf@qurzaw.linpro.no>

]] Ask Bjørn Hansen

| I thought this might be of interest:
|
| http://wiki.apache.org/incubator/TrafficServerProposal

Yeah, I've been following it loosely for a while now. A quick feature
comparison (just based off the incubator page):

# Scalable on SMP (TS is a hybrid thread + event processor)

This might be an interesting direction to take Varnish in at some
point, as slow clients are a problem for us now. Being able to punt
them off to a pool of event-based threads would be useful.
# Extensible: TS has a feature-rich plugin API

We don't have this, though you can get some of it by inlining C.
There's a proposal out for how to do it; whether we end up doing it
remains to be seen, though.

"We have benchmarked Traffic Server to handle in excess of 35,000 RPS
on a single box."

I know Kristian has benchmarked Varnish to about three times that,
though with 1-byte objects, so it's not really anything resembling a
real-life scenario. I think sky has been serving ~64k requests/s using
synthetic.

# Porting to more Unix flavors (currently we only support Linux)

We have this already, at least to some degree.

# Add missing features, e.g., CARP, HTCP, ESI and native IPv6

We have native IPv6 and ESI at least; CARP and HTCP we don't.

Their code seems to be C++ (they depend on STL). They support HTTPS,
which we don't.

--
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73

From v.bilek at 1art.cz Tue Nov 3 10:51:22 2009
From: v.bilek at 1art.cz (Václav Bílek)
Date: Tue, 03 Nov 2009 11:51:22 +0100
Subject: Varnish stuck on stresstest/approved by real traffic
Message-ID: <4AF00B2A.2040006@1art.cz>

Hi

When testing varnish throughput and scalability I have found some
strange varnish behavior, using 2.0.4 with the cache_acceptor_epoll.c
patch:

http://varnish.projects.linpro.no/ticket/492

When testing scalability in the number of clients, I am able to get
varnish into a state where it stops responding, but in varnishlog it
looks like it is serving some requests from the past ... In detail: I
start a few instances of ab with keepalive on and with the -r option
(don't exit on socket receive errors), with a concurrency of approx.
2000: "ab -r -k -n 100000 -c 2000 http://127.0.0.1/&". It doesn't
matter if it is run from localhost or from other hosts (even with each
instance distributed to a different host); when I run so many instances
that the number of connections on varnish rises approx. above 10K,
varnish gets stuck and stops responding. In varnishstat "Client
requests received" drops to "0". At that point varnish starts to spawn
new threads, but not dramatically (for example from 2.4K to 3K). Then I
stop all the ab instances and try to fetch something ...

/opt/httpd2.2/bin/ab -k -n 1 -c 1 http://127.0.0.1/

... but get a timeout; when I try again after a few minutes, when
varnish is certainly doing nothing, I get a timeout too. In such a
state there is, every 30s, a request like this in varnishlog, but
nothing else:

10342 SessionOpen c 127.0.0.1 28988 0.0.0.0:80
10342 ReqStart c 127.0.0.1 28988 1170523327
10342 RxRequest c GET
10342 RxURL c /
10342 RxProtocol c HTTP/1.0
10342 RxHeader c Connection: Keep-Alive
10342 RxHeader c Host: 127.0.0.1
10342 RxHeader c User-Agent: ApacheBench/2.3
10342 RxHeader c Accept: */*
10342 VCL_call c recv lookup
10342 VCL_call c hash hash
10342 VCL_call c miss fetch
10342 Backend c 20041 default default
10342 ObjProtocol c HTTP/1.1
10342 ObjStatus c 200
10342 ObjResponse c OK
10342 ObjHeader c Date: Tue, 03 Nov 2009 10:28:33 GMT
10342 ObjHeader c Server: Apache
10342 ObjHeader c Last-Modified: Sat, 20 Nov 2004 20:16:24 GMT
10342 ObjHeader c ETag: "fc76-2c-3e9564c23b600"
10342 ObjHeader c Content-Type: text/html
10342 TTL c 1170523327 RFC 5 1257244113 0 0 0 0
10342 VCL_call c fetch
10342 VCL_info c XID 1170523327: obj.prefetch (-30) less than ttl (5.00406), ignored.
10342 VCL_return c deliver
10342 Length c 44
10342 VCL_call c deliver deliver
10342 TxProtocol c HTTP/1.1
10342 TxStatus c 200
10342 TxResponse c OK
10342 TxHeader c Server: Apache
10342 TxHeader c Last-Modified: Sat, 20 Nov 2004 20:16:24 GMT
10342 TxHeader c ETag: "fc76-2c-3e9564c23b600"
10342 TxHeader c Content-Type: text/html
10342 TxHeader c Content-Length: 44
10342 TxHeader c Date: Tue, 03 Nov 2009 10:28:33 GMT
10342 TxHeader c X-Varnish: 1170523327
10342 TxHeader c Age: 0
10342 TxHeader c Via: 1.1 varnish
10342 TxHeader c Connection: keep-alive
10342 ReqEnd c 1170523327 1257244113.149692297 1257244113.153866768 526.840385675 0.004138470 0.000036001

There is nothing strange in syslog or varnishlog; the only thing that
recovers varnish is restarting it. What is bad is that this has already
happened in production traffic. Is there anything in my settings I
should check? My settings:

accept_fd_holdoff 50 [ms]
acceptor default (epoll, poll)
auto_restart on [bool]
backend_http11 on [bool]
between_bytes_timeout 60.000000 [s]
cache_vbe_conns off [bool]
cc_command "exec cc -fpic -shared -Wl,-x -o %o %s"
cli_buffer 8192 [bytes]
cli_timeout 15 [seconds]
client_http11 off [bool]
clock_skew 10 [s]
connect_timeout 0.400000 [s]
default_grace 10
default_ttl 60 [seconds]
diag_bitmap 0x0 [bitmap]
err_ttl 0 [seconds]
esi_syntax 0 [bitmap]
fetch_chunksize 128 [kilobytes]
first_byte_timeout 60.000000 [s]
group nogroup (65534)
listen_address 0.0.0.0:80
listen_depth 40960 [connections]
log_hashstring off [bool]
log_local_address off [bool]
lru_interval 2 [seconds]
max_esi_includes 5 [includes]
max_restarts 4 [restarts]
obj_workspace 8192 [bytes]
overflow_max 300 [%]
ping_interval 3 [seconds]
pipe_timeout 60 [seconds]
prefer_ipv6 off [bool]
purge_dups off [bool]
purge_hash on [bool]
rush_exponent 3 [requests per request]
send_timeout 5 [seconds]
sess_timeout 1 [seconds]
sess_workspace 16384 [bytes]
session_linger 0 [ms]
shm_reclen 255 [bytes]
shm_workspace 8192 [bytes]
srcaddr_hash
1049 [buckets] srcaddr_ttl 0 [seconds] thread_pool_add_delay 2 [milliseconds] thread_pool_add_threshold 2 [requests] thread_pool_fail_delay 200 [milliseconds] thread_pool_max 10000 [threads] thread_pool_min 200 [threads] thread_pool_purge_delay 1000 [milliseconds] thread_pool_timeout 10 [seconds] thread_pools 12 [pools] user nobody (65534) vcl_trace off [bool] vcl.conf: just backend declaration nothing else varnishstat -1 in stuck state: uptime 3450 . Child uptime client_conn 620403 179.83 Client connections accepted client_req 1212084 351.33 Client requests received cache_hit 1211297 351.10 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 140 0.04 Cache misses backend_conn 140 0.04 Backend connections success backend_unhealthy 0 0.00 Backend connections not attempted backend_busy 0 0.00 Backend connections too many backend_fail 0 0.00 Backend connections failures backend_reuse 0 0.00 Backend connections reuses backend_recycle 0 0.00 Backend connections recycles backend_unused 0 0.00 Backend connections unused n_srcaddr 0 . N struct srcaddr n_srcaddr_act 0 . N active struct srcaddr n_sess_mem 20051 . N struct sess_mem n_sess 20277 . N struct sess n_object 2431 . N struct object n_objecthead 2440 . N struct objecthead n_smf 2442 . N struct smf n_smf_frag 4 . N small free smf n_smf_large 16 . N large free smf n_vbe_conn 0 . N struct vbe_conn n_bereq 1 . N struct bereq n_wrk 2467 . N worker threads n_wrk_create 2477 0.72 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_queue 9891 2.87 N queued work requests n_wrk_overflow 18197 5.27 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 1 . N backends n_expired 139 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 20 . N LRU moved objects n_deathrow 0 . 
N objects on deathrow losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 1212216 351.37 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 610512 176.96 Total Sessions s_req 1212261 351.38 Total Requests s_pipe 0 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 140 0.04 Total fetch s_hdrbytes 335018181 97106.72 Total header bytes s_bodybytes 53341094 15461.19 Total body bytes sess_closed 600121 173.95 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 0 0.00 Session Linger sess_herd 612115 177.42 Session herd shm_records 40013596 11598.14 SHM records shm_writes 3028734 877.89 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 228833 66.33 SHM MTX contention shm_cycles 10 0.00 SHM cycles through buffer sm_nreq 2710 0.79 allocator requests sm_nobj 2422 . outstanding allocations sm_balloc 19836928 . bytes allocated sm_bfree 9991118848 . bytes free sma_nreq 0 0.00 SMA allocator requests sma_nobj 0 . SMA outstanding allocations sma_nbytes 0 . SMA outstanding bytes sma_balloc 0 . SMA bytes allocated sma_bfree 0 . SMA bytes free sms_nreq 0 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 0 . SMS bytes allocated sms_bfree 0 . SMS bytes freed backend_req 140 0.04 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 1 . 
N total active purges n_purge_add 1 0.00 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 0 0.00 N objects tested n_purge_re_test 0 0.00 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 0 0.00 HCB Lookups without lock hcb_lock 0 0.00 HCB Lookups with lock hcb_insert 0 0.00 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock)

From v.bilek at 1art.cz Tue Nov 3 11:58:36 2009
From: v.bilek at 1art.cz (Václav Bílek)
Date: Tue, 03 Nov 2009 12:58:36 +0100
Subject: Varnish stuck on stresstest/approved by real traffic
In-Reply-To: <4AF00B2A.2040006@1art.cz>
References: <4AF00B2A.2040006@1art.cz>
Message-ID: <4AF01AEC.1090502@1art.cz>

Václav Bílek wrote:
> Hi
>
> When testing varnish throughput and scalability I have found strange
> varnish behavior.
>
> using 2.0.4 with cache_acceptor_epoll.c patch :
> http://varnish.projects.linpro.no/ticket/492
>

Without the patch it doesn't get stuck, but performance goes
dramatically down at more than a few thousand connections.

Vaclav Bilek

From vlad.dyachenko at gmail.com Tue Nov 3 11:56:18 2009
From: vlad.dyachenko at gmail.com (Vladimir Dyachenko)
Date: Tue, 3 Nov 2009 12:56:18 +0100
Subject: Virtualhost issue
In-Reply-To: <9dfd6f7a0911020632p51634117odc1e1f1602ea5aed@mail.gmail.com>
References: <9dfd6f7a0911020559r2cf6dfb6ma8abca698cc6f728@mail.gmail.com> <9dfd6f7a0911020632p51634117odc1e1f1602ea5aed@mail.gmail.com>
Message-ID: <9dfd6f7a0911030356h6dc8f5f4x5794e098b7c1c54d@mail.gmail.com>

Folks,

I have changed the configuration to the following (mostly based on
mediawiki). Any idea why it does not restart? Still empty logs.
[root at net2 ~]# varnishd -V varnishd (varnish-2.0.4) Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS [root at net2 ~]# cat /etc/varnish/default.vcl # set default backend if no server cluster specified backend default { .host = "www.google.ru"; .port = "80"; } backend google-bg { .host = "www.google.bg"; .port = "80"; } backend google-pl { .host = "www.google.pl"; .port = "80"; } # access control list for "purge": open to only localhost and other local nodes acl purge { "localhost"; } # vcl_recv is called whenever a request is received sub vcl_recv { # Serve objects up to 2 minutes past their expiry if the backend # is slow to respond. set req.grace = 120s; # Use our round-robin "apaches" cluster for the backend. if (req.http.host ~ "^(www.)?google.bg$ ") { set req.http.host = "www.google.bg"; set req.backend = google-bg; } elsif (req.http.host ~ "^(www.)?google.pl$") { set req.http.host = "www.google.pl"; set req.backend = google-pl; } else { set req.backend = default; } # This uses the ACL action called "purge". Basically if a request to # PURGE the cache comes from anywhere other than localhost, ignore it. if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } lookup; } # Pass any requests that Varnish does not understand straight to the backend. if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") {pipe;} /* Non-RFC2616 or CONNECT which is weird. */ # Pass anything other than GET and HEAD directly. if (req.request != "GET" && req.request != "HEAD") {pass;} /* We only deal with GET and HEAD by default */ # Pass requests from logged-in users directly. if (req.http.Authorization || req.http.Cookie) {pass;} /* Not cacheable by default */ # Pass any requests with the "If-None-Match" header directly. if (req.http.If-None-Match) {pass;} # Force lookup if the request is a no-cache request from the client. 
if (req.http.Cache-Control ~ "no-cache") {purge_url(req.url);} # normalize Accept-Encoding to reduce vary if (req.http.Accept-Encoding) { if (req.http.User-Agent ~ "MSIE 6") { unset req.http.Accept-Encoding; } elsif (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } elsif (req.http.Accept-Encoding ~ "deflate") { set req.http.Accept-Encoding = "deflate"; } else { unset req.http.Accept-Encoding; } } lookup; } sub vcl_pipe { # Note that only the first request to the backend will have # X-Forwarded-For set. If you use X-Forwarded-For and want to # have it set for all requests, make sure to have: # set req.http.connection = "close"; # This is otherwise not necessary if you do not do any request rewriting. set req.http.connection = "close"; } # Called if the cache has a copy of the page. sub vcl_hit { if (req.request == "PURGE") {purge_url(req.url); error 200 "Purged";} if (!obj.cacheable) {pass;} } # Called if the cache does not have a copy of the page. sub vcl_miss { if (req.request == "PURGE") {error 200 "Not in cache";} } # Called after a document has been successfully retrieved from the backend. sub vcl_fetch { # set minimum timeouts to auto-discard stored objects # set obj.prefetch = -30s; set obj.grace = 120s; if (obj.ttl < 48h) { set obj.ttl = 48h;} if (!obj.cacheable) {pass;} if (obj.http.Set-Cookie) {pass;} # if (obj.http.Cache-Control ~ "(private|no-cache|no-store)") # {pass;} if (req.http.Authorization && !obj.http.Cache-Control ~ "public") {pass;} } Any hint most welcome. Regards. Vladimir -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From armdan20 at gmail.com Tue Nov 3 12:36:30 2009
From: armdan20 at gmail.com (andan andan)
Date: Tue, 3 Nov 2009 13:36:30 +0100
Subject: Virtualhost issue
In-Reply-To: <9dfd6f7a0911030356h6dc8f5f4x5794e098b7c1c54d@mail.gmail.com>
References: <9dfd6f7a0911020559r2cf6dfb6ma8abca698cc6f728@mail.gmail.com> <9dfd6f7a0911020632p51634117odc1e1f1602ea5aed@mail.gmail.com> <9dfd6f7a0911030356h6dc8f5f4x5794e098b7c1c54d@mail.gmail.com>
Message-ID:

2009/11/3 Vladimir Dyachenko :
> Folks,
>
> I have changed the configuration to the following (mostly based on
> mediawiki). Any idea why it does not restart? Still empty logs.
>
> [root at net2 ~]# varnishd -V
> varnishd (varnish-2.0.4)
> Copyright (c) 2006-2009 Linpro AS / Verdens Gang AS
>
> [root at net2 ~]# cat /etc/varnish/default.vcl
>
> # set default backend if no server cluster specified
> backend default {
>         .host = "www.google.ru";
>         .port = "80";
> }

www.google.ru resolves to multiple IPs; you are trying to use varnish
with a strange behaviour :)

To see errors, in your init script replace:

daemon ${DAEMON} "$DAEMON_OPTS" -P ${PIDFILE} > /dev/null 2>&1

with:

daemon ${DAEMON} "$DAEMON_OPTS" -P ${PIDFILE} > /whereveryouwant 2>&1

Or simply remove "> /dev/null 2>&1" to see the errors on the console.

Kind Regards.

From vlad.dyachenko at gmail.com Tue Nov 3 14:29:19 2009
From: vlad.dyachenko at gmail.com (Vladimir Dyachenko)
Date: Tue, 3 Nov 2009 15:29:19 +0100
Subject: Virtualhost issue
In-Reply-To:
References: <9dfd6f7a0911020559r2cf6dfb6ma8abca698cc6f728@mail.gmail.com> <9dfd6f7a0911020632p51634117odc1e1f1602ea5aed@mail.gmail.com> <9dfd6f7a0911030356h6dc8f5f4x5794e098b7c1c54d@mail.gmail.com>
Message-ID: <9dfd6f7a0911030629j16e62e04s8750ce7d1b4cf7f7@mail.gmail.com>

Hey,

Thanks for the reply.

> www.google.ru resolves to multiple IPs; you are trying to use varnish
> with a strange behaviour :)

That was just to replace the actual hostname :)

Anyway, thanks - I've found out one silly thing.
I had two concurrent versions of varnish installed (one from source,
one from RPM). Everything works as expected now. (By the way, we can't
use the string '-' (dash) in backend names; one must use '_'
(underscore).)

Thanks all. Cheers.

Vladimir

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stockrt at gmail.com Tue Nov 3 21:30:40 2009
From: stockrt at gmail.com (Rogério Schneider)
Date: Tue, 3 Nov 2009 19:30:40 -0200
Subject: Varnish stuck on stresstest/approved by real traffic
In-Reply-To: <4AF01AEC.1090502@1art.cz>
References: <4AF00B2A.2040006@1art.cz> <4AF01AEC.1090502@1art.cz>
Message-ID: <100657c90911031330x42c7f9e9va2f32e7df3a44a37@mail.gmail.com>

Václav,

It is good to know that you have also experienced better performance
with the patch from ticket 492. I am replying to this e-mail just to
endorse your scenario. I have already been in the same circumstance of
Varnish getting stuck after a bigger wave of simultaneous users. What
did I do to solve the problem? Put more machines. It would be great if
we could find the answer to this problem together.

About the unpatched version: I was not able to see if it freezes like
the patched one, because it hangs with another error before we can
reach the maximum of simultaneous users, as in this report:

http://varnish.projects.linpro.no/ticket/573

This problem by itself was what led me to make this patch/port of the
epoll acceptor, since I use Linux.

Regards,

Rogério Schneider

2009/11/3 Václav Bílek :
>
>
> Václav Bílek wrote:
>> Hi
>>
>> When testing varnish throughput and scalability I have found strange
>> varnish behavior.
>>
>> using 2.0.4 with cache_acceptor_epoll.c patch :
>> http://varnish.projects.linpro.no/ticket/492
>>
>
> Without the patch it doesn't get stuck, but performance goes dramatically down
> at more than a few thousand connections.
>
> Vaclav Bilek
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc
>

--
Rogério Schneider
MSN: stockrt at hotmail.com
GTalk: stockrt at gmail.com
Skype: stockrt
http://stockrt.github.com

From stockrt at gmail.com Tue Nov 3 21:58:57 2009
From: stockrt at gmail.com (Rogério Schneider)
Date: Tue, 3 Nov 2009 19:58:57 -0200
Subject: Yahoo! Traffic Server
In-Reply-To: <87bpjkxai6.fsf@qurzaw.linpro.no>
References: <2E51048E-58CD-4599-B4BE-85CDAC78FC33@develooper.com> <87bpjkxai6.fsf@qurzaw.linpro.no>
Message-ID: <100657c90911031358j4fb9073rf357b55ef0c494e@mail.gmail.com>

> I know Kristian has benchmarked Varnish to about three times that,
> though with 1-byte objects, so it's not really anything resembling a
> real-life scenario. I think sky has been serving ~64k requests/s using
> synthetic.

Just to add my results: I have reached 75k reqs/s with half full-body
(HTTP 200) objects of about 4 KB each and half 304 response codes. With
only 304s I have reached 104k reqs/s with Varnish (patched from ticket
492, pre-instanced threads, and using linger, if I remember well).

Regards,

--
Rogério Schneider
MSN: stockrt at hotmail.com
GTalk: stockrt at gmail.com
Skype: stockrt
http://stockrt.github.com

From ibeginhere at gmail.com Wed Nov 4 07:37:14 2009
From: ibeginhere at gmail.com (I I)
Date: Wed, 4 Nov 2009 15:37:14 +0800
Subject: how to monitor Varnish threads
Message-ID: <9ff4dbc10911032337g2c3b8a1bpb71d35cbaea73132@mail.gmail.com>

How can I monitor the number of Varnish threads? I don't know how many
threads varnish uses, so I don't know how big I need to set
"thread_pool_max".

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From quasirob at googlemail.com Wed Nov 4 11:48:10 2009 From: quasirob at googlemail.com (Rob Ayres) Date: Wed, 4 Nov 2009 11:48:10 +0000 Subject: Caching POSTs Message-ID: Hi, I want to cache POSTs but can't get varnish to do it, is it possible? If it makes it any easier, all requests through this cache will be of POST type. So far I have changed the line in vcl_recv to look like this: if (req.request != "GET" && req.request != "HEAD" && req.request != "POST") { This resulted in the first request looking like this: 10 RxRequest c POST 10 RxURL c /autocomplete/autocomplete.xqy 10 RxProtocol c HTTP/1.1 10 RxHeader c User-Agent: Test client 10 RxHeader c Host: durham:8630 10 RxHeader c Content-Length: 84 10 RxHeader c Content-Type: application/x-www-form-urlencoded 10 VCL_call c recv 10 VCL_return c lookup 10 VCL_call c hash 10 VCL_return c hash 10 VCL_call c miss 10 VCL_return c fetch 11 BackendOpen b default 192.168.80.173 51082 192.168.80.101 8630 10 Backend c 11 default default 11 TxRequest b GET As you can see it arrives as a POST and gets changed to a GET. After that all requests go through as "Cache hits for pass" which at least returns data from the backend even if it isnt cached. I then added "set bereq.request = "POST";" to vcl_miss which does change the TxRequest to POST but still doesnt work. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From stockrt at gmail.com Wed Nov 4 11:52:02 2009 From: stockrt at gmail.com (=?ISO-8859-1?Q?Rog=E9rio_Schneider?=) Date: Wed, 4 Nov 2009 09:52:02 -0200 Subject: how to monitor Varnish threads In-Reply-To: <9ff4dbc10911032337g2c3b8a1bpb71d35cbaea73132@mail.gmail.com> References: <9ff4dbc10911032337g2c3b8a1bpb71d35cbaea73132@mail.gmail.com> Message-ID: <100657c90911040352r7869924bw418b19f2cd3a0801@mail.gmail.com> On Wed, Nov 4, 2009 at 5:37 AM, I I wrote: > how can i monitor the number of Varnish threads? 
and i don't know how many > number the varnish threads,so i don't know how big need i to set the > "thread_pool_max". $ varnishstat -1 | grep n_wrk n_wrk 10 . N worker threads Regards, -- Rog?rio Schneider MSN: stockrt at hotmail.com GTalk: stockrt at gmail.com Skype: stockrt http://stockrt.github.com From stockrt at gmail.com Wed Nov 4 12:27:22 2009 From: stockrt at gmail.com (=?ISO-8859-1?Q?Rog=E9rio_Schneider?=) Date: Wed, 4 Nov 2009 10:27:22 -0200 Subject: Write error, len = 69696/260844, errno = Success In-Reply-To: <4AE9A67B.5000706@1art.cz> References: <4AE9A67B.5000706@1art.cz> Message-ID: <100657c90911040427j68e05441o42e83458607a68ef@mail.gmail.com> 2009/10/29 V?clav B?lek : > Is there anyone who can point us where to look to find the problem? > > thanks for any respond V?clav, Isn't your problem related to the send_timeout option? Cutting transfers that take more than send_timeout seconds to deliver, even before the end of the file? Take a look at the startup option: "send_timeout" and at this thread: http://projects.linpro.no/pipermail/varnish-misc/2009-October/003182.html > ?1267 ReqEnd ? ? ? c 143433511 1256820931.257400990 1256820936.260433674 0.020234823 0.006479740 4.996552944 I can see here that your xmit time was of only 5 seconds. Haven't you tuned send_timeout to 5 seconds? Oops... 
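For anyone else chasing this: as read in this thread, the final field of a 2.0-era ReqEnd record is the transfer (xmit) time, which is what send_timeout cuts against. A throwaway one-liner to pull it out, demonstrated on the exact log line quoted above:

```shell
#!/bin/sh
# Extract the transfer time (last field of a ReqEnd record, per this
# thread's reading of the 2.0-era varnishlog format).  Normally:
#   varnishlog -i ReqEnd | awk '{ print $NF }'
# Demo on the line quoted in the mail above:
line='1267 ReqEnd c 143433511 1256820931.257400990 1256820936.260433674 0.020234823 0.006479740 4.996552944'
echo "$line" | awk '{ printf "xmit %.1f s\n", $NF }'   # prints: xmit 5.0 s
```

A transfer time sitting right at the configured send_timeout value is the giveaway that the timeout, not the backend, is cutting the response short.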
:) Cheers, -- Rog?rio Schneider MSN: stockrt at hotmail.com GTalk: stockrt at gmail.com Skype: stockrt http://stockrt.github.com From v.bilek at 1art.cz Wed Nov 4 12:49:39 2009 From: v.bilek at 1art.cz (=?ISO-8859-1?Q?V=E1clav_B=EDlek?=) Date: Wed, 04 Nov 2009 13:49:39 +0100 Subject: Write error, len = 69696/260844, errno = Success In-Reply-To: <100657c90911040427j68e05441o42e83458607a68ef@mail.gmail.com> References: <4AE9A67B.5000706@1art.cz> <100657c90911040427j68e05441o42e83458607a68ef@mail.gmail.com> Message-ID: <4AF17863.2070302@1art.cz> I have tested send_timeout setting to biger values with no changes Rog?rio Schneider napsal(a): > 2009/10/29 V?clav B?lek : >> Is there anyone who can point us where to look to find the problem? >> >> thanks for any respond > > V?clav, > > Isn't your problem related to the send_timeout option? Cutting > transfers that take more than send_timeout seconds to deliver, even > before the end of the file? > > Take a look at the startup option: "send_timeout" and at this thread: > > http://projects.linpro.no/pipermail/varnish-misc/2009-October/003182.html > >> 1267 ReqEnd c 143433511 1256820931.257400990 1256820936.260433674 0.020234823 0.006479740 4.996552944 > > I can see here that your xmit time was of only 5 seconds. Haven't you > tuned send_timeout to 5 seconds? Oops... :) > > Cheers, From kristian at redpill-linpro.com Wed Nov 4 15:49:18 2009 From: kristian at redpill-linpro.com (Kristian Lyngstol) Date: Wed, 4 Nov 2009 16:49:18 +0100 Subject: Varnish stuck on stresstest/approved by real traffic In-Reply-To: <4AF00B2A.2040006@1art.cz> References: <4AF00B2A.2040006@1art.cz> Message-ID: <20091104154918.GD8687@kjeks.linpro.no> (Excessive trimming ahead. Whoohoo) On Tue, Nov 03, 2009 at 11:51:22AM +0100, V?clav B?lek wrote: > When testing varnish throughput and scalability I have found strange > varnish behavior. What's the cpu load at that point? Also: use sess_linger. No session_linger == kaboom when things get too loaded. 
It's 50ms default in 2.0.5/trunk, but set to 0ms in 2.0.4 and previous. The behaviour in trunk is slightly different/better, but it's still worth using it in 2.0.4. - Kristian -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From stockrt at gmail.com Wed Nov 4 21:04:10 2009 From: stockrt at gmail.com (=?ISO-8859-1?Q?Rog=E9rio_Schneider?=) Date: Wed, 4 Nov 2009 19:04:10 -0200 Subject: Varnish virtual memory usage In-Reply-To: <5405273812888663297@unknownmsgid> References: <-1252439508831938500@unknownmsgid> <2e63e54c0910212352r1c822bfekcad8914f79b65d1e@mail.gmail.com> <5405273812888663297@unknownmsgid> Message-ID: <100657c90911041304q544adbb4jd061337fbb668a1@mail.gmail.com> On Thu, Oct 22, 2009 at 6:04 AM, Henry Paulissen wrote: > I will report back. Did this solve the problem? Removing this? >> if (req.http.Cache-Control == "no-cache" || req.http.Pragma == "no-cache") { >> purge_url(req.url); >> } >> Cheers Att, -- Rog?rio Schneider MSN: stockrt at hotmail.com GTalk: stockrt at gmail.com Skype: stockrt http://stockrt.github.com From h.paulissen at qbell.nl Wed Nov 4 22:53:34 2009 From: h.paulissen at qbell.nl (Henry Paulissen) Date: Wed, 4 Nov 2009 23:53:34 +0100 Subject: Varnish virtual memory usage In-Reply-To: <100657c90911041304q544adbb4jd061337fbb668a1@mail.gmail.com> References: <-1252439508831938500@unknownmsgid> <2e63e54c0910212352r1c822bfekcad8914f79b65d1e@mail.gmail.com> <5405273812888663297@unknownmsgid> <100657c90911041304q544adbb4jd061337fbb668a1@mail.gmail.com> Message-ID: <002b01ca5da1$a3cd50a0$eb67f1e0$@paulissen@qbell.nl> No, varnishd still usages way more than allowed. The only solutions I found at the moment are: Run on x64 linux and restart varnish every 4 hours (crontab). Run on x32 linux (all is working as expected but you cant allocate more as 4G each instance). I hope linpro will find this issue and address it. 
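For reference, the four-hourly restart mentioned above is a blunt workaround, and a crontab entry for it could look like the line below. The init-script path is an assumption, not a given; substitute whatever starts varnishd on your system, and remember that a restart empties a malloc or file cache.

```shell
#!/bin/sh
# Hypothetical crontab entry for the every-4-hours restart workaround.
# "/etc/init.d/varnish" is an assumed init script name; adjust to taste.
cronline='0 */4 * * * root /etc/init.d/varnish restart'
echo "$cronline"
```

Only reach for this if, as in this thread, the real cause of the memory growth cannot be found.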
Again @ linpro: if you need a machine (with live traffic) to run some tests, please contact me. We have multiple machines in high availability, so testing and rebooting a instance wouldn?t hurt us. Regards. -----Oorspronkelijk bericht----- Van: Rog?rio Schneider [mailto:stockrt at gmail.com] Verzonden: woensdag 4 november 2009 22:04 Aan: Henry Paulissen CC: Scott Wilson; varnish-misc at projects.linpro.no Onderwerp: Re: Varnish virtual memory usage On Thu, Oct 22, 2009 at 6:04 AM, Henry Paulissen wrote: > I will report back. Did this solve the problem? Removing this? >> if (req.http.Cache-Control == "no-cache" || req.http.Pragma == "no-cache") { >> purge_url(req.url); >> } >> Cheers Att, -- Rog?rio Schneider MSN: stockrt at hotmail.com GTalk: stockrt at gmail.com Skype: stockrt http://stockrt.github.com From phk at phk.freebsd.dk Wed Nov 4 22:59:43 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 04 Nov 2009 22:59:43 +0000 Subject: Back from the dea^H^H^Hsoul-less Message-ID: <18504.1257375583@critter.freebsd.dk> Hi Guys, I owe you all an apology for disappering for the last couple of weeks, but I had to spend pretty much all my time writing my reply in my Windows-refund case against Lenovo. Tomorrow I'll drop off the result at the court-house, and than I should be able to ignore that until X-mas, when Lenovo is supposed to reply. And then it's back to hacking varnish... -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From h.paulissen at qbell.nl Wed Nov 4 23:06:40 2009 From: h.paulissen at qbell.nl (Henry Paulissen) Date: Thu, 5 Nov 2009 00:06:40 +0100 Subject: Back from the dea^H^H^Hsoul-less In-Reply-To: <18504.1257375583@critter.freebsd.dk> References: <18504.1257375583@critter.freebsd.dk> Message-ID: <002c01ca5da3$77a93230$66fb9690$@paulissen@qbell.nl> Windows-refund case??? 
Did i miss something? Anyway, ood luck with your case. Regards. -----Oorspronkelijk bericht----- Van: varnish-misc-bounces at projects.linpro.no [mailto:varnish-misc-bounces at projects.linpro.no] Namens Poul-Henning Kamp Verzonden: donderdag 5 november 2009 0:00 Aan: varnish-misc at projects.linpro.no Onderwerp: Back from the dea^H^H^Hsoul-less Hi Guys, I owe you all an apology for disappering for the last couple of weeks, but I had to spend pretty much all my time writing my reply in my Windows-refund case against Lenovo. Tomorrow I'll drop off the result at the court-house, and than I should be able to ignore that until X-mas, when Lenovo is supposed to reply. And then it's back to hacking varnish... -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. _______________________________________________ varnish-misc mailing list varnish-misc at projects.linpro.no http://projects.linpro.no/mailman/listinfo/varnish-misc From phk at phk.freebsd.dk Wed Nov 4 23:07:57 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 04 Nov 2009 23:07:57 +0000 Subject: Back from the dea^H^H^Hsoul-less In-Reply-To: Your message of "Thu, 05 Nov 2009 00:06:40 +0100." <002c01ca5da3$77a93230$66fb9690$@paulissen@qbell.nl> Message-ID: <18560.1257376077@critter.freebsd.dk> In message <002c01ca5da3$77a93230$66fb9690$@paulissen at qbell.nl>, "Henry Pauliss en" writes: >Windows-refund case?=BF? >Did i miss something? http://phk.freebsd.dk/MicrosoftSkat/ You should be able to read it :-) -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
From h.paulissen at qbell.nl Wed Nov 4 23:18:11 2009 From: h.paulissen at qbell.nl (Henry Paulissen) Date: Thu, 5 Nov 2009 00:18:11 +0100 Subject: Varnish virtual memory usage In-Reply-To: <05C49245-14D5-45E2-858A-F89554650CC6@slide.com> References: <-1252439508831938500@unknownmsgid> <2e63e54c0910212352r1c822bfekcad8914f79b65d1e@mail.gmail.com> <5405273812888663297@unknownmsgid> <100657c90911041304q544adbb4jd061337fbb668a1@mail.gmail.com> <002b01ca5da1$a3cd50a0$eb67f1e0$@paulissen@qbell.nl> <05C49245-14D5-45E2-858A-F89554650CC6@slide.com> Message-ID: <002d01ca5da5$14010a80$3c031f80$@paulissen@qbell.nl> I attached the memory dump. Child processes count gives me 1610 processes (on this instance). Currently the server isn?t so busy (~175 requests / sec). Varnishstat -1: ============================================================================ ====== ============================================================================ ====== uptime 3090 . Child uptime client_conn 435325 140.88 Client connections accepted client_drop 0 0.00 Connection dropped, no sess client_req 435294 140.87 Client requests received cache_hit 45740 14.80 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 126445 40.92 Cache misses backend_conn 355277 114.98 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 34331 11.11 Backend conn. reuses backend_toolate 690 0.22 Backend conn. was closed backend_recycle 35021 11.33 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused fetch_head 0 0.00 Fetch head fetch_length 384525 124.44 Fetch with Length fetch_chunked 2441 0.79 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 2028 0.66 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 989 . N struct sess_mem n_sess 94 . 
N struct sess n_object 89296 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 89640 . N struct objectcore n_objecthead 25379 . N struct objecthead n_smf 0 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 0 . N large free smf n_vbe_conn 26 . N struct vbe_conn n_wrk 1600 . N worker threads n_wrk_create 1600 0.52 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 1274 0.41 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 1342 0.43 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 5 . N backends n_expired 1393 . N expired objects n_lru_nuked 35678 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 20020 . N LRU moved objects n_deathrow 0 . N objects on deathrow losthdr 11 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 433558 140.31 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 435298 140.87 Total Sessions s_req 435294 140.87 Total Requests s_pipe 0 0.00 Total pipe s_pass 263190 85.17 Total pass s_fetch 388994 125.89 Total fetch s_hdrbytes 157405143 50940.18 Total header bytes s_bodybytes 533077018 172516.83 Total body bytes sess_closed 435291 140.87 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 0 0.00 Session Linger sess_herd 69 0.02 Session herd shm_records 37936743 12277.26 SHM records shm_writes 2141029 692.89 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 3956 1.28 SHM MTX contention shm_cycles 16 0.01 SHM cycles through buffer sm_nreq 0 0.00 allocator requests sm_nobj 0 . outstanding allocations sm_balloc 0 . bytes allocated sm_bfree 0 . bytes free sma_nreq 550879 178.28 SMA allocator requests sma_nobj 178590 . SMA outstanding allocations sma_nbytes 1073690180 . SMA outstanding bytes sma_balloc 2066782844 . SMA bytes allocated sma_bfree 993092664 . 
SMA bytes free sms_nreq 649 0.21 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 378848 . SMS bytes allocated sms_bfree 378848 . SMS bytes freed backend_req 389342 126.00 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 1 . N total active purges n_purge_add 1 0.00 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 0 0.00 N objects tested n_purge_re_test 0 0.00 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 0 0.00 HCB Lookups without lock hcb_lock 0 0.00 HCB Lookups with lock hcb_insert 0 0.00 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) ============================================================================ ====== ============================================================================ ====== -----Oorspronkelijk bericht----- Van: Ken Brownfield [mailto:kb at slide.com] Verzonden: donderdag 5 november 2009 0:01 Aan: Henry Paulissen CC: Rog?rio Schneider Onderwerp: Re: Varnish virtual memory usage Curious: For a heavily leaked varnish instance, can you run "pmap -x PID" on the parent PID and child PID, and record how many threads are active (something like 'ps -efT | grep varnish | wc -l')? Might help isolate the RAM usage. Sorry if you have done this already; didn't find it in my email archive. Ken On Nov 4, 2009, at 2:53 PM, Henry Paulissen wrote: > No, varnishd still usages way more than allowed. > The only solutions I found at the moment are: > > Run on x64 linux and restart varnish every 4 hours (crontab). > Run on x32 linux (all is working as expected but you cant allocate > more as > 4G each instance). > > > I hope linpro will find this issue and address it. > > > > Again @ linpro: if you need a machine (with live traffic) to run > some tests, > please contact me. 
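A note on the `ps -efT | grep varnish | wc -l` count suggested above: it also counts the grep itself and both varnishd processes. On Linux the per-process `Threads:` line in /proc is a cleaner source; a small sketch (Linux-only, and the varnishd PID lookup in the comment is illustrative):

```shell
#!/bin/sh
# Report the kernel's thread count for a given PID (Linux /proc only).
threads_of() {
    awk '/^Threads:/ { print $2 }' "/proc/$1/status"
}

# For varnish you would do something like (pidof usage is an assumption):
#   threads_of "$(pidof -s varnishd)"
# Demo on this shell itself:
threads_of $$
```

Comparing this number against n_wrk from varnishstat shows how many of the child's threads are workers versus housekeeping threads.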
> We have multiple machines in high availability, so testing and > rebooting a > instance wouldn?t hurt us. > > > Regards. > > -----Oorspronkelijk bericht----- > Van: Rog?rio Schneider [mailto:stockrt at gmail.com] > Verzonden: woensdag 4 november 2009 22:04 > Aan: Henry Paulissen > CC: Scott Wilson; varnish-misc at projects.linpro.no > Onderwerp: Re: Varnish virtual memory usage > > On Thu, Oct 22, 2009 at 6:04 AM, Henry Paulissen > > wrote: >> I will report back. > > Did this solve the problem? > > Removing this? > >>> if (req.http.Cache-Control == "no-cache" || req.http.Pragma == > "no-cache") { >>> purge_url(req.url); >>> } >>> > > Cheers > > Att, > -- > Rog?rio Schneider > > MSN: stockrt at hotmail.com > GTalk: stockrt at gmail.com > Skype: stockrt > http://stockrt.github.com > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: pmap.txt URL: From h.paulissen at qbell.nl Wed Nov 4 23:20:24 2009 From: h.paulissen at qbell.nl (Henry Paulissen) Date: Thu, 5 Nov 2009 00:20:24 +0100 Subject: Back from the dea^H^H^Hsoul-less In-Reply-To: <18560.1257376077@critter.freebsd.dk> References: Your message of "Thu, 05 Nov 2009 00:06:40 +0100." <002c01ca5da3$77a93230$66fb9690$@paulissen@qbell.nl> <18560.1257376077@critter.freebsd.dk> Message-ID: <003101ca5da5$632ab250$298016f0$@paulissen@qbell.nl> Google translate is very nice in this case :) As Dutchman my Danish isn't so superb. 
Regards -----Oorspronkelijk bericht----- Van: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] Namens Poul-Henning Kamp Verzonden: donderdag 5 november 2009 0:08 Aan: Henry Paulissen CC: varnish-misc at projects.linpro.no Onderwerp: Re: Back from the dea^H^H^Hsoul-less In message <002c01ca5da3$77a93230$66fb9690$@paulissen at qbell.nl>, "Henry Pauliss en" writes: >Windows-refund case?=BF? >Did i miss something? http://phk.freebsd.dk/MicrosoftSkat/ You should be able to read it :-) -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Wed Nov 4 23:24:06 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 04 Nov 2009 23:24:06 +0000 Subject: Back from the dea^H^H^Hsoul-less In-Reply-To: Your message of "Thu, 05 Nov 2009 00:20:24 +0100." <003101ca5da5$632ab250$298016f0$@paulissen@qbell.nl> Message-ID: <18629.1257377046@critter.freebsd.dk> In message <003101ca5da5$632ab250$298016f0$@paulissen at qbell.nl>, "Henry Pauliss en" writes: >Google translate is very nice in this case :) >As Dutchman my Danish isn't so superb. I'm usually able to read dutch in a pseudo-quasi-phonetic way... -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
From h.paulissen at qbell.nl Wed Nov 4 23:48:43 2009 From: h.paulissen at qbell.nl (Henry Paulissen) Date: Thu, 5 Nov 2009 00:48:43 +0100 Subject: Varnish virtual memory usage In-Reply-To: References: <-1252439508831938500@unknownmsgid> <2e63e54c0910212352r1c822bfekcad8914f79b65d1e@mail.gmail.com> <5405273812888663297@unknownmsgid> <100657c90911041304q544adbb4jd061337fbb668a1@mail.gmail.com> <002b01ca5da1$a3cd50a0$eb67f1e0$@paulissen@qbell.nl> <05C49245-14D5-45E2-858A-F89554650CC6@slide.com> <002d01ca5da5$14010a80$3c031f80$@paulissen@qbell.nl> Message-ID: <003201ca5da9$57ae7e30$070b7a90$@paulissen@qbell.nl> Our load balancer transforms all connections from keep-alive to close. So keep-alive connections aren?t the issue here. Also, if I limit the thread count I still see the same behavior. -----Oorspronkelijk bericht----- Van: Ken Brownfield [mailto:kb at slide.com] Verzonden: donderdag 5 november 2009 0:31 Aan: Henry Paulissen CC: varnish-misc at projects.linpro.no Onderwerp: Re: Varnish virtual memory usage Looks like varnish is allocating ~1.5GB of RAM for pure cache (which may roughly match your "-s file" option) but 1,610 threads with your 1MB stack limit will use 1.7GB of RAM. Pmap is reporting the footprint of this instance as roughly 3.6GB, and I'm assuming top/ps agree with that number. Unless your "-s file" option is significantly less than 1-1.5GB, the sheer thread count explains your memory usage: maybe using a stacksize of 512K or 256K could help, and/or disable keepalives on the client side? Also, if you happen to be using a load balancer, TCP Buffering (NetScaler) or Proxy Buffering? (BigIP) or the like can drastically reduce the thread count (and they can handle the persistent keepalives as well). But IMHO, an event-based (for example) handler for "idle" or "slow" threads is probably the next important feature, just below persistence. 
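Ken's 1.7 GB figure is simply thread count times per-thread stack size, and writing it out shows why dropping the stack size matters at these thread counts. A quick check with the numbers from this thread (1610 threads from the pmap, 1 MB stacks):

```shell
#!/bin/sh
# Address space consumed by thread stacks = threads * per-thread stack.
# Numbers taken from this thread: 1610 threads, 1 MiB default stack.
awk 'BEGIN {
    threads = 1610; stack = 1024 * 1024           # 1 MiB per stack
    printf "1MB stacks: %.2f GB\n",   threads * stack / 1e9      # 1.69 GB
    printf "256KB stacks: %.2f GB\n", threads * stack / 4 / 1e9  # 0.42 GB
}'
```

So a 256 KB stack limit would cut the stack reservation from roughly 1.7 GB to about 0.4 GB at the same thread count; whether that is address space or resident RAM is the distinction phk draws in his reply.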
Without something like TCP buffering, the memory available for actual caching is dwarfed by the thread stacksize alloc overhead. Ken On Nov 4, 2009, at 3:18 PM, Henry Paulissen wrote: > I attached the memory dump. > > Child processes count gives me 1610 processes (on this instance). > Currently the server isn?t so busy (~175 requests / sec). > > Varnishstat -1: > = > = > = > = > = > = > ====================================================================== > ====== > = > = > = > = > = > = > ====================================================================== > ====== > uptime 3090 . Child uptime > client_conn 435325 140.88 Client connections accepted > client_drop 0 0.00 Connection dropped, no sess > client_req 435294 140.87 Client requests received > cache_hit 45740 14.80 Cache hits > cache_hitpass 0 0.00 Cache hits for pass > cache_miss 126445 40.92 Cache misses > backend_conn 355277 114.98 Backend conn. success > backend_unhealthy 0 0.00 Backend conn. not > attempted > backend_busy 0 0.00 Backend conn. too many > backend_fail 0 0.00 Backend conn. failures > backend_reuse 34331 11.11 Backend conn. reuses > backend_toolate 690 0.22 Backend conn. was closed > backend_recycle 35021 11.33 Backend conn. recycles > backend_unused 0 0.00 Backend conn. unused > fetch_head 0 0.00 Fetch head > fetch_length 384525 124.44 Fetch with Length > fetch_chunked 2441 0.79 Fetch chunked > fetch_eof 0 0.00 Fetch EOF > fetch_bad 0 0.00 Fetch had bad headers > fetch_close 2028 0.66 Fetch wanted close > fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed > fetch_zero 0 0.00 Fetch zero len > fetch_failed 0 0.00 Fetch failed > n_sess_mem 989 . N struct sess_mem > n_sess 94 . N struct sess > n_object 89296 . N struct object > n_vampireobject 0 . N unresurrected objects > n_objectcore 89640 . N struct objectcore > n_objecthead 25379 . N struct objecthead > n_smf 0 . N struct smf > n_smf_frag 0 . N small free smf > n_smf_large 0 . N large free smf > n_vbe_conn 26 . 
N struct vbe_conn > n_wrk 1600 . N worker threads > n_wrk_create 1600 0.52 N worker threads created > n_wrk_failed 0 0.00 N worker threads not > created > n_wrk_max 1274 0.41 N worker threads limited > n_wrk_queue 0 0.00 N queued work requests > n_wrk_overflow 1342 0.43 N overflowed work requests > n_wrk_drop 0 0.00 N dropped work requests > n_backend 5 . N backends > n_expired 1393 . N expired objects > n_lru_nuked 35678 . N LRU nuked objects > n_lru_saved 0 . N LRU saved objects > n_lru_moved 20020 . N LRU moved objects > n_deathrow 0 . N objects on deathrow > losthdr 11 0.00 HTTP header overflows > n_objsendfile 0 0.00 Objects sent with sendfile > n_objwrite 433558 140.31 Objects sent with write > n_objoverflow 0 0.00 Objects overflowing > workspace > s_sess 435298 140.87 Total Sessions > s_req 435294 140.87 Total Requests > s_pipe 0 0.00 Total pipe > s_pass 263190 85.17 Total pass > s_fetch 388994 125.89 Total fetch > s_hdrbytes 157405143 50940.18 Total header bytes > s_bodybytes 533077018 172516.83 Total body bytes > sess_closed 435291 140.87 Session Closed > sess_pipeline 0 0.00 Session Pipeline > sess_readahead 0 0.00 Session Read Ahead > sess_linger 0 0.00 Session Linger > sess_herd 69 0.02 Session herd > shm_records 37936743 12277.26 SHM records > shm_writes 2141029 692.89 SHM writes > shm_flushes 0 0.00 SHM flushes due to overflow > shm_cont 3956 1.28 SHM MTX contention > shm_cycles 16 0.01 SHM cycles through buffer > sm_nreq 0 0.00 allocator requests > sm_nobj 0 . outstanding allocations > sm_balloc 0 . bytes allocated > sm_bfree 0 . bytes free > sma_nreq 550879 178.28 SMA allocator requests > sma_nobj 178590 . SMA outstanding allocations > sma_nbytes 1073690180 . SMA outstanding bytes > sma_balloc 2066782844 . SMA bytes allocated > sma_bfree 993092664 . SMA bytes free > sms_nreq 649 0.21 SMS allocator requests > sms_nobj 0 . SMS outstanding allocations > sms_nbytes 0 . SMS outstanding bytes > sms_balloc 378848 . SMS bytes allocated > sms_bfree 378848 . 
SMS bytes freed > backend_req 389342 126.00 Backend requests made > n_vcl 1 0.00 N vcl total > n_vcl_avail 1 0.00 N vcl available > n_vcl_discard 0 0.00 N vcl discarded > n_purge 1 . N total active purges > n_purge_add 1 0.00 N new purges added > n_purge_retire 0 0.00 N old purges deleted > n_purge_obj_test 0 0.00 N objects tested > n_purge_re_test 0 0.00 N regexps tested against > n_purge_dups 0 0.00 N duplicate purges removed > hcb_nolock 0 0.00 HCB Lookups without lock > hcb_lock 0 0.00 HCB Lookups with lock > hcb_insert 0 0.00 HCB Inserts > esi_parse 0 0.00 Objects ESI parsed (unlock) > esi_errors 0 0.00 ESI parse errors (unlock) > = > = > = > = > = > = > ====================================================================== > ====== > = > = > = > = > = > = > ====================================================================== > ====== > > > > -----Oorspronkelijk bericht----- > Van: Ken Brownfield [mailto:kb at slide.com] > Verzonden: donderdag 5 november 2009 0:01 > Aan: Henry Paulissen > CC: Rog?rio Schneider > Onderwerp: Re: Varnish virtual memory usage > > Curious: For a heavily leaked varnish instance, can you run "pmap -x > PID" on the parent PID and child PID, and record how many threads are > active (something like 'ps -efT | grep varnish | wc -l')? Might help > isolate the RAM usage. > > Sorry if you have done this already; didn't find it in my email > archive. > > Ken > > On Nov 4, 2009, at 2:53 PM, Henry Paulissen wrote: > >> No, varnishd still usages way more than allowed. >> The only solutions I found at the moment are: >> >> Run on x64 linux and restart varnish every 4 hours (crontab). >> Run on x32 linux (all is working as expected but you cant allocate >> more as >> 4G each instance). >> >> >> I hope linpro will find this issue and address it. >> >> >> >> Again @ linpro: if you need a machine (with live traffic) to run >> some tests, >> please contact me. 
>> We have multiple machines in high availability, so testing and >> rebooting a >> instance wouldn?t hurt us. >> >> >> Regards. >> >> -----Oorspronkelijk bericht----- >> Van: Rog?rio Schneider [mailto:stockrt at gmail.com] >> Verzonden: woensdag 4 november 2009 22:04 >> Aan: Henry Paulissen >> CC: Scott Wilson; varnish-misc at projects.linpro.no >> Onderwerp: Re: Varnish virtual memory usage >> >> On Thu, Oct 22, 2009 at 6:04 AM, Henry Paulissen >> >> wrote: >>> I will report back. >> >> Did this solve the problem? >> >> Removing this? >> >>>> if (req.http.Cache-Control == "no-cache" || req.http.Pragma == >> "no-cache") { >>>> purge_url(req.url); >>>> } >>>> >> >> Cheers >> >> Att, >> -- >> Rog?rio Schneider >> >> MSN: stockrt at hotmail.com >> GTalk: stockrt at gmail.com >> Skype: stockrt >> http://stockrt.github.com >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at projects.linpro.no >> http://projects.linpro.no/mailman/listinfo/varnish-misc > From phk at phk.freebsd.dk Thu Nov 5 00:16:07 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 05 Nov 2009 00:16:07 +0000 Subject: Varnish virtual memory usage In-Reply-To: Your message of "Thu, 05 Nov 2009 00:48:43 +0100." <003201ca5da9$57ae7e30$070b7a90$@paulissen@qbell.nl> Message-ID: <18853.1257380167@critter.freebsd.dk> In message <003201ca5da9$57ae7e30$070b7a90$@paulissen at qbell.nl>, "Henry Pauliss en" writes: >Our load balancer transforms all connections from keep-alive to close. That is a bad idea really, it increases the amount of work varnish has to do significantly. >but 1,610 threads with your >1MB stack limit will use 1.7GB of RAM. It is very important to keep "Virtual Address Space" and "RAM" out from each other. The stacks will use 1.7G of VM-space, but certainly not as much RAM as most of the stacks are not accessed. The number you care about is the resident size, the _actual_ amount of RAM used. 
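phk's distinction is easy to see with plain ps: VSZ counts every mapped page, including never-touched thread stacks, while RSS counts only the pages actually resident in RAM. A sketch, run against the current shell for illustration; in practice point it at the varnishd child PID (the pidof usage in the comment is an assumption):

```shell
#!/bin/sh
# Compare virtual size (VSZ) and resident size (RSS) for a process.
# RSS is the number that reflects actual RAM use; VSZ merely bounds it.
vm_vs_ram() {
    ps -o vsz=,rss= -p "$1" | awk '{ printf "vsz=%d kB rss=%d kB\n", $1, $2 }'
}

# In practice: vm_vs_ram "$(pidof -s varnishd)"
# Demo on this shell itself:
vm_vs_ram $$
```

For a threaded varnishd the gap between the two numbers can be huge, which is exactly why a large VSZ on a 64-bit box is not by itself evidence of a leak.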
Only on 32bit systems is there any reason to be concerned about VM-space used. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From kb+varnish at slide.com Thu Nov 5 00:17:36 2009 From: kb+varnish at slide.com (Ken Brownfield) Date: Wed, 4 Nov 2009 16:17:36 -0800 Subject: Varnish virtual memory usage In-Reply-To: <003201ca5da9$57ae7e30$070b7a90$@paulissen@qbell.nl> References: <-1252439508831938500@unknownmsgid> <2e63e54c0910212352r1c822bfekcad8914f79b65d1e@mail.gmail.com> <5405273812888663297@unknownmsgid> <100657c90911041304q544adbb4jd061337fbb668a1@mail.gmail.com> <002b01ca5da1$a3cd50a0$eb67f1e0$@paulissen@qbell.nl> <05C49245-14D5-45E2-858A-F89554650CC6@slide.com> <002d01ca5da5$14010a80$3c031f80$@paulissen@qbell.nl> <003201ca5da9$57ae7e30$070b7a90$@paulissen@qbell.nl> Message-ID: <8FD99932-7033-44AF-AE09-DBB0A5C48026@slide.com> Hmm, well the memory adds up to a 1.5G -s option (can you confirm what you use with -s?) and memory required to run the number of threads you're running. Unless your -s is drastically smaller than 1.5GB, the pmap you sent is of a normal, non-leaking process. Ken On Nov 4, 2009, at 3:48 PM, Henry Paulissen wrote: > Our load balancer transforms all connections from keep-alive to close. > So keep-alive connections aren?t the issue here. > > Also, if I limit the thread count I still see the same behavior. > > -----Oorspronkelijk bericht----- > Van: Ken Brownfield [mailto:kb at slide.com] > Verzonden: donderdag 5 november 2009 0:31 > Aan: Henry Paulissen > CC: varnish-misc at projects.linpro.no > Onderwerp: Re: Varnish virtual memory usage > > Looks like varnish is allocating ~1.5GB of RAM for pure cache (which > may roughly match your "-s file" option) but 1,610 threads with your > 1MB stack limit will use 1.7GB of RAM. 
Pmap is reporting the > footprint of this instance as roughly 3.6GB, and I'm assuming top/ps > agree with that number. > > Unless your "-s file" option is significantly less than 1-1.5GB, the > sheer thread count explains your memory usage: maybe using a stacksize > of 512K or 256K could help, and/or disable keepalives on the client > side? > > Also, if you happen to be using a load balancer, TCP Buffering > (NetScaler) or Proxy Buffering? (BigIP) or the like can drastically > reduce the thread count (and they can handle the persistent keepalives > as well). > > But IMHO, an event-based (for example) handler for "idle" or "slow" > threads is probably the next important feature, just below > persistence. Without something like TCP buffering, the memory > available for actual caching is dwarfed by the thread stacksize alloc > overhead. > > Ken > > On Nov 4, 2009, at 3:18 PM, Henry Paulissen wrote: > >> I attached the memory dump. >> >> Child processes count gives me 1610 processes (on this instance). >> Currently the server isn?t so busy (~175 requests / sec). >> >> Varnishstat -1: >> = >> = >> = >> = >> = >> = >> = >> ===================================================================== >> ====== >> = >> = >> = >> = >> = >> = >> = >> ===================================================================== >> ====== >> uptime 3090 . Child uptime >> client_conn 435325 140.88 Client connections >> accepted >> client_drop 0 0.00 Connection dropped, no >> sess >> client_req 435294 140.87 Client requests received >> cache_hit 45740 14.80 Cache hits >> cache_hitpass 0 0.00 Cache hits for pass >> cache_miss 126445 40.92 Cache misses >> backend_conn 355277 114.98 Backend conn. success >> backend_unhealthy 0 0.00 Backend conn. not >> attempted >> backend_busy 0 0.00 Backend conn. too many >> backend_fail 0 0.00 Backend conn. failures >> backend_reuse 34331 11.11 Backend conn. reuses >> backend_toolate 690 0.22 Backend conn. was closed >> backend_recycle 35021 11.33 Backend conn. 
recycles >> backend_unused 0 0.00 Backend conn. unused >> fetch_head 0 0.00 Fetch head >> fetch_length 384525 124.44 Fetch with Length >> fetch_chunked 2441 0.79 Fetch chunked >> fetch_eof 0 0.00 Fetch EOF >> fetch_bad 0 0.00 Fetch had bad headers >> fetch_close 2028 0.66 Fetch wanted close >> fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed >> fetch_zero 0 0.00 Fetch zero len >> fetch_failed 0 0.00 Fetch failed >> n_sess_mem 989 . N struct sess_mem >> n_sess 94 . N struct sess >> n_object 89296 . N struct object >> n_vampireobject 0 . N unresurrected objects >> n_objectcore 89640 . N struct objectcore >> n_objecthead 25379 . N struct objecthead >> n_smf 0 . N struct smf >> n_smf_frag 0 . N small free smf >> n_smf_large 0 . N large free smf >> n_vbe_conn 26 . N struct vbe_conn >> n_wrk 1600 . N worker threads >> n_wrk_create 1600 0.52 N worker threads created >> n_wrk_failed 0 0.00 N worker threads not >> created >> n_wrk_max 1274 0.41 N worker threads limited >> n_wrk_queue 0 0.00 N queued work requests >> n_wrk_overflow 1342 0.43 N overflowed work requests >> n_wrk_drop 0 0.00 N dropped work requests >> n_backend 5 . N backends >> n_expired 1393 . N expired objects >> n_lru_nuked 35678 . N LRU nuked objects >> n_lru_saved 0 . N LRU saved objects >> n_lru_moved 20020 . N LRU moved objects >> n_deathrow 0 . 
N objects on deathrow >> losthdr 11 0.00 HTTP header overflows >> n_objsendfile 0 0.00 Objects sent with sendfile >> n_objwrite 433558 140.31 Objects sent with write >> n_objoverflow 0 0.00 Objects overflowing >> workspace >> s_sess 435298 140.87 Total Sessions >> s_req 435294 140.87 Total Requests >> s_pipe 0 0.00 Total pipe >> s_pass 263190 85.17 Total pass >> s_fetch 388994 125.89 Total fetch >> s_hdrbytes 157405143 50940.18 Total header bytes >> s_bodybytes 533077018 172516.83 Total body bytes >> sess_closed 435291 140.87 Session Closed >> sess_pipeline 0 0.00 Session Pipeline >> sess_readahead 0 0.00 Session Read Ahead >> sess_linger 0 0.00 Session Linger >> sess_herd 69 0.02 Session herd >> shm_records 37936743 12277.26 SHM records >> shm_writes 2141029 692.89 SHM writes >> shm_flushes 0 0.00 SHM flushes due to >> overflow >> shm_cont 3956 1.28 SHM MTX contention >> shm_cycles 16 0.01 SHM cycles through buffer >> sm_nreq 0 0.00 allocator requests >> sm_nobj 0 . outstanding allocations >> sm_balloc 0 . bytes allocated >> sm_bfree 0 . bytes free >> sma_nreq 550879 178.28 SMA allocator requests >> sma_nobj 178590 . SMA outstanding >> allocations >> sma_nbytes 1073690180 . SMA outstanding bytes >> sma_balloc 2066782844 . SMA bytes allocated >> sma_bfree 993092664 . SMA bytes free >> sms_nreq 649 0.21 SMS allocator requests >> sms_nobj 0 . SMS outstanding >> allocations >> sms_nbytes 0 . SMS outstanding bytes >> sms_balloc 378848 . SMS bytes allocated >> sms_bfree 378848 . SMS bytes freed >> backend_req 389342 126.00 Backend requests made >> n_vcl 1 0.00 N vcl total >> n_vcl_avail 1 0.00 N vcl available >> n_vcl_discard 0 0.00 N vcl discarded >> n_purge 1 . 
N total active purges
>> n_purge_add                1         0.00 N new purges added
>> n_purge_retire             0         0.00 N old purges deleted
>> n_purge_obj_test           0         0.00 N objects tested
>> n_purge_re_test            0         0.00 N regexps tested against
>> n_purge_dups               0         0.00 N duplicate purges removed
>> hcb_nolock                 0         0.00 HCB Lookups without lock
>> hcb_lock                   0         0.00 HCB Lookups with lock
>> hcb_insert                 0         0.00 HCB Inserts
>> esi_parse                  0         0.00 Objects ESI parsed (unlock)
>> esi_errors                 0         0.00 ESI parse errors (unlock)
>> ===========================================================================
>> ===========================================================================
>>
>>
>> -----Oorspronkelijk bericht-----
>> Van: Ken Brownfield [mailto:kb at slide.com]
>> Verzonden: donderdag 5 november 2009 0:01
>> Aan: Henry Paulissen
>> CC: Rogério Schneider
>> Onderwerp: Re: Varnish virtual memory usage
>>
>> Curious: For a heavily leaked varnish instance, can you run "pmap -x
>> PID" on the parent PID and child PID, and record how many threads are
>> active (something like 'ps -efT | grep varnish | wc -l')? Might help
>> isolate the RAM usage.
>>
>> Sorry if you have done this already; didn't find it in my email
>> archive.
>>
>> Ken
>>
>> On Nov 4, 2009, at 2:53 PM, Henry Paulissen wrote:
>>
>>> No, varnishd still uses way more memory than allowed.
>>> The only solutions I found at the moment are:
>>>
>>> Run on x64 linux and restart varnish every 4 hours (crontab).
>>> Run on x32 linux (all is working as expected, but you can't allocate
>>> more than 4G per instance).
>>>
>>> I hope linpro will find this issue and address it.
>>>
>>> Again @ linpro: if you need a machine (with live traffic) to run
>>> some tests, please contact me.
>>> We have multiple machines in high availability, so testing and
>>> rebooting an instance wouldn't hurt us.
>>>
>>> Regards.
>>>
>>> -----Oorspronkelijk bericht-----
>>> Van: Rogério Schneider [mailto:stockrt at gmail.com]
>>> Verzonden: woensdag 4 november 2009 22:04
>>> Aan: Henry Paulissen
>>> CC: Scott Wilson; varnish-misc at projects.linpro.no
>>> Onderwerp: Re: Varnish virtual memory usage
>>>
>>> On Thu, Oct 22, 2009 at 6:04 AM, Henry Paulissen
>>>
>>> wrote:
>>>> I will report back.
>>>
>>> Did this solve the problem?
>>>
>>> Removing this?
>>>
>>>>> if (req.http.Cache-Control == "no-cache" || req.http.Pragma == "no-cache") {
>>>>> purge_url(req.url);
>>>>> }
>>>
>>> Cheers
>>>
>>> Att,
>>> --
>>> Rogério Schneider
>>>
>>> MSN: stockrt at hotmail.com
>>> GTalk: stockrt at gmail.com
>>> Skype: stockrt
>>> http://stockrt.github.com
>>>
>>> _______________________________________________
>>> varnish-misc mailing list
>>> varnish-misc at projects.linpro.no
>>> http://projects.linpro.no/mailman/listinfo/varnish-misc
>>
>

From h.paulissen at qbell.nl Thu Nov 5 00:38:14 2009
From: h.paulissen at qbell.nl (Henry Paulissen)
Date: Thu, 5 Nov 2009 01:38:14 +0100
Subject: Varnish virtual memory usage
Message-ID: <003701ca5db0$42760ae0$c76220a0$@paulissen@qbell.nl>

Running varnishd now for about 30 minutes with a thread_pool of 4.

============================================================================
============================================================================
uptime                  2637          .   Child uptime
client_conn           316759       120.12 Client connections accepted
client_drop                0         0.00 Connection dropped, no sess
client_req            316738       120.11 Client requests received
cache_hit              32477        12.32 Cache hits
cache_hitpass              0         0.00 Cache hits for pass
cache_miss             93703        35.53 Cache misses
backend_conn          261033        98.99 Backend conn. success
backend_unhealthy          0         0.00 Backend conn. not attempted
backend_busy               0         0.00 Backend conn. too many
backend_fail               0         0.00 Backend conn. failures
backend_reuse          23305         8.84 Backend conn. reuses
backend_toolate          528         0.20 Backend conn.
was closed backend_recycle 23833 9.04 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused fetch_head 0 0.00 Fetch head fetch_length 280973 106.55 Fetch with Length fetch_chunked 1801 0.68 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 1329 0.50 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 284 . N struct sess_mem n_sess 35 . N struct sess n_object 90560 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 90616 . N struct objectcore n_objecthead 25146 . N struct objecthead n_smf 0 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 0 . N large free smf n_vbe_conn 10 . N struct vbe_conn n_wrk 200 . N worker threads n_wrk_create 248 0.09 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 100988 38.30 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 630 0.24 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 5 . N backends n_expired 1027 . N expired objects n_lru_nuked 2108 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 12558 . N LRU moved objects n_deathrow 0 . 
N objects on deathrow losthdr 5 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 315222 119.54 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 316740 120.11 Total Sessions s_req 316738 120.11 Total Requests s_pipe 0 0.00 Total pipe s_pass 190664 72.30 Total pass s_fetch 284103 107.74 Total fetch s_hdrbytes 114236150 43320.50 Total header bytes s_bodybytes 355198316 134697.88 Total body bytes sess_closed 316740 120.11 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 0 0.00 Session Linger sess_herd 33 0.01 Session herd shm_records 27534992 10441.79 SHM records shm_writes 1555265 589.79 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 1689 0.64 SHM MTX contention shm_cycles 12 0.00 SHM cycles through buffer sm_nreq 0 0.00 allocator requests sm_nobj 0 . outstanding allocations sm_balloc 0 . bytes allocated sm_bfree 0 . bytes free sma_nreq 379783 144.02 SMA allocator requests sma_nobj 181121 . SMA outstanding allocations sma_nbytes 1073735584 . SMA outstanding bytes sma_balloc 1488895305 . SMA bytes allocated sma_bfree 415159721 . SMA bytes free sms_nreq 268 0.10 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 156684 . SMS bytes allocated sms_bfree 156684 . SMS bytes freed backend_req 284202 107.77 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 1 . 
N total active purges
n_purge_add                1         0.00 N new purges added
n_purge_retire             0         0.00 N old purges deleted
n_purge_obj_test           0         0.00 N objects tested
n_purge_re_test            0         0.00 N regexps tested against
n_purge_dups               0         0.00 N duplicate purges removed
hcb_nolock                 0         0.00 HCB Lookups without lock
hcb_lock                   0         0.00 HCB Lookups with lock
hcb_insert                 0         0.00 HCB Inserts
esi_parse                  0         0.00 Objects ESI parsed (unlock)
esi_errors                 0         0.00 ESI parse errors (unlock)
============================================================================
============================================================================

As you can see I have now 200 worker threads.
Still it's using 1.8G and is still increasing (~1 to 5 MB/s).

-----Oorspronkelijk bericht-----
Van: Ken Brownfield [mailto:kb+varnish at slide.com]
Verzonden: donderdag 5 november 2009 1:18
Aan: Henry Paulissen
CC: varnish-misc at projects.linpro.no
Onderwerp: Re: Varnish virtual memory usage

Hmm, well the memory adds up to a 1.5G -s option (can you confirm what
you use with -s?) and memory required to run the number of threads
you're running. Unless your -s is drastically smaller than 1.5GB, the
pmap you sent is of a normal, non-leaking process.

Ken

On Nov 4, 2009, at 3:48 PM, Henry Paulissen wrote:

> Our load balancer transforms all connections from keep-alive to close.
> So keep-alive connections aren't the issue here.
>
> Also, if I limit the thread count I still see the same behavior.
>
> -----Oorspronkelijk bericht-----
> Van: Ken Brownfield [mailto:kb at slide.com]
> Verzonden: donderdag 5 november 2009 0:31
> Aan: Henry Paulissen
> CC: varnish-misc at projects.linpro.no
> Onderwerp: Re: Varnish virtual memory usage
>
> Looks like varnish is allocating ~1.5GB of RAM for pure cache (which
> may roughly match your "-s file" option) but 1,610 threads with your
> 1MB stack limit will use 1.7GB of RAM.
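The arithmetic Ken sketches can be checked directly. The thread count and 1 MB stack size below are the figures quoted in this thread; the 1536 MB cache figure is an assumed stand-in for the "1-1.5 GB" -s allocation, so this is an illustration, not a measurement:

```shell
# Back-of-the-envelope VM-footprint estimate.
# threads and stack_mb are the numbers quoted in this thread;
# cache_mb is an assumed stand-in for the -s allocation.
threads=1610
stack_mb=1
cache_mb=1536
stacks=$((threads * stack_mb))
echo "thread stacks: ${stacks} MB"                # prints: thread stacks: 1610 MB
echo "stacks + cache: $((stacks + cache_mb)) MB"  # prints: stacks + cache: 3146 MB
```

That lands in the same ballpark as the ~3.6 GB pmap footprint under discussion; note Poul-Henning's point elsewhere in this thread that this is virtual address space, not resident RAM.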
Pmap is reporting the > footprint of this instance as roughly 3.6GB, and I'm assuming top/ps > agree with that number. > > Unless your "-s file" option is significantly less than 1-1.5GB, the > sheer thread count explains your memory usage: maybe using a stacksize > of 512K or 256K could help, and/or disable keepalives on the client > side? > > Also, if you happen to be using a load balancer, TCP Buffering > (NetScaler) or Proxy Buffering? (BigIP) or the like can drastically > reduce the thread count (and they can handle the persistent keepalives > as well). > > But IMHO, an event-based (for example) handler for "idle" or "slow" > threads is probably the next important feature, just below > persistence. Without something like TCP buffering, the memory > available for actual caching is dwarfed by the thread stacksize alloc > overhead. > > Ken > > On Nov 4, 2009, at 3:18 PM, Henry Paulissen wrote: > >> I attached the memory dump. >> >> Child processes count gives me 1610 processes (on this instance). >> Currently the server isn?t so busy (~175 requests / sec). >> >> Varnishstat -1: >> = >> = >> = >> = >> = >> = >> = >> ===================================================================== >> ====== >> = >> = >> = >> = >> = >> = >> = >> ===================================================================== >> ====== >> uptime 3090 . Child uptime >> client_conn 435325 140.88 Client connections >> accepted >> client_drop 0 0.00 Connection dropped, no >> sess >> client_req 435294 140.87 Client requests received >> cache_hit 45740 14.80 Cache hits >> cache_hitpass 0 0.00 Cache hits for pass >> cache_miss 126445 40.92 Cache misses >> backend_conn 355277 114.98 Backend conn. success >> backend_unhealthy 0 0.00 Backend conn. not >> attempted >> backend_busy 0 0.00 Backend conn. too many >> backend_fail 0 0.00 Backend conn. failures >> backend_reuse 34331 11.11 Backend conn. reuses >> backend_toolate 690 0.22 Backend conn. was closed >> backend_recycle 35021 11.33 Backend conn. 
recycles >> backend_unused 0 0.00 Backend conn. unused >> fetch_head 0 0.00 Fetch head >> fetch_length 384525 124.44 Fetch with Length >> fetch_chunked 2441 0.79 Fetch chunked >> fetch_eof 0 0.00 Fetch EOF >> fetch_bad 0 0.00 Fetch had bad headers >> fetch_close 2028 0.66 Fetch wanted close >> fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed >> fetch_zero 0 0.00 Fetch zero len >> fetch_failed 0 0.00 Fetch failed >> n_sess_mem 989 . N struct sess_mem >> n_sess 94 . N struct sess >> n_object 89296 . N struct object >> n_vampireobject 0 . N unresurrected objects >> n_objectcore 89640 . N struct objectcore >> n_objecthead 25379 . N struct objecthead >> n_smf 0 . N struct smf >> n_smf_frag 0 . N small free smf >> n_smf_large 0 . N large free smf >> n_vbe_conn 26 . N struct vbe_conn >> n_wrk 1600 . N worker threads >> n_wrk_create 1600 0.52 N worker threads created >> n_wrk_failed 0 0.00 N worker threads not >> created >> n_wrk_max 1274 0.41 N worker threads limited >> n_wrk_queue 0 0.00 N queued work requests >> n_wrk_overflow 1342 0.43 N overflowed work requests >> n_wrk_drop 0 0.00 N dropped work requests >> n_backend 5 . N backends >> n_expired 1393 . N expired objects >> n_lru_nuked 35678 . N LRU nuked objects >> n_lru_saved 0 . N LRU saved objects >> n_lru_moved 20020 . N LRU moved objects >> n_deathrow 0 . 
N objects on deathrow >> losthdr 11 0.00 HTTP header overflows >> n_objsendfile 0 0.00 Objects sent with sendfile >> n_objwrite 433558 140.31 Objects sent with write >> n_objoverflow 0 0.00 Objects overflowing >> workspace >> s_sess 435298 140.87 Total Sessions >> s_req 435294 140.87 Total Requests >> s_pipe 0 0.00 Total pipe >> s_pass 263190 85.17 Total pass >> s_fetch 388994 125.89 Total fetch >> s_hdrbytes 157405143 50940.18 Total header bytes >> s_bodybytes 533077018 172516.83 Total body bytes >> sess_closed 435291 140.87 Session Closed >> sess_pipeline 0 0.00 Session Pipeline >> sess_readahead 0 0.00 Session Read Ahead >> sess_linger 0 0.00 Session Linger >> sess_herd 69 0.02 Session herd >> shm_records 37936743 12277.26 SHM records >> shm_writes 2141029 692.89 SHM writes >> shm_flushes 0 0.00 SHM flushes due to >> overflow >> shm_cont 3956 1.28 SHM MTX contention >> shm_cycles 16 0.01 SHM cycles through buffer >> sm_nreq 0 0.00 allocator requests >> sm_nobj 0 . outstanding allocations >> sm_balloc 0 . bytes allocated >> sm_bfree 0 . bytes free >> sma_nreq 550879 178.28 SMA allocator requests >> sma_nobj 178590 . SMA outstanding >> allocations >> sma_nbytes 1073690180 . SMA outstanding bytes >> sma_balloc 2066782844 . SMA bytes allocated >> sma_bfree 993092664 . SMA bytes free >> sms_nreq 649 0.21 SMS allocator requests >> sms_nobj 0 . SMS outstanding >> allocations >> sms_nbytes 0 . SMS outstanding bytes >> sms_balloc 378848 . SMS bytes allocated >> sms_bfree 378848 . SMS bytes freed >> backend_req 389342 126.00 Backend requests made >> n_vcl 1 0.00 N vcl total >> n_vcl_avail 1 0.00 N vcl available >> n_vcl_discard 0 0.00 N vcl discarded >> n_purge 1 . 
N total active purges
>> n_purge_add                1         0.00 N new purges added
>> n_purge_retire             0         0.00 N old purges deleted
>> n_purge_obj_test           0         0.00 N objects tested
>> n_purge_re_test            0         0.00 N regexps tested against
>> n_purge_dups               0         0.00 N duplicate purges removed
>> hcb_nolock                 0         0.00 HCB Lookups without lock
>> hcb_lock                   0         0.00 HCB Lookups with lock
>> hcb_insert                 0         0.00 HCB Inserts
>> esi_parse                  0         0.00 Objects ESI parsed (unlock)
>> esi_errors                 0         0.00 ESI parse errors (unlock)
>> ===========================================================================
>> ===========================================================================
>>
>>
>> -----Oorspronkelijk bericht-----
>> Van: Ken Brownfield [mailto:kb at slide.com]
>> Verzonden: donderdag 5 november 2009 0:01
>> Aan: Henry Paulissen
>> CC: Rogério Schneider
>> Onderwerp: Re: Varnish virtual memory usage
>>
>> Curious: For a heavily leaked varnish instance, can you run "pmap -x
>> PID" on the parent PID and child PID, and record how many threads are
>> active (something like 'ps -efT | grep varnish | wc -l')? Might help
>> isolate the RAM usage.
>>
>> Sorry if you have done this already; didn't find it in my email
>> archive.
>>
>> Ken
>>
>> On Nov 4, 2009, at 2:53 PM, Henry Paulissen wrote:
>>
>>> No, varnishd still uses way more memory than allowed.
>>> The only solutions I found at the moment are:
>>>
>>> Run on x64 linux and restart varnish every 4 hours (crontab).
>>> Run on x32 linux (all is working as expected, but you can't allocate
>>> more than 4G per instance).
>>>
>>> I hope linpro will find this issue and address it.
>>>
>>> Again @ linpro: if you need a machine (with live traffic) to run
>>> some tests, please contact me.
>>> We have multiple machines in high availability, so testing and
>>> rebooting an instance wouldn't hurt us.
>>>
>>> Regards.
>>>
>>> -----Oorspronkelijk bericht-----
>>> Van: Rogério Schneider [mailto:stockrt at gmail.com]
>>> Verzonden: woensdag 4 november 2009 22:04
>>> Aan: Henry Paulissen
>>> CC: Scott Wilson; varnish-misc at projects.linpro.no
>>> Onderwerp: Re: Varnish virtual memory usage
>>>
>>> On Thu, Oct 22, 2009 at 6:04 AM, Henry Paulissen
>>>
>>> wrote:
>>>> I will report back.
>>>
>>> Did this solve the problem?
>>>
>>> Removing this?
>>>
>>>>> if (req.http.Cache-Control == "no-cache" || req.http.Pragma == "no-cache") {
>>>>> purge_url(req.url);
>>>>> }
>>>
>>> Cheers
>>>
>>> Att,
>>> --
>>> Rogério Schneider
>>>
>>> MSN: stockrt at hotmail.com
>>> GTalk: stockrt at gmail.com
>>> Skype: stockrt
>>> http://stockrt.github.com
>>>
>>> _______________________________________________
>>> varnish-misc mailing list
>>> varnish-misc at projects.linpro.no
>>> http://projects.linpro.no/mailman/listinfo/varnish-misc
>>
>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: pmap.txt
URL:

From h.paulissen at qbell.nl Thu Nov 5 00:46:02 2009
From: h.paulissen at qbell.nl (Henry Paulissen)
Date: Thu, 5 Nov 2009 01:46:02 +0100
Subject: Varnish virtual memory usage
In-Reply-To: <18853.1257380167@critter.freebsd.dk>
References: Your message of "Thu, 05 Nov 2009 00:48:43 +0100." <003201ca5da9$57ae7e30$070b7a90$@paulissen@qbell.nl> <18853.1257380167@critter.freebsd.dk>
Message-ID: <003b01ca5db1$5985dca0$0c9195e0$@paulissen@qbell.nl>

I know it is really a bad thing not to have keep-alive, but our load
balancer / failover software doesn't support it (http://haproxy.1wt.eu/).

Our traffic goes as follows:
Through round-robin DNS, traffic is sent to 1 of 2 dedicated haproxy
servers. In haproxy there are 6 varnish servers defined. Haproxy picks,
round-robin and based on availability, the varnish server it wants to
send the traffic to. So all our static traffic is distributed over 6
dedicated varnish servers.
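For illustration, the topology described here (round-robin DNS in front of two haproxy boxes, each balancing across six varnish instances, with keep-alive rewritten to close) might look roughly like the haproxy 1.x sketch below; every name, address, and port is invented for the example, not taken from the actual configuration:

```
# Hypothetical sketch only -- all names/addresses are placeholders.
defaults
    mode http
    option httpclose            # rewrite keep-alive to close, as described
    timeout connect 5s
    timeout server  30s

backend varnish_static
    balance roundrobin
    option httpchk GET /
    server varnish1 192.0.2.11:6081 check
    server varnish2 192.0.2.12:6081 check
    server varnish3 192.0.2.13:6081 check
    server varnish4 192.0.2.14:6081 check
    server varnish5 192.0.2.15:6081 check
    server varnish6 192.0.2.16:6081 check
```

With `check` enabled, haproxy skips unhealthy varnish nodes automatically, which matches the "on availability" behavior described.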
As I can see from memory usage and CPU load, this behavior isn't
stressing them.

-----Oorspronkelijk bericht-----
Van: phk at critter.freebsd.dk [mailto:phk at critter.freebsd.dk] Namens Poul-Henning Kamp
Verzonden: donderdag 5 november 2009 1:16
Aan: Henry Paulissen
CC: 'Ken Brownfield'; varnish-misc at projects.linpro.no
Onderwerp: Re: Varnish virtual memory usage

In message <003201ca5da9$57ae7e30$070b7a90$@paulissen at qbell.nl>, "Henry Paulissen" writes:

>Our load balancer transforms all connections from keep-alive to close.

That is a bad idea really, it increases the amount of work varnish has
to do significantly.

>but 1,610 threads with your
>1MB stack limit will use 1.7GB of RAM.

It is very important to keep "Virtual Address Space" and "RAM" apart
from each other.

The stacks will use 1.7G of VM-space, but certainly not as much RAM,
as most of the stacks are not accessed. The number you care about is
the resident size, the _actual_ amount of RAM used.

Only on 32bit systems is there any reason to be concerned about
VM-space used.

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From h.paulissen at qbell.nl Thu Nov 5 00:48:30 2009
From: h.paulissen at qbell.nl (Henry Paulissen)
Date: Thu, 5 Nov 2009 01:48:30 +0100
Subject: FW: Varnish virtual memory usage
Message-ID: <003d01ca5db1$b1d71d60$15855820$@paulissen@qbell.nl>

See the pmap.txt attachment.
The startup command is in the beginning of the file.
/usr/local/varnish/sbin/varnishd -P /var/run/xxx.pid -a 0.0.0.0:xxx
-f /usr/local/varnish/etc/varnish/xxx.xxx.xxx.vcl -T 0.0.0.0:xxx
-s malloc,1G -i xxx -n /usr/local/varnish/var/varnish/xxx
-p obj_workspace 8192 -p sess_workspace 262144 -p listen_depth 8192
-p lru_interval 60 -p sess_timeout 10 -p shm_workspace 32768
-p ping_interval 2 -p thread_pools 4 -p thread_pool_min 50
-p thread_pool_max 4000 -p esi_syntax 1 -p overflow_max 10000

P.S. Sorry for the double mail. Forgot to CC.

-----Oorspronkelijk bericht-----
Van: Ken Brownfield [mailto:kb+varnish at slide.com]
Verzonden: donderdag 5 november 2009 1:42
Aan: Henry Paulissen
Onderwerp: Re: Varnish virtual memory usage

Is your -s set at 1.5GB? What's your varnishd command line?

I'm not sure if you realize that thread_pool does not control the
number of threads, only the number of pools (and mutexes). I think
thread_pool_max is what you're looking for?
--
Ken

On Nov 4, 2009, at 4:37 PM, Henry Paulissen wrote:

> Running varnishd now for about 30 minutes with a thread_pool of 4.
>
> ======================================================================
> ======================================================================
> uptime                  2637          .   Child uptime
> client_conn           316759       120.12 Client connections accepted
> client_drop                0         0.00 Connection dropped, no sess
> client_req            316738       120.11 Client requests received
> cache_hit              32477        12.32 Cache hits
> cache_hitpass              0         0.00 Cache hits for pass
> cache_miss             93703        35.53 Cache misses
> backend_conn          261033        98.99 Backend conn. success
> backend_unhealthy          0         0.00 Backend conn. not attempted
> backend_busy               0         0.00 Backend conn. too many
> backend_fail               0         0.00 Backend conn. failures
> backend_reuse          23305         8.84 Backend conn. reuses
> backend_toolate          528         0.20 Backend conn. was closed
> backend_recycle        23833         9.04 Backend conn. recycles
> backend_unused             0         0.00 Backend conn.
unused > fetch_head 0 0.00 Fetch head > fetch_length 280973 106.55 Fetch with Length > fetch_chunked 1801 0.68 Fetch chunked > fetch_eof 0 0.00 Fetch EOF > fetch_bad 0 0.00 Fetch had bad headers > fetch_close 1329 0.50 Fetch wanted close > fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed > fetch_zero 0 0.00 Fetch zero len > fetch_failed 0 0.00 Fetch failed > n_sess_mem 284 . N struct sess_mem > n_sess 35 . N struct sess > n_object 90560 . N struct object > n_vampireobject 0 . N unresurrected objects > n_objectcore 90616 . N struct objectcore > n_objecthead 25146 . N struct objecthead > n_smf 0 . N struct smf > n_smf_frag 0 . N small free smf > n_smf_large 0 . N large free smf > n_vbe_conn 10 . N struct vbe_conn > n_wrk 200 . N worker threads > n_wrk_create 248 0.09 N worker threads created > n_wrk_failed 0 0.00 N worker threads not > created > n_wrk_max 100988 38.30 N worker threads limited > n_wrk_queue 0 0.00 N queued work requests > n_wrk_overflow 630 0.24 N overflowed work requests > n_wrk_drop 0 0.00 N dropped work requests > n_backend 5 . N backends > n_expired 1027 . N expired objects > n_lru_nuked 2108 . N LRU nuked objects > n_lru_saved 0 . N LRU saved objects > n_lru_moved 12558 . N LRU moved objects > n_deathrow 0 . 
N objects on deathrow > losthdr 5 0.00 HTTP header overflows > n_objsendfile 0 0.00 Objects sent with sendfile > n_objwrite 315222 119.54 Objects sent with write > n_objoverflow 0 0.00 Objects overflowing > workspace > s_sess 316740 120.11 Total Sessions > s_req 316738 120.11 Total Requests > s_pipe 0 0.00 Total pipe > s_pass 190664 72.30 Total pass > s_fetch 284103 107.74 Total fetch > s_hdrbytes 114236150 43320.50 Total header bytes > s_bodybytes 355198316 134697.88 Total body bytes > sess_closed 316740 120.11 Session Closed > sess_pipeline 0 0.00 Session Pipeline > sess_readahead 0 0.00 Session Read Ahead > sess_linger 0 0.00 Session Linger > sess_herd 33 0.01 Session herd > shm_records 27534992 10441.79 SHM records > shm_writes 1555265 589.79 SHM writes > shm_flushes 0 0.00 SHM flushes due to overflow > shm_cont 1689 0.64 SHM MTX contention > shm_cycles 12 0.00 SHM cycles through buffer > sm_nreq 0 0.00 allocator requests > sm_nobj 0 . outstanding allocations > sm_balloc 0 . bytes allocated > sm_bfree 0 . bytes free > sma_nreq 379783 144.02 SMA allocator requests > sma_nobj 181121 . SMA outstanding allocations > sma_nbytes 1073735584 . SMA outstanding bytes > sma_balloc 1488895305 . SMA bytes allocated > sma_bfree 415159721 . SMA bytes free > sms_nreq 268 0.10 SMS allocator requests > sms_nobj 0 . SMS outstanding allocations > sms_nbytes 0 . SMS outstanding bytes > sms_balloc 156684 . SMS bytes allocated > sms_bfree 156684 . SMS bytes freed > backend_req 284202 107.77 Backend requests made > n_vcl 1 0.00 N vcl total > n_vcl_avail 1 0.00 N vcl available > n_vcl_discard 0 0.00 N vcl discarded > n_purge 1 . 
N total active purges
> n_purge_add                1         0.00 N new purges added
> n_purge_retire             0         0.00 N old purges deleted
> n_purge_obj_test           0         0.00 N objects tested
> n_purge_re_test            0         0.00 N regexps tested against
> n_purge_dups               0         0.00 N duplicate purges removed
> hcb_nolock                 0         0.00 HCB Lookups without lock
> hcb_lock                   0         0.00 HCB Lookups with lock
> hcb_insert                 0         0.00 HCB Inserts
> esi_parse                  0         0.00 Objects ESI parsed (unlock)
> esi_errors                 0         0.00 ESI parse errors (unlock)
> ==========================================================================
> ==========================================================================
>
> As you can see I have now 200 worker threads.
> Still it's using 1.8G and is still increasing (~1 to 5 MB/s).
>
> -----Oorspronkelijk bericht-----
> Van: Ken Brownfield [mailto:kb+varnish at slide.com]
> Verzonden: donderdag 5 november 2009 1:18
> Aan: Henry Paulissen
> CC: varnish-misc at projects.linpro.no
> Onderwerp: Re: Varnish virtual memory usage
>
> Hmm, well the memory adds up to a 1.5G -s option (can you confirm what
> you use with -s?) and memory required to run the number of threads
> you're running. Unless your -s is drastically smaller than 1.5GB, the
> pmap you sent is of a normal, non-leaking process.
>
> Ken
>
> On Nov 4, 2009, at 3:48 PM, Henry Paulissen wrote:
>
>> Our load balancer transforms all connections from keep-alive to
>> close.
>> So keep-alive connections aren't the issue here.
>>
>> Also, if I limit the thread count I still see the same behavior.
>> >> -----Oorspronkelijk bericht----- >> Van: Ken Brownfield [mailto:kb at slide.com] >> Verzonden: donderdag 5 november 2009 0:31 >> Aan: Henry Paulissen >> CC: varnish-misc at projects.linpro.no >> Onderwerp: Re: Varnish virtual memory usage >> >> Looks like varnish is allocating ~1.5GB of RAM for pure cache (which >> may roughly match your "-s file" option) but 1,610 threads with your >> 1MB stack limit will use 1.7GB of RAM. Pmap is reporting the >> footprint of this instance as roughly 3.6GB, and I'm assuming top/ps >> agree with that number. >> >> Unless your "-s file" option is significantly less than 1-1.5GB, the >> sheer thread count explains your memory usage: maybe using a >> stacksize >> of 512K or 256K could help, and/or disable keepalives on the client >> side? >> >> Also, if you happen to be using a load balancer, TCP Buffering >> (NetScaler) or Proxy Buffering? (BigIP) or the like can drastically >> reduce the thread count (and they can handle the persistent >> keepalives >> as well). >> >> But IMHO, an event-based (for example) handler for "idle" or "slow" >> threads is probably the next important feature, just below >> persistence. Without something like TCP buffering, the memory >> available for actual caching is dwarfed by the thread stacksize alloc >> overhead. >> >> Ken >> >> On Nov 4, 2009, at 3:18 PM, Henry Paulissen wrote: >> >>> I attached the memory dump. >>> >>> Child processes count gives me 1610 processes (on this instance). >>> Currently the server isn?t so busy (~175 requests / sec). >>> >>> Varnishstat -1: >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> ==================================================================== >>> ====== >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> ==================================================================== >>> ====== >>> uptime 3090 . 
Child uptime >>> client_conn 435325 140.88 Client connections >>> accepted >>> client_drop 0 0.00 Connection dropped, no >>> sess >>> client_req 435294 140.87 Client requests received >>> cache_hit 45740 14.80 Cache hits >>> cache_hitpass 0 0.00 Cache hits for pass >>> cache_miss 126445 40.92 Cache misses >>> backend_conn 355277 114.98 Backend conn. success >>> backend_unhealthy 0 0.00 Backend conn. not >>> attempted >>> backend_busy 0 0.00 Backend conn. too many >>> backend_fail 0 0.00 Backend conn. failures >>> backend_reuse 34331 11.11 Backend conn. reuses >>> backend_toolate 690 0.22 Backend conn. was closed >>> backend_recycle 35021 11.33 Backend conn. recycles >>> backend_unused 0 0.00 Backend conn. unused >>> fetch_head 0 0.00 Fetch head >>> fetch_length 384525 124.44 Fetch with Length >>> fetch_chunked 2441 0.79 Fetch chunked >>> fetch_eof 0 0.00 Fetch EOF >>> fetch_bad 0 0.00 Fetch had bad headers >>> fetch_close 2028 0.66 Fetch wanted close >>> fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed >>> fetch_zero 0 0.00 Fetch zero len >>> fetch_failed 0 0.00 Fetch failed >>> n_sess_mem 989 . N struct sess_mem >>> n_sess 94 . N struct sess >>> n_object 89296 . N struct object >>> n_vampireobject 0 . N unresurrected objects >>> n_objectcore 89640 . N struct objectcore >>> n_objecthead 25379 . N struct objecthead >>> n_smf 0 . N struct smf >>> n_smf_frag 0 . N small free smf >>> n_smf_large 0 . N large free smf >>> n_vbe_conn 26 . N struct vbe_conn >>> n_wrk 1600 . N worker threads >>> n_wrk_create 1600 0.52 N worker threads created >>> n_wrk_failed 0 0.00 N worker threads not >>> created >>> n_wrk_max 1274 0.41 N worker threads limited >>> n_wrk_queue 0 0.00 N queued work requests >>> n_wrk_overflow 1342 0.43 N overflowed work >>> requests >>> n_wrk_drop 0 0.00 N dropped work requests >>> n_backend 5 . N backends >>> n_expired 1393 . N expired objects >>> n_lru_nuked 35678 . N LRU nuked objects >>> n_lru_saved 0 . N LRU saved objects >>> n_lru_moved 20020 . 
N LRU moved objects >>> n_deathrow 0 . N objects on deathrow >>> losthdr 11 0.00 HTTP header overflows >>> n_objsendfile 0 0.00 Objects sent with >>> sendfile >>> n_objwrite 433558 140.31 Objects sent with write >>> n_objoverflow 0 0.00 Objects overflowing >>> workspace >>> s_sess 435298 140.87 Total Sessions >>> s_req 435294 140.87 Total Requests >>> s_pipe 0 0.00 Total pipe >>> s_pass 263190 85.17 Total pass >>> s_fetch 388994 125.89 Total fetch >>> s_hdrbytes 157405143 50940.18 Total header bytes >>> s_bodybytes 533077018 172516.83 Total body bytes >>> sess_closed 435291 140.87 Session Closed >>> sess_pipeline 0 0.00 Session Pipeline >>> sess_readahead 0 0.00 Session Read Ahead >>> sess_linger 0 0.00 Session Linger >>> sess_herd 69 0.02 Session herd >>> shm_records 37936743 12277.26 SHM records >>> shm_writes 2141029 692.89 SHM writes >>> shm_flushes 0 0.00 SHM flushes due to >>> overflow >>> shm_cont 3956 1.28 SHM MTX contention >>> shm_cycles 16 0.01 SHM cycles through buffer >>> sm_nreq 0 0.00 allocator requests >>> sm_nobj 0 . outstanding allocations >>> sm_balloc 0 . bytes allocated >>> sm_bfree 0 . bytes free >>> sma_nreq 550879 178.28 SMA allocator requests >>> sma_nobj 178590 . SMA outstanding >>> allocations >>> sma_nbytes 1073690180 . SMA outstanding bytes >>> sma_balloc 2066782844 . SMA bytes allocated >>> sma_bfree 993092664 . SMA bytes free >>> sms_nreq 649 0.21 SMS allocator requests >>> sms_nobj 0 . SMS outstanding >>> allocations >>> sms_nbytes 0 . SMS outstanding bytes >>> sms_balloc 378848 . SMS bytes allocated >>> sms_bfree 378848 . SMS bytes freed >>> backend_req 389342 126.00 Backend requests made >>> n_vcl 1 0.00 N vcl total >>> n_vcl_avail 1 0.00 N vcl available >>> n_vcl_discard 0 0.00 N vcl discarded >>> n_purge 1 . 
N total active purges >>> n_purge_add 1 0.00 N new purges added >>> n_purge_retire 0 0.00 N old purges deleted >>> n_purge_obj_test 0 0.00 N objects tested >>> n_purge_re_test 0 0.00 N regexps tested against >>> n_purge_dups 0 0.00 N duplicate purges removed >>> hcb_nolock 0 0.00 HCB Lookups without lock >>> hcb_lock 0 0.00 HCB Lookups with lock >>> hcb_insert 0 0.00 HCB Inserts >>> esi_parse 0 0.00 Objects ESI parsed (unlock) >>> esi_errors 0 0.00 ESI parse errors (unlock) >>> ========================================================================== >>> >>> >>> -----Oorspronkelijk bericht----- >>> Van: Ken Brownfield [mailto:kb at slide.com] >>> Verzonden: donderdag 5 november 2009 0:01 >>> Aan: Henry Paulissen >>> CC: Rogério Schneider >>> Onderwerp: Re: Varnish virtual memory usage >>> >>> Curious: For a heavily leaked varnish instance, can you run "pmap -x >>> PID" on the parent PID and child PID, and record how many threads are >>> active (something like 'ps -efT | grep varnish | wc -l')? Might help >>> isolate the RAM usage. >>> >>> Sorry if you have done this already; didn't find it in my email >>> archive. >>> >>> Ken >>> >>> On Nov 4, 2009, at 2:53 PM, Henry Paulissen wrote: >>> >>>> No, varnishd still uses way more than allowed. >>>> The only solutions I found at the moment are: >>>> >>>> Run on x64 linux and restart varnish every 4 hours (crontab). >>>> Run on x32 linux (all is working as expected but you can't allocate >>>> more than 4G each instance). >>>> >>>> I hope linpro will find this issue and address it. >>>> >>>> Again @ linpro: if you need a machine (with live traffic) to run >>>> some tests, please contact me. 
>>>> We have multiple machines in high availability, so testing and >>>> rebooting an instance wouldn't hurt us. >>>> >>>> Regards. >>>> >>>> -----Oorspronkelijk bericht----- >>>> Van: Rogério Schneider [mailto:stockrt at gmail.com] >>>> Verzonden: woensdag 4 november 2009 22:04 >>>> Aan: Henry Paulissen >>>> CC: Scott Wilson; varnish-misc at projects.linpro.no >>>> Onderwerp: Re: Varnish virtual memory usage >>>> >>>> On Thu, Oct 22, 2009 at 6:04 AM, Henry Paulissen >>>> >>>> wrote: >>>>> I will report back. >>>> >>>> Did this solve the problem? >>>> >>>> Removing this? >>>> >>>>>> if (req.http.Cache-Control == "no-cache" || req.http.Pragma == >>>> "no-cache") { >>>>>> purge_url(req.url); >>>>>> } >>>>>> >>>> >>>> Cheers >>>> >>>> Att, >>>> -- >>>> Rogério Schneider >>>> >>>> MSN: stockrt at hotmail.com >>>> GTalk: stockrt at gmail.com >>>> Skype: stockrt >>>> http://stockrt.github.com >>>> >>>> _______________________________________________ >>>> varnish-misc mailing list >>>> varnish-misc at projects.linpro.no >>>> http://projects.linpro.no/mailman/listinfo/varnish-misc >>> >> > From kb+varnish at slide.com Thu Nov 5 01:16:05 2009 From: kb+varnish at slide.com (Ken Brownfield) Date: Wed, 4 Nov 2009 17:16:05 -0800 Subject: Varnish virtual memory usage In-Reply-To: <003c01ca5db1$9755af10$c6010d30$@paulissen@qbell.nl> References: <-1252439508831938500@unknownmsgid> <2e63e54c0910212352r1c822bfekcad8914f79b65d1e@mail.gmail.com> <5405273812888663297@unknownmsgid> <100657c90911041304q544adbb4jd061337fbb668a1@mail.gmail.com> <002b01ca5da1$a3cd50a0$eb67f1e0$@paulissen@qbell.nl> <05C49245-14D5-45E2-858A-F89554650CC6@slide.com> <002d01ca5da5$14010a80$3c031f80$@paulissen@qbell.nl> <003201ca5da9$57ae7e30$070b7a90$@paulissen@qbell.nl> <8FD99932-7033-44AF-AE09-DBB0A5C48026@slide.com> <003301ca5db0$243b1700$6cb14500$@paulissen@qbell.nl> <36F57DAE-82B4-4986-BEB0-E13AA106386F@slide.com> <003c01ca5db1$9755af10$c6010d30$@paulissen@qbell.nl> Message-ID: Ah, sorry, missed 
that the command-line was in there. Given 1G of cache, a large sess_workspace and shm_workspace buffers, and the number of threads, the math adds up correctly. Do you definitely need those large buffers? Your memory footprint will simply increase with thread count; reducing active simultaneous connections, reducing the stack size, and reducing the large sess_workspace are the only ways I know of for you to control the memory. I'm really not seeing a leak or malfunction, IMHO. The reason behind your high/growing worker count is worth investigating (lower send_timeout? slow/disconnecting clients? strace the threads to see what they're doing?) Minor thing: overflow_max is a percentage, so 10000 is probably ignored? -- Ken On Nov 4, 2009, at 4:47 PM, Henry Paulissen wrote: > See the pmap.txt attachment. > The startup command is in the beginning of the file. > > > /usr/local/varnish/sbin/varnishd -P /var/run/xxx.pid -a 0.0.0.0:xxx -f > /usr/local/varnish/etc/varnish/xxx.xxx.xxx.vcl -T 0.0.0.0:xxx -s > malloc,1G > -i xxx -n /usr/local/varnish/var/varnish/xxx -p obj_workspace 8192 -p > sess_workspace 262144 -p listen_depth 8192 -p lru_interval 60 -p > sess_timeout 10 -p shm_workspace 32768 -p ping_interval 2 -p > thread_pools 4 > -p thread_pool_min 50 -p thread_pool_max 4000 -p esi_syntax 1 -p > overflow_max 10000 > > > -----Oorspronkelijk bericht----- > Van: Ken Brownfield [mailto:kb+varnish at slide.com] > Verzonden: donderdag 5 november 2009 1:42 > Aan: Henry Paulissen > Onderwerp: Re: Varnish virtual memory usage > > Is your -s set at 1.5GB? What's your varnishd command line? > > I'm not sure if you realize that thread_pool does not control the > number of threads, only the number of pools (and mutexes). I think > thread_pool_max is what you're looking for? > -- > Ken > > On Nov 4, 2009, at 4:37 PM, Henry Paulissen wrote: > >> Running varnishd now for about 30 minutes with a thread_pool of 4. 
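[Editorial note: Ken's arithmetic above (cache size plus threads times per-thread stack and workspace) can be sketched as a quick shell calculation. The input figures are the ones quoted in this thread; they are illustrative, not a measurement.]

```shell
# Rough varnishd footprint estimate, following Ken's reasoning above.
# Inputs are the figures quoted in this thread (adjust to your setup).
cache_mb=1024        # -s malloc,1G
threads=1610         # e.g. from: ps -efT | grep varnishd | wc -l
stack_kb=1024        # default 1MB per-thread stack (ulimit -s)
sess_ws_kb=256       # -p sess_workspace 262144

per_thread_kb=$(( stack_kb + sess_ws_kb ))
total_mb=$(( cache_mb + threads * per_thread_kb / 1024 ))
echo "estimated footprint: ${total_mb} MB"
```

With 1,610 threads this lands around 3GB, in the same ballpark as the ~3.6GB pmap total Ken mentions (the remainder being shm_workspace, malloc overhead and shared mappings).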
>> >> = >> = >> = >> = >> = >> = >> = >> ===================================================================== >> ======================== >> = >> = >> = >> = >> = >> = >> = >> ===================================================================== >> ======================== >> uptime 2637 . Child uptime >> client_conn 316759 120.12 Client connections >> accepted >> client_drop 0 0.00 Connection dropped, no >> sess >> client_req 316738 120.11 Client requests received >> cache_hit 32477 12.32 Cache hits >> cache_hitpass 0 0.00 Cache hits for pass >> cache_miss 93703 35.53 Cache misses >> backend_conn 261033 98.99 Backend conn. success >> backend_unhealthy 0 0.00 Backend conn. not >> attempted >> backend_busy 0 0.00 Backend conn. too many >> backend_fail 0 0.00 Backend conn. failures >> backend_reuse 23305 8.84 Backend conn. reuses >> backend_toolate 528 0.20 Backend conn. was closed >> backend_recycle 23833 9.04 Backend conn. recycles >> backend_unused 0 0.00 Backend conn. unused >> fetch_head 0 0.00 Fetch head >> fetch_length 280973 106.55 Fetch with Length >> fetch_chunked 1801 0.68 Fetch chunked >> fetch_eof 0 0.00 Fetch EOF >> fetch_bad 0 0.00 Fetch had bad headers >> fetch_close 1329 0.50 Fetch wanted close >> fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed >> fetch_zero 0 0.00 Fetch zero len >> fetch_failed 0 0.00 Fetch failed >> n_sess_mem 284 . N struct sess_mem >> n_sess 35 . N struct sess >> n_object 90560 . N struct object >> n_vampireobject 0 . N unresurrected objects >> n_objectcore 90616 . N struct objectcore >> n_objecthead 25146 . N struct objecthead >> n_smf 0 . N struct smf >> n_smf_frag 0 . N small free smf >> n_smf_large 0 . N large free smf >> n_vbe_conn 10 . N struct vbe_conn >> n_wrk 200 . 
N worker threads >> n_wrk_create 248 0.09 N worker threads created >> n_wrk_failed 0 0.00 N worker threads not >> created >> n_wrk_max 100988 38.30 N worker threads limited >> n_wrk_queue 0 0.00 N queued work requests >> n_wrk_overflow 630 0.24 N overflowed work requests >> n_wrk_drop 0 0.00 N dropped work requests >> n_backend 5 . N backends >> n_expired 1027 . N expired objects >> n_lru_nuked 2108 . N LRU nuked objects >> n_lru_saved 0 . N LRU saved objects >> n_lru_moved 12558 . N LRU moved objects >> n_deathrow 0 . N objects on deathrow >> losthdr 5 0.00 HTTP header overflows >> n_objsendfile 0 0.00 Objects sent with sendfile >> n_objwrite 315222 119.54 Objects sent with write >> n_objoverflow 0 0.00 Objects overflowing >> workspace >> s_sess 316740 120.11 Total Sessions >> s_req 316738 120.11 Total Requests >> s_pipe 0 0.00 Total pipe >> s_pass 190664 72.30 Total pass >> s_fetch 284103 107.74 Total fetch >> s_hdrbytes 114236150 43320.50 Total header bytes >> s_bodybytes 355198316 134697.88 Total body bytes >> sess_closed 316740 120.11 Session Closed >> sess_pipeline 0 0.00 Session Pipeline >> sess_readahead 0 0.00 Session Read Ahead >> sess_linger 0 0.00 Session Linger >> sess_herd 33 0.01 Session herd >> shm_records 27534992 10441.79 SHM records >> shm_writes 1555265 589.79 SHM writes >> shm_flushes 0 0.00 SHM flushes due to >> overflow >> shm_cont 1689 0.64 SHM MTX contention >> shm_cycles 12 0.00 SHM cycles through buffer >> sm_nreq 0 0.00 allocator requests >> sm_nobj 0 . outstanding allocations >> sm_balloc 0 . bytes allocated >> sm_bfree 0 . bytes free >> sma_nreq 379783 144.02 SMA allocator requests >> sma_nobj 181121 . SMA outstanding >> allocations >> sma_nbytes 1073735584 . SMA outstanding bytes >> sma_balloc 1488895305 . SMA bytes allocated >> sma_bfree 415159721 . SMA bytes free >> sms_nreq 268 0.10 SMS allocator requests >> sms_nobj 0 . SMS outstanding >> allocations >> sms_nbytes 0 . SMS outstanding bytes >> sms_balloc 156684 . 
SMS bytes allocated >> sms_bfree 156684 . SMS bytes freed >> backend_req 284202 107.77 Backend requests made >> n_vcl 1 0.00 N vcl total >> n_vcl_avail 1 0.00 N vcl available >> n_vcl_discard 0 0.00 N vcl discarded >> n_purge 1 . N total active purges >> n_purge_add 1 0.00 N new purges added >> n_purge_retire 0 0.00 N old purges deleted >> n_purge_obj_test 0 0.00 N objects tested >> n_purge_re_test 0 0.00 N regexps tested against >> n_purge_dups 0 0.00 N duplicate purges removed >> hcb_nolock 0 0.00 HCB Lookups without lock >> hcb_lock 0 0.00 HCB Lookups with lock >> hcb_insert 0 0.00 HCB Inserts >> esi_parse 0 0.00 Objects ESI parsed (unlock) >> esi_errors 0 0.00 ESI parse errors (unlock) >> ========================================================================== >> >> As you can see I now have 200 worker threads. >> Still it's using 1.8G and still increasing (~1 to 5 MB/s) >> >> -----Oorspronkelijk bericht----- >> Van: Ken Brownfield [mailto:kb+varnish at slide.com] >> Verzonden: donderdag 5 november 2009 1:18 >> Aan: Henry Paulissen >> CC: varnish-misc at projects.linpro.no >> Onderwerp: Re: Varnish virtual memory usage >> >> Hmm, well the memory adds up to a 1.5G -s option (can you confirm what >> you use with -s?) and memory required to run the number of threads >> you're running. Unless your -s is drastically smaller than 1.5GB, the >> pmap you sent is of a normal, non-leaking process. >> >> Ken >> >> On Nov 4, 2009, at 3:48 PM, Henry Paulissen wrote: >> >>> Our load balancer transforms all connections from keep-alive to >>> close. >>> So keep-alive connections aren't the issue here. >>> >>> Also, if I limit the thread count I still see the same behavior. 
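[Editorial note: the diagnostics Ken suggests earlier in the thread (count the threads, check the memory map with pmap) can be wrapped in a small sketch. Linux `/proc` and a procps-style `ps` are assumed; the PID shown is a placeholder for the varnishd child PID.]

```shell
# Sketch of the diagnostics suggested in the thread: per-PID thread
# count and resident memory. Works for any PID on Linux.
thread_count() { ps -o nlwp= -p "$1" | tr -d ' '; }
rss_kb()       { awk '/^VmRSS:/ {print $2}' "/proc/$1/status"; }

pid=$$   # substitute the varnishd child PID here
echo "pid ${pid}: $(thread_count "$pid") threads, $(rss_kb "$pid") KB resident"
# pmap -x "$pid" | tail -1   # total mapped memory, as in Ken's email
```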
From h.paulissen at qbell.nl Thu Nov 5 07:34:46 2009 From: h.paulissen at qbell.nl (Henry Paulissen) Date: Thu, 5 Nov 2009 08:34:46 +0100 Subject: Varnish virtual memory usage In-Reply-To: References: <-1252439508831938500@unknownmsgid> <2e63e54c0910212352r1c822bfekcad8914f79b65d1e@mail.gmail.com> <5405273812888663297@unknownmsgid> <100657c90911041304q544adbb4jd061337fbb668a1@mail.gmail.com> <002b01ca5da1$a3cd50a0$eb67f1e0$@paulissen@qbell.nl> <05C49245-14D5-45E2-858A-F89554650CC6@slide.com> <002d01ca5da5$14010a80$3c031f80$@paulissen@qbell.nl> <003201ca5da9$57ae7e30$070b7a90$@paulissen@qbell.nl> <8FD99932-7033-44AF-AE09-DBB0A5C48026@slide.com> <003301ca5db0$243b1700$6cb14500$@paulissen@qbell.nl> <36F57DAE-82B4-4986-BEB0-E13AA106386F@slide.com> <003c01ca5db1$9755af10$c6010d30$@paulissen@qbell.nl> Message-ID: 
<004101ca5dea$730f1860$592d4920$@paulissen@qbell.nl> Good morning :) What do you propose for sess_workspace and shm_workspace? In the beginning I didn't set these settings at all and was seeing the same issue. Is the default also set too high? Set them both to 8192 now. Will report back later today. overflow_max isn't ignored, as I see an overflow of 1000+ if I set the worker count to 1600. Regards. -----Oorspronkelijk bericht----- Van: Ken Brownfield [mailto:kb+varnish at slide.com] Verzonden: donderdag 5 november 2009 2:16 Aan: Henry Paulissen CC: varnish-misc at projects.linpro.no Onderwerp: Re: Varnish virtual memory usage Ah, sorry, missed that the command-line was in there. Given 1G of cache, a large sess_workspace and shm_workspace buffers, and the number of threads, the math adds up correctly. Do you definitely need those large buffers? Your memory footprint will simply increase with thread count; reducing active simultaneous connections, reducing the stack size, and reducing the large sess_workspace are the only ways I know of for you to control the memory. I'm really not seeing a leak or malfunction, IMHO. The reason behind your high/growing worker count is worth investigating (lower send_timeout? slow/disconnecting clients? strace the threads to see what they're doing?) Minor thing: overflow_max is a percentage, so 10000 is probably ignored? -- Ken On Nov 4, 2009, at 4:47 PM, Henry Paulissen wrote: > See the pmap.txt attachment. > The startup command is in the beginning of the file. 
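[Editorial note: Ken's suggestions in this exchange (smaller per-thread stacks, modest workspaces, overflow_max expressed as a percentage) could be combined into a startup line along these lines. Paths, ports and parameter values are illustrative placeholders, not a tested recommendation.]

```shell
# Illustrative varnishd startup per the advice in this thread.
# Paths/ports are placeholders; tune the values to your traffic.
ulimit -s 256                        # shrink per-thread stack from 1MB to 256KB
/usr/local/varnish/sbin/varnishd \
    -a 0.0.0.0:80 -T 127.0.0.1:6082 \
    -f /usr/local/varnish/etc/varnish/example.vcl \
    -s malloc,1G \
    -p thread_pools 4 -p thread_pool_min 50 -p thread_pool_max 1000 \
    -p sess_workspace 16384 -p shm_workspace 8192 \
    -p overflow_max 100              # a percentage of the thread limit
```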
N total active purges
>>>> n_purge_add 1 0.00 N new purges added
>>>> n_purge_retire 0 0.00 N old purges deleted
>>>> n_purge_obj_test 0 0.00 N objects tested
>>>> n_purge_re_test 0 0.00 N regexps tested against
>>>> n_purge_dups 0 0.00 N duplicate purges removed
>>>> hcb_nolock 0 0.00 HCB Lookups without lock
>>>> hcb_lock 0 0.00 HCB Lookups with lock
>>>> hcb_insert 0 0.00 HCB Inserts
>>>> esi_parse 0 0.00 Objects ESI parsed (unlock)
>>>> esi_errors 0 0.00 ESI parse errors (unlock)
>>>> =========================================================================
>>>> =========================================================================
>>>>
>>>> -----Original message-----
>>>> From: Ken Brownfield [mailto:kb at slide.com]
>>>> Sent: Thursday 5 November 2009 0:01
>>>> To: Henry Paulissen
>>>> CC: Rogério Schneider
>>>> Subject: Re: Varnish virtual memory usage
>>>>
>>>> Curious: For a heavily leaked varnish instance, can you run "pmap -x
>>>> PID" on the parent PID and child PID, and record how many threads are
>>>> active (something like 'ps -efT | grep varnish | wc -l')? Might help
>>>> isolate the RAM usage.
>>>>
>>>> Sorry if you have done this already; didn't find it in my email
>>>> archive.
>>>>
>>>> Ken
>>>>
>>>> On Nov 4, 2009, at 2:53 PM, Henry Paulissen wrote:
>>>>
>>>>> No, varnishd still uses way more than allowed.
>>>>> The only solutions I found at the moment are:
>>>>>
>>>>> Run on x64 linux and restart varnish every 4 hours (crontab).
>>>>> Run on x32 linux (all is working as expected, but you can't allocate
>>>>> more than 4G per instance).
>>>>>
>>>>> I hope linpro will find this issue and address it.
>>>>>
>>>>> Again @ linpro: if you need a machine (with live traffic) to run
>>>>> some tests, please contact me.
>>>>> We have multiple machines in high availability, so testing and
>>>>> rebooting an instance wouldn't hurt us.
>>>>>
>>>>> Regards.
>>>>>
>>>>> -----Original message-----
>>>>> From: Rogério Schneider [mailto:stockrt at gmail.com]
>>>>> Sent: Wednesday 4 November 2009 22:04
>>>>> To: Henry Paulissen
>>>>> CC: Scott Wilson; varnish-misc at projects.linpro.no
>>>>> Subject: Re: Varnish virtual memory usage
>>>>>
>>>>> On Thu, Oct 22, 2009 at 6:04 AM, Henry Paulissen
>>>>> wrote:
>>>>>> I will report back.
>>>>>
>>>>> Did this solve the problem?
>>>>>
>>>>> Removing this?
>>>>>
>>>>>>> if (req.http.Cache-Control == "no-cache" || req.http.Pragma == "no-cache") {
>>>>>>> purge_url(req.url);
>>>>>>> }
>>>>>
>>>>> Cheers
>>>>>
>>>>> Att,
>>>>> --
>>>>> Rogério Schneider
>>>>>
>>>>> MSN: stockrt at hotmail.com
>>>>> GTalk: stockrt at gmail.com
>>>>> Skype: stockrt
>>>>> http://stockrt.github.com
>>>>>
>>>>> _______________________________________________
>>>>> varnish-misc mailing list
>>>>> varnish-misc at projects.linpro.no
>>>>> http://projects.linpro.no/mailman/listinfo/varnish-misc
>>>>
>>>
>>
>
From v.bilek at 1art.cz Thu Nov 5 08:31:37 2009
From: v.bilek at 1art.cz (Václav Bílek)
Date: Thu, 05 Nov 2009 09:31:37 +0100
Subject: SV: Re: Varnish stuck on stresstest/approved by real traffic
In-Reply-To: References: Message-ID: <4AF28D69.2020506@1art.cz>

Mail for Exchange wrote:
> (sorry for bad quoting/name... Damned phone)
>
> Yes. The value depends on the size of the average object, but often values all the way up to 150ms make sense, possibly even more.

will try ...
> If this was/is the issue for you, you should see a significant
> improvement in performance when you are CPU-starved

on an 8-core server we do not get over 300% cpu usage

>, and on 2.0.4 and older you will often use noticeably fewer threads
> (threads should no longer grow considerably larger than actual concurrent
> requests, which is somewhat counter-intuitive)

Vaclav

> - Kristian
>
> --- original message ---
> From: Václav Bílek
> Subject: Re: Varnish stuck on stresstest/approved by real traffic
> Date: 5 November 2009
> Time: 07:39:48
>
> Kristian Lyngstol wrote:
>> (Excessive trimming ahead. Whoohoo)
>>
>> On Tue, Nov 03, 2009 at 11:51:22AM +0100, Václav Bílek wrote:
>>> When testing varnish throughput and scalability I have found strange
>>> varnish behavior.
>> What's the cpu load at that point?
> zero
>
>> Also: use sess_linger. No sess_linger == kaboom when things get too
>> loaded. It's 50ms default in 2.0.5/trunk, but set to 0ms in 2.0.4 and
>> previous.
>
> if I understand correctly, I should set a higher value than 0, right?
>
>> The behaviour in trunk is slightly different/better, but it's still worth
>> using it in 2.0.4.
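The sess_linger advice above can be applied either at startup or on a running instance. A sketch only, assuming a Varnish 2.0.x installation: the listen address, VCL path, storage size, and management port are illustrative, and the space-separated `-p param value` form matches the varnishd invocations seen elsewhere in this thread:

```shell
# Start varnishd with sess_linger raised to 50 ms (the 2.0.5/trunk default):
varnishd -a 0.0.0.0:80 -f /etc/varnish/default.vcl -s malloc,1G \
    -T 127.0.0.1:6082 -p sess_linger 50

# Or change it on a running instance via the management interface:
varnishadm -T 127.0.0.1:6082 param.set sess_linger 50
```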
>> >> - Kristian From h.paulissen at qbell.nl Thu Nov 5 10:31:19 2009 From: h.paulissen at qbell.nl (Henry Paulissen) Date: Thu, 5 Nov 2009 11:31:19 +0100 Subject: Varnish virtual memory usage In-Reply-To: References: <-1252439508831938500@unknownmsgid> <2e63e54c0910212352r1c822bfekcad8914f79b65d1e@mail.gmail.com> <5405273812888663297@unknownmsgid> <100657c90911041304q544adbb4jd061337fbb668a1@mail.gmail.com> <002b01ca5da1$a3cd50a0$eb67f1e0$@paulissen@qbell.nl> <05C49245-14D5-45E2-858A-F89554650CC6@slide.com> <002d01ca5da5$14010a80$3c031f80$@paulissen@qbell.nl> <003201ca5da9$57ae7e30$070b7a90$@paulissen@qbell.nl> <8FD99932-7033-44AF-AE09-DBB0A5C48026@slide.com> <003301ca5db0$243b1700$6cb14500$@paulissen@qbell.nl> <36F57DAE-82B4-4986-BEB0-E13AA106386F@slide.com> <003c01ca5db1$9755af10$c6010d30$@paulissen@qbell.nl> Message-ID: <000901ca5e03$1cc31420$56493c60$@paulissen@qbell.nl> Looks like setting the workspaces down isn?t doing the trick. 2.2G (top reports similar for virt memory) Uptime: 2.8 hours Malloc limit: 1G See pmap.txt for my startup command. See config.txt for the vcl of this instance. Note that xxx_backend servers are not the other varnish instances but real serving webservers (otherwise you'll create a cache loop :-)). Regards. ============================================================================ ================================== ============================================================================ ================================== uptime 10214 . Child uptime client_conn 858404 84.04 Client connections accepted client_drop 0 0.00 Connection dropped, no sess client_req 858391 84.04 Client requests received cache_hit 107268 10.50 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 248859 24.36 Cache misses backend_conn 684612 67.03 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 66598 6.52 Backend conn. 
reuses backend_toolate 1704 0.17 Backend conn. was closed backend_recycle 68302 6.69 Backend conn. recycles backend_unused 0 0.00 Backend conn. unused fetch_head 0 0.00 Fetch head fetch_length 742190 72.66 Fetch with Length fetch_chunked 4959 0.49 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 3579 0.35 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed n_sess_mem 194 . N struct sess_mem n_sess 50 . N struct sess n_object 90563 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 90621 . N struct objectcore n_objecthead 25964 . N struct objecthead n_smf 0 . N struct smf n_smf_frag 0 . N small free smf n_smf_large 0 . N large free smf n_vbe_conn 18446744073709551613 . N struct vbe_conn n_wrk 200 . N worker threads n_wrk_create 200 0.02 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 95621 9.36 N worker threads limited n_wrk_queue 0 0.00 N queued work requests n_wrk_overflow 266 0.03 N overflowed work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 5 . N backends n_expired 20389 . N expired objects n_lru_nuked 137897 . N LRU nuked objects n_lru_saved 0 . N LRU saved objects n_lru_moved 40803 . N LRU moved objects n_deathrow 0 . 
N objects on deathrow losthdr 15 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 855168 83.73 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 858402 84.04 Total Sessions s_req 858391 84.04 Total Requests s_pipe 0 0.00 Total pipe s_pass 502432 49.19 Total pass s_fetch 750728 73.50 Total fetch s_hdrbytes 307039348 30060.64 Total header bytes s_bodybytes 950744321 93082.47 Total body bytes sess_closed 858402 84.04 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 0 0.00 Session Linger sess_herd 10 0.00 Session herd shm_records 73693195 7214.92 SHM records shm_writes 4200749 411.27 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 3529 0.35 SHM MTX contention shm_cycles 31 0.00 SHM cycles through buffer sm_nreq 0 0.00 allocator requests sm_nobj 0 . outstanding allocations sm_balloc 0 . bytes allocated sm_bfree 0 . bytes free sma_nreq 1137287 111.35 SMA allocator requests sma_nobj 181121 . SMA outstanding allocations sma_nbytes 1073734798 . SMA outstanding bytes sma_balloc 3979904937 . SMA bytes allocated sma_bfree 2906170139 . SMA bytes free sms_nreq 577 0.06 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 337209 . SMS bytes allocated sms_bfree 337209 . SMS bytes freed backend_req 750979 73.52 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_purge 1 . 
N total active purges n_purge_add 1 0.00 N new purges added n_purge_retire 0 0.00 N old purges deleted n_purge_obj_test 0 0.00 N objects tested n_purge_re_test 0 0.00 N regexps tested against n_purge_dups 0 0.00 N duplicate purges removed hcb_nolock 0 0.00 HCB Lookups without lock hcb_lock 0 0.00 HCB Lookups with lock hcb_insert 0 0.00 HCB Inserts esi_parse 0 0.00 Objects ESI parsed (unlock) esi_errors 0 0.00 ESI parse errors (unlock) ============================================================================ ================================== ============================================================================ ================================== -----Original Message----- From: Ken Brownfield [mailto:kb+varnish at slide.com] Sent: donderdag 5 november 2009 2:16 To: Henry Paulissen Cc: varnish-misc at projects.linpro.no Subject: Re: Varnish virtual memory usage Ah, sorry, missed that the command-line was in there. Given 1G of cache, a large sess_workspace and shm_workspace buffers, and the number of threads, the math adds up correctly. Do you definitely need those large buffers? Your memory footprint will simply increase with thread count; reducing active simultaneous connections, reducing the stack size, and reducing the large sess_workspace are the only ways I know of for you to control the memory. I'm really not seeing a leak or malfunction, IMHO. The reason behind your high/growing worker count is worth investigating (lower send_timeout? slow/disconnecting clients? strace the threads to see what they're doing?) Minor thing: overflow_max is a percentage, so 10000 is probably ignored? -- Ken On Nov 4, 2009, at 4:47 PM, Henry Paulissen wrote: > See the pmap.txt attachment. > The startup command is in the beginning of the file. 
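Ken's "the math adds up" point is easy to reproduce with shell arithmetic. A back-of-envelope sketch; the figures are the ones quoted in this thread (1610 worker threads, the 1 MB default stack, roughly 1.5 GB of -s storage), and the result is only a rough lower bound since it ignores shared libraries, sess_mem, and workspaces:

```shell
# Rough varnishd footprint: cache arena plus per-thread stacks.
threads=1610      # worker thread count (n_wrk in varnishstat)
stack_kb=1024     # default 1 MB thread stack (ulimit -s)
cache_mb=1536     # -s file/malloc size, ~1.5 GB
thread_mb=$(( threads * stack_kb / 1024 ))
total_mb=$(( thread_mb + cache_mb ))
echo "stacks=${thread_mb}MB cache=${cache_mb}MB total=~${total_mb}MB"
```

This prints `stacks=1610MB cache=1536MB total=~3146MB`, i.e. roughly 3.1 GB before any other overhead, which is in the same ballpark as the ~3.6 GB pmap figure discussed earlier in the thread.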
> > > /usr/local/varnish/sbin/varnishd -P /var/run/xxx.pid -a 0.0.0.0:xxx -f > /usr/local/varnish/etc/varnish/xxx.xxx.xxx.vcl -T 0.0.0.0:xxx -s > malloc,1G > -i xxx -n /usr/local/varnish/var/varnish/xxx -p obj_workspace 8192 -p > sess_workspace 262144 -p listen_depth 8192 -p lru_interval 60 -p > sess_timeout 10 -p shm_workspace 32768 -p ping_interval 2 -p > thread_pools 4 > -p thread_pool_min 50 -p thread_pool_max 4000 -p esi_syntax 1 -p > overflow_max 10000 > > > -----Oorspronkelijk bericht----- > Van: Ken Brownfield [mailto:kb+varnish at slide.com] > Verzonden: donderdag 5 november 2009 1:42 > Aan: Henry Paulissen > Onderwerp: Re: Varnish virtual memory usage > > Is your -s set at 1.5GB? What's your varnishd command line? > > I'm not sure if you realize that thread_pool does not control the > number of threads, only the number of pools (and mutexes). I think > thread_pool_max is what you're looking for? > -- > Ken > > On Nov 4, 2009, at 4:37 PM, Henry Paulissen wrote: > >> Running varnishd now for abount 30 minutes with a thread_pool of 4. >> >> = >> = >> = >> = >> = >> = >> = >> ===================================================================== >> ======================== >> = >> = >> = >> = >> = >> = >> = >> ===================================================================== >> ======================== >> uptime 2637 . Child uptime >> client_conn 316759 120.12 Client connections >> accepted >> client_drop 0 0.00 Connection dropped, no >> sess >> client_req 316738 120.11 Client requests received >> cache_hit 32477 12.32 Cache hits >> cache_hitpass 0 0.00 Cache hits for pass >> cache_miss 93703 35.53 Cache misses >> backend_conn 261033 98.99 Backend conn. success >> backend_unhealthy 0 0.00 Backend conn. not >> attempted >> backend_busy 0 0.00 Backend conn. too many >> backend_fail 0 0.00 Backend conn. failures >> backend_reuse 23305 8.84 Backend conn. reuses >> backend_toolate 528 0.20 Backend conn. was closed >> backend_recycle 23833 9.04 Backend conn. 
recycles >> backend_unused 0 0.00 Backend conn. unused >> fetch_head 0 0.00 Fetch head >> fetch_length 280973 106.55 Fetch with Length >> fetch_chunked 1801 0.68 Fetch chunked >> fetch_eof 0 0.00 Fetch EOF >> fetch_bad 0 0.00 Fetch had bad headers >> fetch_close 1329 0.50 Fetch wanted close >> fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed >> fetch_zero 0 0.00 Fetch zero len >> fetch_failed 0 0.00 Fetch failed >> n_sess_mem 284 . N struct sess_mem >> n_sess 35 . N struct sess >> n_object 90560 . N struct object >> n_vampireobject 0 . N unresurrected objects >> n_objectcore 90616 . N struct objectcore >> n_objecthead 25146 . N struct objecthead >> n_smf 0 . N struct smf >> n_smf_frag 0 . N small free smf >> n_smf_large 0 . N large free smf >> n_vbe_conn 10 . N struct vbe_conn >> n_wrk 200 . N worker threads >> n_wrk_create 248 0.09 N worker threads created >> n_wrk_failed 0 0.00 N worker threads not >> created >> n_wrk_max 100988 38.30 N worker threads limited >> n_wrk_queue 0 0.00 N queued work requests >> n_wrk_overflow 630 0.24 N overflowed work requests >> n_wrk_drop 0 0.00 N dropped work requests >> n_backend 5 . N backends >> n_expired 1027 . N expired objects >> n_lru_nuked 2108 . N LRU nuked objects >> n_lru_saved 0 . N LRU saved objects >> n_lru_moved 12558 . N LRU moved objects >> n_deathrow 0 . 
N objects on deathrow >> losthdr 5 0.00 HTTP header overflows >> n_objsendfile 0 0.00 Objects sent with sendfile >> n_objwrite 315222 119.54 Objects sent with write >> n_objoverflow 0 0.00 Objects overflowing >> workspace >> s_sess 316740 120.11 Total Sessions >> s_req 316738 120.11 Total Requests >> s_pipe 0 0.00 Total pipe >> s_pass 190664 72.30 Total pass >> s_fetch 284103 107.74 Total fetch >> s_hdrbytes 114236150 43320.50 Total header bytes >> s_bodybytes 355198316 134697.88 Total body bytes >> sess_closed 316740 120.11 Session Closed >> sess_pipeline 0 0.00 Session Pipeline >> sess_readahead 0 0.00 Session Read Ahead >> sess_linger 0 0.00 Session Linger >> sess_herd 33 0.01 Session herd >> shm_records 27534992 10441.79 SHM records >> shm_writes 1555265 589.79 SHM writes >> shm_flushes 0 0.00 SHM flushes due to >> overflow >> shm_cont 1689 0.64 SHM MTX contention >> shm_cycles 12 0.00 SHM cycles through buffer >> sm_nreq 0 0.00 allocator requests >> sm_nobj 0 . outstanding allocations >> sm_balloc 0 . bytes allocated >> sm_bfree 0 . bytes free >> sma_nreq 379783 144.02 SMA allocator requests >> sma_nobj 181121 . SMA outstanding >> allocations >> sma_nbytes 1073735584 . SMA outstanding bytes >> sma_balloc 1488895305 . SMA bytes allocated >> sma_bfree 415159721 . SMA bytes free >> sms_nreq 268 0.10 SMS allocator requests >> sms_nobj 0 . SMS outstanding >> allocations >> sms_nbytes 0 . SMS outstanding bytes >> sms_balloc 156684 . SMS bytes allocated >> sms_bfree 156684 . SMS bytes freed >> backend_req 284202 107.77 Backend requests made >> n_vcl 1 0.00 N vcl total >> n_vcl_avail 1 0.00 N vcl available >> n_vcl_discard 0 0.00 N vcl discarded >> n_purge 1 . 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: pmap.txt
URL: 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: config.txt
URL: 

From stockrt at gmail.com Thu Nov 5 14:07:36 2009
From: stockrt at gmail.com (Rogério Schneider)
Date: Thu, 5 Nov 2009 12:07:36 -0200
Subject: SV: Re: Varnish stuck on stresstest/approved by real traffic
In-Reply-To: <4AF28D69.2020506@1art.cz>
References: <4AF28D69.2020506@1art.cz>
Message-ID: <100657c90911050607o6eebf7b2mc8d7715fcf13d22a@mail.gmail.com>

2009/11/5 Václav Bílek :
>>, and on 2.0.4 and older you will often use noticeably fewer threads (threads should no longer grow considerably larger than actual concurrent requests, which is somewhat counter-intuitive)

I would like to say, about sess_linger, that when I tested it, everything went fine until I put synthetic load on the server and then could not reach it from another client (a browser), while the synthetic load was getting all of its requests served.

I tried something like:

thread_pool_min = 20
thread_pool_max = 200
thread_pools = 4

Number of synthetic concurrent clients: 80

All of the 4*20 threads, 20 from each thread pool, were occupied "lingering", and no new thread woke up to serve the new connection from my browser. I was using a 50ms linger, and the synthetic load could certainly keep the interval between requests below that.

Also, I believe that adding an 81st concurrent client (a much, much faster client, because the synthetic load runs in the same network with virtually no lag and no bandwidth limits) could start showing the same problem in the load test, due to the lack of a new thread, which should have been started on demand: the same problem I faced with my browser.

I think in real life linger should be fine, but I only use linger in production with "preforked" threads, as in:

thread_pool_min = thread_pool_max

and thread_pool_add_delay=1 to force varnish to start serving as soon as possible when we restart it.
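The "preforked" setup described above can be written out as a varnishd invocation. A sketch only: the listen address, VCL path, storage size, and pool/thread counts are illustrative, and the space-separated `-p param value` form matches the invocations seen elsewhere in this thread:

```shell
# Prefork all workers: min == max so no threads are created on demand,
# and a 1 ms add delay so the pools fill as quickly as possible on start.
varnishd -a 0.0.0.0:80 -f /etc/varnish/default.vcl -s malloc,1G \
    -p thread_pools 4 \
    -p thread_pool_min 200 \
    -p thread_pool_max 200 \
    -p thread_pool_add_delay 1
```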
Leaving threads configured to be created on demand with sess_linger, at
least on 2.0.4, can I think cause you trouble if you have very fast
clients and bursts/peaks of requests.

Regards,
--
Rogério Schneider

MSN: stockrt at hotmail.com
GTalk: stockrt at gmail.com
Skype: stockrt
http://stockrt.github.com

From ccripy at gmail.com Thu Nov 5 20:22:04 2009
From: ccripy at gmail.com (cripy)
Date: Thu, 5 Nov 2009 15:22:04 -0500
Subject: Varnish virtual memory usage
In-Reply-To: <2159791486448097683@unknownmsgid>
References: <2159791486448097683@unknownmsgid>
Message-ID:

I experienced this same issue under x64. Varnish seemed great, but once I
put some real traffic on it under x64 the memory leaks began and it would
eventually crash/restart.

Ended up putting Varnish on the back burner and have been waiting for it
to stabilize before even trying to present it to upper management again.

Varnish has great potential, but until it can run stably under x64 it's
got a long fight ahead of it.

(I do want to note that my comments are based mainly on varnish 1 and not
varnish 2.0)

--cripy

On Wed, Oct 21, 2009 at 8:22 AM, Henry Paulissen wrote:
> We encounter the same problem.
>
> It seems to occur only on x64 platforms.
> We decided to take a different approach and installed vmware on the
> machine. Next we did a setup of 6 guests with x32 PAE software.
>
> No strange memory leaks have occurred since then, at the price of small
> storage (3.5G max) and limited worker threads (256 max).
>
> Opened a ticket for the problem, but they won't listen until I buy a
> support contract (~8K). Seems they don't want to know there is some kind
> of memory issue in their software.
>
> Anyway...
> Varnish is running stable now with a few tricks.
>
>
> Regards,
>
> -----Original Message-----
> From: varnish-misc-bounces at projects.linpro.no
> [mailto:varnish-misc-bounces at projects.linpro.no] On Behalf Of Kristian Lyngstol
> Sent: Wednesday 21 October 2009 13:34
> To: Roi Avinoam
> Cc: varnish-misc at projects.linpro.no
> Subject: Re: Varnish virtual memory usage
>
> On Mon, Sep 21, 2009 at 02:55:07PM +0300, Roi Avinoam wrote:
> > At Metacafe we're testing the integration with Varnish, and I was
> > tasked with benchmarking our Varnish setup. I intentionally
> > over-flooded the server with requests, in an attempt to see how the
> > system will behave under extensive traffic. Surprisingly, the server
> > ran out of swap and crashed.
>
> That seems mighty strange. What sort of tests did you do?
>
> > In our configuration, "-s file,/var/lib/varnish/varnish_storage.bin,1G".
> > Does it mean Varnish shouldn't use more than 1GB of the virtual memory?
> > Is there any other way to limit the memory/storage usage?
>
> If you are using -s file and you have 4GB of memory, you are telling
> Varnish to create a _file_ of 1GB, and it's up to the kernel what it
> keeps in memory or not. If you actually run out of memory with this
> setup, you've either hit a bug (need more details first), or you're doing
> something strange like having the mmapped file (/var/lib/varnish/) in
> tmpfs with a size limit less than 1GB or something along those lines. But
> I need more details to say anything for certain.
>
> --
> Kristian Lyngstøl
> Redpill Linpro AS
> Tlf: +47 21544179
> Mob: +47 99014497
>
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From kb+varnish at slide.com Thu Nov 5 21:35:27 2009 From: kb+varnish at slide.com (Ken Brownfield) Date: Thu, 5 Nov 2009 13:35:27 -0800 Subject: Varnish virtual memory usage In-Reply-To: References: <2159791486448097683@unknownmsgid> Message-ID: Hopefully your upper management allows you to install contemporary software and distributions. Otherwise memory leaks and x86_64 would be the least of your concerns. Honestly, you're waiting for Varnish to stabilize and you're running v1? My data point: 5 months and over 100PB of transfers, and 2.0.4 is stable and has never leaked in our pure x86_64 production environment. Its memory use can be precisely monitored and controlled between Varnish configuration and the OS environment by any competent sysadmin, IMHO. We actually can't use Squid at all because it really does leak like a sieve. pmap does not lie. I just hope that people that have problems with any software are taking on the responsibility of diagnosing their own environments as much as they expect any OSS project to diagnose its code -- the former is just as often the problem as the latter. -- Ken On Nov 5, 2009, at 12:22 PM, cripy wrote: > I experienced this same issue under x64. Varnish seemed great but > once I put some real traffic on it under x64 the memory leaks began > and it would eventually crash/restart. > > Ended up putting Varnish on the back burner and have been waiting > for it to stabilize before even trying to present it to upper > management again. > > Varnish has great potential but until it can run stable under x64 > it's got a long fight ahead of itself. > > (I do want to note that my comments are based mainly on varnish 1 > and not varnish 2.0) > > --cripy -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From h.paulissen at qbell.nl Thu Nov 5 21:42:16 2009
From: h.paulissen at qbell.nl (Henry Paulissen)
Date: Thu, 5 Nov 2009 22:42:16 +0100
Subject: Varnish virtual memory usage
In-Reply-To:
References: <2159791486448097683@unknownmsgid>
Message-ID: <005701ca5e60$d7e8a5a0$87b9f0e0$@paulissen@qbell.nl>

People aren't great sysadmins in one day.

Tell us more about your system (specs, linux distro, vcl config, startup
command, linux (sysctl?) tuning). Maybe it can help anybody/me.

Regards.

From: varnish-misc-bounces at projects.linpro.no
[mailto:varnish-misc-bounces at projects.linpro.no] On Behalf Of Ken Brownfield
Sent: Thursday 5 November 2009 22:35
To: cripy
CC: varnish-misc at projects.linpro.no
Subject: Re: Varnish virtual memory usage

Hopefully your upper management allows you to install contemporary
software and distributions. Otherwise memory leaks and x86_64 would be the
least of your concerns.

Honestly, you're waiting for Varnish to stabilize and you're running v1?
My data point: 5 months and over 100PB of transfers, and 2.0.4 is stable
and has never leaked in our pure x86_64 production environment. Its memory
use can be precisely monitored and controlled between Varnish
configuration and the OS environment by any competent sysadmin, IMHO. We
actually can't use Squid at all because it really does leak like a sieve.
pmap does not lie.

I just hope that people who have problems with any software are taking on
the responsibility of diagnosing their own environments as much as they
expect any OSS project to diagnose its code -- the former is just as often
the problem as the latter.
--
Ken

On Nov 5, 2009, at 12:22 PM, cripy wrote:

I experienced this same issue under x64. Varnish seemed great but once I
put some real traffic on it under x64 the memory leaks began and it would
eventually crash/restart.

Ended up putting Varnish on the back burner and have been waiting for it
to stabilize before even trying to present it to upper management again.
Varnish has great potential, but until it can run stably under x64 it's
got a long fight ahead of it.

(I do want to note that my comments are based mainly on varnish 1 and not
varnish 2.0)

--cripy

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tfheen at redpill-linpro.com Fri Nov 6 08:03:36 2009
From: tfheen at redpill-linpro.com (Tollef Fog Heen)
Date: Fri, 06 Nov 2009 09:03:36 +0100
Subject: Caching POSTs
In-Reply-To: (Rob Ayres's message of "Wed, 4 Nov 2009 11:48:10 +0000")
References:
Message-ID: <87pr7wrsfb.fsf@qurzaw.linpro.no>

]] Rob Ayres

| I want to cache POSTs but can't get varnish to do it, is it possible? If it
| makes it any easier, all requests through this cache will be of POST type.

No, you can't cache POSTs. It doesn't make any sense to do so.

--
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73

From tfheen at redpill-linpro.com Fri Nov 6 08:06:01 2009
From: tfheen at redpill-linpro.com (Tollef Fog Heen)
Date: Fri, 06 Nov 2009 09:06:01 +0100
Subject: Write error, len = 69696/260844, errno = Success
In-Reply-To: <4AE9A67B.5000706@1art.cz> ("Václav Bílek"'s message of "Thu, 29 Oct 2009 15:28:11 +0100")
References: <4AE9A67B.5000706@1art.cz>
Message-ID: <87ljikrsba.fsf@qurzaw.linpro.no>

]] Václav Bílek

| Is there anyone who can point us where to look to find the problem?

As you are getting a write error and not a timeout, I would take a look
at any load balancers or similar that are configured to kill connections
that take more than 5s. It doesn't look like anything Varnish is doing.

--
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73

From v.bilek at 1art.cz Fri Nov 6 08:18:05 2009
From: v.bilek at 1art.cz (Václav Bílek)
Date: Fri, 06 Nov 2009 09:18:05 +0100
Subject: Write error, len = 69696/260844, errno = Success
In-Reply-To: <87ljikrsba.fsf@qurzaw.linpro.no>
References: <4AE9A67B.5000706@1art.cz> <87ljikrsba.fsf@qurzaw.linpro.no>
Message-ID: <4AF3DBBD.50004@1art.cz>

It was a bad TCP stack setting ... too small tcp_rmem / tcp_wmem.

Tollef Fog Heen wrote:
> ]] Václav Bílek
>
> | Is there anyone who can point us where to look to find the problem?
>
> As you are getting a write error and not a timeout, I would take a look
> at any load balancers or similar that are configured to kill connections
> that take more than 5s. It doesn't look like anything Varnish is doing.
>

From sanelson at gmail.com Fri Nov 6 09:54:44 2009
From: sanelson at gmail.com (Stephen Nelson-Smith)
Date: Fri, 6 Nov 2009 09:54:44 +0000
Subject: Permanent Redirects
Message-ID:

Hi,

I have two old DNS names which point to my new website:

anoldcrapname.com
anotheroldname.com

---> myrealsite.com

As varnish handles all port 80 traffic, any redirects need to be handled
by Varnish.

How can I do this?

One approach would be to have varnish pass those names through to apache,
and have apache do a redirect, but is there a way to do it in VCL?

Thanks,

S.

--
Stephen Nelson-Smith
Technical Director
Atalanta Systems Ltd
www.atalanta-systems.com

From phk at phk.freebsd.dk Fri Nov 6 11:36:50 2009
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Fri, 06 Nov 2009 11:36:50 +0000
Subject: Permanent Redirects
In-Reply-To: Your message of "Fri, 06 Nov 2009 09:54:44 GMT."
Message-ID: <3775.1257507410@critter.freebsd.dk>

In message, Stephen Nelson-Smith writes:

>One approach would be to have varnish pass those names through to
>apache, and have apache do a redirect, but is there a way to do it in
>VCL?

It happens automatically; the request's "Host:" header gets passed on,
unless you overwrite it in VCL.
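For reference, redirecting entirely in VCL is also possible via a
synthetic error. An untested sketch in 2.0-era VCL syntax follows; the 750
status is just an arbitrary internal marker, and the hostnames are the
ones from the question:

```
sub vcl_recv {
    if (req.http.host ~ "(anoldcrapname|anotheroldname)\.com$") {
        # Hand the request off to vcl_error with a private status code.
        error 750 "Moved Permanently";
    }
}

sub vcl_error {
    if (obj.status == 750) {
        # Adjacent strings concatenate in VCL.
        set obj.http.Location = "http://myrealsite.com" req.url;
        set obj.status = 301;
        deliver;
    }
}
```

This way the 301 is synthesized by Varnish itself and the old hostnames
never reach Apache at all.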
--
Poul-Henning Kamp     | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG    | TCP/IP since RFC 956
FreeBSD committer     | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From quasirob at googlemail.com Fri Nov 6 13:42:20 2009
From: quasirob at googlemail.com (Rob Ayres)
Date: Fri, 6 Nov 2009 13:42:20 +0000
Subject: Caching POSTs
In-Reply-To: <87pr7wrsfb.fsf@qurzaw.linpro.no>
References: <87pr7wrsfb.fsf@qurzaw.linpro.no>
Message-ID:

2009/11/6 Tollef Fog Heen

> ]] Rob Ayres
>
> | I want to cache POSTs but can't get varnish to do it, is it possible? If it
> | makes it any easier, all requests through this cache will be of POST type.
>
> No, you can't cache POSTs. It doesn't make any sense to do so.
>

We have a processing server and a database server. The processing server
makes its requests to the database server by means of a POST. There is
enough duplication in the POST requests to have made it worth having a
caching server between the two.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rtshilston at gmail.com Fri Nov 6 15:01:53 2009
From: rtshilston at gmail.com (rtshilston at gmail.com)
Date: Fri, 6 Nov 2009 15:01:53 +0000
Subject: Caching POSTs
In-Reply-To:
Message-ID: <4af43a65.0702d00a.52fc.ffffd4c0@mx.google.com>

I thoroughly disagree with this use of HTTP. If a request makes an impact
on a system, then it should use POST (e.g. login, pay, delete). However,
if it has no write behaviour (other than, perhaps, logging) then it must
be GET.

If you follow this, then varnish will work fine.

Can you explain more about your actions? If you're using a processing
server to build reports then GET should be fine.

Rtsh

-- Sent from my Palm Pre

Rob Ayres wrote:

2009/11/6 Tollef Fog Heen <tfheen at redpill-linpro.com>

]] Rob Ayres

| I want to cache POSTs but can't get varnish to do it, is it possible? If it
| makes it any easier, all requests through this cache will be of POST type.
No, you can't cache POSTs. ?It doesn't make any sense to do so. We have a processing server and a database server. The processing server makes its requests to the database server by means of a POST. There is enough duplication in the POST requests to have made it worth having a caching server between the two. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ck-lists at cksoft.de Fri Nov 6 15:57:10 2009 From: ck-lists at cksoft.de (Christian Kratzer) Date: Fri, 6 Nov 2009 16:57:10 +0100 (CET) Subject: Caching POSTs In-Reply-To: <4af43a65.0702d00a.52fc.ffffd4c0@mx.google.com> References: <4af43a65.0702d00a.52fc.ffffd4c0@mx.google.com> Message-ID: Hi, On Fri, 6 Nov 2009, rtshilston at gmail.com wrote: > I thoroughly disagree with this use of HTTP. ?If a request makes an impact on a system, then it should use POST (eg login, pay, delete). ?However, if it has no write-behaviour (other than, perhaps, logging) then it must be GET. This kind of behaviour seems quite frequent in the context of webservices which are often POST only. This is of course not the classic use case of varnish between client browsers and webservers. Greetings Christian Kratzer CK Software GmbH > > If you follow this, then varnish will work fine. > > Can you explain more about your actions? ? If you're using a processing server to build reports then GET should be fine. > > Rtsh > > -- Sent from my Palm Pr? > Rob Ayres wrote: > > 2009/11/6 Tollef Fog Heen <tfheen at redpill-linpro.com> > > ]] Rob Ayres > > > > | I want to cache POSTs but can't get varnish to do it, is it possible? If it > > | makes it any easier, all requests through this cache will be of POST type. > > > > No, you can't cache POSTs. ?It doesn't make any sense to do so. > > > We have a processing server and a database server. The processing server makes its requests to the database server by means of a POST. 
There is enough duplication in the POST requests to have made it worth
having a caching server between the two.
>
>
>
>
--
Christian Kratzer             CK Software GmbH
Email: ck at cksoft.de        Schwarzwaldstr. 31
Phone: +49 7452 889 135       D-71131 Jettingen
Fax:   +49 7452 889 136       HRB 245288, Amtsgericht Stuttgart
Web:   http://www.cksoft.de/  Geschaeftsfuehrer: Christian Kratzer

From kb+varnish at slide.com Fri Nov 6 19:21:14 2009
From: kb+varnish at slide.com (Ken Brownfield)
Date: Fri, 6 Nov 2009 11:21:14 -0800
Subject: Caching POSTs
In-Reply-To:
References: <4af43a65.0702d00a.52fc.ffffd4c0@mx.google.com>
Message-ID:

I agree with Rtsh: the sole intent of POST is to write data, and there is
simply no reason to use a POST in any other circumstance. I doubt you'd
find any out-of-the-box caching system that doesn't simply short-circuit
POSTs (I've never seen one that doesn't).

Christian Kratzer, if by "webservices" you mean the prototypical
Web2.0/AJAX POSTs, these cannot be cached -- it's a POST for a reason. If
it's a POST for no reason, then the web developer needs an HTTP
refresher. :) :)

An HTTP/POST interface to a database is a poor choice from both a query
performance and an HTTP interoperability standpoint -- the worst of both
worlds. The "right" solution would probably involve the database server
(that handles the HTTP/POST) implementing a simple query cache, maybe with
memcached. Or perhaps on your front-ends in the spirit of a
PHP/memcached/MySQL setup.

I'm not sure I can think of an easy option for you, other than
hand-patching Varnish. :(

My US$0.02,
--
Ken

On Nov 6, 2009, at 7:57 AM, Christian Kratzer wrote:

> Hi,
>
> On Fri, 6 Nov 2009, rtshilston at gmail.com wrote:
>
>> I thoroughly disagree with this use of HTTP. If a request makes an
>> impact on a system, then it should use POST (e.g. login, pay, delete).
>> However, if it has no write behaviour (other than, perhaps, logging)
>> then it must be GET.
> > This kind of behaviour seems quite frequent in the context of > webservices which are often POST only. > > This is of course not the classic use case of varnish between client > browsers and webservers. > > Greetings > Christian Kratzer > CK Software GmbH > > >> >> If you follow this, then varnish will work fine. >> >> Can you explain more about your actions? If you're using a >> processing server to build reports then GET should be fine. >> >> Rtsh >> >> -- Sent from my Palm Pr? >> Rob Ayres wrote: >> >> 2009/11/6 Tollef Fog Heen <tfheen at redpill-linpro.com> >> >> ]] Rob Ayres >> >> >> >> | I want to cache POSTs but can't get varnish to do it, is it >> possible? If it >> >> | makes it any easier, all requests through this cache will be of >> POST type. >> >> >> >> No, you can't cache POSTs. It doesn't make any sense to do so. >> >> >> We have a processing server and a database server. The processing >> server makes its requests to the database server by means of a >> POST. There is enough duplication in the POST requests to have made >> it worth having a caching server between the two. >> >> >> >> >> > > -- > Christian Kratzer CK Software GmbH > Email: ck at cksoft.de Schwarzwaldstr. 31 > Phone: +49 7452 889 135 D-71131 Jettingen > Fax: +49 7452 889 136 HRB 245288, Amtsgericht > Stuttgart > Web: http://www.cksoft.de/ Geschaeftsfuehrer: Christian > Kratzer_______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc -- kb From bzeeb-lists at lists.zabbadoz.net Fri Nov 6 20:01:39 2009 From: bzeeb-lists at lists.zabbadoz.net (Bjoern A. 
Zeeb)
Date: Fri, 6 Nov 2009 20:01:39 +0000 (UTC)
Subject: Caching POSTs
In-Reply-To:
References: <4af43a65.0702d00a.52fc.ffffd4c0@mx.google.com>
Message-ID: <20091106195642.G37440@maildrop.int.zabbadoz.net>

On Fri, 6 Nov 2009, Ken Brownfield wrote:

Hi,

> Christian Kratzer, if by "webservices" you mean the prototypical
> Web2.0/AJAX POSTs, these cannot be cached -- it's a POST for a reason.
> If it's a POST for no reason, then the web developer needs an HTTP
> refresher. :) :)

I guess he doesn't. "Web Services" is a W3 term, a genus for quite a few
standards. See http://www.w3.org/2002/ws/ .

/bz

--
Bjoern A. Zeeb
It will not break if you know what you are doing.

From l at lrowe.co.uk Sat Nov 7 23:44:17 2009
From: l at lrowe.co.uk (Laurence Rowe)
Date: Sat, 7 Nov 2009 23:44:17 +0000
Subject: Caching POSTs
In-Reply-To:
References: <87pr7wrsfb.fsf@qurzaw.linpro.no>
Message-ID:

2009/11/6 Rob Ayres:
> 2009/11/6 Tollef Fog Heen
>>
>> ]] Rob Ayres
>>
>> | I want to cache POSTs but can't get varnish to do it, is it possible? If it
>> | makes it any easier, all requests through this cache will be of POST type.
>>
>> No, you can't cache POSTs. It doesn't make any sense to do so.
>>
> We have a processing server and a database server. The processing server
> makes its requests to the database server by means of a POST. There is
> enough duplication in the POST requests to have made it worth having a
> caching server between the two.

As Varnish does not inspect POST data, it is impossible to create the
query string for an 'equivalent' GET request (presumably there is
something important in the POST data distinguishing these requests).

If it is the rendering of the POST response page which is time-consuming,
you might return an 'X-Accel-Redirect' header pointing to the result page
and run Nginx in front of Varnish.
You might be able to achieve the functionality completely within Varnish
with something like the following (untested), but the transformed
POST-to-GET request may still have its POST data hanging around to confuse
things.

sub vcl_recv {
    if (req.restarts == 0) {
        remove req.http.X-Accel-Redirect; # guard against improper use
    } elseif (req.http.X-Accel-Redirect) {
        set req.url = req.http.X-Accel-Redirect;
        set req.request = "GET";
    }
}

sub vcl_fetch {
    if (obj.http.X-Accel-Redirect) {
        set req.http.X-Accel-Redirect = obj.http.X-Accel-Redirect;
        restart;
    }
}

Laurence

From l at lrowe.co.uk Sat Nov 7 23:52:59 2009
From: l at lrowe.co.uk (Laurence Rowe)
Date: Sat, 7 Nov 2009 23:52:59 +0000
Subject: Permanent Redirects
In-Reply-To:
References:
Message-ID:

2009/11/6 Stephen Nelson-Smith:
> Hi,
>
> I have two old DNS names which point to my new website:
>
> anoldcrapname.com
> anotheroldname.com
>
> ---> myrealsite.com
>
> As varnish handles all port 80 traffic, any redirects need to be
> handled by Varnish.
>
> How can I do this?
>
> One approach would be to have varnish pass those names through to
> apache, and have apache do a redirect, but is there a way to do it in
> VCL?

See http://varnish.projects.linpro.no/wiki/VCLExampleAlexc in the mobile
redirects section of vcl_recv and vcl_error.

Laurence

From ccripy at gmail.com Tue Nov 10 06:48:06 2009
From: ccripy at gmail.com (cripy)
Date: Tue, 10 Nov 2009 01:48:06 -0500
Subject: varnish 2.0.4 questions - no IMS, no persistence cache - please help
Message-ID:

GaneshKumar Natarajan writes:
Tue, 20 Oct 2009 12:35:00 -0700

3. mmap storage: max i can configure is 340 GB.
I was able to use only 340 GB of cache; any size after this, I would get
an error.
child (25790) Started
Pushing vcls failed: dlopen(./vcl.1P9zoqAU.so): ./vcl.1P9zoqAU.so:
failed to map segment from shared object: Cannot allocate memory
--

I was having this issue too. After some googling it appears this is an
AMD64 Linux 2.6 issue.
According to http://lists.humbug.org.au/pipermail/general/2004-July/024139.html "It maybe important to note that as of the latest 2.6 kernels, Linux on the AMD64 platform can only memory map a 340GB per process. This is due mainly to a VM paging system ported from the ia32 platform that should have been left on the hillside at birth to die. I have not tested *BSD because we have not done enough research to confirm if the Linux emulation works on AMD64 for AMD64." From ingvar at redpill-linpro.com Tue Nov 10 10:30:51 2009 From: ingvar at redpill-linpro.com (Ingvar Hagelund) Date: Tue, 10 Nov 2009 11:30:51 +0100 Subject: RPM-packages of varnish-2.0.5 for RHEL and Fedora available Message-ID: <4AF940DB.4070504@redpill-linpro.com> I have submitted varnish-2.0.5 for Fedora and Fedora EPEL, and updates to the stable releases will be requested, so they will trickle down to the stable repos in a few weeks. For RHEL, both el4 and el5 packages are now in the EPEL testing repo. For those who are too impatient to wait for stable, or want to participate in testing, you can download the package with yum: rhel5# yum --enablerepo=epel-testing update varnish ... or download the package from RedHat: http://download.fedora.redhat.com/pub/epel/testing/ Fedora packages are still pending for testing, but will be visible in a few days, I guess. If you need packages for Fedora now, try http://kojipkgs.fedoraproject.org/packages/varnish/ Bugs in the package can be reported in Red Hat's Bugzilla: http://bugzilla.redhat.com/ or to varnish-dist at projects.linpro.no. 
Ingvar

From v.bilek at 1art.cz Tue Nov 10 14:18:49 2009
From: v.bilek at 1art.cz (Václav Bílek)
Date: Tue, 10 Nov 2009 15:18:49 +0100
Subject: Varnish stuck on stresstest/approved by real traffic
In-Reply-To: <20091104154918.GD8687@kjeks.linpro.no>
References: <4AF00B2A.2040006@1art.cz> <20091104154918.GD8687@kjeks.linpro.no>
Message-ID: <4AF97649.60907@1art.cz>

I have tried setting sess_linger = 50 on 2.0.4 and it seems that it solves
the problem (I wasn't able to reproduce it after that).

Kristian Lyngstol wrote:
> (Excessive trimming ahead. Whoohoo)
>
> On Tue, Nov 03, 2009 at 11:51:22AM +0100, Václav Bílek wrote:
>> When testing varnish throughput and scalability I have found strange
>> varnish behavior.
>
> What's the cpu load at that point?
>
> Also: use sess_linger. No sess_linger == kaboom when things get too
> loaded. It's 50ms default in 2.0.5/trunk, but set to 0ms in 2.0.4 and
> previous.
>
> The behaviour in trunk is slightly different/better, but it's still
> worth using it in 2.0.4.
>
> - Kristian

From itisgany at gmail.com Tue Nov 10 19:34:50 2009
From: itisgany at gmail.com (GaneshKumar Natarajan)
Date: Tue, 10 Nov 2009 14:34:50 -0500
Subject: varnish 2.0.4 questions - no IMS, no persistence cache - please help
In-Reply-To:
References:
Message-ID:

Thanks.
I checked /proc/cpuinfo and it shows an Intel processor. So even with
Intel, we see this limitation of 340 GB. This is a serious limitation for
me, since with Squid we were using 1.5 TB of storage and I thought I could
mmap and use all the space for Varnish. If there are any workarounds or a
working kernel version on Linux, please let me know.
mylinux version: RH4 2.6.9-89.ELsmp #1 SMP Mon Apr 20 10:33:05 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux ulimit -a: core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimited pending signals (-i) 1024 max locked memory (kbytes, -l) 32 max memory size (kbytes, -m) unlimited open files (-n) 65535 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 278528 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited cat /proc/cpufinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 23 model name : Intel(R) Xeon(R) CPU L5240 @ 3.00GHz stepping : 6 cpu MHz : 2992.505 cache size : 6144 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm pni monitor ds_cpl est tm2 cx16 xtpr lahf_lm bogomips : 5989.00 clflush size : 64 cache_alignment : 64 address sizes : 38 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 23 model name : Intel(R) Xeon(R) CPU L5240 @ 3.00GHz stepping : 6 cpu MHz : 2992.505 cache size : 6144 KB physical id : 3 siblings : 2 core id : 6 cpu cores : 2 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm pni monitor ds_cpl est tm2 cx16 xtpr lahf_lm bogomips : 5985.03 clflush size : 64 cache_alignment : 64 address sizes : 38 bits physical, 48 bits virtual power management: processor : 2 vendor_id : GenuineIntel cpu family : 6 model : 23 model name : Intel(R) Xeon(R) CPU L5240 @ 3.00GHz stepping : 6 cpu MHz : 2992.505 cache size : 6144 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 fpu : yes 
fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm pni monitor ds_cpl est tm2 cx16 xtpr lahf_lm bogomips : 5984.96 clflush size : 64 cache_alignment : 64 address sizes : 38 bits physical, 48 bits virtual power management: processor : 3 vendor_id : GenuineIntel cpu family : 6 model : 23 model name : Intel(R) Xeon(R) CPU L5240 @ 3.00GHz stepping : 6 cpu MHz : 2992.505 cache size : 6144 KB physical id : 3 siblings : 2 core id : 7 cpu cores : 2 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm pni monitor ds_cpl est tm2 cx16 xtpr lahf_lm bogomips : 5985.04 clflush size : 64 cache_alignment : 64 address sizes : 38 bits physical, 48 bits virtual power management: Ganesh On Tue, Nov 10, 2009 at 1:48 AM, cripy wrote: > GaneshKumar Natarajan writes: > Tue, 20 Oct 2009 12:35:00 -0700 > > 3. mmap storage : max i can configure is 340 GB. > I was able to use only 340 GB of cache. any size after this, i would get error. > child (25790) Started > Pushing vcls failed: dlopen(./vcl.1P9zoqAU.so): ./vcl.1P9zoqAU.so: > failed to map segment from shared object: Cannot allocate memory > -- > > I was having this issue too. ?After some googling it appears this is a > AMD64 Linux 2.6 issue. ?According to > http://lists.humbug.org.au/pipermail/general/2004-July/024139.html > > "It maybe important to note that as of the latest 2.6 kernels, Linux on > the AMD64 platform can only memory map a 340GB per process. This is due > mainly to a VM paging system ported from the ia32 platform that should > have been left on the hillside at birth to die. I have not tested *BSD > because we have not done enough research to confirm if the Linux > emulation works on AMD64 for AMD64." 
> _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > -- Regards, Gany From michael at dynamine.net Tue Nov 10 19:47:28 2009 From: michael at dynamine.net (Michael S. Fischer) Date: Wed, 11 Nov 2009 01:17:28 +0530 Subject: varnish 2.0.4 questions - no IMS, no persistence cache - please help In-Reply-To: References: Message-ID: amd64 refers to the architecture (AKA x86_64), not the particular CPU vendor. (As a matter of fact, I was unaware of this limitation; AFAIK it does not exist in FreeBSD.) In any event, mmap()ing 340GB even on a 64GB box is a recipe for disaster; you will probably suffer death by paging if your working set is larger than RAM. If it's smaller than RAM, then, well, there's no harm in making it just under the total RAM size. --Michael On Nov 11, 2009, at 1:04 AM, GaneshKumar Natarajan wrote: > Thanks. > I checked /proc/cpuinfo and it shows intel processor. > So even with Intel, we see this limitation of 340 GB. This is a > serious limitation to me, since in Squid, we were using 1.5 TB of > storage and i thought i could mmap and use all the space for Varnish. > Any workarounds or working kernel version in linux, please let me > know. 
> > mylinux version: RH4 > 2.6.9-89.ELsmp #1 SMP Mon Apr 20 10:33:05 EDT 2009 x86_64 x86_64 > x86_64 GNU/Linux > > ulimit -a: > core file size (blocks, -c) 0 > data seg size (kbytes, -d) unlimited > file size (blocks, -f) unlimited > pending signals (-i) 1024 > max locked memory (kbytes, -l) 32 > max memory size (kbytes, -m) unlimited > open files (-n) 65535 > pipe size (512 bytes, -p) 8 > POSIX message queues (bytes, -q) 819200 > stack size (kbytes, -s) 10240 > cpu time (seconds, -t) unlimited > max user processes (-u) 278528 > virtual memory (kbytes, -v) unlimited > file locks (-x) unlimited > > cat /proc/cpufinfo > > processor : 0 > vendor_id : GenuineIntel > cpu family : 6 > model : 23 > model name : Intel(R) Xeon(R) CPU L5240 @ 3.00GHz > stepping : 6 > cpu MHz : 2992.505 > cache size : 6144 KB > physical id : 0 > siblings : 2 > core id : 0 > cpu cores : 2 > fpu : yes > fpu_exception : yes > cpuid level : 10 > wp : yes > flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge > mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall > nx lm pni monitor ds_cpl est tm2 cx16 xtpr lahf_lm > bogomips : 5989.00 > clflush size : 64 > cache_alignment : 64 > address sizes : 38 bits physical, 48 bits virtual > power management: > > processor : 1 > vendor_id : GenuineIntel > cpu family : 6 > model : 23 > model name : Intel(R) Xeon(R) CPU L5240 @ 3.00GHz > stepping : 6 > cpu MHz : 2992.505 > cache size : 6144 KB > physical id : 3 > siblings : 2 > core id : 6 > cpu cores : 2 > fpu : yes > fpu_exception : yes > cpuid level : 10 > wp : yes > flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge > mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall > nx lm pni monitor ds_cpl est tm2 cx16 xtpr lahf_lm > bogomips : 5985.03 > clflush size : 64 > cache_alignment : 64 > address sizes : 38 bits physical, 48 bits virtual > power management: > > processor : 2 > vendor_id : GenuineIntel > cpu family : 6 > model : 23 > model name : Intel(R) 
Xeon(R) CPU L5240 @ 3.00GHz > stepping : 6 > cpu MHz : 2992.505 > cache size : 6144 KB > physical id : 0 > siblings : 2 > core id : 1 > cpu cores : 2 > fpu : yes > fpu_exception : yes > cpuid level : 10 > wp : yes > flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge > mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall > nx lm pni monitor ds_cpl est tm2 cx16 xtpr lahf_lm > bogomips : 5984.96 > clflush size : 64 > cache_alignment : 64 > address sizes : 38 bits physical, 48 bits virtual > power management: > > processor : 3 > vendor_id : GenuineIntel > cpu family : 6 > model : 23 > model name : Intel(R) Xeon(R) CPU L5240 @ 3.00GHz > stepping : 6 > cpu MHz : 2992.505 > cache size : 6144 KB > physical id : 3 > siblings : 2 > core id : 7 > cpu cores : 2 > fpu : yes > fpu_exception : yes > cpuid level : 10 > wp : yes > flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge > mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall > nx lm pni monitor ds_cpl est tm2 cx16 xtpr lahf_lm > bogomips : 5985.04 > clflush size : 64 > cache_alignment : 64 > address sizes : 38 bits physical, 48 bits virtual > power management: > > Ganesh > > > On Tue, Nov 10, 2009 at 1:48 AM, cripy wrote: >> GaneshKumar Natarajan writes: >> Tue, 20 Oct 2009 12:35:00 -0700 >> >> 3. mmap storage : max i can configure is 340 GB. >> I was able to use only 340 GB of cache. any size after this, i >> would get error. >> child (25790) Started >> Pushing vcls failed: dlopen(./vcl.1P9zoqAU.so): ./vcl.1P9zoqAU.so: >> failed to map segment from shared object: Cannot allocate memory >> -- >> >> I was having this issue too. After some googling it appears this >> is a >> AMD64 Linux 2.6 issue. According to >> http://lists.humbug.org.au/pipermail/general/2004-July/024139.html >> >> "It maybe important to note that as of the latest 2.6 kernels, >> Linux on >> the AMD64 platform can only memory map a 340GB per process. 
This is >> due >> mainly to a VM paging system ported from the ia32 platform that >> should >> have been left on the hillside at birth to die. I have not tested >> *BSD >> because we have not done enough research to confirm if the Linux >> emulation works on AMD64 for AMD64." >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at projects.linpro.no >> http://projects.linpro.no/mailman/listinfo/varnish-misc >> > > > > -- > Regards, > Gany > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc From kb+varnish at slide.com Tue Nov 10 20:11:57 2009 From: kb+varnish at slide.com (Ken Brownfield) Date: Tue, 10 Nov 2009 12:11:57 -0800 Subject: varnish 2.0.4 questions - no IMS, no persistence cache - please help In-Reply-To: References: Message-ID: <17E710AB-2BEC-4D6D-AEFF-0B117B1D1199@slide.com> Note that the linked article is from 2004. The kernels that RedHat uses are a bag of hurt, not to mention ancient. If you can upgrade to RHEL 5 that may be the easiest fix (I can only assume that the mmap limitation has been removed). Perhaps RedHat has newer RHEL 4 kernels in a bleeding-edge repository, or perhaps Fedora/CentOS have packages that could upgrade your kernel. That being said, there may be other pratfalls in the libc on RHEL 4; bad 64-bit judgment calls persist in libc to this day (e.g., MAP_32BIT). Ken On Nov 10, 2009, at 11:47 AM, Michael S. Fischer wrote: > amd64 refers to the architecture (AKA x86_64), not the particular CPU > vendor. (As a matter of fact, I was unaware of this limitation; > AFAIK it does not exist in FreeBSD.) > > In any event, mmap()ing 340GB even on a 64GB box is a recipe for > disaster; you will probably suffer death by paging if your working set > is larger than RAM. If it's smaller than RAM, then, well, there's no
> > --Michael > > > On Nov 11, 2009, at 1:04 AM, GaneshKumar Natarajan wrote: > >> Thanks. >> I checked /proc/cpuinfo and it shows intel processor. >> So even with Intel, we see this limitation of 340 GB. This is a >> serious limitation to me, since in Squid, we were using 1.5 TB of >> storage and i thought i could mmap and use all the space for Varnish. >> Any workarounds or working kernel version in linux, please let me >> know. >> >> mylinux version: RH4 >> 2.6.9-89.ELsmp #1 SMP Mon Apr 20 10:33:05 EDT 2009 x86_64 x86_64 >> x86_64 GNU/Linux >> >> ulimit -a: >> core file size (blocks, -c) 0 >> data seg size (kbytes, -d) unlimited >> file size (blocks, -f) unlimited >> pending signals (-i) 1024 >> max locked memory (kbytes, -l) 32 >> max memory size (kbytes, -m) unlimited >> open files (-n) 65535 >> pipe size (512 bytes, -p) 8 >> POSIX message queues (bytes, -q) 819200 >> stack size (kbytes, -s) 10240 >> cpu time (seconds, -t) unlimited >> max user processes (-u) 278528 >> virtual memory (kbytes, -v) unlimited >> file locks (-x) unlimited >> >> cat /proc/cpufinfo >> >> processor : 0 >> vendor_id : GenuineIntel >> cpu family : 6 >> model : 23 >> model name : Intel(R) Xeon(R) CPU L5240 @ 3.00GHz >> stepping : 6 >> cpu MHz : 2992.505 >> cache size : 6144 KB >> physical id : 0 >> siblings : 2 >> core id : 0 >> cpu cores : 2 >> fpu : yes >> fpu_exception : yes >> cpuid level : 10 >> wp : yes >> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge >> mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall >> nx lm pni monitor ds_cpl est tm2 cx16 xtpr lahf_lm >> bogomips : 5989.00 >> clflush size : 64 >> cache_alignment : 64 >> address sizes : 38 bits physical, 48 bits virtual >> power management: >> >> processor : 1 >> vendor_id : GenuineIntel >> cpu family : 6 >> model : 23 >> model name : Intel(R) Xeon(R) CPU L5240 @ 3.00GHz >> stepping : 6 >> cpu MHz : 2992.505 >> cache size : 6144 KB >> physical id : 3 >> siblings : 2 >> core id : 6 >> cpu 
cores : 2 >> fpu : yes >> fpu_exception : yes >> cpuid level : 10 >> wp : yes >> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge >> mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall >> nx lm pni monitor ds_cpl est tm2 cx16 xtpr lahf_lm >> bogomips : 5985.03 >> clflush size : 64 >> cache_alignment : 64 >> address sizes : 38 bits physical, 48 bits virtual >> power management: >> >> processor : 2 >> vendor_id : GenuineIntel >> cpu family : 6 >> model : 23 >> model name : Intel(R) Xeon(R) CPU L5240 @ 3.00GHz >> stepping : 6 >> cpu MHz : 2992.505 >> cache size : 6144 KB >> physical id : 0 >> siblings : 2 >> core id : 1 >> cpu cores : 2 >> fpu : yes >> fpu_exception : yes >> cpuid level : 10 >> wp : yes >> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge >> mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall >> nx lm pni monitor ds_cpl est tm2 cx16 xtpr lahf_lm >> bogomips : 5984.96 >> clflush size : 64 >> cache_alignment : 64 >> address sizes : 38 bits physical, 48 bits virtual >> power management: >> >> processor : 3 >> vendor_id : GenuineIntel >> cpu family : 6 >> model : 23 >> model name : Intel(R) Xeon(R) CPU L5240 @ 3.00GHz >> stepping : 6 >> cpu MHz : 2992.505 >> cache size : 6144 KB >> physical id : 3 >> siblings : 2 >> core id : 7 >> cpu cores : 2 >> fpu : yes >> fpu_exception : yes >> cpuid level : 10 >> wp : yes >> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge >> mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall >> nx lm pni monitor ds_cpl est tm2 cx16 xtpr lahf_lm >> bogomips : 5985.04 >> clflush size : 64 >> cache_alignment : 64 >> address sizes : 38 bits physical, 48 bits virtual >> power management: >> >> Ganesh >> >> >> On Tue, Nov 10, 2009 at 1:48 AM, cripy wrote: >>> GaneshKumar Natarajan writes: >>> Tue, 20 Oct 2009 12:35:00 -0700 >>> >>> 3. mmap storage : max i can configure is 340 GB. >>> I was able to use only 340 GB of cache. 
any size after this, i >>> would get error. >>> child (25790) Started >>> Pushing vcls failed: dlopen(./vcl.1P9zoqAU.so): ./vcl.1P9zoqAU.so: >>> failed to map segment from shared object: Cannot allocate memory >>> -- >>> >>> I was having this issue too. After some googling it appears this >>> is a >>> AMD64 Linux 2.6 issue. According to >>> http://lists.humbug.org.au/pipermail/general/2004-July/024139.html >>> >>> "It maybe important to note that as of the latest 2.6 kernels, >>> Linux on >>> the AMD64 platform can only memory map a 340GB per process. This is >>> due >>> mainly to a VM paging system ported from the ia32 platform that >>> should >>> have been left on the hillside at birth to die. I have not tested >>> *BSD >>> because we have not done enough research to confirm if the Linux >>> emulation works on AMD64 for AMD64." >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at projects.linpro.no >>> http://projects.linpro.no/mailman/listinfo/varnish-misc >>> >> >> >> >> -- >> Regards, >> Gany >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at projects.linpro.no >> http://projects.linpro.no/mailman/listinfo/varnish-misc > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc -- kb From jcpetit at syspark.com Tue Nov 10 23:40:34 2009 From: jcpetit at syspark.com (Jean-Christophe Petit) Date: Tue, 10 Nov 2009 18:40:34 -0500 Subject: varnish behind varnish and X-Forwarded-For Message-ID: <4AF9F9F2.7030805@syspark.com> Hello, the setup is a Varnish behind another Varnish - don't ask ;) Is there a way to get the X-Forwarded-For from the first varnish to send it to the backend (Apache with mod_rpaf) ?
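One commonly suggested way to handle this kind of tiered setup is a sketch like the following, in Varnish 2.x VCL syntax (untested here; the exact place the duplicate header comes from depends on how each Varnish version appends X-Forwarded-For, so treat this as an illustration, not a drop-in fix):

```vcl
# Sketch for the inner (second) Varnish -- only set X-Forwarded-For when
# the outer Varnish has not already done so, so the backend
# (Apache + mod_rpaf) sees a single header carrying the client IP.
sub vcl_recv {
    if (!req.http.X-Forwarded-For) {
        set req.http.X-Forwarded-For = client.ip;
    }
}
```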
I see in the varnishlog of the second varnish that there are 2 X-Forwarded-For headers (the client IP and the varnish IP) How would I get rid of the second X-Forwarded-For but not the first one (the client IP) ? Thanks, Jesse From kristian at redpill-linpro.com Wed Nov 11 09:31:40 2009 From: kristian at redpill-linpro.com (Kristian Lyngstol) Date: Wed, 11 Nov 2009 10:31:40 +0100 Subject: Benchmarks [was: Yahoo! Traffic Server] In-Reply-To: <87bpjkxai6.fsf@qurzaw.linpro.no> References: <2E51048E-58CD-4599-B4BE-85CDAC78FC33@develooper.com> <87bpjkxai6.fsf@qurzaw.linpro.no> Message-ID: <20091111093140.GA9589@kjeks.linpro.no> This is sliding off topic, and is in no way meant as a comparison to Traffic Server, but... On Tue, Nov 03, 2009 at 09:45:05AM +0100, Tollef Fog Heen wrote: > "We have benchmarked Traffic Server to handle in excess of 35,000 RPS on > a single box." > > I know Kristian has benchmarked Varnish to about three times that, > though with 1-byte objects, so it's not really anything resembling a > real-life scenario. I think sky has been serving ~64k requests/s using > synthetic. I got it up to roughly 150k req/s and I still think that's a client issue. I know Nicholas had a favicon serving at 60k req/s without any hassle (after session_linger was set up). I'm reasonably certain that I can get close to breaking 300k req/s on a dual quad core with ht, but I don't think I can convince anyone to lend me a small city worth of computers to generate the traffic. -- Kristian Lyngstøl Redpill Linpro AS Tlf: +47 21544179 Mob: +47 99014497 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From phk at phk.freebsd.dk Wed Nov 11 09:44:47 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 11 Nov 2009 09:44:47 +0000 Subject: Benchmarks [was: Yahoo! Traffic Server] In-Reply-To: Your message of "Wed, 11 Nov 2009 10:31:40 +0100."
<20091111093140.GA9589@kjeks.linpro.no> Message-ID: <23825.1257932687@critter.freebsd.dk> In message <20091111093140.GA9589 at kjeks.linpro.no>, Kristian Lyngstol writes: >I'm reasonably certain that I can get close to breaking 300k req/s on a >dual quad core with ht, but I don't think I can convince anyone to lend me >a small city worth of computers to generate the traffic. I thought the way to do that kind of benchmark was to send a twitter message like: "Hehe, looks like Jessica Biel's bikini straps snapped (http...)" :-) -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From kristian at redpill-linpro.com Wed Nov 11 10:02:56 2009 From: kristian at redpill-linpro.com (Kristian Lyngstol) Date: Wed, 11 Nov 2009 11:02:56 +0100 Subject: varnish with authorised requests In-Reply-To: References: Message-ID: <20091111100256.GB9589@kjeks.linpro.no> On Tue, Oct 27, 2009 at 01:27:18PM +0000, Rob Ayres wrote: > I've got varnish caching POST requests from my browser by changing the line > in vcl_recv to this: Unless you actually modified the source code, you didn't _change_ the VCL, you added your own and the default will be run afterwards unless you end your VCL with a terminating statement. That being said, that hit-for-pass means that it was passed in vcl_fetch. -- Kristian Lyngstøl Redpill Linpro AS Tlf: +47 21544179 Mob: +47 99014497 -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From plfgoa at gmail.com Wed Nov 11 12:00:08 2009 From: plfgoa at gmail.com (Paras Fadte) Date: Wed, 11 Nov 2009 17:30:08 +0530 Subject: Varnish 503 Service unavailable error In-Reply-To: <87k4z61gvj.fsf@qurzaw.linpro.no> References: <75cf5800908130119o5b10f41bn85e7d76fcb7ebc96@mail.gmail.com> <87fxbw5atk.fsf@qurzaw.linpro.no> <75cf5800908130243k3c0e7db6i2d01788a42ab352e@mail.gmail.com> <87k4z61gvj.fsf@qurzaw.linpro.no> Message-ID: <75cf5800911110400h32542eek30946320ae108d8e@mail.gmail.com> Hi Tollef, After increasing the connect_timeout and first_byte_timeout values the errors have decreased. I also tried to capture a transaction for the 503 error; the following is the relevant part of it: 230 VCL_call c recv lookup 230 VCL_call c hash hash 230 VCL_call c miss fetch 230 VCL_call c error deliver 230 Length c 465 230 VCL_call c deliver deliver 230 TxProtocol c HTTP/1.1 230 TxStatus c 503 230 TxResponse c Service Unavailable 230 TxHeader c Server: Varnish 230 TxHeader c Retry-After: 0 230 TxHeader c Content-Type: text/html; charset=utf-8 230 TxHeader c Content-Length: 465 230 TxHeader c Date: Wed, 11 Nov 2009 10:23:06 GMT 230 TxHeader c X-Varnish: 272187330 230 TxHeader c Age: 1 Thank you. -Paras On Thu, Oct 8, 2009 at 1:01 PM, Tollef Fog Heen wrote: > ]] Paras Fadte > > | Following is the output of varnishstat -1 . Also, do I have to restart > | varnish when I change the "connect_timeout" parameter ? > > No, no need to restart varnish. > > | backend_fail            37154         0.59 Backend connections failures > > As you can see here, you have quite a few backend connection failures. > Is your backend overloaded? > > -- > Tollef Fog Heen > Redpill Linpro -- Changing the game!
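For reference, the timeouts discussed in this thread can also be declared per backend in VCL rather than as global runtime parameters. A sketch in Varnish 2.x syntax follows; the host and port values are placeholders, and the numbers are illustrative only, not recommendations:

```vcl
# Sketch -- per-backend timeouts, Varnish 2.x VCL. Host/port are
# placeholders; tune the values to the backend's real response times.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
    .connect_timeout = 5s;        # time allowed to open the TCP connection
    .first_byte_timeout = 60s;    # time allowed until the first response byte
    .between_bytes_timeout = 30s; # max gap between response bytes
}
```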
> t: +47 21 54 41 73 > From ibeginhere at gmail.com Thu Nov 12 03:19:08 2009 From: ibeginhere at gmail.com (I I) Date: Thu, 12 Nov 2009 11:19:08 +0800 Subject: varnishd -C Message-ID: <9ff4dbc10911111919q1ee79f6et924ceccf2c072ef9@mail.gmail.com> On the official Varnish wiki, http://varnish.projects.linpro.no/wiki/VCLExampleSyslog , some C variable names are used in VCL. But when I type the command "varnishd -C", nothing is returned, and when I add that code to my VCL, I get an error like this: "Running C-compiler failed, exit 1 VCL compilation failed" What do I need to prepare for this? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tfheen at redpill-linpro.com Thu Nov 12 10:10:30 2009 From: tfheen at redpill-linpro.com (Tollef Fog Heen) Date: Thu, 12 Nov 2009 11:10:30 +0100 Subject: Varnish 503 Service unavailable error In-Reply-To: <75cf5800911110400h32542eek30946320ae108d8e@mail.gmail.com> (Paras Fadte's message of "Wed, 11 Nov 2009 17:30:08 +0530") References: <75cf5800908130119o5b10f41bn85e7d76fcb7ebc96@mail.gmail.com> <87fxbw5atk.fsf@qurzaw.linpro.no> <75cf5800908130243k3c0e7db6i2d01788a42ab352e@mail.gmail.com> <87k4z61gvj.fsf@qurzaw.linpro.no> <75cf5800911110400h32542eek30946320ae108d8e@mail.gmail.com> Message-ID: <877htwdpex.fsf@qurzaw.linpro.no> ]] Paras Fadte | After increasing the connect_timeout and first_byte_timeout values the | errors have decreased. I also tried to capture a transaction for the | 503 error; the following is the relevant part of it: | | 230 VCL_call c recv lookup | 230 VCL_call c hash hash | 230 VCL_call c miss fetch | 230 VCL_call c error deliver It fails to get the result from the backend. Upgrade to 2.0.5 and look at the FetchError tag, which will hopefully explain a bit more about what goes wrong. -- Tollef Fog Heen Redpill Linpro -- Changing the game! t: +47 21 54 41 73 From ibeginhere at gmail.com Fri Nov 13 07:25:05 2009 From: ibeginhere at gmail.com (I I) Date: Fri, 13 Nov 2009 15:54:05 +0800 Subject: varnishlog can't log every request Message-ID: <9ff4dbc10911122325k779efd2fp70026db160630ba9@mail.gmail.com> Hi all. Has anybody found that the varnishlog command can't log every request? Here is the problem: I set up Varnish as a proxy for a website, distributed over two servers; one runs Varnish and the other is the web server. I use the command varnishlog > varnish.log to log the hits and misses. Even though there are many hits, the Varnish server still connects to the web server, and when I capture the network traffic, the URLs requested from the Varnish server to the web server can't be found in varnish.log. I also captured the HTTP traffic on the client.
I find that some HTTP response headers come directly from the website server, with no signature from Varnish marking them, like this: (Status-Line):HTTP/1.1 200 OK Date:Fri, 13 Nov 2009 06:39:13 GMT Server:nginx/0.7.19 ETag:W/"28991-1245029784000" Last-Modified:Mon, 15 Jun 2009 01:36:24 GMT Content-Length:28991 Content-Type:image/jpeg Content-Language:zh-cn Keep-Alive:timeout=15 Connection:Keep-Alive Here is the normal marking from Varnish: (Status-Line):HTTP/1.1 200 varnish ETag:W/"4135-1196350806000" Last-Modified:Thu, 29 Nov 2007 15:40:06 GMT Content-Type:image/jpeg Content-Language:zh-cn Content-Length:4135 Date:Fri, 13 Nov 2009 06:58:19 GMT Age:1736 Connection:keep-alive X-Cache:HIT Via:varnish server:varnish Why can't I find these abnormal URL requests in the output of varnishlog, and what is the problem? -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.birdsong at gmail.com Fri Nov 13 08:10:01 2009 From: david.birdsong at gmail.com (David Birdsong) Date: Fri, 13 Nov 2009 00:10:01 -0800 Subject: forget an object in vcl_fetch Message-ID: Is it possible to make varnish completely forget an object in vcl_fetch? In thinking about a tiered setup, I'm trying to figure how I can capture the very tip of my traffic with a varnish running with abundant random read capacity and ethernet such that it never lru_nukes or expires an object, which I've noticed can cause random read performance to plummet on the devices housing the storage file. So say in a backend varnish I set a header when its obj.hits > 100000. The downstream varnish looks for this header; if it's not present, it sets obj.cacheable to false in vcl_fetch. For example, upstream (closer to backend) varnish: sub vcl_deliver { if (obj.hits > 100000 ) { set resp.http.X-Super-Hot-File = "1"; } } In the downstream (closer to client) varnish: sub vcl_fetch { if (!beresp.http.X-Super-Hot-File) { unset obj.cacheable; } } or maybe instead ...
sub vcl_fetch { if (!beresp.http.X-Super-Hot-File) { set obj.ttl = 0s; } } Would this work? Will this allow me to pass massive amounts of traffic through varnish without dirtying pages and in turn driving up iowait on slow devices that back the mmap'd storage file? From kristian at redpill-linpro.com Fri Nov 13 13:38:56 2009 From: kristian at redpill-linpro.com (Kristian Lyngstol) Date: Fri, 13 Nov 2009 14:38:56 +0100 Subject: Benchmarking and stress testing Varnish Message-ID: <20091113133832.GA4463@kjeks.kristian.int> As some of you might already know, I regularly run stress tests of Varnish; most of the time it's a matter of testing the same thing over and over to ensure that there aren't any huge surprises during the development of Varnish (we run nightly builds and stress tests of trunk - they need to be as predictable as possible to have any value). However, I also do occasional tests which involve trying to find the various breaking points of Varnish. Last time I did this, I used roughly 42 cpu cores spread over about 30 computers to generate traffic against a single Varnish server on our quad-core Xeon machine. The result thus far was that all of the clients ran far too close to 100% cpu usage, and Varnish was serving between 140 and 150 thousand requests per second. The reason I'm telling you this is because I'm looking for input on aspects that should be tested next time I do this, which will most likely be during Christmas (far easier to borrow machine power). So far on my list, I've got: 1. Test performance over some time period when pages are evicted more frequently. (ie: X thousand pages requested repeatedly, but random expiry time). 2. Test with fewer requests per session (this is somewhat hard because the clients tend to turn into the bottleneck). 3. Test raw hot-hit cache rate (more of what I did before - get a high number). 4. Test raw performance with a huge data set that's bound to be swapped out. 5.
Various tests of -sfile versus -smalloc and large datasets, combined with critbit/classic tests. 6. Find some reasonably optimal settings, then fidget around trying to find a worst-case scenario. 7. Test the effect of session lingering with really slow clients. .... One thing that should be noted is that the test server is limited to 1gbit, which means that for "raw req/s", we're basically forced to use tiny pages, or we just end up starving the network. The goal is to test theories regarding performance, stability and predictability. Basically find the breaking points, what's good and what's not, and what we have to care about and what we can silently ignore. As you can see, the list is getting long and this is off the top of my head, but any suggestions are welcome. -- Kristian Lyngstøl Redpill Linpro AS Tlf: +47 21544179 Mob: +47 99014497 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From stockrt at gmail.com Fri Nov 13 17:54:37 2009 From: stockrt at gmail.com (Rogerio Schneider) Date: Fri, 13 Nov 2009 15:54:37 -0200 Subject: Benchmarking and stress testing Varnish In-Reply-To: <20091113133832.GA4463@kjeks.kristian.int> References: <20091113133832.GA4463@kjeks.kristian.int> Message-ID: Kristian, we also test Varnish's behavior with slow clients, trying to find whether it runs out of file descriptors, for instance. There was a time when, once we reached 65k fds, an assert in shmlog.c made it bail out. I think it was fixed, but not ported to 2.0.5.
Regards, Rogerio Schneider On 13/11/2009, at 11:38, Kristian Lyngstol wrote: > As some of you might already know, I regular stress tests of > Varnish, most > of the time it's a matter of testing the same thing over and over to > ensure > that there aren't any huge surprises during the development of > Varnish (we > run nightly builds and stress tests of trunk - they need to be as > predictable as possible to have any value). > > However, I also do occasional tests which involve trying to find the > various breaking points of Varnish. Last time I did this, I used > roughly 42 > cpu cores spread over about 30 computers to generate traffic against a > single Varnish server on our quad xenon core machine. The result > thus far > was that all of the clients ran far too close to 100% cpu usage, and > Varnish was serving between 140 and 150 thousand requests per second. > > The reason I'm telling you this is because I'm looking for input on > aspects > that should be tested next time I do this, which will most likely be > during > Christmas (far easier to borrow machine power). So far on my list, > I've > got: > > 1. Test performance over some time period when pages are evicted more > frequently. (ie: X thousand pages requested repeatedly, but random > expiry > time). > > 2. Test with fewer requests per session (this is somewhat hard > because the > clients tend to turn into the bottleneck). > > 3. Test raw hot-hit cache rate (more of what I did before - get a high > number). > > 4. Test raw performance with a huge data set that's bound to be > swapped > out. > > 5. Various tests of -sfile versus -smalloc and large datasets, > combined > with critbit/classic tests. > > 6. Find some reasonably optimal settings, then fidget around trying > to find > a worst-case scenario. > > 7. Test the effect of session lingering with really slow clients. > > ....
> One thing that should be noted is that the test server is limited to > 1gbit, > which means that for "raw req/s", we're basically forced to use tiny > pages, > or we just end up starving the network. > > The goal is to test theories regarding performance, stability and > predictability. Basically find the breaking points, what's good and > what's > not, and what we have to care about and what we can silently ignore. > > As you can see, the list is getting long and this is off the top of my > head, but any suggestions are welcome. > > -- > Kristian Lyngstøl > Redpill Linpro AS > Tlf: +47 21544179 > Mob: +47 99014497 > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc From david.birdsong at gmail.com Fri Nov 13 18:12:21 2009 From: david.birdsong at gmail.com (David Birdsong) Date: Fri, 13 Nov 2009 10:12:21 -0800 Subject: forget an object in vcl_fetch In-Reply-To: References: Message-ID: On Fri, Nov 13, 2009 at 12:10 AM, David Birdsong wrote: > Is it possible to make varnish completely forget an object in > vcl_fetch? In thinking about a tiered setup, I'm trying to figure how > I can capture the very tip of my traffic with a varnish running with > abundant random read capacity and ethernet such that it never > lru_nukes or expires an object which I've noticed can cause random > read performance to plummet on the devices housing the storage file. > > So say in a backend varnish I set a header when it's obj.hits > > 100000. The downstream varnish looks for this header, if it's not > present it sets obj.cacheable to false in vcl_fetch. > > For example, upstream (closer to backend) varnish: > sub vcl_deliver { > if (obj.hits > 100000 ) { > set resp.http.X-Super-Hot-File = "1"; > } > > In the downstream (closer to client) varnish: > > sub vcl_fetch { > if (!beresp.X-Super-Hot-File) { >
unset obj.cacheable; > } > } OK, I tried this and was not allowed to modify this attribute. > > or maybe instead ... > sub vcl_fetch { > if (!beresp.X-Super-Hot-File) { > obj.ttl = 0s; > } > } This worked and kept Varnish from caching anything, but it didn't cut down on disk hits. I guess it's storing the object and something else comes through and expires it. Just out of curiosity I tried 'pass' instead, but that also caused disk hits. Is there any way to completely pass traffic through without Varnish storing it? > > Would this work? Will this allow me to pass massive amounts of > traffic through varnish without dirtying pages and in turn driving up > iowait on slow devices that back the mmap'd storage file? > From ibeginhere at gmail.com Mon Nov 16 02:41:19 2009 From: ibeginhere at gmail.com (I I) Date: Mon, 16 Nov 2009 10:41:19 +0800 Subject: varnishd -C In-Reply-To: <9032C62D-69A5-4E5D-86E3-9F1FDEA0AB77@slide.com> References: <9ff4dbc10911111919q1ee79f6et924ceccf2c072ef9@mail.gmail.com> <9032C62D-69A5-4E5D-86E3-9F1FDEA0AB77@slide.com> Message-ID: <9ff4dbc10911151841j7a10ef39lceaf5b806d3f132e@mail.gmail.com> So I can't log to syslog? I find that Varnish may be missing some URLs, and I want more detail about what Varnish does, from the log. Between my Varnish and the backend server, some requests come back from the backend server to the client with no sign in the HTTP response headers that Varnish handled them; there should be some mark added by Varnish, like X-Cache: "HIT" or "MISS", even though in the varnishlog output there is nothing about these URLs. 2009/11/12 Ken Brownfield > > varnishd -f /path/to/your/config.vcl -C > > This will compile your VCL into C and emit it to stdout. It will show > prototypes for all of the VRT interface accessible from VCL, the structs > representing your backend(s) and director(s), and the config itself.
The > wiki is a little misleading (and -C isn't otherwise documented AFAIK): it's > not a document per se, but it's extremely useful when writing inline C -- > hunting down VRT definitions in the code is no fun. > -- > Ken > > On Nov 11, 2009, at 7:19 PM, I I wrote: > > In the web of official varnish , > http://varnish.projects.linpro.no/wiki/VCLExampleSyslog > there is some C variable names used in VCL.but when I type the command > "varnishd -C",nothing return .and when I enter the website command to the > VCL,error return like this > "Running C-compiler failed, exit 1 > VCL compilation failed" > what I neet to prepare for this ? > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bedis9 at gmail.com Mon Nov 16 06:13:25 2009 From: bedis9 at gmail.com (Baptiste Assmann) Date: Mon, 16 Nov 2009 07:13:25 +0100 Subject: varnishd -C In-Reply-To: <9ff4dbc10911151841j7a10ef39lceaf5b806d3f132e@mail.gmail.com> References: <9ff4dbc10911111919q1ee79f6et924ceccf2c072ef9@mail.gmail.com> <9032C62D-69A5-4E5D-86E3-9F1FDEA0AB77@slide.com> <9ff4dbc10911151841j7a10ef39lceaf5b806d3f132e@mail.gmail.com> Message-ID: To log to syslog, you must use inline C. To see requests from Varnish to the origin, use varnishlog -b RxURL. You can use VCL to add an X-Cache header: # Called before a cached object is delivered to the client sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } remove resp.http.Via; deliver; } good luck On 16 Nov 2009, at 03:41, I I wrote: > so I can't "To log to syslog "? I find the varnish maybe miss to process > some url.I want to more detail about the varnish process by the log. > In my varnish and backend server ,some request back from the backend server > to client , from the HTTP response Head, there are no sign marked by the > varnish .there should be some options marked by the varnish ,like X-Cache > :"HIT" or "MISS".
> Even in the varnishlog output, there is nothing about these URLs.
>
> 2009/11/12 Ken Brownfield
>
>> [Ken's reply about varnishd -C, quoted in full above -- snip]

_______________________________________________
varnish-misc mailing list
varnish-misc at projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc

From ibeginhere at gmail.com Mon Nov 16 08:45:34 2009
From: ibeginhere at gmail.com (I I)
Date: Mon, 16 Nov 2009 16:45:34 +0800
Subject: varnishd -C
In-Reply-To: 
References: <9ff4dbc10911111919q1ee79f6et924ceccf2c072ef9@mail.gmail.com>
<9032C62D-69A5-4E5D-86E3-9F1FDEA0AB77@slide.com>
<9ff4dbc10911151841j7a10ef39lceaf5b806d3f132e@mail.gmail.com>
Message-ID: <9ff4dbc10911160045p3ff222ckdffffbaa3bc8ff49@mail.gmail.com>

Yeah, I had already set this. I still find some URLs not marked by
Varnish, and those URLs are requested from the backend server. I know
that pipe mode will not add any marker to the headers, but if the URLs
were processed via pipe, there would still be entries in the varnishlog.
Unfortunately, I can't find anything about these URLs in the varnishlog.

2009/11/16 Baptiste Assmann

> [Baptiste's reply, quoted in full above -- snip]

From debunkers at gmail.com Tue Nov 17 01:05:52 2009
From: debunkers at gmail.com (Debunk it)
Date: Mon, 16 Nov 2009 19:05:52 -0600
Subject: Does vbulletin support varnish?
Message-ID: <388e6260911161705k79929d4aybeb575bfb26ebdc2@mail.gmail.com>

Hi all,

Can someone confirm whether vbulletin supports varnish out of the box?
Does vbulletin know how to send specific purge requests to varnish on
updates? My concern is mainly about stale cache. I would prefer not to
have to send purge requests manually.

Any tips or help is greatly appreciated.

D

From ibeginhere at gmail.com Tue Nov 17 02:25:23 2009
From: ibeginhere at gmail.com (ll)
Date: Tue, 17 Nov 2009 10:25:23 +0800
Subject: vcl_timeout and fetch
Message-ID: <4B020993.9070108@gmail.com>

By the VCL definition, vcl_timeout can terminate with fetch or discard.
Can I define it like this in my VCL?

sub vcl_timeout {
    fetch;
}

But I find this in the default.vcl:

#sub vcl_timeout {
#    /* XXX: Do not redefine vcl_timeout{}, it is not yet supported */
#    return (discard);
#}

And indeed, when I tried to set the timeout action to fetch, it failed:
after the TTL, when I request the URL again, it is a MISS. So, is there
any way to make Varnish fetch the cached object and refresh it
automatically? I have a web server with many pages; I want Varnish to
fetch all the pages once and keep them for a long time, refreshing them
automatically after the TTL. Is there some way to accomplish this?

Thanks!
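[Editorial sketch, not part of the original thread: since vcl_timeout cannot be redefined, the usual approximation in Varnish 2.0-era VCL is a TTL plus a long grace period, so that requests arriving after the TTL can still be answered from the stale copy while Varnish fetches a fresh one, instead of the client paying for a full MISS. The 10m/1h values below are illustrative placeholders.]

```vcl
sub vcl_recv {
    # Accept objects up to 1 hour past their TTL while a fetch
    # for a fresh copy is in progress.
    set req.grace = 1h;
}

sub vcl_fetch {
    set obj.ttl   = 10m;   # normal freshness window
    set obj.grace = 1h;    # keep the stale copy around for graced delivery
}
```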
From coolbomb at gmail.com Tue Nov 17 10:31:52 2009
From: coolbomb at gmail.com (Daniel Rodriguez)
Date: Tue, 17 Nov 2009 11:31:52 +0100
Subject: Compressed and uncompressed cached object handling
Message-ID: <95ff419c0911170231j15a7bb19uca19314a6da9ed7a@mail.gmail.com>

Hi guys,

I'm having a problem with a varnish implementation that we are testing
to replace an ugly appliance. We were almost ready to place our server
in a more real environment (some of our production sites), but I found
out that there is something not working properly with the compression
handling in my varnishd (varnish-2.0.5 - Debian).

Varnish is returning the first cached object no matter whether I ask for
a clear object (no Accept-Encoding specified) or a gzip/deflate object.
If the cached object is a gzip object, it will return that even if I
later ask for a clear one.

According to what I have seen in the documentation, varnish should keep
both object versions (compressed and non-compressed) in the cache and
deliver the one that is asked for by the client.

Step 1

I ask for a non-compressed object (no Accept-Encoding specified). This
works "great":

GET -H "TE:" -sed "http://foo.bar/test/prueba.php"
200 OK
Cache-Control: max-age=20
Connection: close
Date: Mon, 16 Nov 2009 16:56:06 GMT
Via: 1.1 varnish
Age: 0
Server: Apache
Content-Length: 11013
Content-Type: text/html; charset=iso-8859-15
Last-Modified: Mon, 16 Nov 2009 16:56:06 GMT
Client-Date: Mon, 16 Nov 2009 16:56:06 GMT
Client-Peer: 10.10.10.10:80
Client-Response-Num: 1
X-Varnish: 1655545411

The request goes like this in the log:

   12 SessionOpen  c 10.20.20.20 57909 :80
   12 ReqStart     c 10.20.20.20 57909 1655545411
   12 RxRequest    c GET
   12 RxURL        c /test/prueba.php
   12 RxProtocol   c HTTP/1.1
   12 RxHeader     c Connection: TE, close
   12 RxHeader     c Host: foo.bar
   12 RxHeader     c TE:
   12 RxHeader     c User-Agent: lwp-request/5.827 libwww-perl/5.831
   12 VCL_call     c recv
   12 VCL_return   c lookup
   12 VCL_call     c hash
   12 VCL_return   c hash
   12 VCL_call     c miss
   12 VCL_return   c fetch
   14 BackendOpen  b default 10.10.10.10 33484 10.30.30.30 80
   12 Backend      c 14 default default
   14 TxRequest    b GET
   14 TxURL        b /test/prueba.php
   14 TxProtocol   b HTTP/1.1
   14 TxHeader     b Host: foo.bar
   14 TxHeader     b User-Agent: lwp-request/5.827 libwww-perl/5.831
   14 TxHeader     b X-Varnish: 1655545411
   14 TxHeader     b X-Forwarded-For: 10.20.20.20
    0 CLI          - Rd ping
    0 CLI          - Wr 0 200 PONG 1258390564 1.0
   14 RxProtocol   b HTTP/1.1
   14 RxStatus     b 200
   14 RxResponse   b OK
   14 RxHeader     b Date: Mon, 16 Nov 2009 16:56:01 GMT
   14 RxHeader     b Server: Apache
   14 RxHeader     b Cache-control: max-age=20
   14 RxHeader     b Last-Modified: Mon, 16 Nov 2009 16:56:06 GMT
   14 RxHeader     b Connection: close
   14 RxHeader     b Transfer-Encoding: chunked
   14 RxHeader     b Content-Type: text/html; charset=iso-8859-15
   12 ObjProtocol  c HTTP/1.1
   12 ObjStatus    c 200
   12 ObjResponse  c OK
   12 ObjHeader    c Date: Mon, 16 Nov 2009 16:56:01 GMT
   12 ObjHeader    c Server: Apache
   12 ObjHeader    c Cache-control: max-age=20
   12 ObjHeader    c Last-Modified: Mon, 16 Nov 2009 16:56:06 GMT
   12 ObjHeader    c Content-Type: text/html; charset=iso-8859-15
   14 BackendClose b default
   12 TTL          c 1655545411 RFC 20 1258390566 0 0 20 0
   12 VCL_call     c fetch
   12 VCL_return   c deliver
   12 Length       c 11013
   12 VCL_call     c deliver
   12 VCL_return   c deliver
   12 TxProtocol   c HTTP/1.1
   12 TxStatus     c 200
   12 TxResponse   c OK
   12 TxHeader     c Server: Apache
   12 TxHeader     c Cache-control: max-age=20
   12 TxHeader     c Last-Modified: Mon, 16 Nov 2009 16:56:06 GMT
   12 TxHeader     c Content-Type: text/html; charset=iso-8859-15
   12 TxHeader     c Content-Length: 11013
   12 TxHeader     c Date: Mon, 16 Nov 2009 16:56:06 GMT
   12 TxHeader     c X-Varnish: 1655545411
   12 TxHeader     c Age: 0
   12 TxHeader     c Via: 1.1 varnish
   12 TxHeader     c Connection: close
   12 ReqEnd       c 1655545411 1258390561.316438675 1258390566.327898026 0.000134945 5.010995150 0.000464201
   12 SessionClose c Connection: close
   12 StatSess     c 10.20.20.20 57909 5 1 1 0 0 1 282 11013

Step 2

The next request goes with an (Accept-Encoding: gzip), and returns me a
clear object :(

GET -H "Accept-encoding: gzip" -H "TE:" -sed "http://foo.bar/test/prueba.php"
200 OK
Cache-Control: max-age=20
Connection: close
Date: Mon, 16 Nov 2009 16:56:09 GMT
Via: 1.1 varnish
Age: 3
Server: Apache
Content-Length: 11013
Content-Type: text/html; charset=iso-8859-15
Last-Modified: Mon, 16 Nov 2009 16:56:06 GMT
Client-Date: Mon, 16 Nov 2009 16:56:08 GMT
Client-Peer: 10.10.10.10:80
Client-Response-Num: 1
X-Varnish: 1655545412 1655545411

   12 SessionOpen  c 10.20.20.20 57910 :80
   12 ReqStart     c 10.20.20.20 57910 1655545412
   12 RxRequest    c GET
   12 RxURL        c /test/prueba.php
   12 RxProtocol   c HTTP/1.1
   12 RxHeader     c Connection: TE, close
   12 RxHeader     c Accept-Encoding: gzip
   12 RxHeader     c Host: foo.bar
   12 RxHeader     c TE:
   12 RxHeader     c User-Agent: lwp-request/5.827 libwww-perl/5.831
   12 VCL_call     c recv
   12 VCL_return   c lookup
   12 VCL_call     c hash
   12 VCL_return   c hash
   12 Hit          c 1655545411
   12 VCL_call     c hit
   12 VCL_return   c deliver
   12 Length       c 11013
   12 VCL_call     c deliver
   12 VCL_return   c deliver
   12 TxProtocol   c HTTP/1.1
   12 TxStatus     c 200
   12 TxResponse   c OK
   12 TxHeader     c Server: Apache
   12 TxHeader     c Cache-control: max-age=20
   12 TxHeader     c Last-Modified: Mon, 16 Nov 2009 16:56:06 GMT
   12 TxHeader     c Content-Type: text/html; charset=iso-8859-15
   12 TxHeader     c Content-Length: 11013
   12 TxHeader     c Date: Mon, 16 Nov 2009 16:56:09 GMT
   12 TxHeader     c X-Varnish: 1655545412 1655545411
   12 TxHeader     c Age: 3
   12 TxHeader     c Via: 1.1 varnish
   12 TxHeader     c Connection: close
   12 ReqEnd       c 1655545412 1258390569.036545277 1258390569.036923647 0.000121355 0.000098705 0.000279665
   12 SessionClose c Connection: close
   12 StatSess     c 10.20.20.20 57910 0 1 1 0 0 0 293 11013

My config:

backend default {
    .host = "10.30.30.30";
    .port = "80";
}

acl purge {
    "localhost";
}

sub vcl_recv {
    set req.grace = 2m;
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
            # These formats are already compressed; recompressing makes no sense
            remove req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # Unknown algorithm
            remove req.http.Accept-Encoding;
        }
    }
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        purge("req.url == " req.url);
    }
    if (req.http.Authorization) {
        return (pass);
    }
    return (lookup);
}

sub vcl_fetch {
    set obj.grace = 2m;
    if (obj.http.Pragma ~ "no-cache" || obj.http.Cache-Control ~ "no-cache" ||
        obj.http.Cache-Control ~ "private" || obj.http.Cache-Control ~ "max-age=0" ||
        obj.http.Cache-Control ~ "must-revalidate") {
        pass;
    }
    if (!obj.cacheable) {
        return (pass);
    }
    if (obj.http.Set-Cookie) {
        return (pass);
    }
    set obj.prefetch = -3s;
    if (req.http.Authorization && !obj.http.Cache-Control ~ "public") {
        pass;
    }
    return (deliver);
}

Am I doing something wrong?...

Best Regards,

From michael at dynamine.net Tue Nov 17 10:42:36 2009
From: michael at dynamine.net (Michael S. Fischer)
Date: Tue, 17 Nov 2009 16:12:36 +0530
Subject: Compressed and uncompressed cached object handling
In-Reply-To: <95ff419c0911170231j15a7bb19uca19314a6da9ed7a@mail.gmail.com>
References: <95ff419c0911170231j15a7bb19uca19314a6da9ed7a@mail.gmail.com>
Message-ID: 

Are you returning a "Vary: Accept-Encoding" in your origin server's
response headers?
--Michael

On Nov 17, 2009, at 4:01 PM, Daniel Rodriguez wrote:

> Hi guys,
>
> I'm having a problem with a varnish implementation that we are testing
> to replace an ugly appliance.
>
> [rest of the problem report, with logs and config, quoted in full
> above -- snip]
>
> Best Regards,

From coolbomb at gmail.com Tue Nov 17 11:24:03 2009
From: coolbomb at gmail.com (Daniel Rodriguez)
Date: Tue, 17 Nov 2009 12:24:03 +0100
Subject: Compressed and uncompressed cached object handling
In-Reply-To: 
References: <95ff419c0911170231j15a7bb19uca19314a6da9ed7a@mail.gmail.com>
Message-ID: <95ff419c0911170324n2e4384e8p975da17a5806d6b2@mail.gmail.com>

Hi Michael,

That was the problem: the server was not returning "Vary:
Accept-Encoding"; I hadn't noticed that detail in the headers sent by
the server. It's working perfectly now.

Thank you very much.

Regards,

On Tue, Nov 17, 2009 at 11:42 AM, Michael S. Fischer wrote:
> Are you returning a "Vary: Accept-Encoding" in your origin server's
> response headers?
>
> --Michael
>
> On Nov 17, 2009, at 4:01 PM, Daniel Rodriguez wrote:
>
>> [original problem report quoted in full above -- snip]

From eric.a.labelle at gmail.com Tue Nov 17 16:21:32 2009
From: eric.a.labelle at gmail.com (Fu Kite (Eric Labelle))
Date: Tue, 17 Nov 2009 11:21:32 -0500
Subject: Varnish/Plone css not found
Message-ID: 

Hi,

I'm a total newb to varnish, but here is my problem: when I connect to
my Plone site directly there is no problem; however, the moment I use
varnish, none of the CSS loads and all that loads is the HTML. Pressing
Plone's links to validate CSS reports that none of the CSS files are
found when going through varnish.

Here is my vcl config:

#
# This is an example VCL configuration file for varnish, meant for the
# Plone CMS running within Zope. It defines a "default" backend for
# serving static content from a normal web server, and a "zope"
# backend for requests to the Zope CMS
#
# See the vcl(7) man page for details on VCL syntax and semantics.
#
# $Id: zope-plone.vcl 3300 2008-10-15 09:52:15Z tfheen $
#

# Default backend definition. Set this to point to your content
# server. Default backend is the Zope CMS
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

acl purge {
    "localhost";
}

sub vcl_recv {
    # Normalize host headers, and do rewriting for the zope sites.
    # Reject requests for unknown hosts
    if (req.http.host ~ "(www.)?parctechno.mediacommune.org") {
        set req.http.host = "parctechno.mediacommune.org";
        set req.url = "/VirtualHostBase/http/parctechno.mediacommune.org/2/VirtualHostRoot/" req.url;
    } else {
        error 404 "Unknown virtual host.";
    }

    /* Do not cache if request is not GET or HEAD */
    if (req.request != "GET" && req.request != "HEAD") {
        /* Forward to 'lookup' if request is an authorized PURGE request */
        if (req.request == "PURGE") {
            if (!client.ip ~ purge) {
                error 405 "Not allowed.";
            }
            lookup;
        }
        pipe;
    }

    /* Do not cache if request contains an Expect header */
    if (req.http.Expect) {
        pipe;
    }

    /* Varnish doesn't do INM requests, so pass them through */
    if (req.http.If-None-Match) {
        pass;
    }

    /* Always cache images and multimedia */
    if (req.url ~ "\.(jpg|jpeg|gif|png|tiff|tif|svg|swf|ico|mp3|mp4|m4a|ogg|mov|avi|wmv)$") {
        lookup;
    }

    /* Always cache CSS and javascript */
    if (req.url ~ "\.(css|js)$") {
        lookup;
    }

    /* Always cache static files */
    if (req.url ~ "\.(pdf|xls|vsd|doc|ppt|pps|sxw|zip|gz|bz2|tgz|tar|rar|odc|odb|odf|odg|odi|odp|ods|odt|sxc|sxd|sxi|dmg|torrent|deb|msi|iso|rpm)$") {
        lookup;
    }

    /* Do not cache when authenticated via HTTP Basic or Digest Authentication */
    if (req.http.Authenticate || req.http.Authorization) {
        pipe;
    }

    /* Do not cache when authenticated via "__ac" cookies */
    if (req.http.Cookie && req.http.Cookie ~ "__ac_(name|password|persistent)=") {
        pipe;
    }

    /* Do not cache when authenticated via "_ZopeId" cookies */
    if (req.http.Cookie && req.http.Cookie ~ "_ZopeId=") {
        pipe;
    }

    lookup;
}

sub vcl_hit {
    if (req.request == "PURGE") {
        set obj.ttl = 0s;
        error 200 "Purged";
    }
}

sub vcl_miss {
    /* Varnish doesn't do IMS to the backend, so if not in cache just pass it through */
    if (req.http.If-Modified-Since) {
        pass;
    }
    if (req.request == "PURGE") {
        error 404 "Not in cache";
    }
}

Like I said, the pages seem to load fine; only the CSS is missing.
:S Sorry if it's something really obvious to you guys but i've been butting my head up against this since i installed varnish last night and i can't seem to find any info googling it :( -- Eric Labelle (Dubian) ________________________________ Dubearth Collective - www.dubearth.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From scaunter at topscms.com Tue Nov 17 20:59:30 2009 From: scaunter at topscms.com (Caunter, Stefan) Date: Tue, 17 Nov 2009 15:59:30 -0500 Subject: Does vbulletin support varnish? In-Reply-To: <388e6260911161705k79929d4aybeb575bfb26ebdc2@mail.gmail.com> References: <388e6260911161705k79929d4aybeb575bfb26ebdc2@mail.gmail.com> Message-ID: <064FF286FD17EC418BFB74629578BED1152526F1@tmg-mail4.torstar.net> Out of the box vbulletin is php and runs best with fastcgi. You'll get the most value out of serving images on a separate domain that varnish answers. The cookies make it difficult. Best running example of a big vbulletin instance is http://hackint0sh.org/ which is nginx and fastcgi, no apache. Stefan From: varnish-misc-bounces at projects.linpro.no [mailto:varnish-misc-bounces at projects.linpro.no] On Behalf Of Debunk it Sent: November-16-09 8:06 PM To: varnish-misc at projects.linpro.no Subject: Does vbulletin support varnish? Hi all, Can someone confirm whether vbulletin supports varnish out of the box? Does vbulletin know how to send specific purge requests to varnish on updates? My concern is mainly about stale cache. I would prefer not to have to send purge requests manually. Any tips or help is greatly appreciated. D -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreas.haase at evolver.de Wed Nov 18 08:52:22 2009 From: andreas.haase at evolver.de (Andreas Haase) Date: Wed, 18 Nov 2009 09:52:22 +0100 Subject: Strange behaviour Message-ID: <1258534342.3012.5.camel@andreas-haase-linux.evolver.de> Hello, we are using varnish v2.0.4 in a setup with 4 backend servers. 
The varnish is configured to use the backends with loadbalancing in a round-robin fashion and all worked well until some days ago. Until we removed 2 of the backends we had a cache ratio (total requests compared to total fetches) of 80 percent. After removing 2 of the backends the ratio is only 57 percent now. The only change to the varnish configuration has been to remove the backends that are not operational any more. The data to be cached has not been changed. And now I'm wondering: what is the reason for that behaviour? I could imagine this if there had been a change of content to cache or something else. Anyone here who could give me a possible explanation for that? Thanks in advance. -- Mit freundlichen Gruessen Andreas Haase Administration und Technik evolver services GmbH Fon +49 / (0)3 71 / 4 00 03 727 Fax +49 / (0)3 71 / 4 00 03 79 E-Mail andreas.haase at evolver.de Web http://www.evolver.de Sitz der Gesellschaft: Chemnitz Handelsregister: Amtsgericht Chemnitz, HRB 22649 Geschaeftsfuehrer: Dirk Neubauer From phk at phk.freebsd.dk Wed Nov 18 12:43:40 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 18 Nov 2009 12:43:40 +0000 Subject: Worker thread stack size Message-ID: <94258.1258548220@critter.freebsd.dk> I have added a parameter to set the worker thread stack size. I suspect we can get away with something as low as maybe 64k + $sess_workspace, but I have absolutely no data to confirm or deny this claim. If a couple of you could spend a few minutes to examine actual stack sizes and report back, that would be nice. The number I am interested in is the number of mapped and modified pages in the worker-thread stacks. On FreeBSD, mincore(2) could report this, but on Linux mincore(2) only reports mapped vs. unmapped pages, which may or may not be enough. In either case, it would require some hackish code in Varnish. A better way is to ask your system's VM system, for instance by looking at /proc/$pid/map.
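On Linux, the equivalent bookkeeping can be done from /proc/$pid/maps with a short script. This is only a sketch: Linux does not label thread stacks, so the filter below (anonymous, read-write mappings) is an assumption and will also match things like heap arenas; the stack mappings stand out because they all share one size.

```python
from collections import Counter

def mapping_sizes_kib(maps_lines):
    """Sizes (KiB) of anonymous rw- mappings from /proc/<pid>/maps lines."""
    sizes = []
    for line in maps_lines:
        fields = line.split()
        if len(fields) < 5:
            continue
        addr, perms, _offset, _dev, inode = fields[:5]
        has_path = len(fields) > 5
        # Candidate thread stacks: anonymous (inode 0, no path), read-write.
        if perms.startswith("rw") and inode == "0" and not has_path:
            start, end = (int(x, 16) for x in addr.split("-"))
            sizes.append((end - start) // 1024)
    return sizes

# Two 128 KiB anonymous rw- mappings and one file-backed text mapping:
SAMPLE_MAPS = [
    "7ffffddd0000-7ffffddf0000 rw-p 00000000 00:00 0",
    "7ffffdfd1000-7ffffdff1000 rw-p 00000000 00:00 0",
    "00400000-00520000 r-xp 00000000 08:01 123456 /usr/sbin/varnishd",
]

print(Counter(mapping_sizes_kib(SAMPLE_MAPS)))  # Counter({128: 2})
```

In real use you would feed it `open("/proc/<pid>/maps")` for your varnishd child process and compare the dominant size in the Counter against your stack-size parameter.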
On a 64bit FreeBSD system, the entries you are looking for look like this: 0x7ffffddd0000 0x7ffffddf0000 3 0 0xffffff003d87e0d8 rw- 1 0 0x3100 NCOW NNC default - CH 488 0x7ffffdfd1000 0x7ffffdff1000 3 0 0xffffff0028845d80 rw- 1 0 0x3100 NCOW NNC default - CH 488 0x7ffffe1d2000 0x7ffffe1f2000 3 0 0xffffff00635eea20 rw- 1 0 0x3100 NCOW NNC default - CH 488 0x7ffffe3d3000 0x7ffffe3f3000 3 0 0xffffff0095d57870 rw- 1 0 0x3100 NCOW NNC default - CH 488 0x7ffffe5d4000 0x7ffffe5f4000 3 0 0xffffff00630ec0d8 rw- 1 0 0x3100 NCOW NNC default - CH 488 And the number I need is the difference between the first two columns for all your worker threads (min, max, average also accepted), and the value of your sess_workspace parameter. In this case you would find: 0x7ffffddf0000 - 0x7ffffddd0000 = 128K 0x7ffffdff1000 - 0x7ffffdfd1000 = 128K ... Poul-Henning In message <20091118123439.6A18C38D0D at projects.linpro.no>, phk at projects.linpro. no writes: >Author: phk >Date: 2009-11-18 13:34:39 +0100 (Wed, 18 Nov 2009) >New Revision: 4352 > >Modified: > trunk/varnish-cache/bin/varnishd/cache_pool.c > trunk/varnish-cache/bin/varnishd/heritage.h > trunk/varnish-cache/bin/varnishd/mgt_pool.c >Log: >Add a parameter to set the workerthread stacksize. > >On 32 bit systems, it may be necessary to tweak this down to get high >numbers of worker threads squeezed into the address-space. > >I have no idea how much stack-space a worker thread normally uses, so >no guidance is given, and we default to the system default. > >Fixes #572 -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From itlj at gyldendal.dk Wed Nov 18 12:54:47 2009 From: itlj at gyldendal.dk (=?iso-8859-1?Q?Lars_J=F8rgensen?=) Date: Wed, 18 Nov 2009 13:54:47 +0100 Subject: Cache utilization?
Message-ID: Hi, Obvious question but I can't find the answer anywhere: How do I know how much of my cache is utilized, i.e. is my cache file big enough? -- Lars From phk at phk.freebsd.dk Wed Nov 18 13:02:32 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 18 Nov 2009 13:02:32 +0000 Subject: Cache utilization? In-Reply-To: Your message of "Wed, 18 Nov 2009 13:54:47 +0100." Message-ID: <7014.1258549352@critter.freebsd.dk> In message , =?iso-8859-1?Q? Lars_J=F8rgensen?= writes: >Hi, > >Obvious question but I can't find the answer anywhere: How do I >know how much of my cache is utilized, i.e. is my cache file big >enough? You can see if it is too small, because you will get LRU kill activity in varnishstat. There is not really a good way to see if your cache is too big, and apart from diskspace, there is no disadvantage to that. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From itlj at gyldendal.dk Wed Nov 18 13:05:37 2009 From: itlj at gyldendal.dk (=?iso-8859-1?Q?Lars_J=F8rgensen?=) Date: Wed, 18 Nov 2009 14:05:37 +0100 Subject: Expires and Cache-Control stripped by varnish? Message-ID: <7BAE51A4-0C33-4A9E-976A-8EC61E9BDEC4@gyldendal.dk> Hi, Another one that I'm trying to work out at the moment. I have enabled mod_expires in Apache and set a 24 hour expiration on css.
When I request a page directly from the backend, a CSS object looks like this: Date: Wed, 18 Nov 2009 12:57:50 GMT Server: Apache/2.2.3 (Debian) PHP/5.2.0-8+etch13 Connection: Keep-Alive Keep-Alive: timeout=15 Etag: "da8e2c-e663-6c305b40" Expires: Thu, 19 Nov 2009 12:57:50 GMT Cache-Control: max-age=86400 The same object through Varnish: Date: Wed, 18 Nov 2009 12:56:24 GMT Via: 1.1 varnish X-Varnish: 71655288 Last-Modified: Mon, 22 Jun 2009 10:34:45 GMT Connection: keep-alive I do get "304 Not Modified" on the object, but shouldn't I get the Expires and Cache-Control headers too? -- Lars From itlj at gyldendal.dk Wed Nov 18 13:22:02 2009 From: itlj at gyldendal.dk (=?iso-8859-1?Q?Lars_J=F8rgensen?=) Date: Wed, 18 Nov 2009 14:22:02 +0100 Subject: Fwd: Cache utilization? References: <5790F2B2-F932-45B4-9975-0BFFAA97D336@gyldendal.dk> Message-ID: <2AF2DCEE-6443-41C2-8BE4-C338203F2C20@gyldendal.dk> Sorry, the default reply-option set to go to the original author got me. Here's my reply for the benefit of the list. -- Lars > Fra: Lars J?rgensen > Dato: 18. nov 2009 14.08.07 CET > Til: Poul-Henning Kamp > Emne: Vedr.: Cache utilization? > > Den 18/11/2009 kl. 14.02 skrev Poul-Henning Kamp: >>> Hi, >>> >>> Obvious question but I can't find the answer anywhere: How do I >>> know how much of my cache is utilized, i.e. is my cache file big >>> enough? >> >> You can see if it is too small, because you will get LRU kill >> activity in varnishstat. > > I have: > > n_lru_nuked 0 . N LRU nuked objects > n_lru_saved 0 . N LRU saved objects > n_lru_moved 3692590 . N LRU moved objects > > I guess what you call "kill activity" is the same as "nuked objects"? > >> There is not really a good way to see if your cache is to big, and >> apart from diskspace, there is no disadvantage to that. > > I agree, I'm only worrying if my cache is too small. 
> > > -- > Lars From phk at phk.freebsd.dk Wed Nov 18 13:29:19 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 18 Nov 2009 13:29:19 +0000 Subject: Fwd: Cache utilization? In-Reply-To: Your message of "Wed, 18 Nov 2009 14:22:02 +0100." <2AF2DCEE-6443-41C2-8BE4-C338203F2C20@gyldendal.dk> Message-ID: <19578.1258550959@critter.freebsd.dk> In message <2AF2DCEE-6443-41C2-8BE4-C338203F2C20 at gyldendal.dk>, =?iso-8859-1?Q? Lars_J=F8rgensen?= writes: >> I have: >> >> n_lru_nuked 0 . N LRU nuked objects >> n_lru_saved 0 . N LRU saved objects >> n_lru_moved 3692590 . N LRU moved objects >> >> I guess what you call "kill activity" is the same as "nuked objects"? Correct, you seem to have plenty of cache space. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From anders at fupp.net Wed Nov 18 13:52:39 2009 From: anders at fupp.net (Anders Nordby) Date: Wed, 18 Nov 2009 14:52:39 +0100 Subject: Saint mode Message-ID: <20091118135238.GA21288@fupp.net> Hi, We spoke briefly about this in the Varnish meeting. Unfortunately I can not remember the conclusion(s), so I ask again: 1) How to check in vcl_recv if a request was restarted by saint mode? You mentioned something about checking TTL? Does it matter if that request is handled with pass or lookup? I want to avoid mixing up restarts due to saint mode and restarts due to what should have been normal passes in vcl_fetch (IMO) but which are restarts to handle it in vcl_recv to avoid "hit for pass" problems. 2) How do you use saint mode when the backend fails to respond, covering everything from no response/connection to the backend just resetting the connection? As far as my syslogging shows, connection failures are addressed in vcl_error and easy/possible to catch in vcl_fetch? Cheers from Gyoda, Saitama, Japan. -- Anders.
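A sketch of how the restart check Anders asks about might look in VCL. This assumes a trunk build where saint mode is available; `req.restarts` and `beresp.saintmode` are assumptions about the in-development syntax discussed at the meeting, not confirmed 2.0.x features, so check your build:

```vcl
# Sketch only -- variable names are assumptions about trunk saint mode.
sub vcl_recv {
    if (req.restarts > 0) {
        # This request was restarted at least once (saint mode or an
        # explicit restart), so it can be treated specially here.
        set req.http.X-Restarted = "true";
    }
}

sub vcl_fetch {
    if (beresp.status >= 500) {
        # Mark the object bad on this backend for 20s and restart the
        # request, letting the director pick another backend.
        set beresp.saintmode = 20s;
        restart;
    }
}
```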
From anders at fupp.net Wed Nov 18 13:55:27 2009 From: anders at fupp.net (Anders Nordby) Date: Wed, 18 Nov 2009 14:55:27 +0100 Subject: Saint mode In-Reply-To: <20091118135238.GA21288@fupp.net> References: <20091118135238.GA21288@fupp.net> Message-ID: <20091118135527.GA22861@fupp.net> Hi, On Wed, Nov 18, 2009 at 02:52:39PM +0100, Anders Nordby wrote: > 2) How do you use saint mode when the backend fails to respond, covering > everything from no response/connection to the backend just resetting the > connection? As far as my syslogging shows, connection failures are addressed > in vcl_error and easy/possible to catch in vcl_fetch? I meant NOT easy/possible to catch in vcl_fetch. Jetlag owns me. :/ Regards, -- Anders. From mail at danielbruessler.de Wed Nov 18 15:25:33 2009 From: mail at danielbruessler.de (Daniel Bruessler) Date: Wed, 18 Nov 2009 16:25:33 +0100 Subject: too much requests for varnish? Message-ID: <4B0411ED.5020409@danielbruessler.de> Hi, we're using varnish for a newspaper-portal and see in the logfile that about 1% of the requests are NOT done by varnish. Those requests don't get the special varnish-http-header like the "Age:" info. Is there a limit? Did any of you have that problem already? (we're using varnish 2.0.4 with a 1GB RAM-limit, about 3 single requests a second) Cheers! Daniel :: Daniel Bruessler - Emilienstr. 10 - 90489 Nuernberg
Check if these requests are processed by "pipe" or "pass". Pay particular attention to "pipe" as all subsequent requests on the same TCP connection gets routed directly to the backend. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From l at lrowe.co.uk Wed Nov 18 17:04:36 2009 From: l at lrowe.co.uk (Laurence Rowe) Date: Wed, 18 Nov 2009 17:04:36 +0000 Subject: Varnish/Plone css not found In-Reply-To: References: Message-ID: 2009/11/17 Fu Kite (Eric Labelle) : > Hi > > I'm a total newb to varnish but here is my problem: > > When I connect to my plone site directly there is no problem however the > moment i use varnish none of the css loads and all that loads is the html... > pressing on plone's links to validate css returns that none of the css files > are found when going through varnish. What urls do the css files linked from the page have? Laurence From l at lrowe.co.uk Wed Nov 18 17:15:59 2009 From: l at lrowe.co.uk (Laurence Rowe) Date: Wed, 18 Nov 2009 17:15:59 +0000 Subject: Varnish/Plone css not found In-Reply-To: References: Message-ID: What are the response headers you see in Firebug? varnishlog output is also helpful. Laurence 2009/11/18 Fu Kite (Eric Labelle) : > Hi lawrence > > here are the import statements from the generated plone source: > > > > > > > > I'm really new to both plone and varnish so if it's something really obvious > I apologize ahead of time :P > > > On Wed, Nov 18, 2009 at 12:04 PM, Laurence Rowe wrote: >> >> 2009/11/17 Fu Kite (Eric Labelle) : >> > Hi >> > >> > I'm a total newb to varnish but here is my problem: >> > >> > When I connect to my plone site directly there is no problem however the >> > moment i use varnish none of the css loads and all that loads is the >> > html... 
>> > pressing on plone's links to validate css returns that none of the css >> > files >> > are found when going through varnish. >> >> What urls do the css files linked from the page have? >> >> Laurence >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at projects.linpro.no >> http://projects.linpro.no/mailman/listinfo/varnish-misc > > > > -- > Eric Labelle > (Dubian) > ________________________________ > Dubearth Collective - www.dubearth.com > > > From l at lrowe.co.uk Wed Nov 18 17:42:19 2009 From: l at lrowe.co.uk (Laurence Rowe) Date: Wed, 18 Nov 2009 17:42:19 +0000 Subject: Varnish/Plone css not found In-Reply-To: References: Message-ID: OK, from this I can see that you requested the url: http://parctechno.mediacommune.org:6081/ but have set the Zope virtual hosting to be http://parctechno.mediacommune.org/ (the port is missing). This means that the css is being requested on a different server (your apache instance). Change the vcl to say: set req.url = "/VirtualHostBase/http/parctechno.mediacommune.org:6081/2/VirtualHostRoot/" req.url; run Varnish on port 80 or set Apache to proxy through to Varnish. 
Laurence 2009/11/18 Fu Kite (Eric Labelle) > > GET base-cachekey5861.css: > > Response Headers > Date: Wed, 18 Nov 2009 17:20:41 GMT > Server: Apache/2.2.3 (CentOS) > Content-Length: 336 > Connection: close > Content-Type: text/html; charset=iso-8859-1 > Request Headers > Host: parctechno.mediacommune.org > User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 GTB6 > Accept: text/css,*/*;q=0.1 > Accept-Language: en-us,en;q=0.5 > Accept-Encoding: gzip,deflate > Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > Keep-Alive: 300 > Connection: keep-alive > Referer: http://parctechno.mediacommune.org:6081/ > Cache-Control: max-age=0 > GET Plonecustom-cachekey5221.css : > > > Date: Wed, 18 Nov 2009 17:20:41 GMT > Server: Apache/2.2.3 (CentOS) > Content-Length: 343 > Connection: close > Content-Type: text/html; charset=iso-8859-1 > Request Headers > Host: parctechno.mediacommune.org > User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 GTB6 > Accept: text/css,*/*;q=0.1 > Accept-Language: en-us,en;q=0.5 > Accept-Encoding: gzip,deflate > Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 > Keep-Alive: 300 > Connection: keep-alive > Referer: http://parctechno.mediacommune.org:6081/ > Cache-Control: max-age=0 > > I've attached what I figured would be the most relevant info for you from the varnish log as a txt file. > > Thanks! > > > > On Wed, Nov 18, 2009 at 12:15 PM, Laurence Rowe wrote: >> >> What are the response headers you see in Firebug? varnishlog output is >> also helpful.
>> >> Laurence >> >> 2009/11/18 Fu Kite (Eric Labelle) : >> > Hi lawrence >> > >> > here are the import statements from the generated plone source: >> > >> > >> > >> > >> > >> > >> > >> > I'm really new to both plone and varnish so if it's something really obvious >> > I apologize ahead of time :P >> > >> > >> > On Wed, Nov 18, 2009 at 12:04 PM, Laurence Rowe wrote: >> >> >> >> 2009/11/17 Fu Kite (Eric Labelle) : >> >> > Hi >> >> > >> >> > I'm a total newb to varnish but here is my problem: >> >> > >> >> > When I connect to my plone site directly there is no problem however the >> >> > moment i use varnish none of the css loads and all that loads is the >> >> > html... >> >> > pressing on plone's links to validate css returns that none of the css >> >> > files >> >> > are found when going through varnish. >> >> >> >> What urls do the css files linked from the page have? >> >> >> >> Laurence >> >> _______________________________________________ >> >> varnish-misc mailing list >> >> varnish-misc at projects.linpro.no >> >> http://projects.linpro.no/mailman/listinfo/varnish-misc >> > >> > >> > >> > -- >> > Eric Labelle >> > (Dubian) >> > ________________________________ >> > Dubearth Collective - www.dubearth.com >> > >> > >> > >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at projects.linpro.no >> http://projects.linpro.no/mailman/listinfo/varnish-misc > > > > -- > Eric Labelle > (Dubian) > ________________________________ > Dubearth Collective - www.dubearth.com > > From kb+varnish at slide.com Thu Nov 19 02:08:07 2009 From: kb+varnish at slide.com (Ken Brownfield) Date: Wed, 18 Nov 2009 18:08:07 -0800 Subject: Worker thread stack size In-Reply-To: <94258.1258548220@critter.freebsd.dk> References: <94258.1258548220@critter.freebsd.dk> Message-ID: I've attached a zip with /proc/PID/maps output as well as the "pmap -x PID" output for a 2.0.5 process that's been running in production for about four days. 
It's similarly patched for worker /and/ backend thread stacksizing, and I'm specifying thread_pool_stacksize=256K. The stack for other threads will be the system default (8MB on my systems). In my testing on x86_64, I wasn't able to get below 256K using "ulimit -s" for the entire process, and I wasn't able to get below a 128K stacksize just applied to the worker/backend threads. 256K worker/backend stacksize has had no failures for me. All with default 16k sess_workspace. No i686 experience, sorry. Given the sometimes high ratio of backend threads to worker threads, I thought that controlling the backend thread stack size was also a good idea. I'm curious why a similar tweak to cache_backend_poll.c wasn't made? Thx, -- Ken -------------- next part -------------- A non-text attachment was scrubbed... Name: varnish_maps.zip Type: application/zip Size: 113733 bytes Desc: not available URL: -------------- next part -------------- On Nov 18, 2009, at 4:43 AM, Poul-Henning Kamp wrote: > > I have added a parameter to set the worker thread stack size. > > I suspect we can get away with something as low as maybe 64k + > $sess_workspace, but I have absolutely no data to confirm or > deny this claim. > > If a couple of you could spend a few minutes to examine actual > stack sizes and report back, that would be nice. > > The number I am interested in, is the number of mapped and > modified pages in the worker-thread stacks. > > On FreeBSD, the mincore(2) could report this, but on Linix mincore(2) > only reports mapped vs. unmapped pages, which may or may not be > enough. In either case, it would require some hackish code in Varnish. > > A better way is to ask your systems VM system, for instance by > looking at /proc/$pid/map. 
> > On a 64bit FreeBSD system, the entries you are looking for look like > this: > 0x7ffffddd0000 0x7ffffddf0000 3 0 0xffffff003d87e0d8 rw- 1 0 0x3100 NCOW NNC default - CH 488 > 0x7ffffdfd1000 0x7ffffdff1000 3 0 0xffffff0028845d80 rw- 1 0 0x3100 NCOW NNC default - CH 488 > 0x7ffffe1d2000 0x7ffffe1f2000 3 0 0xffffff00635eea20 rw- 1 0 0x3100 NCOW NNC default - CH 488 > 0x7ffffe3d3000 0x7ffffe3f3000 3 0 0xffffff0095d57870 rw- 1 0 0x3100 NCOW NNC default - CH 488 > 0x7ffffe5d4000 0x7ffffe5f4000 3 0 0xffffff00630ec0d8 rw- 1 0 0x3100 NCOW NNC default - CH 488 > > And the number I need is the difference between the first two colums > for all your worker threads, (min, max, average accepted also ) > and the value of your sess_workspace parameter. > > In the case you would find: > 0x7ffffddf0000 - 0x7ffffddd0000 = 128K > 0x7ffffdff1000 - 0x7ffffdfd1000 = 128K > ... > > Poul-Henning > > > In message <20091118123439.6A18C38D0D at projects.linpro.no>, phk at projects.linpro. > no writes: >> Author: phk >> Date: 2009-11-18 13:34:39 +0100 (Wed, 18 Nov 2009) >> New Revision: 4352 >> >> Modified: >> trunk/varnish-cache/bin/varnishd/cache_pool.c >> trunk/varnish-cache/bin/varnishd/heritage.h >> trunk/varnish-cache/bin/varnishd/mgt_pool.c >> Log: >> Add a parameter to set the workerthread stacksize. >> >> On 32 bit systems, it may be necessary to tweak this down to get high >> numbers of worker threads squeezed into the address-space. >> >> I have no idea how much stack-space a worker thread normally uses, so >> no guidance is given, and we default to the system default. >> >> Fixes #572 > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. 
> _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc From kb+varnish at slide.com Thu Nov 19 02:10:04 2009 From: kb+varnish at slide.com (Ken Brownfield) Date: Wed, 18 Nov 2009 18:10:04 -0800 Subject: Expires and Cache-Control stripped by varnish? In-Reply-To: <7BAE51A4-0C33-4A9E-976A-8EC61E9BDEC4@gyldendal.dk> References: <7BAE51A4-0C33-4A9E-976A-8EC61E9BDEC4@gyldendal.dk> Message-ID: <5A764B5A-C575-473D-964E-2F15B5A10195@slide.com> I believe you should upgrade to 2.0.5 (or scan Varnish ticket #529 for a patch) which retains this and other headers in a 304 response. -- Ken On Nov 18, 2009, at 5:05 AM, Lars J?rgensen wrote: > Hi, > > Another one that I'm trying to work out at the moment. I have enabled mod_expires in Apache and set a 24 hour expiration on css. When I request a page directly from the backend, a CSS object looks like this: > > Date: Wed, 18 Nov 2009 12:57:50 GMT > Server: Apache/2.2.3 (Debian) PHP/5.2.0-8+etch13 > Connection: Keep-Alive > Keep-Alive: timeout=15 > Etag: "da8e2c-e663-6c305b40" > Expires: Thu, 19 Nov 2009 12:57:50 GMT > Cache-Control: max-age=86400 > > The same object through Varnish: > > Date: Wed, 18 Nov 2009 12:56:24 GMT > Via: 1.1 varnish > X-Varnish: 71655288 > Last-Modified: Mon, 22 Jun 2009 10:34:45 GMT > Connection: keep-alive > > I do get "304 Not Modified" on the object, but shouldn't I get the Expires and Cache-Control headers too? > > > -- > Lars > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc From martin.boer at netclever.nl Thu Nov 19 09:56:42 2009 From: martin.boer at netclever.nl (Martin Boer) Date: Thu, 19 Nov 2009 10:56:42 +0100 Subject: too much requests for varnish? 
In-Reply-To: <20090.1258558106@critter.freebsd.dk> References: <20090.1258558106@critter.freebsd.dk> Message-ID: <4B05165A.7060809@netclever.nl> Only 1% is missing? I'd be very happy with that. :) On a serious note; my guess is that that 1% has something to do with cookies. Regards, Martin Boer Poul-Henning Kamp wrote: > In message <4B0411ED.5020409 at danielbruessler.de>, Daniel Bruessler writes: > >> Hi, >> >> we're using varnish for a newspaper-portal and see in the logfile that >> about 1% of the requests are NOT done by varnish. That requests don't >> get the special varnish-http-header like the "Age:" info. >> > > Check if these requests are processed by "pipe" or "pass". > > Pay particular attention to "pipe" as all subsequent requests on > the same TCP connection gets routed directly to the backend. > > From mail at danielbruessler.de Thu Nov 19 17:47:08 2009 From: mail at danielbruessler.de (Daniel Bruessler) Date: Thu, 19 Nov 2009 18:47:08 +0100 Subject: too much requests for varnish? In-Reply-To: <4B05165A.7060809@netclever.nl> References: <20090.1258558106@critter.freebsd.dk> <4B05165A.7060809@netclever.nl> Message-ID: <4B05849C.4020700@danielbruessler.de> Hello, I solved the problem now by changing the cookie-handling. Normally we don't use cookies on the site, just a small partner-application does use them. So we just had a small number of that kind of requests. => Now I don't use pipe anymore, I just set the obj.ttl for the objects I want to cache. So now the cookies have the time 0s, and the default time is 600 seconds. => So now all web-pages have their varnish-header-information :-) Cheers! Daniel ................................................... :: TEL +49 (0)911 - 815 90 30 :: Daniel Brüßler - Emilienstr. 10 - 90489 Nürnberg > Only 1% is missing? I'd be very happy with that. :) > On a serious note; my guess is that that 1% has something to do with > cookies.
> > Regards, > Martin Boer > > Poul-Henning Kamp wrote: >> In message <4B0411ED.5020409 at danielbruessler.de>, Daniel Bruessler >> writes: >> >>> Hi, >>> >>> we're using varnish for a newspaper-portal and see in the logfile that >>> about 1% of the requests are NOT done by varnish. That requests don't >>> get the special varnish-http-header like the "Age:" info. >>> >> >> Check if these requests are processed by "pipe" or "pass". >> >> Pay particular attention to "pipe" as all subsequent requests on >> the same TCP connection gets routed directly to the backend. >> From kb+varnish at slide.com Thu Nov 19 23:18:33 2009 From: kb+varnish at slide.com (Ken Brownfield) Date: Thu, 19 Nov 2009 15:18:33 -0800 Subject: Benchmarking and stress testing Varnish In-Reply-To: <20091113133832.GA4463@kjeks.kristian.int> References: <20091113133832.GA4463@kjeks.kristian.int> Message-ID: <1EE13D35-0C95-4E3E-8FE8-55F5730D5603@slide.com> Those all seem very useful to me, but I think the lowest-hanging performance fruit right now is simultaneous connections and the threading model (including the discussions about stacksize and memory usage, etc). Modeling Varnish's behavior with certain ranges of simultaneous worker and backend threads would be useful, IMHO -- both from slow backends, slow clients, and lots of keepalive threads. -- kb On Nov 13, 2009, at 5:38 AM, Kristian Lyngstol wrote: > As some of you might already know, I run regular stress tests of Varnish; most > of the time it's a matter of testing the same thing over and over to ensure > that there aren't any huge surprises during the development of Varnish (we > run nightly builds and stress tests of trunk - they need to be as > predictable as possible to have any value). > > However, I also do occasional tests which involve trying to find the > various breaking points of Varnish.
Last time I did this, I used roughly 42 > cpu cores spread over about 30 computers to generate traffic against a > single Varnish server on our quad xenon core machine. The result thus far > was that all of the clients ran far too close to 100% cpu usage, and > Varnish was serving between 140 and 150 thousand requests per second. > > The reason I'm telling you this is because I'm looking for input on aspects > that should be tested next time I do this, which will most likely be during > Christmas (far easier to borrow machine power). So far on my list, I've > got: > > 1. Test performance over some time period when pages are evicted more > frequently. (ie: X thousand pages requested repeatedly, but random expiry > time). > > 2. Test with fewer requests per session (this is somewhat hard because the > clients tend to turn into the bottleneck). > > 3. Test raw hot-hit cache rate (more of what I did before - get a high > number). > > 4. Test raw performance with a huge data set that's bound to be swapped > out. > > 5. Various tests of -sfile versus -smalloc and large datasets, combined > with critbit/classic tests. > > 6. Find some reasonably optimal settings, then fidget around trying to find > a worst-case scenario. > > 7. Test the effect of session lingering with really slow clients. > > .... > > One thing that should be noted is that the test server is limited to 1gbit, > which means that for "raw req/s", we're basically forced to use tiny pages, > or we just end up starving the network. > > The goal is to test theories regarding performance, stability and > predictability. Basically find the breaking points, what's good and what's > not, and what we have to care about and what we can silently ignore. > > As you can see, the list is getting long and this is off the top of my > head, but any suggestions are welcome. 
> > -- > Kristian Lyngstøl > Redpill Linpro AS > Tlf: +47 21544179 > Mob: +47 99014497 > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc From ebowman at boboco.ie Fri Nov 20 22:19:29 2009 From: ebowman at boboco.ie (Eric Bowman) Date: Fri, 20 Nov 2009 22:19:29 +0000 Subject: multi-terabyte caching Message-ID: <4B0715F1.9030309@boboco.ie> Hi, Apologies if this has been hashed out before. I did some googling, and read the faq, but I could have been more thorough... ;) I'm considering using Varnish to handle caching for a mapping application. After reading http://varnish.projects.linpro.no/wiki/ArchitectNotes, it seems like Varnish is maybe not a good choice for this. In short I need to cache something like 500,000,000 files that take up about 2TB of storage. Using more 1975 technologies, one of the challenges has been how to distribute these across the file system without putting too many files per directory. We have a solution we kind of like, and there are others out there. My impression is that we would start to put a big strain on Varnish and the OS using it in the standard way. But maybe I'm wrong. Or, is there a way to plug in a backend to manage this storage, without getting into the vm-thrash from which Squid suffers? Thanks for any advice -- Varnish gets such good press I'd really love it if it were straightforward to use it in this case. -Eric -- Eric Bowman From david.birdsong at gmail.com Fri Nov 20 23:05:11 2009 From: david.birdsong at gmail.com (David Birdsong) Date: Fri, 20 Nov 2009 15:05:11 -0800 Subject: multi-terabyte caching In-Reply-To: <4B0715F1.9030309@boboco.ie> References: <4B0715F1.9030309@boboco.ie> Message-ID: On Fri, Nov 20, 2009 at 2:19 PM, Eric Bowman wrote: > Hi, > > Apologies if this has been hashed out before. I did some googling, and > read the faq, but I could have been more thorough...
;) > > I'm considering using Varnish to handle caching for a mapping > application. After reading > http://varnish.projects.linpro.no/wiki/ArchitectNotes, it seems like > Varnish is maybe not a good choice for this. In short I need to cache > something like 500,000,000 files that take up about 2TB of storage. > > Using more 1975 technologies, one of the challenges has been how to > distribute these across the file system without putting too many files > per directory. We have a solution we kind of like, and there are others > out there. > > My impression is that we would start to put a big strain on Varnish and > the OS using it in the standard way. But maybe I'm wrong. Or, is there > a way to plugin a backend to manage this storage, without getting into > the vm-thrash from which Squid suffers? > > Thanks for any advice -- Varnish gets such good press I'd really love if > it were straightforward to use it in this case. > > -Eric a straightforward way to store an unlimited amount of data is to find the optimal cache storage capacity per varnish instance then: optimal_size / working_set = N where N is the number of varnish instances you need to run. then put a layer 7 switch in front of the pool of varnish instances, hashing on the requests. works like a charm. 
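(Sketching David's recipe in code -- note that he corrects the formula to N = working_set / optimal_size a couple of messages later in the thread. The sizes below are assumed, illustrative values, and the md5-based routing merely stands in for whatever hashing a real layer 7 switch would do:)

```python
import hashlib

# N = working_set / optimal_size (the corrected form), rounded up.
working_set_gb = 2048   # roughly Eric's 2 TB working set
optimal_size_gb = 256   # assumed per-instance sweet spot, found by testing
n_instances = -(-working_set_gb // optimal_size_gb)  # ceiling division
print(n_instances)  # 8

def instance_for(url: str, n: int = n_instances) -> int:
    """Stable URL -> instance mapping, as a layer 7 switch might hash requests."""
    return int(hashlib.md5(url.encode()).hexdigest(), 16) % n

print(instance_for("/tiles/12/2047/1363.jpg"))
```

The only property that matters here is that the mapping is deterministic, so every request for a given URL always lands on the same cache.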
finding optimal storage amount per varnish requires turning the knobs: - tuning VM - tuning kernel for high network traffic - balancing between big and fast storage medium random reads will skyrocket, minimize writing to storage while serving if possible (pregenerate your working set, don't let anything expire between generating) ..and test > Eric Bowman > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > From vlists at veus.hr Sat Nov 21 02:48:20 2009 From: vlists at veus.hr (Vladimir Vuksan) Date: Fri, 20 Nov 2009 21:48:20 -0500 (EST) Subject: Varnish Ganglia metrics Message-ID: I just wanted to let people know that I put a page that describes how to crunch Varnish logs in real-time and push the metrics using Ganglia and gmetric. URL is as follows http://vuksan.com/linux/ganglia/index.html#varnish_stats Let me know if you have questions. Vladimir From ebowman at boboco.ie Sat Nov 21 09:31:14 2009 From: ebowman at boboco.ie (Eric Bowman) Date: Sat, 21 Nov 2009 09:31:14 +0000 Subject: multi-terabyte caching In-Reply-To: References: <4B0715F1.9030309@boboco.ie> Message-ID: <4B07B362.9080606@boboco.ie> Thanks -- very useful and helpful. cheers, Eric David Birdsong wrote: > On Fri, Nov 20, 2009 at 2:19 PM, Eric Bowman wrote: > >> Hi, >> >> Apologies if this has been hashed out before. I did some googling, and >> read the faq, but I could have been more thorough... ;) >> >> I'm considering using Varnish to handle caching for a mapping >> application. After reading >> http://varnish.projects.linpro.no/wiki/ArchitectNotes, it seems like >> Varnish is maybe not a good choice for this. In short I need to cache >> something like 500,000,000 files that take up about 2TB of storage. >> >> Using more 1975 technologies, one of the challenges has been how to >> distribute these across the file system without putting too many files >> per directory. 
We have a solution we kind of like, and there are others >> out there. >> >> My impression is that we would start to put a big strain on Varnish and >> the OS using it in the standard way. But maybe I'm wrong. Or, is there >> a way to plugin a backend to manage this storage, without getting into >> the vm-thrash from which Squid suffers? >> >> Thanks for any advice -- Varnish gets such good press I'd really love if >> it were straightforward to use it in this case. >> >> -Eric >> > a straight forward way to store an unlimited amount of data is to find > the optimal cache storage capacity per varnish instance then: > > optimal_size / working_set = N > > where N is the number of varnish instances you need to run. > > then put a layer 7 switch in front of the pool of varnish instances, > hashing on the requests. > > works like a charm. > > finding optimal storage amount per varnish requires turning the knobs: > - tuning VM > - tuning kernel for high network traffic > - balancing between big and fast storage medium > random reads will skyrocket, minimize writing to storage while > serving if possible (pregenerate your working set, dont let anything > expire between generating ) > ..and test > > >> Eric Bowman >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at projects.linpro.no >> http://projects.linpro.no/mailman/listinfo/varnish-misc >> >> -- Eric Bowman Boboco Ltd ebowman at boboco.ie http://www.boboco.ie/ebowman/pubkey.pgp +35318394189/+353872801532 From david.birdsong at gmail.com Sat Nov 21 09:43:39 2009 From: david.birdsong at gmail.com (David Birdsong) Date: Sat, 21 Nov 2009 01:43:39 -0800 Subject: multi-terabyte caching In-Reply-To: <4B07B362.9080606@boboco.ie> References: <4B0715F1.9030309@boboco.ie> <4B07B362.9080606@boboco.ie> Message-ID: On Sat, Nov 21, 2009 at 1:31 AM, Eric Bowman wrote: > Thanks -- very useful and helpful. 
> > cheers, > Eric > of course my equation was wrong though, should be: working_set / optimal_size = N > David Birdsong wrote: >> On Fri, Nov 20, 2009 at 2:19 PM, Eric Bowman wrote: >> >>> Hi, >>> >>> Apologies if this has been hashed out before. I did some googling, and >>> read the faq, but I could have been more thorough... ;) >>> >>> I'm considering using Varnish to handle caching for a mapping >>> application. After reading >>> http://varnish.projects.linpro.no/wiki/ArchitectNotes, it seems like >>> Varnish is maybe not a good choice for this. In short I need to cache >>> something like 500,000,000 files that take up about 2TB of storage. >>> >>> Using more 1975 technologies, one of the challenges has been how to >>> distribute these across the file system without putting too many files >>> per directory. We have a solution we kind of like, and there are others >>> out there. >>> >>> My impression is that we would start to put a big strain on Varnish and >>> the OS using it in the standard way. But maybe I'm wrong. Or, is there >>> a way to plugin a backend to manage this storage, without getting into >>> the vm-thrash from which Squid suffers? >>> >>> Thanks for any advice -- Varnish gets such good press I'd really love if >>> it were straightforward to use it in this case. >>> >>> -Eric >>> >> a straightforward way to store an unlimited amount of data is to find >> the optimal cache storage capacity per varnish instance then: >> >> optimal_size / working_set = N >> >> where N is the number of varnish instances you need to run. >> >> then put a layer 7 switch in front of the pool of varnish instances, >> hashing on the requests. >> >> works like a charm. >> >> finding optimal storage amount per varnish requires turning the knobs: >> - tuning VM >> - tuning kernel for high network traffic >> - balancing between big and fast storage medium >> 
random reads will skyrocket, minimize writing to storage while >> serving if possible (pregenerate your working set, don't let anything >> expire between generating) >> ..and test >> >> >>> Eric Bowman >>> >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at projects.linpro.no >>> http://projects.linpro.no/mailman/listinfo/varnish-misc >>> >>> > > > -- > Eric Bowman > Boboco Ltd > ebowman at boboco.ie > http://www.boboco.ie/ebowman/pubkey.pgp > +35318394189/+353872801532 > > From glen at delfi.ee Tue Nov 24 17:57:54 2009 From: glen at delfi.ee (Elan Ruusamäe) Date: Tue, 24 Nov 2009 19:57:54 +0200 Subject: vim syntax Message-ID: <200911241957.54742.glen@delfi.ee> hi there, @phk mentioned in the irc, that somebody has written varnish syntax for vim. i tried to search the archives but found no info about it. anyone knows? it would be a pity to start writing one from scratch if somebody has already spent some time on it. -- glen From tw at cloudcontrol.de Wed Nov 25 10:21:20 2009 From: tw at cloudcontrol.de (Tobias Wilken) Date: Wed, 25 Nov 2009 11:21:20 +0100 Subject: http format error Message-ID: <3d46faea0911250221n6eaa137cr28211875ad7a75de@mail.gmail.com> Hello list, I've got a strange behavior of varnish and wordpress. After installing and configuring wordpress without problems, I get an "Error 503 Service Unavailable ... " error on the front end and a "http format error" in the varnishlog, when I try to change the admin password of wordpress. I'm not sure on which side the problem lies: does varnish check the http format too strictly, or does wordpress hope that the webserver/proxies do not interpret the response so exactly? In my current opinion it will be a wordpress problem, but it doesn't make sense to talk to the wordpress team with only the information "My varnish throws an 'http format error'". 
So I would like to understand a bit better what this error should tell me and where to find the problem more exactly. The smallest environment where I can reproduce the error: one varnish 2.0.5 and an apache 2.2.14-3 (debian sid package) on different virtual machines (on one machine it seems to work). As operating system I'm using debian sid. Wordpress is also the latest version. If I just use the apache server without varnish, the problem doesn't occur. The request leads to a 302 redirect, but has a body part with content. I don't think that makes sense, but I checked with such a redirect page by myself and varnish works well. I'm thankful for any help or hints. Best regards Tobias Wilken ---------------------------------------------------------------------------------------------------------------- My vcl file: director cc1 random { { .backend = { .host = "192.168.113.147"; .port = "http"; } .weight = 1; } } sub vcl_recv { if (req.http.host ~ "^wordpress.cloudcontrolled.dev$") { set req.backend = cc1; } pass; } ---------------------------------------------------------------------------------------------------------------- The varnishlog entry: 13 SessionOpen c 192.168.113.211 47458 :80 13 ReqStart c 192.168.113.211 47458 1674378634 13 RxRequest c POST 13 RxURL c /wp-admin/profile.php 13 RxProtocol c HTTP/1.1 13 RxHeader c Host: wordpress.cloudcontrolled.dev 13 RxHeader c User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.5) Gecko/20091109 Ubuntu/9.10 (karmic) Firefox/3.5.5 13 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 13 RxHeader c Accept-Language: en-us,en;q=0.5 13 RxHeader c Accept-Encoding: gzip,deflate 13 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 13 RxHeader c Keep-Alive: 300 13 RxHeader c Connection: keep-alive 13 RxHeader c Referer: http://wordpress.cloudcontrolled.dev/wp-admin/profile.php 13 RxHeader c Cookie: 
wordpress_52d975a3939ea2b11528d5062a5c2ed3=admin%7C1259316492%7Ce3bdb402b44249dcc6fc83cabd6f9f46; wordpress_test_cookie=WP+Cookie+check; wordpress_logged_in_52d975a3939ea2b11528d5062a5c2ed3=admin%7C1259316492%7C34d707fe38add651a319005072a85382; wp 13 RxHeader c Content-Type: application/x-www-form-urlencoded 13 RxHeader c Content-Length: 304 13 VCL_call c recv 13 VCL_return c pass 13 VCL_call c pass 13 VCL_return c pass 13 Backend c 17 cc1 cc1 17 TxRequest b POST 17 TxURL b /wp-admin/profile.php 17 TxProtocol b HTTP/1.1 17 TxHeader b Host: wordpress.cloudcontrolled.dev 17 TxHeader b User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.5) Gecko/20091109 Ubuntu/9.10 (karmic) Firefox/3.5.5 17 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 17 TxHeader b Accept-Language: en-us,en;q=0.5 17 TxHeader b Accept-Encoding: gzip,deflate 17 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 17 TxHeader b Referer: http://wordpress.cloudcontrolled.dev/wp-admin/profile.php 17 TxHeader b Cookie: wordpress_52d975a3939ea2b11528d5062a5c2ed3=admin%7C1259316492%7Ce3bdb402b44249dcc6fc83cabd6f9f46; wordpress_test_cookie=WP+Cookie+check; wordpress_logged_in_52d975a3939ea2b11528d5062a5c2ed3=admin%7C1259316492%7C34d707fe38add651a319005072a85382; wp 17 TxHeader b Content-Type: application/x-www-form-urlencoded 17 TxHeader b Content-Length: 304 17 TxHeader b X-Varnish: 1674378634 17 TxHeader b X-Forwarded-For: 192.168.113.211 17 RxProtocol b HTTP/1.1 17 RxStatus b 302 17 RxResponse b Moved Temporarily 17 RxHeader b Date: Wed, 25 Nov 2009 10:08:31 GMT 17 RxHeader b Server: Apache/2.2.14 (Debian) 17 RxHeader b X-Powered-By: PHP/5.2.11-2 17 RxHeader b Expires: Wed, 11 Jan 1984 05:00:00 GMT 17 RxHeader b Cache-Control: no-cache, must-revalidate, max-age=0 17 RxHeader b Pragma: no-cache 17 RxHeader b Set-Cookie: wordpress_52d975a3939ea2b11528d5062a5c2ed3=+; expires=Tue, 25-Nov-2008 10:08:31 GMT; path=/wp-admin 17 RxHeader b Set-Cookie: 
wordpress_sec_52d975a3939ea2b11528d5062a5c2ed3=+; expires=Tue, 25-Nov-2008 10:08:31 GMT; path=/wp-admin 17 RxHeader b Set-Cookie: wordpress_52d975a3939ea2b11528d5062a5c2ed3=+; expires=Tue, 25-Nov-2008 10:08:31 GMT; path=/wp-content/plugins 17 RxHeader b Set-Cookie: wordpress_sec_52d975a3939ea2b11528d5062a5c2ed3=+; expires=Tue, 25-Nov-2008 10:08:31 GMT; path=/wp-content/plugins 17 RxHeader b Set-Cookie: wordpress_logged_in_52d975a3939ea2b11528d5062a5c2ed3=+; expires=Tue, 25-Nov-2008 10:08:31 GMT; path=/ 17 RxHeader b Set-Cookie: wordpress_logged_in_52d975a3939ea2b11528d5062a5c2ed3=+; expires=Tue, 25-Nov-2008 10:08:31 GMT; path=/ 17 RxHeader b Set-Cookie: wordpress_52d975a3939ea2b11528d5062a5c2ed3=+; expires=Tue, 25-Nov-2008 10:08:31 GMT; path=/ 17 RxHeader b Set-Cookie: wordpress_52d975a3939ea2b11528d5062a5c2ed3=+; expires=Tue, 25-Nov-2008 10:08:31 GMT; path=/ 17 RxHeader b Set-Cookie: wordpress_sec_52d975a3939ea2b11528d5062a5c2ed3=+; expires=Tue, 25-Nov-2008 10:08:31 GMT; path=/ 17 RxHeader b Set-Cookie: wordpress_sec_52d975a3939ea2b11528d5062a5c2ed3=+; expires=Tue, 25-Nov-2008 10:08:31 GMT; path=/ 17 RxHeader b Set-Cookie: wordpressuser_52d975a3939ea2b11528d5062a5c2ed3=+; expires=Tue, 25-Nov-2008 10:08:31 GMT; path=/ 17 RxHeader b Set-Cookie: wordpresspass_52d975a3939ea2b11528d5062a5c2ed3=+; expires=Tue, 25-Nov-2008 10:08:31 GMT; path=/ 17 RxHeader b Set-Cookie: wordpressuser_52d975a3939ea2b11528d5062a5c2ed3=+; expires=Tue, 25-Nov-2008 10:08:31 GMT; path=/ 17 RxHeader b Set-Cookie: wordpresspass_52d975a3939ea2b11528d5062a5c2ed3=+; expires=Tue, 25-Nov-2008 10:08:31 GMT; path=/ 17 RxHeader b Set-Cookie: wordpress_52d975a3939ea2b11528d5062a5c2ed3=admin%7C1259316511%7C0dc880d5b829c0f30bc8f7a0d5c4c93a; path=/wp-content/plugins; httponly 17 RxHeader b Set-Cookie: wordpress_52d975a3939ea2b11528d5062a5c2ed3=admin%7C1259316511%7C0dc880d5b829c0f30bc8f7a0d5c4c93a; path=/wp-admin; httponly 17 RxHeader b Set-Cookie: 
wordpress_logged_in_52d975a3939ea2b11528d5062a5c2ed3=admin%7C1259316511%7Cfc61093f71095d943728d95f4a8e298f; path=/; httponly 17 RxHeader b Last-Modified: Wed, 25 Nov 2009 10:08:31 GMT 17 RxHeader b Location: profile.php?updated=true&wp_http_referer 17 RxHeader b Vary: Accept-Encoding 17 RxHeader b Content-Encoding: gzip 17 LostHeader b Content-Length: 20 17 HttpGarbage b HTTP/1.1 13 FetchError c http format error 17 BackendClose b cc1 13 VCL_call c error 13 VCL_return c deliver 13 Length c 488 13 VCL_call c deliver 13 VCL_return c deliver 13 TxProtocol c HTTP/1.1 13 TxStatus c 503 13 TxResponse c Service Unavailable 13 TxHeader c Server: Varnish 13 TxHeader c Retry-After: 0 13 TxHeader c Content-Type: text/html; charset=utf-8 13 TxHeader c Content-Length: 488 13 TxHeader c Date: Wed, 25 Nov 2009 10:08:37 GMT 13 TxHeader c X-Varnish: 1674378634 13 TxHeader c Age: 0 13 TxHeader c Via: 1.1 varnish 13 TxHeader c Connection: close 13 ReqEnd c 1674378634 1259143717.572573185 1259143717.699785233 0.000110149 0.126987696 0.000224352 13 SessionClose c error 13 StatSess c 192.168.113.211 47458 0 1 1 0 1 0 235 488 ---------------------------------------------------------------------------------------------------------------- Also the request/response informations, extracted from firebug, without using varnish: Response Headers DateWed, 25 Nov 2009 09:50:43 GMTServerApache/2.2.9 (Ubuntu) PHP/5.2.6-2ubuntu4.3 with Suhosin-PatchX-Powered-ByPHP/5.2.6-2ubuntu4.3 ExpiresWed, 11 Jan 1984 05:00:00 GMTLast-ModifiedWed, 25 Nov 2009 09:50:43 GMTCache-Controlno-cache, must-revalidate, max-age=0Pragmano-cacheSet-Cookiewordpress_1c08b2c8e84c2537018c0c8511b9faba=+; expires=Tue, 25-Nov-2008 09:50:43 GMT; path=/wp-admin wordpress_sec_1c08b2c8e84c2537018c0c8511b9faba=+; expires=Tue, 25-Nov-2008 09:50:43 GMT; path=/wp-admin wordpress_1c08b2c8e84c2537018c0c8511b9faba=+; expires=Tue, 25-Nov-2008 09:50:43 GMT; path=/wp-content/plugins wordpress_sec_1c08b2c8e84c2537018c0c8511b9faba=+; expires=Tue, 
25-Nov-2008 09:50:43 GMT; path=/wp-content/plugins wordpress_logged_in_1c08b2c8e84c2537018c0c8511b9faba=+; expires=Tue, 25-Nov-2008 09:50:43 GMT; path=/ wordpress_logged_in_1c08b2c8e84c2537018c0c8511b9faba=+; expires=Tue, 25-Nov-2008 09:50:43 GMT; path=/ wordpress_1c08b2c8e84c2537018c0c8511b9faba=+; expires=Tue, 25-Nov-2008 09:50:43 GMT; path=/ wordpress_1c08b2c8e84c2537018c0c8511b9faba=+; expires=Tue, 25-Nov-2008 09:50:43 GMT; path=/ wordpress_sec_1c08b2c8e84c2537018c0c8511b9faba=+; expires=Tue, 25-Nov-2008 09:50:43 GMT; path=/ wordpress_sec_1c08b2c8e84c2537018c0c8511b9faba=+; expires=Tue, 25-Nov-2008 09:50:43 GMT; path=/ wordpressuser_1c08b2c8e84c2537018c0c8511b9faba=+; expires=Tue, 25-Nov-2008 09:50:43 GMT; path=/ wordpresspass_1c08b2c8e84c2537018c0c8511b9faba=+; expires=Tue, 25-Nov-2008 09:50:43 GMT; path=/ wordpressuser_1c08b2c8e84c2537018c0c8511b9faba=+; expires=Tue, 25-Nov-2008 09:50:43 GMT; path=/ wordpresspass_1c08b2c8e84c2537018c0c8511b9faba=+; expires=Tue, 25-Nov-2008 09:50:43 GMT; path=/ wordpress_1c08b2c8e84c2537018c0c8511b9faba=admin%7C1259315443%7C27e2eb337a6bc5b32d51972f64df190a; path=/wp-content/plugins; httponly wordpress_1c08b2c8e84c2537018c0c8511b9faba=admin%7C1259315443%7C27e2eb337a6bc5b32d51972f64df190a; path=/wp-admin; httponly wordpress_logged_in_1c08b2c8e84c2537018c0c8511b9faba=admin%7C1259315443%7Cbdd7dc29f13a84eba14b1391298d3a25; path=/; httponlyLocationprofile.php?updated=true&wp_http_refererVary Accept-EncodingContent-EncodinggzipContent-Length20Keep-Alivetimeout=15, max=94ConnectionKeep-AliveContent-Typetext/html Request Headers Host192.168.113.145User-AgentMozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.5) Gecko/20091109 Ubuntu/9.10 (karmic) Firefox/3.5.5Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Languageen-us,en;q=0.5Accept-Encodinggzip,deflateAccept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7Keep-Alive300Connectionkeep-aliveReferer 
http://192.168.113.145/wp-admin/profile.phpCookiewordpress_1c08b2c8e84c2537018c0c8511b9faba=admin%7C1259315352%7C4a398c2d0373cd3a6496b58bd98b218e; wordpress_test_cookie=WP+Cookie+check; wp-settings-time-1=1259142638; wordpress_logged_in_1c08b2c8e84c2537018c0c8511b9faba =admin%7C1259315352%7C6ceee6960a75e84d376a58592d133300 ------------------------------------------------------------------------------------------------------------------------------------------------------ And the html body: Profile ‹ test — WordPress

[WordPress profile page body, markup stripped by the list archiver: Personal Options (Visual Editor, Admin Color Scheme, Keyboard Shortcuts), Name ("Your username cannot be changed."), Contact Info, About Yourself, and the new-password fields with strength indicator and the hint that the password should be at least seven characters long.]

-------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at andreas-lehr.com Wed Nov 25 15:06:50 2009 From: mail at andreas-lehr.com (Andreas Lehr) Date: Wed, 25 Nov 2009 16:06:50 +0100 Subject: File extension caching Message-ID: <4B0D480A.1000201@andreas-lehr.com> Hi, Is it possible to use the extension caching mechanism of varnish to cache even rewritten jpegs like this URI scheme: http://www.example.com/056.jpg?original http://www.example.com/056.jpg?small http://www.example.com/056.jpg?thumb Would this work like this? Or is another more sophisticated solution needed? sub vcl_recv { if (req.url ~ "\.(jpg|jpg\?.)$") { lookup; } } # strip the cookie before the image is inserted into cache. sub vcl_fetch { if (req.url ~ "\.(jpg|jpg\?.)$") { unset obj.http.set-cookie; } } Thank you very much! From richard.chiswell at mangahigh.com Wed Nov 25 15:17:29 2009 From: richard.chiswell at mangahigh.com (Richard Chiswell) Date: Wed, 25 Nov 2009 15:17:29 +0000 Subject: File extension caching In-Reply-To: <4B0D480A.1000201@andreas-lehr.com> References: <4B0D480A.1000201@andreas-lehr.com> Message-ID: <4B0D4A89.4060604@mangahigh.com> Hi Andreas, I believe: if (req.url ~ "\.(jpg|jpg\?.*)$") { should do it for you. (The closest we've got in our working and testing Varnish configuration is ..."\.jpg\?([A-z0-9]+)$"... Richard C. Andreas Lehr wrote: > Hi, > > Is it possible to use the extension caching mechanism of varnish to cache even rewritten jpegs like this URI scheme: > > http://www.example.com/056.jpg?original > http://www.example.com/056.jpg?small > http://www.example.com/056.jpg?thumb > > Would this work like this? > Or is another more sophisticated solution needed? > > > sub vcl_recv { > if (req.url ~ "\.(jpg|jpg\?.)$") { > lookup; > } > } > > # strip the cookie before the image is inserted into cache. > sub vcl_fetch { > if (req.url ~ "\.(jpg|jpg\?.)$") { > unset obj.http.set-cookie; > } > } > > Thank you very much! 
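(Why Richard's extra `*` matters: Andreas's original pattern allows only a single character after the `?`, so `?original` never matches. Python's `re` is used here purely as a stand-in for Varnish's regex matching, relying on the fact that `req.url` includes the query string:)

```python
import re

urls = ["/056.jpg?original", "/056.jpg?small", "/056.jpg", "/style.css"]

original = re.compile(r"\.(jpg|jpg\?.)$")   # "." matches exactly one character
fixed = re.compile(r"\.(jpg|jpg\?.*)$")     # ".*" matches any query string

print([bool(original.search(u)) for u in urls])  # [False, False, True, False]
print([bool(fixed.search(u)) for u in urls])     # [True, True, True, False]
```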
> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > From ibeginhere at gmail.com Fri Nov 27 01:12:10 2009 From: ibeginhere at gmail.com (ll) Date: Fri, 27 Nov 2009 09:12:10 +0800 Subject: varnish bottleneck? Message-ID: <4B0F276A.9000909@gmail.com> I think there may be some problem with varnish. My varnish version is 2.0.4. I want to cache everything for a website, so I set the rules like this: if (req.http.host ~"www.abc.cn"){ lookup; } That works well; varnish can cache everything, but some functions of the website stop working, e.g. POST. So I put the POST check before the HOST check, like this: if (req.request == "POST"){ pipe; } if (req.http.host ~"www.abc.cn"){ lookup; } Now there is a problem: many URLs' records can't be found in the varnishlog, and there is no marker from varnish, e.g. "X-Cache: MISS", in the Response Headers. I had posted this problem to this mailing list before; I don't think the missing marker is about pipe or pass. And those URLs go to the backend server every time. Is this a varnish bottleneck? When varnish checks for POST first, can it not handle everything, so some requests are mishandled and go through to the backend? Or is there some option I haven't configured correctly in the vcl? From plfgoa at gmail.com Fri Nov 27 04:46:42 2009 From: plfgoa at gmail.com (Paras Fadte) Date: Fri, 27 Nov 2009 10:16:42 +0530 Subject: Varnish 503 Service unavailable error In-Reply-To: <877htwdpex.fsf@qurzaw.linpro.no> References: <75cf5800908130119o5b10f41bn85e7d76fcb7ebc96@mail.gmail.com> <87fxbw5atk.fsf@qurzaw.linpro.no> <75cf5800908130243k3c0e7db6i2d01788a42ab352e@mail.gmail.com> <87k4z61gvj.fsf@qurzaw.linpro.no> <75cf5800911110400h32542eek30946320ae108d8e@mail.gmail.com> <877htwdpex.fsf@qurzaw.linpro.no> Message-ID: <75cf5800911262046p6a5b286awf9b563065d5aaab9@mail.gmail.com> Hi Tollef , FetchError showed " *no backend connection* " . 
Would playing around with "backend probes" rectify this issue if backends are sometimes under load? Or would increasing the "connect_timeout" help? Thank you. -Paras On Thu, Nov 12, 2009 at 3:40 PM, Tollef Fog Heen wrote: > ]] Paras Fadte > > | After increasing the connect_timeout and first_byte_timeout values the > | errors have reduced. I also tried to get transaction for 503 error and > | following is part of the transaction for the same > | > | 230 VCL_call c recv lookup > | 230 VCL_call c hash hash > | 230 VCL_call c miss fetch > | 230 VCL_call c error deliver > > It fails to get the result from the backend. Upgrade to 2.0.5 and look > at the FetchError tag which will hopefully explain a bit more what goes > wrong. > > -- > Tollef Fog Heen > Redpill Linpro -- Changing the game! > t: +47 21 54 41 73 > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at redpill-linpro.com Fri Nov 27 08:53:54 2009 From: tfheen at redpill-linpro.com (Tollef Fog Heen) Date: Fri, 27 Nov 2009 09:53:54 +0100 Subject: forget an object in vcl_fetch In-Reply-To: (David Birdsong's message of "Fri, 13 Nov 2009 10:12:21 -0800") References: Message-ID: <874oogz6u5.fsf@qurzaw.linpro.no> ]] David Birdsong | > sub vcl_fetch { | > if (!beresp.X-Super-Hot-File) { | > unset obj.cacheable; | > } | > } | | ok, i tried this and was not allowed to modify this attribute. You can set it to true or false, but you can't unset it. | just out of curiosity i tried 'pass' instead but that also caused disk | hits. is there any way to completely pass traffic through without | varnish storing it? You can pipe it, but then you end up closing the connection afterwards. -- Tollef Fog Heen Redpill Linpro -- Changing the game! 
t: +47 21 54 41 73 From mariocar at corp.globo.com Fri Nov 13 10:40:34 2009 From: mariocar at corp.globo.com (Mario Carvalho) Date: Fri, 13 Nov 2009 08:40:34 -0200 Subject: Google's new SPeeDY low latency web proposal Message-ID: <2a794fb60911130240r15689d8dp9b4f3d16fad5f74c@mail.gmail.com> Dear all, have you already seen the new low latency web protocol Google is proposing, SPDY? It looks promising, but it seems to need a specially crafted webserver/reverse cache to support it. I think it is worth taking a look at and starting a thread about it. It aims to address web latency issues, enhancing and augmenting HTTP http://sites.google.com/a/chromium.org/dev/spdy/spdy-whitepaper Regards, -- Mario Carvalho -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at redpill-linpro.com Fri Nov 27 08:59:04 2009 From: tfheen at redpill-linpro.com (Tollef Fog Heen) Date: Fri, 27 Nov 2009 09:59:04 +0100 Subject: vcl_timeout and fetch In-Reply-To: <4B020993.9070108@gmail.com> (ll's message of "Tue, 17 Nov 2009 10:25:23 +0800") References: <4B020993.9070108@gmail.com> Message-ID: <87ws1cxs13.fsf@qurzaw.linpro.no> ]] ll | so,is there any ways to set the varnish to fetch the cache and flash it | automatic ? I have a webserver,it has so many pages ,I want to varnish | can fetch all pages one time and keep it a long time .and after the | ttl,it can flash it automatic .Is it some ways to accomplish this ? No, it's not something we support. We considered it for a while, but decided against it in the end. You might want to look at grace which does solve approximately the same problem most people are trying to solve with prefetch. -- Tollef Fog Heen Redpill Linpro -- Changing the game! 
t: +47 21 54 41 73 From tfheen at redpill-linpro.com Fri Nov 27 09:15:03 2009 From: tfheen at redpill-linpro.com (Tollef Fog Heen) Date: Fri, 27 Nov 2009 10:15:03 +0100 Subject: multi-terabyte caching In-Reply-To: <4B0715F1.9030309@boboco.ie> (Eric Bowman's message of "Fri, 20 Nov 2009 22:19:29 +0000") References: <4B0715F1.9030309@boboco.ie> Message-ID: <87skc0xrag.fsf@qurzaw.linpro.no> ]] Eric Bowman Hi, | I'm considering using Varnish to handle caching for a mapping | application. After reading | http://varnish.projects.linpro.no/wiki/ArchitectNotes, it seems like | Varnish is maybe not a good choice for this. In short I need to cache | something like 500,000,000 files that take up about 2TB of storage. | | Using more 1975 technologies, one of the challenges has been how to | distribute these across the file system without putting too many files | per directory. We have a solution we kind of like, and there are others | out there. Hashing on the file name should solve this easily enough, or maybe even better, hash on the hash of the file name, so you have 'somefile' where the md5sum of the file name is c21641b4fc25d6d558bf130659d56811. Given how md5 works, and say you want to end up with about 1000 files per directory, you need four or five levels of hashing, so that file would live in c/2/1/6/4/somefile. Five levels give you an average of 476 files per directory. (You can of course use another hash than md5, and it's fine to use it here since we only do it to get a good distribution, not because of any kind of security requirements.) | My impression is that we would start to put a big strain on Varnish and | the OS using it in the standard way. But maybe I'm wrong. Or, is there | a way to plugin a backend to manage this storage, without getting into | the vm-thrash from which Squid suffers? 
We use a hash internally already, so assuming you make the hash size, for instance 39916801 (a prime number that's not too far from 1/10 of your total number of objects), it should work. Or you could use -h critbit instead, which should scale better, but few people use it so far, so it might well have some bugs. Alternatively, use a hashing load balancer in front and have a bunch of Varnish machines each serving their part of the URL space, like David Birdsong suggested. It'd be interesting to hear your experiences once you get this going. :-) Regards, -- Tollef Fog Heen Redpill Linpro -- Changing the game! t: +47 21 54 41 73 From itlj at gyldendal.dk Fri Nov 27 09:21:12 2009 From: itlj at gyldendal.dk (Lars Jørgensen) Date: Fri, 27 Nov 2009 10:21:12 +0100 Subject: varnish bottleneck? In-Reply-To: <4B0F276A.9000909@gmail.com> References: <4B0F276A.9000909@gmail.com> Message-ID: On 27/11/2009, at 02.12, ll wrote: > there are some problem .many of url's record can't be find in the > varnishlog .and there are no marked by the varnish eg "X-Cache: MISS" in > the Response Headers . > I had post this problem in this maillist .it's nothing about pipe or > pass about no marked. Well, if you have decided it has nothing to do with pipe, then we can't really help you, can we? -- Lars From ibeginhere at gmail.com Fri Nov 27 09:17:36 2009 From: ibeginhere at gmail.com (ll) Date: Fri, 27 Nov 2009 17:17:36 +0800 Subject: vcl_timeout and fetch In-Reply-To: <87ws1cxs13.fsf@qurzaw.linpro.no> References: <4B020993.9070108@gmail.com> <87ws1cxs13.fsf@qurzaw.linpro.no> Message-ID: <4B0F9930.4050200@gmail.com> Grace? Do you mean this? Grace: If the backend takes a long time to generate an object there is a risk of a thread pile up. In order to prevent this you can enable grace. This allows varnish to serve an expired version of the object while a fresh object is being generated by the backend. How can it have the prefetch function? 
On 2009-11-27 16:59, Tollef Fog Heen wrote: > ]] ll > > | so,is there any ways to set the varnish to fetch the cache and flash it > | automatic ? I have a webserver,it has so many pages ,I want to varnish > | can fetch all pages one time and keep it a long time .and after the > | ttl,it can flash it automatic .Is it some ways to accomplish this ? > > No, it's not something we support. We considered it for a while, but > decided against it in the end. You might want to look at grace which > does solve approximately the same problem most people are trying to > solve with prefetch. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tfheen at redpill-linpro.com Fri Nov 27 09:24:08 2009 From: tfheen at redpill-linpro.com (Tollef Fog Heen) Date: Fri, 27 Nov 2009 10:24:08 +0100 Subject: http format error In-Reply-To: <3d46faea0911250221n6eaa137cr28211875ad7a75de@mail.gmail.com> (Tobias Wilken's message of "Wed, 25 Nov 2009 11:21:20 +0100") References: <3d46faea0911250221n6eaa137cr28211875ad7a75de@mail.gmail.com> Message-ID: <87ocmoxqvb.fsf@qurzaw.linpro.no> ]] Tobias Wilken | I've got a strange behavior of varnish and wordpress. After installing | and configuring wordpress without problems, I get a "Error 503 Service | Unavailable ... " Error on the front end and a "http format error" in | the varnishlog, when I try to change the admin password of wordpress. | | I'm not sure on which side the problem lies, does varnish check the | http format to strictly or does wordpress hopes that the | webserver/proxies do not interpret the response so exactly. In my | current opinion it will be a wordpress problem, but it don't make | sense to talk to the wordpress team, with the information "My varnish | throws an 'http format error'". So I would like to understand a bit | better, what this error should tell me and where to find the problem | more exactly. It means Varnish has problems dissecting the response from the backend. 
As you can see in the log:

17 LostHeader b Content-Length: 20

Try compiling Varnish with a higher maximum number of HTTP headers; increasing the limit to 64 or so should be OK.

Poul-Henning, should we just bump the default? Wordpress is fairly popular, and requiring a recompilation to work correctly is a bit on the heavy side.

-- 
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73

From tfheen at redpill-linpro.com Fri Nov 27 09:25:58 2009
From: tfheen at redpill-linpro.com (Tollef Fog Heen)
Date: Fri, 27 Nov 2009 10:25:58 +0100
Subject: varnish bottleneck?
In-Reply-To: <4B0F276A.9000909@gmail.com> (ll's message of "Fri, 27 Nov 2009 09:12:10 +0800")
References: <4B0F276A.9000909@gmail.com>
Message-ID: <87k4xcxqs9.fsf@qurzaw.linpro.no>

]] ll

| if (req.http.host ~ "www.abc.cn") {
|     lookup;
| }
| it's well. Varnish can cache everything .but some function of the
| website is unable .eg POST. So I put the POST judge before the HOST
| .like that :
| if (req.request == "POST") {
|     pipe;
| }
| if (req.http.host ~ "www.abc.cn") {
|     lookup;
| }
| there are some problem .many of url's record can't be find in the
| varnishlog .and there are no marked by the varnish eg "X-Cache: MISS" in
| the Response Headers .

Yes, this is how pipe works. You might want to read up on pass vs pipe.

-- 
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73

From tfheen at redpill-linpro.com Fri Nov 27 09:28:06 2009
From: tfheen at redpill-linpro.com (Tollef Fog Heen)
Date: Fri, 27 Nov 2009 10:28:06 +0100
Subject: Varnish 503 Service unavailable error
In-Reply-To: <75cf5800911262046p6a5b286awf9b563065d5aaab9@mail.gmail.com> (Paras Fadte's message of "Fri, 27 Nov 2009 10:16:42 +0530")
References: <75cf5800908130119o5b10f41bn85e7d76fcb7ebc96@mail.gmail.com> <87fxbw5atk.fsf@qurzaw.linpro.no> <75cf5800908130243k3c0e7db6i2d01788a42ab352e@mail.gmail.com> <87k4z61gvj.fsf@qurzaw.linpro.no> <75cf5800911110400h32542eek30946320ae108d8e@mail.gmail.com> <877htwdpex.fsf@qurzaw.linpro.no> <75cf5800911262046p6a5b286awf9b563065d5aaab9@mail.gmail.com>
Message-ID: <87fx80xqop.fsf@qurzaw.linpro.no>

]] Paras Fadte

Hi,

| FetchError showed "*no backend connection*". Would playing around
| with "backend probes" rectify this issue if backends are sometimes under
| load ? or would increasing the "connect_timeout" help ?

"no backend connection" means just that -- we failed to find a backend connection in time, ran into the maximum number of connections, or similar. If the problem is connect_timeout, increasing that further might help, yes.

-- 
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73

From tfheen at redpill-linpro.com Fri Nov 27 09:30:40 2009
From: tfheen at redpill-linpro.com (Tollef Fog Heen)
Date: Fri, 27 Nov 2009 10:30:40 +0100
Subject: vcl_timeout and fetch
In-Reply-To: <4B0F9930.4050200@gmail.com> (ll's message of "Fri, 27 Nov 2009 17:17:36 +0800")
References: <4B020993.9070108@gmail.com> <87ws1cxs13.fsf@qurzaw.linpro.no> <4B0F9930.4050200@gmail.com>
Message-ID: <87bpioxqkf.fsf@qurzaw.linpro.no>

]] ll

Hi,

| If the backend takes a long time to generate an object there is a risk
| of a thread pile up. In order to prevent this you can enable
| grace. This allows varnish to serve an expired version of the object
| while a fresh object is being generated by the backend.
|
| how can it have the prefetch function ?

I did not say it was the same thing; I said it solves the same problem that people are trying to solve with prefetch, namely thread pileups when you have many clients waiting on an object to come back from the backend.

Since you don't seem to think it solves your problem, what is the problem you are trying to solve?

Regards,
-- 
Tollef Fog Heen
Redpill Linpro -- Changing the game!
t: +47 21 54 41 73

From tw at cloudcontrol.de Fri Nov 27 10:43:12 2009
From: tw at cloudcontrol.de (Tobias Wilken)
Date: Fri, 27 Nov 2009 11:43:12 +0100
Subject: http format error
In-Reply-To: <87ocmoxqvb.fsf@qurzaw.linpro.no>
References: <3d46faea0911250221n6eaa137cr28211875ad7a75de@mail.gmail.com> <87ocmoxqvb.fsf@qurzaw.linpro.no>
Message-ID: <3d46faea0911270243t647347c5wc03e346aeda79fed@mail.gmail.com>

> It means Varnish has problems dissecting the response from the backend.
>
> As you can see in the log:
>
> 17 LostHeader b Content-Length: 20
>
> Try compiling Varnish with a higher maximum number of HTTP headers, just
> increasing the limit to 64 or so should be ok.

Great, thanks. That works fine. I was so focused on the "http format error" that I didn't recognize that line as an error description.

Best regards,
another happy varnish user
Tobias Wilken

From coolbomb at gmail.com Fri Nov 27 13:34:44 2009
From: coolbomb at gmail.com (Daniel Rodriguez)
Date: Fri, 27 Nov 2009 14:34:44 +0100
Subject: Handling 304 and header refresh
Message-ID: <95ff419c0911270534s35f02db8y9c0f49da8cb36e2d@mail.gmail.com>

Hi,

I'm having a problem with a varnish caching implementation on our sites.

We have some big and heavily loaded sites, and one of the things we often do is return a 304 for an object but with some modifications to the object headers. This works well with our current caching systems (the ones that are going to be replaced with varnish).
Example:

http://www.foo.com/varnish.jpg

That image never changes, so our Apache server will always return a 304. In some situations we need to change one of the object's headers (changing the max-age is one of the things we usually do).

But if we do that with varnish, all our fetches after changing the headers end up on our backends. Am I missing something?

My config and a segment of the logs after the headers were refreshed:

backend default {
    .host = "192.168.9.158";
    .port = "80";
}

acl purge {
    "localhost";
    "192.168.90.14";
    "192.168.90.34";
}

sub vcl_recv {
    set req.http.X-Forwarded-For = client.ip;
    if (req.request != "GET" &&
        req.request != "HEAD" &&
        req.request != "PUT" &&
        req.request != "POST" &&
        req.request != "PURGE") {
        /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }
    set req.grace = 2m;
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
            remove req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # Unknown algorithm
            remove req.http.Accept-Encoding;
        }
    }
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        purge("req.url == " req.url);
    }
    if (req.http.Authorization) {
        return (pass);
    }
    return (lookup);
}

sub vcl_fetch {
    set obj.grace = 2m;
    if (obj.http.Pragma ~ "no-cache" ||
        obj.http.Cache-Control ~ "no-cache" ||
        obj.http.Cache-Control ~ "private" ||
        obj.http.Cache-Control ~ "max-age=0" ||
        obj.http.Cache-Control ~ "must-revalidate") {
        pass;
    }
    if (!obj.cacheable) {
        return (pass);
    }
    if (obj.http.Set-Cookie) {
        return (pass);
    }
    set obj.prefetch = -3s;
    if (req.http.Authorization && !obj.http.Cache-Control ~ "public") {
        pass;
    }
    return (deliver);
}

12 ReqStart c 192.168.90.41 44056 345093438
12 RxRequest c GET
12 RxURL c /test/prueba.php
12 RxProtocol c HTTP/1.1
12 RxHeader c Host:
www.foo.com
12 RxHeader c User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092816 Iceweasel/3.0.3 (Debian-3.0.3-3)
12 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
12 RxHeader c Accept-Language: en-us,en;q=0.5
12 RxHeader c Accept-Encoding: gzip,deflate
12 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
12 RxHeader c Keep-Alive: 300
12 RxHeader c Connection: keep-alive
12 RxHeader c Cookie: foo.comGlobal=0; cUser=nouser
12 VCL_call c recv
12 VCL_return c lookup
12 VCL_call c hash
12 VCL_return c hash
12 HitPass c 345093401
12 VCL_call c pass
12 VCL_return c pass
13 BackendOpen b default 192.168.55.18 13822 192.168.9.158 80
12 Backend c 13 default default
13 TxRequest b GET
13 TxURL b /test/prueba.php
13 TxProtocol b HTTP/1.1
13 TxHeader b Host: www.foo.com.com
13 TxHeader b User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092816 Iceweasel/3.0.3 (Debian-3.0.3-3)
13 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
13 TxHeader b Accept-Language: en-us,en;q=0.5
13 TxHeader b Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
13 TxHeader b Cookie: foo.comGlobal=0; cUser=nouser
13 TxHeader b Accept-Encoding: gzip
13 TxHeader b X-Varnish: 345093438
13 TxHeader b X-Forwarded-For: 192.168.90.41
13 RxProtocol b HTTP/1.1
13 RxStatus b 304
13 RxResponse b Not Modified
13 RxHeader b Date: Mon, 23 Nov 2009 17:11:37 GMT
13 RxHeader b Server: Apache
13 RxHeader b Connection: close
13 RxHeader b Cache-control: max-age=20
12 ObjProtocol c HTTP/1.1
12 ObjStatus c 304
12 ObjResponse c Not Modified
12 ObjHeader c Date: Mon, 23 Nov 2009 17:11:37 GMT
12 ObjHeader c Server: Apache
12 ObjHeader c Cache-control: max-age=20
13 BackendClose b default
12 TTL c 345093438 RFC 20 1258996298 0 0 20 0
12 VCL_call c fetch
12 VCL_return c pass
12 Length c 0
12 VCL_call c deliver
12 VCL_return c deliver
12 TxProtocol c HTTP/1.1
12 TxStatus c 304
12 TxResponse c Not Modified
12
TxHeader c Server: Apache
12 TxHeader c Cache-control: max-age=20
12 TxHeader c Content-Length: 0
12 TxHeader c Date: Mon, 23 Nov 2009 17:11:38 GMT
12 TxHeader c X-Varnish: 345093438
12 TxHeader c Age: 0
12 TxHeader c Via: 1.1 varnish
12 TxHeader c Connection: keep-alive
12 ReqEnd c 345093438 1258996298.242152929 1258996298.246277332 0.538141012 0.003986359 0.000138044
12 Debug c "herding"

Thank you!

From h.paulissen at qbell.nl Fri Nov 27 14:53:22 2009
From: h.paulissen at qbell.nl (Henry Paulissen)
Date: Fri, 27 Nov 2009 15:53:22 +0100
Subject: req.hash and fetch
Message-ID: <003c01ca6f71$5d788ed0$1869ac70$@paulissen@qbell.nl>

Hey all,

On one of my websites the request URL alone is not unique. The content depends on the URL you request (.com|.co.uk|etc) and on the browser language. So on the first request we set a cookie with the language preference (so we don't have to check it every time).

Inside varnish I made the following:

@vcl_recv:
set req.http.X-match = regsub(req.http.Cookie, "^.*(langpref=[a-z]+_[a-z]+).*$", "\1");

@vcl_hash:
set req.hash += req.url;
set req.hash += req.http.X-match;

Does varnish need more settings (in vcl_fetch?) to store the fetched backend response under the proper hash key?

Regards.
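[As far as I know, nothing extra is needed in vcl_fetch: the object is stored under whatever key vcl_hash computed at lookup time. One caveat, sketched below assuming VCL 2.x syntax: defining vcl_hash replaces the default hash, which normally also mixes in the Host header, so it is worth adding that back.]

```vcl
sub vcl_hash {
    set req.hash += req.url;
    # The default hash also includes the Host header; keep it so
    # different virtual hosts do not collide in the cache.
    set req.hash += req.http.host;
    set req.hash += req.http.X-match;
    hash;
}
```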
From scaunter at topscms.com Fri Nov 27 20:14:34 2009
From: scaunter at topscms.com (Caunter, Stefan)
Date: Fri, 27 Nov 2009 15:14:34 -0500
Subject: Handling 304 and header refresh
In-Reply-To: <95ff419c0911270534s35f02db8y9c0f49da8cb36e2d@mail.gmail.com>
References: <95ff419c0911270534s35f02db8y9c0f49da8cb36e2d@mail.gmail.com>
Message-ID: <064FF286FD17EC418BFB74629578BED1156840D5@tmg-mail4.torstar.net>

-----Original Message-----
From: varnish-misc-bounces at projects.linpro.no [mailto:varnish-misc-bounces at projects.linpro.no] On Behalf Of Daniel Rodriguez
Sent: November-27-09 8:35 AM
To: varnish-misc at projects.linpro.no
Subject: Handling 304 and header refresh

Hi,

I'm having a problem with a varnish caching implementation in our sites.

We have some big and heavy loaded sites, and one of the things we are used to do, is to return a 304 from an object but with some modifications to the object headers. This works cool with our current caching systems (the ones that are going to be replaced with varnish).

Example:

http://www.foo.com/varnish.jpg

That image never changes so our apache server will always returns 304. In some situations we need to change one of the headers of the object (changing the max age is one of the things we usually do).

But if we do that with varnish all our fetches after changing the headers end up on our backbends.

Hi: I put this in my vcl_fetch and it sets max-age:

set obj.http.cache-control = "max-age = 600";

Do you have this anywhere in your config?
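[For comparison, the object's lifetime inside Varnish can also be set directly instead of, or in addition to, rewriting the header; a sketch assuming VCL 2.x syntax, where obj.ttl controls Varnish's own expiry and the Cache-Control header only affects browsers and downstream caches:]

```vcl
sub vcl_fetch {
    # Cache in Varnish for 10 minutes regardless of backend headers.
    set obj.ttl = 600s;
    # What browsers and downstream caches are told.
    set obj.http.Cache-Control = "max-age=600";
    deliver;
}
```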
Stef

From coolbomb at gmail.com Fri Nov 27 21:55:02 2009
From: coolbomb at gmail.com (Daniel Rodriguez)
Date: Fri, 27 Nov 2009 22:55:02 +0100
Subject: Handling 304 and header refresh
In-Reply-To: <064FF286FD17EC418BFB74629578BED1156840D5@tmg-mail4.torstar.net>
References: <95ff419c0911270534s35f02db8y9c0f49da8cb36e2d@mail.gmail.com> <064FF286FD17EC418BFB74629578BED1156840D5@tmg-mail4.torstar.net>
Message-ID: <95ff419c0911271355g286d2d5bm530a7872f13fbab0@mail.gmail.com>

Hi Stefan,

Thank you for your answer, but no, I don't use that kind of configuration in my vcl, because the max-age is changed in a different way per object by the apache web server at the time it returns a 304, and in some specific situations demanded by our sites.

In the example log, the apache web server returns:

13 RxStatus b 304
13 RxResponse b Not Modified
13 RxHeader b Date: Mon, 23 Nov 2009 17:11:37 GMT
13 RxHeader b Server: Apache
13 RxHeader b Connection: close
13 RxHeader b Cache-control: max-age=20

changing the max-age of the object, originally set in a previous fetch, to 20 in this case.

BTW: In the log there is a line that says:

13 TxHeader b Host: www.foo.com.com <--- that second .com is my mistake when I replaced the host name. Not a log/object error.

Best Regards,

On Fri, Nov 27, 2009 at 9:14 PM, Caunter, Stefan wrote:
>
> -----Original Message-----
> From: varnish-misc-bounces at projects.linpro.no
> [mailto:varnish-misc-bounces at projects.linpro.no] On Behalf Of Daniel
> Rodriguez
> Sent: November-27-09 8:35 AM
> To: varnish-misc at projects.linpro.no
> Subject: Handling 304 and header refresh
>
> Hi,
>
> I'm having a problem with a varnish caching implementation in our sites.
>
> We have some big and heavy loaded sites, and one of the things we are
> used to do, is to return a 304 from an object but with some
> modifications to the object headers. This works cool with our current
> caching systems (the ones that are going to be replaced with varnish).
> Example:
>
> http://www.foo.com/varnish.jpg
>
> That image never changes so our apache server will always returns 304.
> In some situations we need to change one of the headers of the object
> (changing the max age is one of the things we usually do).
>
> But if we do that with varnish all our fetches after changing the
> headers end up on our backbends.
>
> Hi: I put this in my vcl.fetch and it sets max-age
>
> I do this in vcl.fetch
>
> set obj.http.cache-control = "max-age = 600";
>
> Do you have this anywhere in your config?
>
> Stef

From ibeginhere at gmail.com Mon Nov 30 02:06:21 2009
From: ibeginhere at gmail.com (ll)
Date: Mon, 30 Nov 2009 10:06:21 +0800
Subject: varnish bottleneck?
In-Reply-To: 
References: <4B0F276A.9000909@gmail.com>
Message-ID: <4B13289D.8030403@gmail.com>

Sorry, I meant the problem is not caused by the difference between pipe and pass. But now I think it's my fault: I still don't understand the difference between pipe and pass very well, and I think pipe caused this problem.

On 2009-11-27 17:21, Lars Jørgensen wrote:

> On 27/11/2009 at 02.12, ll wrote:
>
>> there are some problem .many of url's record can't be find in the
>> varnishlog .and there are no marked by the varnish eg "X-Cache: MISS" in
>> the Response Headers .
>
> Well, if you have decided it has nothing to do with pipe, then we can't really help you, can we?
>
> --
> Lars
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc

From ibeginhere at gmail.com Mon Nov 30 02:51:03 2009
From: ibeginhere at gmail.com (ll)
Date: Mon, 30 Nov 2009 10:51:03 +0800
Subject: varnish bottleneck?
In-Reply-To: <87k4xcxqs9.fsf@qurzaw.linpro.no>
References: <4B0F276A.9000909@gmail.com> <87k4xcxqs9.fsf@qurzaw.linpro.no>
Message-ID: <4B133316.7030807@gmail.com>

I have some problems understanding pipe and pass. From the official website:

"In pipe mode, the request is passed on to the backend, and any further data from either client or backend is passed on unaltered until either end closes the connection. In pass mode, the request is passed on to the backend, and the backend's response is passed on to the client, but is not entered into the cache. Subsequent requests submitted over the same client connection are handled normally."

So if one of the requests is a "POST" and I set POST to pipe, in this situation I intend only the POST request to go into pipe mode, but maybe some GET requests will be piped as well, if a GET comes after the POST on the same connection. Right? Because pipe mode only ends when either the backend or the client closes the connection, and I don't know when the connection will close. Can I understand it like that? Is that right?

And if I use pass, a GET request after the POST will still be handled normally (maybe lookup, or the other settings I have set for GET). Is that right?

On 2009-11-27 17:25, Tollef Fog Heen wrote:

> ]] ll
>
> | if (req.http.host ~ "www.abc.cn") {
> |     lookup;
> | }
> | it's well. Varnish can cache everything .but some function of the
> | website is unable .eg POST. So I put the POST judge before the HOST
> | .like that :
> | if (req.request == "POST") {
> |     pipe;
> | }
> | if (req.http.host ~ "www.abc.cn") {
> |     lookup;
> | }
> | there are some problem .many of url's record can't be find in the
> | varnishlog .and there are no marked by the varnish eg "X-Cache: MISS" in
> | the Response Headers .
>
> Yes, this is how pipe works. You might want to read up on pass vs pipe.
From itlj at gyldendal.dk Mon Nov 30 08:54:09 2009
From: itlj at gyldendal.dk (Lars Jørgensen)
Date: Mon, 30 Nov 2009 09:54:09 +0100
Subject: varnish bottleneck?
In-Reply-To: <4B133316.7030807@gmail.com>
References: <4B0F276A.9000909@gmail.com> <87k4xcxqs9.fsf@qurzaw.linpro.no> <4B133316.7030807@gmail.com>
Message-ID: 

On 30/11/2009 at 03.51, ll wrote:

> so if in the pipe mode , one of request is "POST",and i set the if POST
> the pipe. in this situation , I can control only the POST request go to
> pipe mode.may be some GET request will be pipe also if the GET request
> after the POST. right ? because the pipe mode end by either backend or
> client closes the connection . and I don't know when the connect will
> close . can I understand like that ? Is it rights ?
> and if I set PASS,even though the GET request after the POST, it will be
> handle by normally (maybe lookup or other settings I had set for GET)
> Is it right ?

Yes, that's the way I understand it, too.

-- 
Lars

From stockrt at gmail.com Mon Nov 30 14:57:58 2009
From: stockrt at gmail.com (Rogerio Schneider)
Date: Mon, 30 Nov 2009 12:57:58 -0200
Subject: varnish bottleneck?
In-Reply-To: 
References: <4B0F276A.9000909@gmail.com> <87k4xcxqs9.fsf@qurzaw.linpro.no> <4B133316.7030807@gmail.com>
Message-ID: 

That is an interesting issue. I would like to have a confirmation from the supreme beings.

Regards,
Rogerio Schneider

On 30/11/2009, at 06:54, Lars Jørgensen wrote:

> On 30/11/2009 at 03.51, ll wrote:
>> so if in the pipe mode , one of request is "POST",and i set the if POST
>> the pipe. in this situation , I can control only the POST request go to
>> pipe mode.may be some GET request will be pipe also if the GET request
>> after the POST. right ? because the pipe mode end by either backend or
>> client closes the connection . and I don't know when the connect will
>> close . can I understand like that ? Is it rights ?
>> and if I set PASS,even though the GET request after the POST, it will be
>> handle by normally (maybe lookup or other settings I had set for GET)
>> Is it right ?
>
> Yes, that's the way I understand it, too.
>
> --
> Lars
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc

From phk at phk.freebsd.dk Mon Nov 30 15:45:24 2009
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Mon, 30 Nov 2009 15:45:24 +0000
Subject: varnish bottleneck?
In-Reply-To: Your message of "Mon, 30 Nov 2009 12:57:58 -0200."
Message-ID: <14572.1259595924@critter.freebsd.dk>

That is correct.

If you use "pipe", the TCP connection turns into a straight pipe where bytes are moved from client to server, without further inspection, until either side closes the TCP connection.

-- 
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG     | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
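[This is why pass is usually the better choice than pipe for POST in the configuration discussed in this thread: pass applies to the single request only, so the next GET on the same client connection is handled normally. A sketch, assuming VCL 2.x syntax and the hostname from the thread:]

```vcl
sub vcl_recv {
    if (req.request == "POST") {
        # pass sends this one request to the backend uncached, but the
        # connection stays under Varnish's control afterwards, so a
        # following GET on the same connection is processed normally.
        pass;
    }
    if (req.http.host ~ "www.abc.cn") {
        lookup;
    }
}
```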