From wxz19861013 at gmail.com Fri Feb 1 07:01:53 2013 From: wxz19861013 at gmail.com (Xianzhe Wang) Date: Fri, 1 Feb 2013 15:01:53 +0800 Subject: How to make multiple clients can get the response at the same time by stream. In-Reply-To: References: <20130129121340.GC19688@kuba.local.lp> Message-ID:

Hi,
Thanks for the clarification. What you say is very clear.
I am sorry for my poor English, but I have tried my best to communicate.

There is another question. For example, if we request a .jpg file (cacheable), Varnish will encapsulate it as an object and insert it into memory. How can we get the .jpg file back out of that object?

Thank you for your help again.

-Shawn Wang

2013/1/30 Per Buer

> Hi,
>
> I was a bit quick and I didn't read the whole email the first time. Sorry
> about that. You're actually using the streaming branch already, I see. What
> you're describing is really, really odd. There is a slight lock while the
> "first" object is being fetched, during which other requests will be put on
> the waiting list. However, when the hit-for-pass object is created these
> should be released and pass'ed to the clients.
>
> If the backend takes forever coming back with the response headers then
> the situation would be something like what you describe. However, that
> would be odd and doesn't make much sense.
>
> PS: The streaming branch was renamed "plus" when it got other experimental
> features. You'll find the source at
> https://github.com/mbgrydeland/varnish-cache and packages at
> repo.varnish-cache.org/test if I recall correctly.
>
> On Wed, Jan 30, 2013 at 3:32 AM, Xianzhe Wang wrote:
>
>> Hi,
>> Thanks a lot.
>>
>> I tried the option
>> "set req.hash_ignore_busy = true;"
>> in vcl_recv.
>> I think it works. But there is a side effect: it would increase backend
>> load.
>>
>> I have an idea about it in my previous email. What do you think about it?
>>
>> Another question: where can I find the "plus" branch of Varnish
>> which matches this issue.
>>
>> Any suggestions will be appreciated.
>> Thanks again for the help.
>>
>> Regards,
>> --
>> Shawn Wang
>>
>> ---------- Forwarded message ----------
>> From: Xianzhe Wang
>> Date: 2013/1/30
>> Subject: Re: How to make multiple clients can get the response at the
>> same time by stream.
>> To: Jakub Słociński
>>
>> Hi Jakub S.
>> Thank you very much.
>> I tried it and ran a simple test: two clients request the big file at the
>> same time, and both get the response stream immediately, so it works.
>> In that case, multiple requests will go directly to "pass"; they do not
>> need to wait, but it would increase backend load.
>> We need to balance the benefits and drawbacks.
>>
>> What I want is this:
>> Client 1 requests url /foo
>> Client 2..N request url /foo
>> Varnish tasks a worker to fetch /foo for Client 1
>> Client 2..N are now queued pending a response from the worker
>> The worker fetches the response header (just the header, not the body)
>> from the backend, finds it non-cacheable, and makes the remaining
>> requests (Client 2..N) go directly to "pass", creating the hit_for_pass
>> object synchronously in the first request (Client 1).
>> Subsequent requests are then given the hit_for_pass object, instructing
>> them to go to the backend for as long as the hit_for_pass object exists.
>>
>> As I mentioned below, is this feasible? Or do you have any suggestions?
>>
>> Thanks again for the help.
>>
>> Regards,
>> --
>> Shawn Wang
>>
>> 2013/1/29 Jakub Słociński
>>
>>> Hi Xianzhe Wang,
>>> you should try the option
>>> "set req.hash_ignore_busy = true;"
>>> in vcl_recv.
>>>
>>> Regards,
>>> --
>>> Jakub S.
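For readers following along, Jakub's tip looks roughly like this in a Varnish 3.x vcl_recv (a minimal sketch; the rest of the VCL is assumed to live elsewhere):

```vcl
sub vcl_recv {
    # Don't queue this request behind a busy object on the waiting list;
    # go straight to the backend instead. This trades extra backend load
    # for client concurrency, as discussed in the thread.
    set req.hash_ignore_busy = true;
}
```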
>>>
>>> Xianzhe Wang wrote:
>>> > Hello everyone,
>>> > My varnish version is the 3.0.2-streaming release, and I set
>>> > "beresp.do_stream = true" in vcl_fetch in order to "Deliver the object to
>>> > the client directly without fetching the whole object into varnish".
>>> >
>>> > This is a part of my *.vcl file:
>>> >
>>> > sub vcl_fetch {
>>> >     set beresp.grace = 30m;
>>> >
>>> >     set beresp.do_stream = true;
>>> >
>>> >     if (beresp.http.Content-Length && beresp.http.Content-Length ~
>>> >         "[0-9]{8,}") {
>>> >         return (hit_for_pass);
>>> >     }
>>> >
>>> >     if (beresp.http.Pragma ~ "no-cache" || beresp.http.Cache-Control ~
>>> >         "no-cache" || beresp.http.Cache-Control ~ "private") {
>>> >         return (hit_for_pass);
>>> >     }
>>> >
>>> >     if (beresp.ttl <= 0s ||
>>> >         beresp.http.Set-Cookie ||
>>> >         beresp.http.Vary == "*") {
>>> >         set beresp.ttl = 120s;
>>> >         return (hit_for_pass);
>>> >     }
>>> >
>>> >     return (deliver);
>>> > }
>>> >
>>> > Then I request a big file (about 100M+) like "xxx.zip" from clients.
>>> > Only one client can access the object, because "the object will be
>>> > marked as busy as it is delivered."
>>> >
>>> > But if the request goes directly to "pass", multiple clients can get
>>> > the response at the same time.
>>> >
>>> > Also, if I remove
>>> >     if (beresp.http.Content-Length && beresp.http.Content-Length ~
>>> >         "[0-9]{8,}") {
>>> >         return (hit_for_pass);
>>> >     }
>>> > to make the file cacheable, multiple clients can get the response at
>>> > the same time.
>>> >
>>> > Now I want "multiple clients can get the response at the same time" in
>>> > all situations ("pass", "hit", "hit_for_pass").
>>> >
>>> > What can I do for it?
>>> > Any suggestions will be appreciated.
>>> > Thank you.
>>> >
>>> > -Shawn Wang
>>>
>>> > _______________________________________________
>>> > varnish-misc mailing list
>>> > varnish-misc at varnish-cache.org
>>> > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

>
> --
> *Per Buer*
>
> CEO | Varnish Software AS
> Phone: +47 958 39 117 | Skype: per.buer
> We Make Websites Fly!

From damon at huddler-inc.com Fri Feb 1 07:10:02 2013 From: damon at huddler-inc.com (Damon Snyder) Date: Thu, 31 Jan 2013 23:10:02 -0800 Subject: Feedback on session_linger (high load/n_wrk_overflow issues) In-Reply-To: References: Message-ID:

Well, I spoke too soon. Bumping session_linger brought us stability for about a day and then the high context switching returned. See the attached plot to illustrate the problem. The context switching (pulled from dstat and added to varnish) starts to ramp up. As soon as it crosses the ~50k mark we start seeing stability and latency issues (10k/s is more "normal"). So now I'm at a loss as to how to proceed.

Below is the current varnishstat -1 output when the system is well behaved. I wasn't able to capture it when the context switching peaked. n_wrk_overflow spiked to about 1500 at the time, and the load average over the past 5 min was ~15. This is running on a dual hex-core Intel Xeon E5-2620 (24 contexts or cores) with 64GB of memory. The hitrate is about 0.65 and we are nuking (n_lru_nuked incrementing) once the system has been running for a few hours. We have long-tail objects in that we have a lot of content, but only some of it is hot, so it's difficult to predict our precise cache size requirements at any given time. We are using varnish-2.1.5 on CentOS 5.6 with kernel 2.6.18-238. I haven't found anything in syslog that would be of interest.
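The context-switch rate Damon is tracking can be watched live with standard tools (a sketch; dstat is the tool mentioned above, and sar ships with the sysstat package -- exact flags vary between versions):

```sh
# sar: report context switches per second (cswch/s), sampled every 5 seconds
sar -w 5

# dstat: -y shows system stats, including interrupts ("int") and
# context switches ("csw") per interval
dstat -y 5
```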
Thanks,
Damon

# currently running command line
/usr/local/varnish-2.1.5/sbin/varnishd -P /var/run/varnish.pid -a 10.16.50.150:6081,127.0.0.1:6081 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 120 -w 100,2000,120 -u varnish -g varnish -s malloc,48G -p thread_pools 8 -p thread_pool_add_delay 2 -p listen_depth 1024 -p session_linger 110 -p lru_interval 60 -p sess_workspace 524288

# varnishstat -1
client_conn 5021009 442.11 Client connections accepted
client_drop 0 0.00 Connection dropped, no sess/wrk
client_req 5020366 442.05 Client requests received
cache_hit 4597754 404.84 Cache hits
cache_hitpass 24775 2.18 Cache hits for pass
cache_miss 2664516 234.61 Cache misses
backend_conn 3101882 273.13 Backend conn. success
backend_unhealthy 0 0.00 Backend conn. not attempted
backend_busy 0 0.00 Backend conn. too many
backend_fail 0 0.00 Backend conn. failures
backend_reuse 99 0.01 Backend conn. reuses
backend_toolate 11777 1.04 Backend conn. was closed
backend_recycle 11877 1.05 Backend conn. recycles
backend_unused 0 0.00 Backend conn. unused
fetch_head 33 0.00 Fetch head
fetch_length 2134117 187.91 Fetch with Length
fetch_chunked 811172 71.42 Fetch chunked
fetch_eof 0 0.00 Fetch EOF
fetch_bad 0 0.00 Fetch had bad headers
fetch_close 156333 13.77 Fetch wanted close
fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed
fetch_zero 0 0.00 Fetch zero len
fetch_failed 1 0.00 Fetch failed
n_sess_mem 3867 . N struct sess_mem
n_sess 3841 . N struct sess
n_object 757929 . N struct object
n_vampireobject 0 . N unresurrected objects
n_objectcore 758158 . N struct objectcore
n_objecthead 760076 . N struct objecthead
n_smf 0 . N struct smf
n_smf_frag 0 . N small free smf
n_smf_large 0 . N large free smf
n_vbe_conn 69 . N struct vbe_conn
n_wrk 800 . N worker threads
n_wrk_create 800 0.07 N worker threads created
n_wrk_failed 0 0.00 N worker threads not created
n_wrk_max 0 0.00 N worker threads limited
n_wrk_queue 0 0.00 N queued work requests
n_wrk_overflow 171 0.02 N overflowed work requests
n_wrk_drop 0 0.00 N dropped work requests
n_backend 21 . N backends
n_expired 1879027 . N expired objects
n_lru_nuked 0 . N LRU nuked objects
n_lru_saved 0 . N LRU saved objects
n_lru_moved 813472 . N LRU moved objects
n_deathrow 0 . N objects on deathrow
losthdr 0 0.00 HTTP header overflows
n_objsendfile 0 0.00 Objects sent with sendfile
n_objwrite 5195402 457.46 Objects sent with write
n_objoverflow 0 0.00 Objects overflowing workspace
s_sess 5020819 442.09 Total Sessions
s_req 5020366 442.05 Total Requests
s_pipe 0 0.00 Total pipe
s_pass 437300 38.50 Total pass
s_fetch 3101668 273.11 Total fetch
s_hdrbytes 2575156242 226746.17 Total header bytes
s_bodybytes 168406835436 14828461.34 Total body bytes
sess_closed 5020817 442.09 Session Closed
sess_pipeline 0 0.00 Session Pipeline
sess_readahead 0 0.00 Session Read Ahead
sess_linger 0 0.00 Session Linger
sess_herd 0 0.00 Session herd
shm_records 430740934 37927.35 SHM records
shm_writes 29264523 2576.78 SHM writes
shm_flushes 51411 4.53 SHM flushes due to overflow
shm_cont 78056 6.87 SHM MTX contention
shm_cycles 186 0.02 SHM cycles through buffer
sm_nreq 0 0.00 allocator requests
sm_nobj 0 . outstanding allocations
sm_balloc 0 . bytes allocated
sm_bfree 0 . bytes free
sma_nreq 5570153 490.46 SMA allocator requests
sma_nobj 1810351 . SMA outstanding allocations
sma_nbytes 35306367656 . SMA outstanding bytes
sma_balloc 154583709612 . SMA bytes allocated
sma_bfree 119277341956 . SMA bytes free
sms_nreq 32127 2.83 SMS allocator requests
sms_nobj 0 . SMS outstanding allocations
sms_nbytes 0 . SMS outstanding bytes
sms_balloc 29394423 . SMS bytes allocated
sms_bfree 29394423 . SMS bytes freed
backend_req 3101977 273.13 Backend requests made
n_vcl 1 0.00 N vcl total
n_vcl_avail 1 0.00 N vcl available
n_vcl_discard 0 0.00 N vcl discarded
n_purge 31972 . N total active purges
n_purge_add 31972 2.82 N new purges added
n_purge_retire 0 0.00 N old purges deleted
n_purge_obj_test 3317282 292.09 N objects tested
n_purge_re_test 530623465 46722.15 N regexps tested against
n_purge_dups 28047 2.47 N duplicate purges removed
hcb_nolock 7285152 641.47 HCB Lookups without lock
hcb_lock 2543066 223.92 HCB Lookups with lock
hcb_insert 2542964 223.91 HCB Inserts
esi_parse 741690 65.31 Objects ESI parsed (unlock)
esi_errors 0 0.00 ESI parse errors (unlock)
accept_fail 0 0.00 Accept failures
client_drop_late 0 0.00 Connection dropped late
uptime 11357 1.00 Client uptime
backend_retry 99 0.01 Backend conn. retry
dir_dns_lookups 0 0.00 DNS director lookups
dir_dns_failed 0 0.00 DNS director failed lookups
dir_dns_hit 0 0.00 DNS director cached lookups hit
dir_dns_cache_full 0 0.00 DNS director full dnscache
fetch_1xx 0 0.00 Fetch no body (1xx)
fetch_204 0 0.00 Fetch no body (204)
fetch_304 14 0.00 Fetch no body (304)

On Wed, Jan 30, 2013 at 3:45 PM, Damon Snyder wrote:

> When you 'varnishadm -T localhost:port param.show session_linger' it
> indicates at the bottom that "we don't know if this is a good idea... and
> feedback is welcome."
>
> We found that setting session_linger pulled us out of a bind. I wanted to
> add my feedback to the list in the hope that someone else might benefit
> from what we experienced.
>
> We recently increased the number of esi includes on pages that get ~60-70
> req/s on our platform. Some of those modules were being rendered with
> s-maxage set to zero so that they would be refreshed on every page load
> (this is so we could insert a non-cached partial into the page), which
> further increased the request load on varnish.
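The runtime-tuning workflow quoted above can be sketched as follows (the -T address matches the varnishd command line earlier in this message; the value 100 is the example from the thread, not a recommendation):

```sh
# Show the current value plus the parameter's built-in documentation
varnishadm -T 127.0.0.1:6082 param.show session_linger

# Raise it on the fly -- no restart needed, so the malloc'ed cache survives
varnishadm -T 127.0.0.1:6082 param.set session_linger 100
```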
>
> What we found is that after a few hours the load on a varnish box went
> from < 1 to > 10 or more and n_wrk_overflow started incrementing. After
> investigating further we noticed that the context switching went from
> ~10k/s to > 100k/s. We are running Linux, specifically CentOS.
>
> No adjusting of threads or thread pools had any impact on the thrashing.
> After reading Kristian's post about high-end varnish tuning we decided to
> try out session_linger. We started by doubling the default from 50 to 100
> to test the theory ('varnishadm -T localhost:port param.set session_linger
> 100'). Once we did that we saw a gradual settling of the context switching
> (using dstat or sar -w) and a stabilizing of the load.
>
> It's such a great feature to be able to change this parameter via the
> admin interface. We have 50GB malloc'ed and some nuking on our boxes, so
> restarting varnish doesn't come without some impact to the platform.
>
> Intuitively, increasing session_linger makes sense. If you have several esi
> modules rendered within a page and the gap between them is > 50ms, then
> they'll be reallocated elsewhere.
>
> What is not clear to me is how we should tune session_linger. We started
> by setting it to the 3rd quantile of render times for the esi module taken
> from a sampling of backend requests. This turned out to be 110ms.
>
> Damon

-------------- next part --------------
A non-text attachment was scrubbed...
Name: varnish-context-switching.png
Type: image/png
Size: 55752 bytes
Desc: not available

From perbu at varnish-software.com Fri Feb 1 13:39:21 2013 From: perbu at varnish-software.com (Per Buer) Date: Fri, 1 Feb 2013 14:39:21 +0100 Subject: How to make multiple clients can get the response at the same time by stream. In-Reply-To: References: <20130129121340.GC19688@kuba.local.lp> Message-ID:

Hi,

I don't quite understand what you're trying to do.
Varnish will store the .jpg together with the response headers in memory. When you request the object, Varnish will deliver it verbatim along with the HTTP headers. What exactly are you trying to do?

PS: I see we haven't built packages of 3.0.3-plus yet. This should pop up in the repo next week. Until then 3.0.2s might suffice.

--
*Per Buer*
CEO | Varnish Software AS
Phone: +47 958 39 117 | Skype: per.buer
We Make Websites Fly!

From jdh132 at psu.edu Fri Feb 1 17:33:53 2013 From: jdh132 at psu.edu (Jason Heffner) Date: Fri, 1 Feb 2013 12:33:53 -0500 Subject: Varnish 3.0.3 http error Message-ID:

Hi,

I've been through some of the archives on the list and have seen some similar problems. I was wondering if anyone can tell me why this particular HTTP error might be caused. I also contacted the jetpack developers, since it is their plugin returning the HTTP error from apache on the backend. If varnish is taken out of the setup, the 302 redirect returned from apache functions properly. I can recreate the issue on a development system, and all other page loads are fine.

Our setup: RHEL6, Pound 2.6.2, Varnish 3.0.3, Apache 2.2.15, php-fpm 5.3.21

I'd welcome any advice.
Thanks, Jason varnishlog -d -c -m TxStatus:503 16 ReqStart c xxx.xxx.xxx.xxx 45094 812364756 16 RxRequest c GET 16 RxURL c /wp-admin/admin.php?page=jetpack&action=authorize&_wpnonce=cec3b377a1&code=dhcULw6Zlg&state=2 16 RxProtocol c HTTP/1.1 16 RxHeader c Host: somesite.psu.edu 16 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:18.0) Gecko/20100101 Firefox/18.0 16 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 16 RxHeader c Accept-Language: en-US,en;q=0.5 16 RxHeader c Accept-Encoding: gzip, deflate 16 RxHeader c Cookie: __utma=165035810.2146825052.1359667423.1359667423.1359672023.2; __utmc=165035810; __utmz=165035810.1359667423.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); bp-activity-oldestpage=1; __qca=P0-1744595654-1359667424495; bp-activity-scope=all; bp 16 RxHeader c Connection: keep-alive 16 RxHeader c X-Forwarded-For: xxx.xxx.xxx.xxx 16 VCL_call c recv pass 16 VCL_call c hash 16 Hash c /wp-admin/admin.php?page=jetpack&action=authorize&_wpnonce=cec3b377a1&code=dhcULw6Zlg&state=2 16 Hash c somesite.psu.edu 16 VCL_return c hash 16 VCL_call c pass pass 16 Backend c 14 default_director web1 16 FetchError c http format error 16 VCL_call c error deliver 16 VCL_call c deliver deliver 16 TxProtocol c HTTP/1.1 16 TxStatus c 503 16 TxResponse c Service Unavailable 16 TxHeader c Server: Varnish 16 TxHeader c Content-Type: text/html; charset=utf-8 16 TxHeader c Retry-After: 5 16 TxHeader c Content-Length: 418 16 TxHeader c Accept-Ranges: bytes 16 TxHeader c Date: Thu, 31 Jan 2013 22:46:08 GMT 16 TxHeader c Age: 3 16 TxHeader c Connection: close 16 TxHeader c X-Cache: MISS 16 Length c 418 16 ReqEnd c 812364756 1359672365.351407051 1359672368.171207428 1.863311291 2.819756508 0.000043869 p: (814) 865-1840, c: (814) 777-7665 Systems Administrator Teaching and Learning with Technology, Information Technology Services The Pennsylvania State University From Raul.Rangel at disney.com Fri Feb 1 17:55:39 2013 From: 
Raul.Rangel at disney.com (Rangel, Raul) Date: Fri, 1 Feb 2013 09:55:39 -0800 Subject: Varnish 3.0.3 http error In-Reply-To: References: Message-ID: <2465AAEEC8B8A242B26ED5F44BCA805F2609ADAE91@SM-CALA-VXMB04A.swna.wdpr.disney.com> What does the backend log look like? In the log it shows 16 FetchError c http format error What does curl show if you do the same request? -----Original Message----- From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Jason Heffner Sent: Friday, February 01, 2013 10:34 AM To: varnish-misc at varnish-cache.org Subject: Varnish 3.0.3 http error Hi, I've been through some of the archives on the list and have seen some similar problems. I was wondering if anyone can tell why this particular HTTP error might be caused? I also contacted the jetpack developers since it is their plugin returning the HTTP error from apache on the backend. If varnish is taken out of the setup the 302 redirect returned from apache functions properly. I can recreate the issue on a development system, and all other page loads are fine. Our setup: RHEL6, Pound 2.6.2, Varnish 3.0.3, Apache 2.2.15, php-fpm 5.3.21 I'd welcome any advice. 
Thanks, Jason varnishlog -d -c -m TxStatus:503 16 ReqStart c xxx.xxx.xxx.xxx 45094 812364756 16 RxRequest c GET 16 RxURL c /wp-admin/admin.php?page=jetpack&action=authorize&_wpnonce=cec3b377a1&code=dhcULw6Zlg&state=2 16 RxProtocol c HTTP/1.1 16 RxHeader c Host: somesite.psu.edu 16 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:18.0) Gecko/20100101 Firefox/18.0 16 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 16 RxHeader c Accept-Language: en-US,en;q=0.5 16 RxHeader c Accept-Encoding: gzip, deflate 16 RxHeader c Cookie: __utma=165035810.2146825052.1359667423.1359667423.1359672023.2; __utmc=165035810; __utmz=165035810.1359667423.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); bp-activity-oldestpage=1; __qca=P0-1744595654-1359667424495; bp-activity-scope=all; bp 16 RxHeader c Connection: keep-alive 16 RxHeader c X-Forwarded-For: xxx.xxx.xxx.xxx 16 VCL_call c recv pass 16 VCL_call c hash 16 Hash c /wp-admin/admin.php?page=jetpack&action=authorize&_wpnonce=cec3b377a1&code=dhcULw6Zlg&state=2 16 Hash c somesite.psu.edu 16 VCL_return c hash 16 VCL_call c pass pass 16 Backend c 14 default_director web1 16 FetchError c http format error 16 VCL_call c error deliver 16 VCL_call c deliver deliver 16 TxProtocol c HTTP/1.1 16 TxStatus c 503 16 TxResponse c Service Unavailable 16 TxHeader c Server: Varnish 16 TxHeader c Content-Type: text/html; charset=utf-8 16 TxHeader c Retry-After: 5 16 TxHeader c Content-Length: 418 16 TxHeader c Accept-Ranges: bytes 16 TxHeader c Date: Thu, 31 Jan 2013 22:46:08 GMT 16 TxHeader c Age: 3 16 TxHeader c Connection: close 16 TxHeader c X-Cache: MISS 16 Length c 418 16 ReqEnd c 812364756 1359672365.351407051 1359672368.171207428 1.863311291 2.819756508 0.000043869 p: (814) 865-1840, c: (814) 777-7665 Systems Administrator Teaching and Learning with Technology, Information Technology Services The Pennsylvania State University _______________________________________________ varnish-misc 
mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From jdh132 at psu.edu Fri Feb 1 21:50:42 2013 From: jdh132 at psu.edu (Jason Heffner) Date: Fri, 1 Feb 2013 16:50:42 -0500 Subject: Varnish 3.0.3 http error In-Reply-To: <2465AAEEC8B8A242B26ED5F44BCA805F2609ADAE91@SM-CALA-VXMB04A.swna.wdpr.disney.com> References: <2465AAEEC8B8A242B26ED5F44BCA805F2609ADAE91@SM-CALA-VXMB04A.swna.wdpr.disney.com> Message-ID: I attached a tcpdump since a curl isn't practical; I would need to include cookies and the the correct string parameters to simulate the same action. http://pastebin.com/3GHYFUwD On Feb 1, 2013, at 12:55 PM, "Rangel, Raul" wrote: > What does the backend log look like? > > In the log it shows > 16 FetchError c http format error > > What does curl show if you do the same request? > > -----Original Message----- > From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Jason Heffner > Sent: Friday, February 01, 2013 10:34 AM > To: varnish-misc at varnish-cache.org > Subject: Varnish 3.0.3 http error > > Hi, > > I've been through some of the archives on the list and have seen some similar problems. I was wondering if anyone can tell why this particular HTTP error might be caused? I also contacted the jetpack developers since it is their plugin returning the HTTP error from apache on the backend. If varnish is taken out of the setup the 302 redirect returned from apache functions properly. I can recreate the issue on a development system, and all other page loads are fine. > > Our setup: RHEL6, Pound 2.6.2, Varnish 3.0.3, Apache 2.2.15, php-fpm 5.3.21 > > I'd welcome any advice. 
> > Thanks, > Jason > > varnishlog -d -c -m TxStatus:503 > 16 ReqStart c xxx.xxx.xxx.xxx 45094 812364756 > 16 RxRequest c GET > 16 RxURL c /wp-admin/admin.php?page=jetpack&action=authorize&_wpnonce=cec3b377a1&code=dhcULw6Zlg&state=2 > 16 RxProtocol c HTTP/1.1 > 16 RxHeader c Host: somesite.psu.edu > 16 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:18.0) Gecko/20100101 Firefox/18.0 > 16 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > 16 RxHeader c Accept-Language: en-US,en;q=0.5 > 16 RxHeader c Accept-Encoding: gzip, deflate > 16 RxHeader c Cookie: __utma=165035810.2146825052.1359667423.1359667423.1359672023.2; __utmc=165035810; __utmz=165035810.1359667423.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); bp-activity-oldestpage=1; __qca=P0-1744595654-1359667424495; bp-activity-scope=all; bp > 16 RxHeader c Connection: keep-alive > 16 RxHeader c X-Forwarded-For: xxx.xxx.xxx.xxx > 16 VCL_call c recv pass > 16 VCL_call c hash > 16 Hash c /wp-admin/admin.php?page=jetpack&action=authorize&_wpnonce=cec3b377a1&code=dhcULw6Zlg&state=2 > 16 Hash c somesite.psu.edu > 16 VCL_return c hash > 16 VCL_call c pass pass > 16 Backend c 14 default_director web1 > 16 FetchError c http format error > 16 VCL_call c error deliver > 16 VCL_call c deliver deliver > 16 TxProtocol c HTTP/1.1 > 16 TxStatus c 503 > 16 TxResponse c Service Unavailable > 16 TxHeader c Server: Varnish > 16 TxHeader c Content-Type: text/html; charset=utf-8 > 16 TxHeader c Retry-After: 5 > 16 TxHeader c Content-Length: 418 > 16 TxHeader c Accept-Ranges: bytes > 16 TxHeader c Date: Thu, 31 Jan 2013 22:46:08 GMT > 16 TxHeader c Age: 3 > 16 TxHeader c Connection: close > 16 TxHeader c X-Cache: MISS > 16 Length c 418 > 16 ReqEnd c 812364756 1359672365.351407051 1359672368.171207428 1.863311291 2.819756508 0.000043869 > > p: (814) 865-1840, c: (814) 777-7665 > Systems Administrator > Teaching and Learning with Technology, Information Technology Services The 
Pennsylvania State University > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdh132 at psu.edu Fri Feb 1 22:32:12 2013 From: jdh132 at psu.edu (Jason Heffner) Date: Fri, 1 Feb 2013 17:32:12 -0500 Subject: Varnish 3.0.3 http error In-Reply-To: References: <2465AAEEC8B8A242B26ED5F44BCA805F2609ADAE91@SM-CALA-VXMB04A.swna.wdpr.disney.com> Message-ID: I didn't actually look too closely to the tcpdump till a few minutes later and noticed the issue is caused by jetpack. I got a ticket open with them now. I can see why Varnish decided to throw a 503. On Feb 1, 2013, at 4:50 PM, Jason Heffner wrote: > I attached a tcpdump since a curl isn't practical; I would need to include cookies and the the correct string parameters to simulate the same action. > > http://pastebin.com/3GHYFUwD > > > On Feb 1, 2013, at 12:55 PM, "Rangel, Raul" wrote: > >> What does the backend log look like? >> >> In the log it shows >> 16 FetchError c http format error >> >> What does curl show if you do the same request? >> >> -----Original Message----- >> From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Jason Heffner >> Sent: Friday, February 01, 2013 10:34 AM >> To: varnish-misc at varnish-cache.org >> Subject: Varnish 3.0.3 http error >> >> Hi, >> >> I've been through some of the archives on the list and have seen some similar problems. I was wondering if anyone can tell why this particular HTTP error might be caused? I also contacted the jetpack developers since it is their plugin returning the HTTP error from apache on the backend. If varnish is taken out of the setup the 302 redirect returned from apache functions properly. I can recreate the issue on a development system, and all other page loads are fine. 
>> >> Our setup: RHEL6, Pound 2.6.2, Varnish 3.0.3, Apache 2.2.15, php-fpm 5.3.21 >> >> I'd welcome any advice. >> >> Thanks, >> Jason >> >> varnishlog -d -c -m TxStatus:503 >> 16 ReqStart c xxx.xxx.xxx.xxx 45094 812364756 >> 16 RxRequest c GET >> 16 RxURL c /wp-admin/admin.php?page=jetpack&action=authorize&_wpnonce=cec3b377a1&code=dhcULw6Zlg&state=2 >> 16 RxProtocol c HTTP/1.1 >> 16 RxHeader c Host: somesite.psu.edu >> 16 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:18.0) Gecko/20100101 Firefox/18.0 >> 16 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 >> 16 RxHeader c Accept-Language: en-US,en;q=0.5 >> 16 RxHeader c Accept-Encoding: gzip, deflate >> 16 RxHeader c Cookie: __utma=165035810.2146825052.1359667423.1359667423.1359672023.2; __utmc=165035810; __utmz=165035810.1359667423.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); bp-activity-oldestpage=1; __qca=P0-1744595654-1359667424495; bp-activity-scope=all; bp >> 16 RxHeader c Connection: keep-alive >> 16 RxHeader c X-Forwarded-For: xxx.xxx.xxx.xxx >> 16 VCL_call c recv pass >> 16 VCL_call c hash >> 16 Hash c /wp-admin/admin.php?page=jetpack&action=authorize&_wpnonce=cec3b377a1&code=dhcULw6Zlg&state=2 >> 16 Hash c somesite.psu.edu >> 16 VCL_return c hash >> 16 VCL_call c pass pass >> 16 Backend c 14 default_director web1 >> 16 FetchError c http format error >> 16 VCL_call c error deliver >> 16 VCL_call c deliver deliver >> 16 TxProtocol c HTTP/1.1 >> 16 TxStatus c 503 >> 16 TxResponse c Service Unavailable >> 16 TxHeader c Server: Varnish >> 16 TxHeader c Content-Type: text/html; charset=utf-8 >> 16 TxHeader c Retry-After: 5 >> 16 TxHeader c Content-Length: 418 >> 16 TxHeader c Accept-Ranges: bytes >> 16 TxHeader c Date: Thu, 31 Jan 2013 22:46:08 GMT >> 16 TxHeader c Age: 3 >> 16 TxHeader c Connection: close >> 16 TxHeader c X-Cache: MISS >> 16 Length c 418 >> 16 ReqEnd c 812364756 1359672365.351407051 1359672368.171207428 1.863311291 2.819756508 
0.000043869 >> >> p: (814) 865-1840, c: (814) 777-7665 >> Systems Administrator >> Teaching and Learning with Technology, Information Technology Services The Pennsylvania State University >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdh132 at psu.edu Fri Feb 1 22:57:39 2013 From: jdh132 at psu.edu (Jason Heffner) Date: Fri, 1 Feb 2013 17:57:39 -0500 Subject: Varnish 3.0.3 http error In-Reply-To: References: <2465AAEEC8B8A242B26ED5F44BCA805F2609ADAE91@SM-CALA-VXMB04A.swna.wdpr.disney.com> Message-ID: <137C0AB7-7999-4C8C-AA26-30C75B4CA0B0@psu.edu> I was actually able to fix the error. I had to increase http_max_hdr, http_req_hdr_len, and http_resp_hdr_len. Is there any log letting you know when the headers exceed these values? Jason On Feb 1, 2013, at 5:32 PM, Jason Heffner wrote: > I didn't actually look too closely at the tcpdump until a few minutes later, and noticed the issue is caused by jetpack. I have a ticket open with them now. I can see why Varnish decided to throw a 503. > > On Feb 1, 2013, at 4:50 PM, Jason Heffner wrote: > >> I attached a tcpdump since a curl isn't practical; I would need to include cookies and the correct string parameters to simulate the same action. >> >> http://pastebin.com/3GHYFUwD >> >> >> On Feb 1, 2013, at 12:55 PM, "Rangel, Raul" wrote: >> >>> What does the backend log look like? >>> >>> In the log it shows >>> 16 FetchError c http format error >>> >>> What does curl show if you do the same request? 
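[Archive editor's note: the three settings named above are ordinary varnishd run-time parameters in Varnish 3.x. A hedged sketch of raising them; the values below are illustrative examples, not tuned recommendations:]

```shell
# At startup: raise the header-count and header-size limits.
# (Example values; the 3.x defaults are 64 headers and 8192-byte
# request/response header workspaces, if memory serves.)
varnishd -f /etc/varnish/default.vcl -a :80 \
    -p http_max_hdr=128 \
    -p http_req_hdr_len=16384 \
    -p http_resp_hdr_len=16384

# On a running instance the same parameters can be changed
# through the management CLI:
varnishadm param.set http_max_hdr 128
```

[As to the question of logging: when a limit is exceeded, the only clue varnishlog gives is the generic "FetchError ... http format error" shown in the traces above; the exceeded parameter is not named.]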
>>> >>> -----Original Message----- >>> From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Jason Heffner >>> Sent: Friday, February 01, 2013 10:34 AM >>> To: varnish-misc at varnish-cache.org >>> Subject: Varnish 3.0.3 http error >>> >>> Hi, >>> >>> I've been through some of the archives on the list and have seen some similar problems. I was wondering if anyone can tell why this particular HTTP error might be caused? I also contacted the jetpack developers since it is their plugin returning the HTTP error from apache on the backend. If varnish is taken out of the setup the 302 redirect returned from apache functions properly. I can recreate the issue on a development system, and all other page loads are fine. >>> >>> Our setup: RHEL6, Pound 2.6.2, Varnish 3.0.3, Apache 2.2.15, php-fpm 5.3.21 >>> >>> I'd welcome any advice. >>> >>> Thanks, >>> Jason >>> >>> varnishlog -d -c -m TxStatus:503 >>> 16 ReqStart c xxx.xxx.xxx.xxx 45094 812364756 >>> 16 RxRequest c GET >>> 16 RxURL c /wp-admin/admin.php?page=jetpack&action=authorize&_wpnonce=cec3b377a1&code=dhcULw6Zlg&state=2 >>> 16 RxProtocol c HTTP/1.1 >>> 16 RxHeader c Host: somesite.psu.edu >>> 16 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:18.0) Gecko/20100101 Firefox/18.0 >>> 16 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 >>> 16 RxHeader c Accept-Language: en-US,en;q=0.5 >>> 16 RxHeader c Accept-Encoding: gzip, deflate >>> 16 RxHeader c Cookie: __utma=165035810.2146825052.1359667423.1359667423.1359672023.2; __utmc=165035810; __utmz=165035810.1359667423.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); bp-activity-oldestpage=1; __qca=P0-1744595654-1359667424495; bp-activity-scope=all; bp >>> 16 RxHeader c Connection: keep-alive >>> 16 RxHeader c X-Forwarded-For: xxx.xxx.xxx.xxx >>> 16 VCL_call c recv pass >>> 16 VCL_call c hash >>> 16 Hash c 
/wp-admin/admin.php?page=jetpack&action=authorize&_wpnonce=cec3b377a1&code=dhcULw6Zlg&state=2 >>> 16 Hash c somesite.psu.edu >>> 16 VCL_return c hash >>> 16 VCL_call c pass pass >>> 16 Backend c 14 default_director web1 >>> 16 FetchError c http format error >>> 16 VCL_call c error deliver >>> 16 VCL_call c deliver deliver >>> 16 TxProtocol c HTTP/1.1 >>> 16 TxStatus c 503 >>> 16 TxResponse c Service Unavailable >>> 16 TxHeader c Server: Varnish >>> 16 TxHeader c Content-Type: text/html; charset=utf-8 >>> 16 TxHeader c Retry-After: 5 >>> 16 TxHeader c Content-Length: 418 >>> 16 TxHeader c Accept-Ranges: bytes >>> 16 TxHeader c Date: Thu, 31 Jan 2013 22:46:08 GMT >>> 16 TxHeader c Age: 3 >>> 16 TxHeader c Connection: close >>> 16 TxHeader c X-Cache: MISS >>> 16 Length c 418 >>> 16 ReqEnd c 812364756 1359672365.351407051 1359672368.171207428 1.863311291 2.819756508 0.000043869 >>> >>> p: (814) 865-1840, c: (814) 777-7665 >>> Systems Administrator >>> Teaching and Learning with Technology, Information Technology Services The Pennsylvania State University >>> >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From wxz19861013 at gmail.com Sat Feb 2 02:35:23 2013 From: wxz19861013 at gmail.com (Xianzhe Wang) Date: Sat, 2 Feb 2013 10:35:23 +0800 Subject: How to make multiple clients can get the response at the same time by stream. In-Reply-To: References: <20130129121340.GC19688@kuba.local.lp> Message-ID: Hi, I'm sorry, I wasn't being clear. 
Thank you for your patience. I'm trying to inspect the response header from the backend: if the "Content-Type" is "image/*", we know that the response body is an image. Then I'll rewrite the image and insert it into the varnish cache. Subsequent requests will hit the rewritten image. That's what I want to do: fetch the backend response, rewrite the image, and insert it into the varnish cache. Regards, -- Shawn Wang 2013/2/1 Per Buer > Hi, > > I don't quite understand what you're trying to do. Varnish will store the > jpg together with the response headers in memory. When you request the > object Varnish will deliver it verbatim along with the HTTP headers. What > exactly are you trying to do? > > > PS: I see we haven't built packages of 3.0.3-plus, yet. This should pop up > in the repo next week. Until then 3.0.2s might suffice. > > > On Fri, Feb 1, 2013 at 8:01 AM, Xianzhe Wang wrote: > >> Hi, >> Thanks for clarification. What you say is very clear. >> I am sorry to show my poor English, but I have tried my best to >> communicate. >> >> There is another question.For example, if we request a .jpg >> file(cacheable), varnish will encapsulation it as an object and insert >> into memory. How can we get the .jpg file from the object? >> >> Thank you for help again. >> >> -Shawn Wang >> >> >> 2013/1/30 Per Buer >> >>> Hi, >>> >>> I was a bit quick and I didn't read the whole email the first time. >>> Sorry about that. You're actually using the streaming branch, already I >>> see. What you're writing is really, really odd. There is a slight lock >>> while the "first" object is being fetched where other requests will be put >>> on the waiting list. However, when the hit-for-pass object is created these >>> should be released and pass'ed to the clients. >>> >>> If the backend takes forever coming back with the response headers then >>> the situation would be something like what you describe. However, that >>> would be odd and doesn't make much sense. 
>>> >>> PS: The streaming branch was renamed "plus" when it got other >>> experimental features. You'll find source >>> https://github.com/mbgrydeland/varnish-cache and packages at >>> repo.varnish-cache.org/test if I recall correctly. >>> >>> >>> >>> >>> On Wed, Jan 30, 2013 at 3:32 AM, Xianzhe Wang wrote: >>> >>>> >>>> Hi, >>>> Thanks a lot. >>>> >>>> I tried option >>>> "set req.hash_ignore_busy = true;" >>>> in vlc_recv. >>>> I think it works. But there are side effects: it would increase backend >>>> load. >>>> >>>> I have an idea about it in my previous email. what do you think about >>>> it? >>>> >>>> Another question is that where can I find the "plus" branch of Varnish >>>> which matches this issue. >>>> >>>> Any suggestions will be appreciate. >>>> Thanks again for help. >>>> >>>> Regards, >>>> -- >>>> Shawn Wang >>>> >>>> >>>> ---------- Forwarded message ---------- >>>> From: Xianzhe Wang >>>> Date: 2013/1/30 >>>> Subject: Re: How to make multiple clients can get the response at the >>>> same time by stream. >>>> To: Jakub S?oci?ski >>>> >>>> >>>> Hi Jakub S. >>>> Thank you very much. >>>> I tried, and take a simple test, two client request the big file at the >>>> same time, they get the response stream immediately, so it works. >>>> In that case, multiple requests will go directly to "pass", they do not >>>> need to wait, but it would increase backend load. >>>> We need to balance the benefits and drawbacks. >>>> >>>> I wanna is that: >>>> Client 1 requests url /foo >>>> Client 2..N request url /foo >>>> Varnish tasks a worker to fetch /foo for Client 1 >>>> Client 2..N are now queued pending response from the worker >>>> Worker fetch response header(just header not include body) from >>>> backend, and find it non-cacheable, then make the remaining >>>> requests(Client 2..N) go directly to "pass". And creat the hit_for_pass >>>> object synchronously in the first request(Client 1). 
>>>> Subsequent requests are now given the hit_for_pass object >>>> instructing them to go to the backend as long as the hit_for_pass object >>>> exists. >>>> >>>> As I mentioned below, is it feasible? Or do you have any Suggestions? >>>> >>>> Thanks again for help. >>>> >>>> Regards, >>>> -- >>>> Shawn Wang >>>> >>>> >>>> >>>> 2013/1/29 Jakub S?oci?ski >>>> >>>>> Hi Xianzhe Wang, >>>>> you should try option >>>>> "set req.hash_ignore_busy = true;" >>>>> in vlc_recv. >>>>> >>>>> Regards, >>>>> -- >>>>> Jakub S. >>>>> >>>>> >>>>> Xianzhe Wang napisa?(a): >>>>> > Hello everyone, >>>>> > My varnish version is 3.0.2-streaming release.And I set >>>>> > "beresp.do_stream = true" in vcl_fetch in order to "Deliver the >>>>> object to >>>>> > the client directly without fetching the whole object into varnish"; >>>>> > >>>>> > This is a part of my *.vcl file: >>>>> > >>>>> > sub vcl_fetch { >>>>> > set beresp.grace = 30m; >>>>> > >>>>> > set beresp.do_stream = true; >>>>> > >>>>> > if (beresp.http.Content-Length && beresp.http.Content-Length ~ >>>>> > "[0-9]{8,}") { >>>>> > return (hit_for_pass); >>>>> > } >>>>> > >>>>> > if (beresp.http.Pragma ~ "no-cache" || >>>>> beresp.http.Cache-Control ~ >>>>> > "no-cache" || beresp.http.Cache-Control ~ "private") { >>>>> > return (hit_for_pass); >>>>> > } >>>>> > >>>>> > if (beresp.ttl <= 0s || >>>>> > beresp.http.Set-Cookie || >>>>> > beresp.http.Vary == "*") { >>>>> > >>>>> > set beresp.ttl = 120 s; >>>>> > return (hit_for_pass); >>>>> > } >>>>> > >>>>> > return (deliver); >>>>> > } >>>>> > >>>>> > Then I request a big file(about 100M+) like "xxx.zip" from >>>>> clients.There is >>>>> > only one client can access the object.because "the object will >>>>> marked as >>>>> > busy as it is delivered." >>>>> > >>>>> > But if the request goes directly to ?pass? ,multiple clients can >>>>> get the >>>>> > response at the same time. 
>>>>> > >>>>> > Also if I remove >>>>> > if (beresp.http.Content-Length && beresp.http.Content-Length ~ >>>>> > "[0-9]{8,}") { >>>>> > return (hit_for_pass); >>>>> > } >>>>> > to make the file cacheable,multiple clients can get the response at >>>>> the >>>>> > same time. >>>>> > >>>>> > Now I want "multiple clients can get the response at the same time." >>>>> in all >>>>> > situations("pass","hit","hit_for_pass"). >>>>> > >>>>> > What can I do for it? >>>>> > Any suggestions will be appreciate. >>>>> > Thank you. >>>>> > >>>>> > -Shawn Wang >>>>> >>>>> > _______________________________________________ >>>>> > varnish-misc mailing list >>>>> > varnish-misc at varnish-cache.org >>>>> > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>>>> >>>>> >>>> >>>> >>> >>> >>> -- >>> *Per Buer* >>> >>> CEO | Varnish Software AS >>> Phone: +47 958 39 117 | Skype: per.buer >>> We Make Websites Fly! >>> >>> >> > > > -- > *Per Buer* > CEO | Varnish Software AS > Phone: +47 958 39 117 | Skype: per.buer > We Make Websites Fly! > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Sat Feb 2 12:58:32 2013 From: perbu at varnish-software.com (Per Buer) Date: Sat, 2 Feb 2013 13:58:32 +0100 Subject: How to make multiple clients can get the response at the same time by stream. In-Reply-To: References: <20130129121340.GC19688@kuba.local.lp> Message-ID: That is an interesting use case. So, if I understand it correctly what would happen would be something like this: 1. Client requests /foo 2. Varnish goes to backend and asks for /foo 3. Backend responds and notifies that this would be accessed in the future as /bar 4. Varnish changes the object in memory or maybe copies it. I'm guessing that would not be possible as the hashing happens quite early and you would have to alter the hash before looking it up. 
It might be easier to maintain a map of various rewrites in memory, memcache or redis and lookup the rewrite in vcl_recv. On Sat, Feb 2, 2013 at 3:35 AM, Xianzhe Wang wrote: > Hi, > I'm sorry, I wasn't being clear. Thank you for your patience. > I'm trying to estimate the response header from backend, if the > "content-type:" is " image/*", we know that the response body is image. > Then I'll rewrite > the image and insert it into varnish cache. Subsequent request will hit > the rewritten image. > That's I wanna to do. Fetch backend response, rewrite image and insert > into varnish cache. > > Regards, > -- > Shawn Wang > > > > > > 2013/2/1 Per Buer > >> Hi, >> >> I don't quite understand what you're trying to do. Varnish will store the >> jpg together with the response headers in memory. When you request the >> object Varnish will deliver it verbatim along with the HTTP headers. What >> exactly are you trying to do? >> >> >> PS: I see we haven't built packages of 3.0.3-plus, yet. This should pop >> up in the repo next week. Until then 3.0.2s might suffice. >> >> >> On Fri, Feb 1, 2013 at 8:01 AM, Xianzhe Wang wrote: >> >>> Hi, >>> Thanks for clarification. What you say is very clear. >>> I am sorry to show my poor English, but I have tried my best to >>> communicate. >>> >>> There is another question.For example, if we request a .jpg >>> file(cacheable), varnish will encapsulation it as an object and insert >>> into memory. How can we get the .jpg file from the object? >>> >>> Thank you for help again. >>> >>> -Shawn Wang >>> >>> >>> 2013/1/30 Per Buer >>> >>>> Hi, >>>> >>>> I was a bit quick and I didn't read the whole email the first time. >>>> Sorry about that. You're actually using the streaming branch, already I >>>> see. What you're writing is really, really odd. There is a slight lock >>>> while the "first" object is being fetched where other requests will be put >>>> on the waiting list. 
However, when the hit-for-pass object is created these >>>> should be released and pass'ed to the clients. >>>> >>>> If the backend takes forever coming back with the response headers then >>>> the situation would be something like what you describe. However, that >>>> would be odd and doesn't make much sense. >>>> >>>> PS: The streaming branch was renamed "plus" when it got other >>>> experimental features. You'll find source >>>> https://github.com/mbgrydeland/varnish-cache and packages at >>>> repo.varnish-cache.org/test if I recall correctly. >>>> >>>> >>>> >>>> >>>> On Wed, Jan 30, 2013 at 3:32 AM, Xianzhe Wang wrote: >>>> >>>>> >>>>> Hi, >>>>> Thanks a lot. >>>>> >>>>> I tried option >>>>> "set req.hash_ignore_busy = true;" >>>>> in vlc_recv. >>>>> I think it works. But there are side effects: it would increase >>>>> backend load. >>>>> >>>>> I have an idea about it in my previous email. what do you think about >>>>> it? >>>>> >>>>> Another question is that where can I find the "plus" branch of Varnish >>>>> which matches this issue. >>>>> >>>>> Any suggestions will be appreciate. >>>>> Thanks again for help. >>>>> >>>>> Regards, >>>>> -- >>>>> Shawn Wang >>>>> >>>>> >>>>> ---------- Forwarded message ---------- >>>>> From: Xianzhe Wang >>>>> Date: 2013/1/30 >>>>> Subject: Re: How to make multiple clients can get the response at the >>>>> same time by stream. >>>>> To: Jakub S?oci?ski >>>>> >>>>> >>>>> Hi Jakub S. >>>>> Thank you very much. >>>>> I tried, and take a simple test, two client request the big file at >>>>> the same time, they get the response stream immediately, so it works. >>>>> In that case, multiple requests will go directly to "pass", they do >>>>> not need to wait, but it would increase backend load. >>>>> We need to balance the benefits and drawbacks. 
>>>>> >>>>> I wanna is that: >>>>> Client 1 requests url /foo >>>>> Client 2..N request url /foo >>>>> Varnish tasks a worker to fetch /foo for Client 1 >>>>> Client 2..N are now queued pending response from the worker >>>>> Worker fetch response header(just header not include body) from >>>>> backend, and find it non-cacheable, then make the remaining >>>>> requests(Client 2..N) go directly to "pass". And creat the hit_for_pass >>>>> object synchronously in the first request(Client 1). >>>>> Subsequent requests are now given the hit_for_pass object >>>>> instructing them to go to the backend as long as the hit_for_pass object >>>>> exists. >>>>> >>>>> As I mentioned below, is it feasible? Or do you have any Suggestions? >>>>> >>>>> Thanks again for help. >>>>> >>>>> Regards, >>>>> -- >>>>> Shawn Wang >>>>> >>>>> >>>>> >>>>> 2013/1/29 Jakub S?oci?ski >>>>> >>>>>> Hi Xianzhe Wang, >>>>>> you should try option >>>>>> "set req.hash_ignore_busy = true;" >>>>>> in vlc_recv. >>>>>> >>>>>> Regards, >>>>>> -- >>>>>> Jakub S. 
>>>>>> >>>>>> >>>>>> Xianzhe Wang napisa?(a): >>>>>> > Hello everyone, >>>>>> > My varnish version is 3.0.2-streaming release.And I set >>>>>> > "beresp.do_stream = true" in vcl_fetch in order to "Deliver the >>>>>> object to >>>>>> > the client directly without fetching the whole object into varnish"; >>>>>> > >>>>>> > This is a part of my *.vcl file: >>>>>> > >>>>>> > sub vcl_fetch { >>>>>> > set beresp.grace = 30m; >>>>>> > >>>>>> > set beresp.do_stream = true; >>>>>> > >>>>>> > if (beresp.http.Content-Length && beresp.http.Content-Length ~ >>>>>> > "[0-9]{8,}") { >>>>>> > return (hit_for_pass); >>>>>> > } >>>>>> > >>>>>> > if (beresp.http.Pragma ~ "no-cache" || >>>>>> beresp.http.Cache-Control ~ >>>>>> > "no-cache" || beresp.http.Cache-Control ~ "private") { >>>>>> > return (hit_for_pass); >>>>>> > } >>>>>> > >>>>>> > if (beresp.ttl <= 0s || >>>>>> > beresp.http.Set-Cookie || >>>>>> > beresp.http.Vary == "*") { >>>>>> > >>>>>> > set beresp.ttl = 120 s; >>>>>> > return (hit_for_pass); >>>>>> > } >>>>>> > >>>>>> > return (deliver); >>>>>> > } >>>>>> > >>>>>> > Then I request a big file(about 100M+) like "xxx.zip" from >>>>>> clients.There is >>>>>> > only one client can access the object.because "the object will >>>>>> marked as >>>>>> > busy as it is delivered." >>>>>> > >>>>>> > But if the request goes directly to ?pass? ,multiple clients can >>>>>> get the >>>>>> > response at the same time. >>>>>> > >>>>>> > Also if I remove >>>>>> > if (beresp.http.Content-Length && beresp.http.Content-Length ~ >>>>>> > "[0-9]{8,}") { >>>>>> > return (hit_for_pass); >>>>>> > } >>>>>> > to make the file cacheable,multiple clients can get the response at >>>>>> the >>>>>> > same time. >>>>>> > >>>>>> > Now I want "multiple clients can get the response at the same >>>>>> time." in all >>>>>> > situations("pass","hit","hit_for_pass"). >>>>>> > >>>>>> > What can I do for it? >>>>>> > Any suggestions will be appreciate. >>>>>> > Thank you. 
>>>>>> > >>>>>> > -Shawn Wang >>>>>> >>>>>> > _______________________________________________ >>>>>> > varnish-misc mailing list >>>>>> > varnish-misc at varnish-cache.org >>>>>> > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> *Per Buer* >>>> >>>> CEO | Varnish Software AS >>>> Phone: +47 958 39 117 | Skype: per.buer >>>> We Make Websites Fly! >>>> >>>> >>> >> >> >> -- >> *Per Buer* >> CEO | Varnish Software AS >> Phone: +47 958 39 117 | Skype: per.buer >> We Make Websites Fly! >> >> > -- *Per Buer* CEO | Varnish Software AS Phone: +47 958 39 117 | Skype: per.buer We Make Websites Fly! -------------- next part -------------- An HTML attachment was scrubbed... URL: From wxz19861013 at gmail.com Mon Feb 4 06:59:03 2013 From: wxz19861013 at gmail.com (Xianzhe Wang) Date: Mon, 4 Feb 2013 14:59:03 +0800 Subject: How to make multiple clients can get the response at the same time by stream. In-Reply-To: References: <20130129121340.GC19688@kuba.local.lp> Message-ID: Thank you for your help. I would like to show the details of my thinking: 1. Client requests /foo.jpg, and the request header has an additional field called x-rewrite-level (x-rewrite-level has three values: 1, 2, 3, indicating different rewrite levels). For example, a request header may contain "x-rewrite-level: 1". 2. Varnish looks up foo.jpg based on the "url + req.x-rewrite-level" hash value; the first time, it will miss. 3. Varnish goes to the backend and asks for /foo.jpg, and the backend responds. 4. Varnish notices that this should be rewritten, and rewrites it according to the x-rewrite-level value using the rewrite module. 5. The rewritten image replaces the original one (fetched from the backend) in the varnish storage. 6. A subsequent request for /foo.jpg with req.x-rewrite-level = 1 will hit the rewritten object in varnish. 7. A subsequent request for /foo.jpg with req.x-rewrite-level = 2 or 3 will miss based on the "url + req.x-rewrite-level" hash value. 
It goes to step 3. The request with the same url and different x-rewrite-level value will receive different image from varnish. Varnish should store 3 different objects according to x-rewrite-level value(1,2,3) Regards, -- Shawn Wang 2013/2/2 Per Buer > That is an interesting use case. So, if I understand it correctly what > would happen would be something like this: > > 1. Client requests /foo > 2. Varnish goes to backend and asks for /foo > 3. Backend responds and notifies that this would be accessed in the future > as /bar > 4. Varnish changes the object in memory or maybe copies it. > > I'm guessing that would not be possible as the hashing happens quite early > and you would have to alter the hash before looking it up. It might be > easier to maintain a map of various rewrites in memory, memcache or redis > and lookup the rewrite in vcl_recv. > > > > > On Sat, Feb 2, 2013 at 3:35 AM, Xianzhe Wang wrote: > >> Hi, >> I'm sorry, I wasn't being clear. Thank you for your patience. >> I'm trying to estimate the response header from backend, if the >> "content-type:" is " image/*", we know that the response body is image. >> Then I'll rewrite >> the image and insert it into varnish cache. Subsequent request will hit >> the rewritten image. >> That's I wanna to do. Fetch backend response, rewrite image and insert >> into varnish cache. >> >> Regards, >> -- >> Shawn Wang >> >> >> >> >> >> 2013/2/1 Per Buer >> >>> Hi, >>> >>> I don't quite understand what you're trying to do. Varnish will store >>> the jpg together with the response headers in memory. When you request the >>> object Varnish will deliver it verbatim along with the HTTP headers. What >>> exactly are you trying to do? >>> >>> >>> PS: I see we haven't built packages of 3.0.3-plus, yet. This should pop >>> up in the repo next week. Until then 3.0.2s might suffice. >>> >>> >>> On Fri, Feb 1, 2013 at 8:01 AM, Xianzhe Wang wrote: >>> >>>> Hi, >>>> Thanks for clarification. What you say is very clear. 
>>>> I am sorry to show my poor English, but I have tried my best to >>>> communicate. >>>> >>>> There is another question.For example, if we request a .jpg >>>> file(cacheable), varnish will encapsulation it as an object and insert >>>> into memory. How can we get the .jpg file from the object? >>>> >>>> Thank you for help again. >>>> >>>> -Shawn Wang >>>> >>>> >>>> 2013/1/30 Per Buer >>>> >>>>> Hi, >>>>> >>>>> I was a bit quick and I didn't read the whole email the first time. >>>>> Sorry about that. You're actually using the streaming branch, already I >>>>> see. What you're writing is really, really odd. There is a slight lock >>>>> while the "first" object is being fetched where other requests will be put >>>>> on the waiting list. However, when the hit-for-pass object is created these >>>>> should be released and pass'ed to the clients. >>>>> >>>>> If the backend takes forever coming back with the response headers >>>>> then the situation would be something like what you describe. However, that >>>>> would be odd and doesn't make much sense. >>>>> >>>>> PS: The streaming branch was renamed "plus" when it got other >>>>> experimental features. You'll find source >>>>> https://github.com/mbgrydeland/varnish-cache and packages at >>>>> repo.varnish-cache.org/test if I recall correctly. >>>>> >>>>> >>>>> >>>>> >>>>> On Wed, Jan 30, 2013 at 3:32 AM, Xianzhe Wang wrote: >>>>> >>>>>> >>>>>> Hi, >>>>>> Thanks a lot. >>>>>> >>>>>> I tried option >>>>>> "set req.hash_ignore_busy = true;" >>>>>> in vlc_recv. >>>>>> I think it works. But there are side effects: it would increase >>>>>> backend load. >>>>>> >>>>>> I have an idea about it in my previous email. what do you think about >>>>>> it? >>>>>> >>>>>> Another question is that where can I find the "plus" branch of >>>>>> Varnish which matches this issue. >>>>>> >>>>>> Any suggestions will be appreciate. >>>>>> Thanks again for help. 
>>>>>> >>>>>> Regards, >>>>>> -- >>>>>> Shawn Wang >>>>>> >>>>>> >>>>>> ---------- Forwarded message ---------- >>>>>> From: Xianzhe Wang >>>>>> Date: 2013/1/30 >>>>>> Subject: Re: How to make multiple clients can get the response at the >>>>>> same time by stream. >>>>>> To: Jakub S?oci?ski >>>>>> >>>>>> >>>>>> Hi Jakub S. >>>>>> Thank you very much. >>>>>> I tried, and take a simple test, two client request the big file at >>>>>> the same time, they get the response stream immediately, so it works. >>>>>> In that case, multiple requests will go directly to "pass", they do >>>>>> not need to wait, but it would increase backend load. >>>>>> We need to balance the benefits and drawbacks. >>>>>> >>>>>> I wanna is that: >>>>>> Client 1 requests url /foo >>>>>> Client 2..N request url /foo >>>>>> Varnish tasks a worker to fetch /foo for Client 1 >>>>>> Client 2..N are now queued pending response from the worker >>>>>> Worker fetch response header(just header not include body) from >>>>>> backend, and find it non-cacheable, then make the remaining >>>>>> requests(Client 2..N) go directly to "pass". And creat the hit_for_pass >>>>>> object synchronously in the first request(Client 1). >>>>>> Subsequent requests are now given the hit_for_pass object >>>>>> instructing them to go to the backend as long as the hit_for_pass object >>>>>> exists. >>>>>> >>>>>> As I mentioned below, is it feasible? Or do you have any Suggestions? >>>>>> >>>>>> Thanks again for help. >>>>>> >>>>>> Regards, >>>>>> -- >>>>>> Shawn Wang >>>>>> >>>>>> >>>>>> >>>>>> 2013/1/29 Jakub S?oci?ski >>>>>> >>>>>>> Hi Xianzhe Wang, >>>>>>> you should try option >>>>>>> "set req.hash_ignore_busy = true;" >>>>>>> in vlc_recv. >>>>>>> >>>>>>> Regards, >>>>>>> -- >>>>>>> Jakub S. 
>>>>>>> >>>>>>> >>>>>>> Xianzhe Wang napisa?(a): >>>>>>> > Hello everyone, >>>>>>> > My varnish version is 3.0.2-streaming release.And I set >>>>>>> > "beresp.do_stream = true" in vcl_fetch in order to "Deliver the >>>>>>> object to >>>>>>> > the client directly without fetching the whole object into >>>>>>> varnish"; >>>>>>> > >>>>>>> > This is a part of my *.vcl file: >>>>>>> > >>>>>>> > sub vcl_fetch { >>>>>>> > set beresp.grace = 30m; >>>>>>> > >>>>>>> > set beresp.do_stream = true; >>>>>>> > >>>>>>> > if (beresp.http.Content-Length && beresp.http.Content-Length ~ >>>>>>> > "[0-9]{8,}") { >>>>>>> > return (hit_for_pass); >>>>>>> > } >>>>>>> > >>>>>>> > if (beresp.http.Pragma ~ "no-cache" || >>>>>>> beresp.http.Cache-Control ~ >>>>>>> > "no-cache" || beresp.http.Cache-Control ~ "private") { >>>>>>> > return (hit_for_pass); >>>>>>> > } >>>>>>> > >>>>>>> > if (beresp.ttl <= 0s || >>>>>>> > beresp.http.Set-Cookie || >>>>>>> > beresp.http.Vary == "*") { >>>>>>> > >>>>>>> > set beresp.ttl = 120 s; >>>>>>> > return (hit_for_pass); >>>>>>> > } >>>>>>> > >>>>>>> > return (deliver); >>>>>>> > } >>>>>>> > >>>>>>> > Then I request a big file(about 100M+) like "xxx.zip" from >>>>>>> clients.There is >>>>>>> > only one client can access the object.because "the object will >>>>>>> marked as >>>>>>> > busy as it is delivered." >>>>>>> > >>>>>>> > But if the request goes directly to ?pass? ,multiple clients can >>>>>>> get the >>>>>>> > response at the same time. >>>>>>> > >>>>>>> > Also if I remove >>>>>>> > if (beresp.http.Content-Length && beresp.http.Content-Length ~ >>>>>>> > "[0-9]{8,}") { >>>>>>> > return (hit_for_pass); >>>>>>> > } >>>>>>> > to make the file cacheable,multiple clients can get the response >>>>>>> at the >>>>>>> > same time. >>>>>>> > >>>>>>> > Now I want "multiple clients can get the response at the same >>>>>>> time." in all >>>>>>> > situations("pass","hit","hit_for_pass"). >>>>>>> > >>>>>>> > What can I do for it? 
>>>>>>> > Any suggestions will be appreciate. >>>>>>> > Thank you. >>>>>>> > >>>>>>> > -Shawn Wang >>>>>>> >>>>>>> > _______________________________________________ >>>>>>> > varnish-misc mailing list >>>>>>> > varnish-misc at varnish-cache.org >>>>>>> > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>> >>>>> -- >>>>> *Per Buer* >>>>> >>>>> CEO | Varnish Software AS >>>>> Phone: +47 958 39 117 | Skype: per.buer >>>>> We Make Websites Fly! >>>>> >>>>> >>>> >>> >>> >>> -- >>> *Per Buer* >>> CEO | Varnish Software AS >>> Phone: +47 958 39 117 | Skype: per.buer >>> We Make Websites Fly! >>> >>> >> > > > -- > *Per Buer* > CEO | Varnish Software AS > Phone: +47 958 39 117 | Skype: per.buer > We Make Websites Fly! > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bedis9 at gmail.com Tue Feb 5 23:09:55 2013 From: bedis9 at gmail.com (Baptiste) Date: Wed, 6 Feb 2013 00:09:55 +0100 Subject: F5 loadbalancer between varnish and web servers In-Reply-To: <50FC5A89.2010106@antispam.it> References: <20130117043709.GD2564@nat.myhome> <50FC5A89.2010106@antispam.it> Message-ID: > maybe the next step will be to replace F5 with other opensource software > (like lvs) even for the load balancing... this will happen when you > realize the F5 yearly maintenance costs more than the hardware (+3 years of > maintenance) to run lvs on... but that's another story :-) Well, LVS or HAProxy... Baptiste From vedad at kigo.net Wed Feb 6 09:13:59 2013 From: vedad at kigo.net (Vedad KAJTAZ, Kigo Inc.) Date: Wed, 06 Feb 2013 10:13:59 +0100 Subject: Feature request: bulk vcl include Message-ID: <51121ED7.5080104@kigo.net> Hi, It would be nice if one could bulk-include a whole directory from VCL. E.g.: include "custom/" or include "custom/*.vcl" I couldn't find such a feature-request on the wiki. 
Thanks,
Best regards,

--
Vedad KAJTAZ
Head of Software Development
T: +33 9 72 39 69 65 | E: vedad at kigo.net
kigo.net | Vacation Rental Software, Websites & Channel Manager
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3734 bytes
Desc: Signature cryptographique S/MIME
URL:

From phk at phk.freebsd.dk Wed Feb 6 09:34:22 2013
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 06 Feb 2013 09:34:22 +0000
Subject: Feature request: bulk vcl include
In-Reply-To: <51121ED7.5080104@kigo.net>
References: <51121ED7.5080104@kigo.net>
Message-ID: <11501.1360143262@critter.freebsd.dk>

Content-Type: text/plain; charset=ISO-8859-1
--------
In message <51121ED7.5080104 at kigo.net>, "Vedad KAJTAZ, Kigo Inc." writes:

>It would be nice if one could bulk-include a whole directory from VCL.

The easy way is to:

	cd directory
	ls *.vcl | awk '{printf "include \"%s\"\n"}' > .all_includes.vcl

Then

	include "directory/.all_includes.vcl"

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From vedad at kigo.net Wed Feb 6 10:00:49 2013
From: vedad at kigo.net (Vedad KAJTAZ, Kigo Inc.)
Date: Wed, 06 Feb 2013 11:00:49 +0100
Subject: Feature request: bulk vcl include
In-Reply-To: <11501.1360143262@critter.freebsd.dk>
References: <51121ED7.5080104@kigo.net> <11501.1360143262@critter.freebsd.dk>
Message-ID: <511229D1.4000702@kigo.net>

Hi,

Thanks for the tip (haven't tried it, but I believe a ';' is missing
before newline). Yet, built-in globbing would be far more convenient.

Regards,

Le 06/02/2013 10:34, Poul-Henning Kamp a écrit :
> Content-Type: text/plain; charset=ISO-8859-1
> --------
> In message <51121ED7.5080104 at kigo.net>, "Vedad KAJTAZ, Kigo Inc." writes:
>
>> It would be nice if one could bulk-include a whole directory from VCL.
>
> The easy way is to:
>
> 	cd directory
> 	ls *.vcl | awk '{printf "include \"%s\"\n"}' > .all_includes.vcl
>
> Then
>
> 	include "directory/.all_includes.vcl"
>

--
Vedad KAJTAZ
Head of Software Development
T: +33 9 72 39 69 65 | E: vedad at kigo.net
kigo.net | Vacation Rental Software, Websites & Channel Manager
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3734 bytes
Desc: Signature cryptographique S/MIME
URL:

From revirii at googlemail.com Wed Feb 6 12:19:58 2013
From: revirii at googlemail.com (=?ISO-8859-1?Q?Andreas_G=F6tzfried?=)
Date: Wed, 6 Feb 2013 13:19:58 +0100
Subject: add req.http.x-forwarded-for header
Message-ID:

Hello,

I use varnish (3.0.2) and nginx (1.2.1), and I have a special setup:

http:  varnish (listens on *:80) -> nginx backend (127.0.0.1:81)
https: nginx (public ip:443) -> proxy_pass to the same varnish instance
       -> nginx backend (127.0.0.1:81)

When varnish receives the requests proxied by nginx (https), varnish sees
127.0.0.1 as the source, and there seems to be no way to make varnish see
the real IP. But, as you might guess, I want the user's public IP (I need
it for performance reasons).

But I found a solution - nginx is able to pass the real IP in a header:

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

By customizing the varnishncsa log format I'm able to see the user's IP
address passed by nginx. So far, so good.

But for http (managed by varnish) I'm not able to set this header when
nginx isn't involved. I tried this at the beginning of 'sub vcl_recv':

if (!req.http.X-Forwarded-For) {
    set req.http.X-Forwarded-For = client.ip;
}

My intention was: if this header isn't set (and it shouldn't be when
varnish directly receives requests via http), set it to the value of the
client IP. I've tried a couple of variations, but in the end the value in
the varnishncsa log is always empty. Well... what am I doing wrong?
Where's the error?
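For reference, the default.vcl that ships with Varnish 3.0 already handles this header in its builtin vcl_recv, roughly as sketched below. The builtin logic is appended after the user's own vcl_recv, so it only runs if the custom code does not return first:

```vcl
sub vcl_recv {
    if (req.restarts == 0) {
        if (req.http.x-forwarded-for) {
            set req.http.X-Forwarded-For =
                req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }
}
```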
thx
Andreas

From Raul.Rangel at disney.com Wed Feb 6 15:01:26 2013
From: Raul.Rangel at disney.com (Rangel, Raul)
Date: Wed, 6 Feb 2013 07:01:26 -0800
Subject: add req.http.x-forwarded-for header
In-Reply-To:
References:
Message-ID: <2465AAEEC8B8A242B26ED5F44BCA805F2609D265EB@SM-CALA-VXMB04A.swna.wdpr.disney.com>

The default.vcl included with varnish sets the X-Forwarded-For header or
even appends to it if it exists. I'm assuming your vcl_recv has a return
statement that is preventing the default from running.

Raul

-----Original Message-----
From: varnish-misc-bounces at varnish-cache.org
[mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of Andreas Götzfried
Sent: Wednesday, February 06, 2013 5:20 AM
To: varnish-misc at varnish-cache.org
Subject: add req.http.x-forwarded-for header

Hello,

i use varnish (3.0.2) and nginx (1.2.1), and i have a special setup:

http: varnish (listens on *:80) -> nginx-backend (127.0.0.1:81)
https: nginx (public ip:443) -> proxy_pass to same varnish instance ->
nginx-backend (127.0.0.1:81)

When varnish receives the requests proxied by nginx (https), varnish sees
127.0.0.1 as source, and there seems to be no solution getting varnish see
the real ip. But, as you might guess, i want the public ip (need it for
performance reasons) of the user.

But i found a solution - nginx is able to pass the real ip in a header:

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

By customizing the varnishncsa log format i'm able to see the users' ip
address passed by nginx. So far, so good.

But for http (managed by varnish) i'm not able to set this header when
nginx isn't involved. I tried this:

At the beginning of 'sub vcl_recv':

if (!req.http.X-Forwarded-For) {
    set req.http.X-Forwarded-For = client.ip;
}

My intention was: if this header isn't set (and it shouldn't when varnish
directly receives requests via http), set it with the value of the client
ip.
I've tried a couple of variations, but in the end the value in the
varnishncsa log is always empty.

Well... what am I doing wrong? Where's the error?

thx
Andreas

_______________________________________________
varnish-misc mailing list
varnish-misc at varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From revirii at googlemail.com Wed Feb 6 16:10:56 2013
From: revirii at googlemail.com (=?ISO-8859-1?Q?Andreas_G=F6tzfried?=)
Date: Wed, 6 Feb 2013 17:10:56 +0100
Subject: Re: add req.http.x-forwarded-for header
In-Reply-To: <2465AAEEC8B8A242B26ED5F44BCA805F2609D265EB@SM-CALA-VXMB04A.swna.wdpr.disney.com>
Message-ID:

Hi Raul,

thanks for your answer! I checked vcl_recv but couldn't find anything
relevant. And I don't think this is the problem, because:

I.
nginx.https -> varnish
x-forwarded-for is logged in /var/log/varnish/varnish.log

II.
only varnish.http
x-forwarded-for isn't logged in /var/log/varnish/varnish.log

So this has to be a quite strange return statement that clears the
x-forwarded-for when the http request is received by varnish externally,
but leaves the header untouched when it's sent via nginx (and this
traffic is http as well).

I'll re-check the code again, but... :-/

Andreas

From revirii at googlemail.com Wed Feb 6 16:25:01 2013
From: revirii at googlemail.com (=?ISO-8859-1?Q?Andreas_G=F6tzfried?=)
Date: Wed, 6 Feb 2013 17:25:01 +0100
Subject: add req.http.x-forwarded-for header
In-Reply-To: <2465AAEEC8B8A242B26ED5F44BCA805F2609D265EB@SM-CALA-VXMB04A.swna.wdpr.disney.com>
References: <2465AAEEC8B8A242B26ED5F44BCA805F2609D265EB@SM-CALA-VXMB04A.swna.wdpr.disney.com>
Message-ID:

Hi all,

sorry for the last thread, I'm not really used to working with mailing
lists. I hope at least this message gets sorted properly...

Hi Raul,

thanks for your answer! I checked vcl_recv but couldn't find anything
relevant. And I don't think this is the problem, because:

I.
nginx.https -> varnish
x-forwarded-for is logged in /var/log/varnish/varnish.log

II.
only varnish.http
x-forwarded-for isn't logged in /var/log/varnish/varnish.log

So this has to be a quite strange return statement that clears the
x-forwarded-for when the http request is received by varnish externally,
but leaves the header untouched when it's sent via nginx (and this
traffic is http as well).

I'll re-check the code again, but... :-/

Andreas

From dridi.boukelmoune at zenika.com Wed Feb 6 17:40:55 2013
From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune)
Date: Wed, 6 Feb 2013 18:40:55 +0100
Subject: add req.http.x-forwarded-for header
In-Reply-To:
References: <2465AAEEC8B8A242B26ED5F44BCA805F2609D265EB@SM-CALA-VXMB04A.swna.wdpr.disney.com>
Message-ID:

Hi,

I think varnishncsa reads the information in the shared memory log
*before* your modification in the VCL. I can't test right now but if you
follow both varnishlog and varnishncsa you might understand what happens.

I have no idea if it would actually work but you could try to issue a
restart to see whether it affects varnishncsa.

if (!req.http.X-Forwarded-For) {
    set req.http.X-Forwarded-For = client.ip;
    return(restart);
}

Best Regards,
Dridi

On Wed, Feb 6, 2013 at 5:25 PM, Andreas Götzfried wrote:
> Hi all,
>
> sorry for the last thread, i'm not really used to working with mailing
> lists. I hope at least this message get's sorted properly...
>
> Hi Raul,
>
> thanks for your answer! I checked vcl_recv but couldn't find anything
> relevant. And i don't think this is the problem, because:
>
> I.
> nginx.https -> varnish
> x-forwarded-for is logged in /var/log/varnish/varnish.log
>
> II.
> only varnish.http
> x-forwarded-for isn't logged in /var/log/varnish/varnish.log
>
> So this has to be a quite strange return statement that clears the
> x-forwarded-for when the http request is received by varnish
> externally, but leaves the header untouched when it's sent via nginx
> (and this traffic is http as well).
> I'll re-check the code again, but... :-/
>
> Andreas
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From revirii at googlemail.com Thu Feb 7 08:03:44 2013
From: revirii at googlemail.com (=?ISO-8859-1?Q?Andreas_G=F6tzfried?=)
Date: Thu, 7 Feb 2013 09:03:44 +0100
Subject: add req.http.x-forwarded-for header
In-Reply-To:
References: <2465AAEEC8B8A242B26ED5F44BCA805F2609D265EB@SM-CALA-VXMB04A.swna.wdpr.disney.com>
Message-ID:

Good Morning,

> I think varnishncsa reads the information in the shared memory log
> *before* your modification in the VCL. I can't test right now but if
> you follow both varnishlog and varnishncsa you might understand what
> happens.

Hm, but then the desired manipulation within vcl_recv (setting
x-forwarded-for and writing it into the varnish log instead of client.ip)
won't work at all, as the logs are written before vcl_recv. Well, why
should it work: varnishncsa reads the shared memory and writes the log,
and varnish itself reads shared memory and then starts with vcl_recv.

https://www.varnish-cache.org/docs/trunk/reference/varnishncsa.html

Changing the parameters of the log daemon won't help either, as an
external request doesn't set x-forwarded-for.

> I have no idea if it would actually work but you could try to issue a
> restart to see whether it affects varnishncsa.
>
> if (!req.http.X-Forwarded-For) {
>     set req.http.X-Forwarded-For = client.ip;
>     return(restart);
> }

Message from VCC-compiler:
Invalid return "restart"
('input' Line 97 Pos 24)
                       return(restart);
-----------------------#######--

...in subroutine "vcl_recv"
('input' Line 77 Pos 5)
    sub vcl_recv {
----########--

...which is the "vcl_recv" method
Legal returns are: "error" "lookup" "pass" "pipe"

Seems illegal ;-)

thx
Andreas

From damon at huddler-inc.com Fri Feb 8 02:09:23 2013
From: damon at huddler-inc.com (Damon Snyder)
Date: Thu, 7 Feb 2013 18:09:23 -0800
Subject: Poll() on shared pipe() high context switching issue affect varnish? :: was Feedback on session_linger (high load/n_wrk_overflow issues)
Message-ID:

I think the issue we have been seeing *may* be related to the polling on
a shared pipe() issue that was first mentioned here:
http://lkml.indiana.edu/hypermail/linux/kernel/1012.1/03515.html.

I think I narrowed the cause of the high csw/s down to multiple esi
includes (2 most of the time) with s-maxage=0 on a 1M+ pageview per month
site (~60conn/s according to the first line of varnishstat). Most of the
page views have these esi includes that aren't being cached (not all of
the 60/s). If we set s-maxage to something > 0 (say 20s) in the headers
for the esi module coming from the backend to varnish, the context
switching issue goes away -- the csw/s degrades linearly back to the
steady state at about 10k/s.

I created a threaded version of the example code from the lkml email
(https://gist.github.com/drsnyder/4735889). When I compile and run this
code on a 2.6.32-279 or below kernel, the csw/s goes to ~200k/s, which
defies any attempt I have made to reason about why that would be the
case. If I run the same code on a 3.0+ Linux kernel, I will see about
12k csw/s. Still high, but much more reasonable. We see the same csw/s
behavior when we set s-maxage=0 for the esi modules on the production
site with about the same number of threads (400).
So does this polling on a shared pipe() issue affect varnish 2.1.5? It
would appear so, but I'm looking for confirmation. I saw a commit
referencing context switching here
(https://www.varnish-cache.org/trac/changeset/05f816345a329ef52644a0239892f8be6314a3fa)
but I also can't imagine that this would trigger 200k/s in our
environment.

If it is an issue, is there anything we can do to mitigate it in the
varnish configuration? Are there versions where this has been fixed?

We have tested sending production traffic with s-maxage=0 to an Ubuntu
system running 3.2.0-27 and saw a spike in context switching, but it
peaked at about 20k/s with no observed impact on stability or delivery.
So, switching from CentOS to Ubuntu is a possible option.

Thanks,
Damon

---------- Forwarded message ----------
From: Damon Snyder
Date: Thu, Jan 31, 2013 at 11:10 PM
Subject: Re: Feedback on session_linger (high load/n_wrk_overflow issues)
To: "varnish-misc at varnish-cache.org"

Well, I spoke too soon. Bumping session_linger brought us stability for
about a day and then the high context switching returned. See the
attached plot to illustrate the problem. The context switching (pulled
from dstat and added to varnish) starts to ramp up. As soon as it
crosses the ~50k mark we start seeing stability and latency issues
(10k/s is more "normal"). So now I'm at a loss as to how to proceed.

Below is the current varnishstat -1 output when the system is well
behaved. I wasn't able to capture it when the context switching peaked.
n_wrk_overflow spiked to about 1500 at the time and the load average
over the past 5 min was ~15. This is running on a dual hex-core Intel
Xeon E5-2620 (24 contexts or cores) with 64GB of memory. The hitrate is
about 0.65 and we are nuking (n_lru_nuked incrementing) once the system
has been running for a few hours.
We have long tail objects in that we have a lot of content, but only
some of it is hot, so it's difficult to predict our precise cache size
requirements at any given time. We are using varnish-2.1.5 on CentOS 5.6
with kernel 2.6.18-238. I haven't found anything in syslog that would be
of interest.

Thanks,
Damon

# currently running command line
/usr/local/varnish-2.1.5/sbin/varnishd -P /var/run/varnish.pid
    -a 10.16.50.150:6081,127.0.0.1:6081 -f /etc/varnish/default.vcl
    -T 127.0.0.1:6082 -t 120 -w 100,2000,120 -u varnish -g varnish
    -s malloc,48G -p thread_pools 8 -p thread_pool_add_delay 2
    -p listen_depth 1024 -p session_linger 110 -p lru_interval 60
    -p sess_workspace 524288

# varnishstat -1
client_conn          5021009       442.11 Client connections accepted
client_drop                0         0.00 Connection dropped, no sess/wrk
client_req           5020366       442.05 Client requests received
cache_hit            4597754       404.84 Cache hits
cache_hitpass          24775         2.18 Cache hits for pass
cache_miss           2664516       234.61 Cache misses
backend_conn         3101882       273.13 Backend conn. success
backend_unhealthy          0         0.00 Backend conn. not attempted
backend_busy               0         0.00 Backend conn. too many
backend_fail               0         0.00 Backend conn. failures
backend_reuse             99         0.01 Backend conn. reuses
backend_toolate        11777         1.04 Backend conn. was closed
backend_recycle        11877         1.05 Backend conn. recycles
backend_unused             0         0.00 Backend conn. unused
fetch_head                33         0.00 Fetch head
fetch_length         2134117       187.91 Fetch with Length
fetch_chunked         811172        71.42 Fetch chunked
fetch_eof                  0         0.00 Fetch EOF
fetch_bad                  0         0.00 Fetch had bad headers
fetch_close           156333        13.77 Fetch wanted close
fetch_oldhttp              0         0.00 Fetch pre HTTP/1.1 closed
fetch_zero                 0         0.00 Fetch zero len
fetch_failed               1         0.00 Fetch failed
n_sess_mem              3867          .   N struct sess_mem
n_sess                  3841          .   N struct sess
n_object              757929          .   N struct object
n_vampireobject            0          .   N unresurrected objects
n_objectcore          758158          .   N struct objectcore
n_objecthead          760076          .   N struct objecthead
n_smf                      0          .   N struct smf
n_smf_frag                 0          .   N small free smf
n_smf_large                0          .   N large free smf
n_vbe_conn                69          .   N struct vbe_conn
n_wrk                    800          .   N worker threads
n_wrk_create             800         0.07 N worker threads created
n_wrk_failed               0         0.00 N worker threads not created
n_wrk_max                  0         0.00 N worker threads limited
n_wrk_queue                0         0.00 N queued work requests
n_wrk_overflow           171         0.02 N overflowed work requests
n_wrk_drop                 0         0.00 N dropped work requests
n_backend                 21          .   N backends
n_expired            1879027          .   N expired objects
n_lru_nuked                0          .   N LRU nuked objects
n_lru_saved                0          .   N LRU saved objects
n_lru_moved           813472          .   N LRU moved objects
n_deathrow                 0          .   N objects on deathrow
losthdr                    0         0.00 HTTP header overflows
n_objsendfile              0         0.00 Objects sent with sendfile
n_objwrite           5195402       457.46 Objects sent with write
n_objoverflow              0         0.00 Objects overflowing workspace
s_sess               5020819       442.09 Total Sessions
s_req                5020366       442.05 Total Requests
s_pipe                     0         0.00 Total pipe
s_pass                437300        38.50 Total pass
s_fetch              3101668       273.11 Total fetch
s_hdrbytes        2575156242    226746.17 Total header bytes
s_bodybytes     168406835436  14828461.34 Total body bytes
sess_closed          5020817       442.09 Session Closed
sess_pipeline              0         0.00 Session Pipeline
sess_readahead             0         0.00 Session Read Ahead
sess_linger                0         0.00 Session Linger
sess_herd                  0         0.00 Session herd
shm_records        430740934     37927.35 SHM records
shm_writes          29264523      2576.78 SHM writes
shm_flushes            51411         4.53 SHM flushes due to overflow
shm_cont               78056         6.87 SHM MTX contention
shm_cycles               186         0.02 SHM cycles through buffer
sm_nreq                    0         0.00 allocator requests
sm_nobj                    0          .   outstanding allocations
sm_balloc                  0          .   bytes allocated
sm_bfree                   0          .   bytes free
sma_nreq             5570153       490.46 SMA allocator requests
sma_nobj             1810351          .   SMA outstanding allocations
sma_nbytes       35306367656          .   SMA outstanding bytes
sma_balloc      154583709612          .   SMA bytes allocated
sma_bfree       119277341956          .   SMA bytes free
sms_nreq               32127         2.83 SMS allocator requests
sms_nobj                   0          .   SMS outstanding allocations
sms_nbytes                 0          .   SMS outstanding bytes
sms_balloc          29394423          .   SMS bytes allocated
sms_bfree           29394423          .   SMS bytes freed
backend_req          3101977       273.13 Backend requests made
n_vcl                      1         0.00 N vcl total
n_vcl_avail                1         0.00 N vcl available
n_vcl_discard              0         0.00 N vcl discarded
n_purge                31972          .   N total active purges
n_purge_add            31972         2.82 N new purges added
n_purge_retire             0         0.00 N old purges deleted
n_purge_obj_test     3317282       292.09 N objects tested
n_purge_re_test    530623465     46722.15 N regexps tested against
n_purge_dups           28047         2.47 N duplicate purges removed
hcb_nolock           7285152       641.47 HCB Lookups without lock
hcb_lock             2543066       223.92 HCB Lookups with lock
hcb_insert           2542964       223.91 HCB Inserts
esi_parse             741690        65.31 Objects ESI parsed (unlock)
esi_errors                 0         0.00 ESI parse errors (unlock)
accept_fail                0         0.00 Accept failures
client_drop_late           0         0.00 Connection dropped late
uptime                 11357         1.00 Client uptime
backend_retry             99         0.01 Backend conn. retry
dir_dns_lookups            0         0.00 DNS director lookups
dir_dns_failed             0         0.00 DNS director failed lookups
dir_dns_hit                0         0.00 DNS director cached lookups hit
dir_dns_cache_full         0         0.00 DNS director full dnscache
fetch_1xx                  0         0.00 Fetch no body (1xx)
fetch_204                  0         0.00 Fetch no body (204)
fetch_304                 14         0.00 Fetch no body (304)

On Wed, Jan 30, 2013 at 3:45 PM, Damon Snyder wrote:
> When you 'varnishadm -T localhost:port param.show session_linger' it
> indicates at the bottom that "we don't know if this is a good idea...
> and feedback is welcome."
>
> We found that setting session_linger pulled us out of a bind. I wanted
> to add my feedback to the list in the hope that someone else might
> benefit from what we experienced.
>
> We recently increased the number of esi includes on pages that get
> ~60-70 req/s on our platform. Some of those modules were being
> rendered with s-maxage set to zero so that they would be refreshed on
> every page load (this is so we could insert a non-cached partial into
> the page) which further increased the request load on varnish.
> What we found is that after a few hours the load on a varnish box went
> from < 1 to > 10 or more and n_wrk_overflow started incrementing.
> After investigating further we noticed that the context switching went
> from ~10k/s to > 100k/s. We are running Linux, specifically CentOS.
>
> No adjusting of threads or thread pools had any impact on the
> thrashing. After reading Kristian's post about high-end varnish tuning
> we decided to try out session_linger. We started by doubling the
> default from 50 to 100 to test the theory ('varnishadm -T
> localhost:port param.set session_linger 100'). Once we did that we saw
> a gradual settling of the context switching (using dstat or sar -w)
> and a stabilizing of the load.
>
> It's such a great feature to be able to change this parameter via the
> admin interface. We have 50GB malloc'ed and some nuking on our boxes,
> so restarting varnish doesn't come without some impact to the platform.
>
> Intuitively, increasing session_linger makes sense. If you have
> several esi modules rendered within a page and the gap between them is
> > 50ms, then they'll be reallocated elsewhere.
>
> What is not clear to me is how we should tune session_linger. We
> started by setting it to the 3rd quantile of render times for the esi
> module taken from a sampling of backend requests. This turned out to
> be 110ms.
>
> Damon

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: varnish-context-switching.png
Type: image/png
Size: 55752 bytes
Desc: not available
URL:

From straightflush at gmail.com Fri Feb 8 18:08:25 2013
From: straightflush at gmail.com (AD)
Date: Fri, 8 Feb 2013 13:08:25 -0500
Subject: varnishlog performance
Message-ID:

Hello,

Looking for the most performant way to use varnishlog without losing any
data. Are there any potential bottlenecks with varnishlog (assuming
shmlog is big enough)?
Would writing to disk be too slow in high req/sec environments?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From drewsy.morris at gmail.com Tue Feb 12 09:04:14 2013
From: drewsy.morris at gmail.com (Drew Morris)
Date: Tue, 12 Feb 2013 20:04:14 +1100
Subject: Varnish not listening to /etc/sysconfig/varnish on CentOS 6.2
Message-ID:

Hi Guys,

I've just set up Varnish on CentOS 6.2 via the official RPM and I'm
hoping someone can help me to get it working!

It seems Varnish doesn't reference DAEMON_OPTS in /etc/sysconfig/varnish,
because it will always bind to ports 6081 and 6082, as seen in the
netstat -tulpn results below:

tcp   0   0 0.0.0.0:6081     0.0.0.0:*   LISTEN   22844/varnishd
tcp   0   0 127.0.0.1:6082   0.0.0.0:*   LISTEN   22843/varnishd
tcp   0   0 :::6081          :::*        LISTEN   22844/varnishd

Even after continuously restarting / reloading varnish, killing the
process etc., it won't bind on the desired port (80).

I am starting varnish by running "service varnish start".

Here is my config for /etc/sysconfig/varnish:

=========
# Configuration file for varnish
#
# /etc/init.d/varnish expects the variable $DAEMON_OPTS to be set from this
# shell script fragment.
#

# Maximum number of open files (for ulimit -n)
NFILES=131072

# Locked shared memory (for ulimit -l)
# Default log size is 82MB + header
MEMLOCK=82000

# Maximum size of corefile (for ulimit -c). Default in Fedora is 0
# DAEMON_COREFILE_LIMIT="unlimited"

# Set this to 1 to make init script reload try to switch vcl without restart.
# To make this work, you need to set the following variables
# explicit: VARNISH_VCL_CONF, VARNISH_ADMIN_LISTEN_ADDRESS,
# VARNISH_ADMIN_LISTEN_PORT, VARNISH_SECRET_FILE, or in short,
# use Alternative 3, Advanced configuration, below
RELOAD_VCL=1

# This file contains 4 alternatives, please use only one.

## Alternative 1, Minimal configuration, no VCL
#
# Listen on port 6081, administration on localhost:6082, and forward to
# content server on localhost:8080. Use a fixed-size cache file.
#

## Alternative 2, Configuration with VCL
#
# Listen on port 6081, administration on localhost:6082, and forward to
# one content server selected by the vcl file, based on the request. Use a
# fixed-size cache file.
#

DAEMON_OPTS="-a :80 \
             -T localhost:6081 \
             -f /etc/varnish/default.vcl \
             -u varnish -g varnish \
             -S /etc/varnish/secret \
             -s file,/var/lib/varnish/varnish_storage.bin,1G"


## Alternative 3, Advanced configuration
#
# See varnishd(1) for more information.
#
# # Main configuration file. You probably want to change it :)
VARNISH_VCL_CONF=/etc/varnish/default.vcl
#
# # Default address and port to bind to
# # Blank address means all IPv4 and IPv6 interfaces, otherwise specify
# # a host name, an IPv4 dotted quad, or an IPv6 address in brackets.
# VARNISH_LISTEN_ADDRESS=
VARNISH_LISTEN_PORT=6081
#
# # Telnet admin interface listen address and port
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=6082
#
# # Shared secret file for admin interface
VARNISH_SECRET_FILE=/etc/varnish/secret
#
# # The minimum number of worker threads to start
VARNISH_MIN_THREADS=1
#
# # The Maximum number of worker threads to start
VARNISH_MAX_THREADS=1000
#
# # Idle timeout for worker threads
VARNISH_THREAD_TIMEOUT=120
#
# # Cache file location
VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
#
# # Cache file size: in bytes, optionally using k / M / G / T suffix,
# # or in percentage of available disk space using the % suffix.
VARNISH_STORAGE_SIZE=1G
#
# # Backend storage specification
VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
#
# # Default TTL used when the backend does not specify one
VARNISH_TTL=120
#
# # DAEMON_OPTS is used by the init script. If you add or remove options, make
# # sure you update this section, too.
DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
             -f ${VARNISH_VCL_CONF} \
             -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
             -t ${VARNISH_TTL} \
             -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
             -u varnish -g varnish \
             -S ${VARNISH_SECRET_FILE} \
             -s ${VARNISH_STORAGE}"
#

## Alternative 4, Do It Yourself. See varnishd(1) for more information.
#
# DAEMON_OPTS=""
=========

& Here is my VCL : /etc/varnish/default.vcl

=========
# This is a basic VCL configuration file for varnish. See the vcl(7)
# man page for details on VCL syntax and semantics.
#
# Default backend definition. Set this to point to your content
# server.
#
backend default {
  .host = "127.0.0.1";
  .port = "8080";
}
#
# Below is a commented-out copy of the default VCL logic. If you
# redefine any of these subroutines, the built-in logic will be
# appended to your code.
# sub vcl_recv {
#     if (req.restarts == 0) {
#         if (req.http.x-forwarded-for) {
#             set req.http.X-Forwarded-For =
#                 req.http.X-Forwarded-For + ", " + client.ip;
#         } else {
#             set req.http.X-Forwarded-For = client.ip;
#         }
#     }
#     if (req.request != "GET" &&
#         req.request != "HEAD" &&
#         req.request != "PUT" &&
#         req.request != "POST" &&
#         req.request != "TRACE" &&
#         req.request != "OPTIONS" &&
#         req.request != "DELETE") {
#         /* Non-RFC2616 or CONNECT which is weird. */
#         return (pipe);
#     }
#     if (req.request != "GET" && req.request != "HEAD") {
#         /* We only deal with GET and HEAD by default */
#         return (pass);
#     }
#     if (req.http.Authorization || req.http.Cookie) {
#         /* Not cacheable by default */
#         return (pass);
#     }
#     return (lookup);
# }
#
# sub vcl_pipe {
#     # Note that only the first request to the backend will have
#     # X-Forwarded-For set. If you use X-Forwarded-For and want to
#     # have it set for all requests, make sure to have:
#     # set bereq.http.connection = "close";
#     # here. It is not set by default as it might break some broken web
#     # applications, like IIS with NTLM authentication.
#     return (pipe);
# }
#
# sub vcl_pass {
#     return (pass);
# }
#
# sub vcl_hash {
#     hash_data(req.url);
#     if (req.http.host) {
#         hash_data(req.http.host);
#     } else {
#         hash_data(server.ip);
#     }
#     return (hash);
# }
#
# sub vcl_hit {
#     return (deliver);
# }
#
# sub vcl_miss {
#     return (fetch);
# }
#
# sub vcl_fetch {
#     if (beresp.ttl <= 0s ||
#         beresp.http.Set-Cookie ||
#         beresp.http.Vary == "*") {
#         /*
#          * Mark as "Hit-For-Pass" for the next 2 minutes
#          */
#         set beresp.ttl = 120 s;
#         return (hit_for_pass);
#     }
#     return (deliver);
# }
#
# sub vcl_deliver {
#     return (deliver);
# }
#
# sub vcl_error {
#     set obj.http.Content-Type = "text/html; charset=utf-8";
#     set obj.http.Retry-After = "5";
#     synthetic {"
# <?xml version="1.0" encoding="utf-8"?>
# <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
#  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
# <html>
#   <head>
#     <title>"} + obj.status + " " + obj.response + {"</title>
#   </head>
#   <body>
#     <h1>Error "} + obj.status + " " + obj.response + {"</h1>
#     <p>"} + obj.response + {"</p>
#     <h3>Guru Meditation:</h3>
#     <p>XID: "} + req.xid + {"</p>
#     <hr>
#     <p>Varnish cache server</p>
#   </body>
# </html>
# "};
#     return (deliver);
# }
#
# sub vcl_init {
#     return (ok);
# }
#
# sub vcl_fini {
#     return (ok);
# }
=========

Hoping you experts can help!

--
Regards,
Drew Morris
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dridi.boukelmoune at zenika.com Tue Feb 12 09:11:55 2013
From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune)
Date: Tue, 12 Feb 2013 10:11:55 +0100
Subject: Varnish not listening to /etc/sysconfig/varnish on CentOS 6.2
In-Reply-To:
References:
Message-ID:

Hi,

Your DAEMON_OPTS variable is defined twice. Once with raw values, and
once with variables like VARNISH_LISTEN_PORT.

Best Regards,
Dridi

On Tue, Feb 12, 2013 at 10:04 AM, Drew Morris wrote:
> Hi Guys,
>
> I've just setup Varnish on CentOS 6.2 via the official RPM and I'm hoping
> someone can help me to get it working!
>
> It seems Varnish doesn't reference DAEMON_OPTS in /etc/sysconfig/varnish,
> because it will always bind to ports 6081 and 6082, as seen in the netstat
> -tulpn results below:
>
> tcp   0   0 0.0.0.0:6081     0.0.0.0:*   LISTEN   22844/varnishd
> tcp   0   0 127.0.0.1:6082   0.0.0.0:*   LISTEN   22843/varnishd
> tcp   0   0 :::6081          :::*        LISTEN   22844/varnishd
>
> Even after continuously restarting / reloading varnish, killing the process
> etc, it wont bind on the desired port (80)
>
> I am starting varnish by running "service varnish start"
>
> Here is my config for /etc/sysconfig/varnish:
>
> =========
> # Configuration file for varnish
> #
> # /etc/init.d/varnish expects the variable $DAEMON_OPTS to be set from this
> # shell script fragment.
> #
>
> # Maximum number of open files (for ulimit -n)
> NFILES=131072
>
> # Locked shared memory (for ulimit -l)
> # Default log size is 82MB + header
> MEMLOCK=82000
>
> # Maximum size of corefile (for ulimit -c). Default in Fedora is 0
> # DAEMON_COREFILE_LIMIT="unlimited"
>
> # Set this to 1 to make init script reload try to switch vcl without
> restart.
> # To make this work, you need to set the following variables
> # explicit: VARNISH_VCL_CONF, VARNISH_ADMIN_LISTEN_ADDRESS,
> # VARNISH_ADMIN_LISTEN_PORT, VARNISH_SECRET_FILE, or in short,
> # use Alternative 3, Advanced configuration, below
> RELOAD_VCL=1
>
> # This file contains 4 alternatives, please use only one.
>
> ## Alternative 1, Minimal configuration, no VCL
> #
> # Listen on port 6081, administration on localhost:6082, and forward to
> # content server on localhost:8080. Use a fixed-size cache file.
> #
>
> ## Alternative 2, Configuration with VCL
> #
> # Listen on port 6081, administration on localhost:6082, and forward to
> # one content server selected by the vcl file, based on the request. Use a
> # fixed-size cache file.
> #
>
> DAEMON_OPTS="-a :80 \
>              -T localhost:6081 \
>              -f /etc/varnish/default.vcl \
>              -u varnish -g varnish \
>              -S /etc/varnish/secret \
>              -s file,/var/lib/varnish/varnish_storage.bin,1G"
>
>
> ## Alternative 3, Advanced configuration
> #
> # See varnishd(1) for more information.
> #
> # # Main configuration file. You probably want to change it :)
> VARNISH_VCL_CONF=/etc/varnish/default.vcl
> #
> # # Default address and port to bind to
> # # Blank address means all IPv4 and IPv6 interfaces, otherwise specify
> # # a host name, an IPv4 dotted quad, or an IPv6 address in brackets.
> # VARNISH_LISTEN_ADDRESS=
> VARNISH_LISTEN_PORT=6081
> #
> # # Telnet admin interface listen address and port
> VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
> VARNISH_ADMIN_LISTEN_PORT=6082
> #
> # # Shared secret file for admin interface
> VARNISH_SECRET_FILE=/etc/varnish/secret
> #
> # # The minimum number of worker threads to start
> VARNISH_MIN_THREADS=1
> #
> # # The Maximum number of worker threads to start
> VARNISH_MAX_THREADS=1000
> #
> # # Idle timeout for worker threads
> VARNISH_THREAD_TIMEOUT=120
> #
> # # Cache file location
> VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
> #
> # # Cache file size: in bytes, optionally using k / M / G / T suffix,
> # # or in percentage of available disk space using the % suffix.
> VARNISH_STORAGE_SIZE=1G
> #
> # # Backend storage specification
> VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
> #
> # # Default TTL used when the backend does not specify one
> VARNISH_TTL=120
> #
> # # DAEMON_OPTS is used by the init script. If you add or remove options,
> # # make sure you update this section, too.
> DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
>              -f ${VARNISH_VCL_CONF} \
>              -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
>              -t ${VARNISH_TTL} \
>              -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
>              -u varnish -g varnish \
>              -S ${VARNISH_SECRET_FILE} \
>              -s ${VARNISH_STORAGE}"
> #
>
> ## Alternative 4, Do It Yourself. See varnishd(1) for more information.
> #
> # DAEMON_OPTS=""
> =========
>
> & Here is my VCL : /etc/varnish/default.vcl
>
> =========
> # This is a basic VCL configuration file for varnish. See the vcl(7)
> # man page for details on VCL syntax and semantics.
> #
> # Default backend definition. Set this to point to your content
> # server.
> #
> backend default {
>   .host = "127.0.0.1";
>   .port = "8080";
> }
> #
> # Below is a commented-out copy of the default VCL logic. If you
> # redefine any of these subroutines, the built-in logic will be
> # appended to your code.
> # sub vcl_recv {
> #     if (req.restarts == 0) {
> #         if (req.http.x-forwarded-for) {
> #             set req.http.X-Forwarded-For =
> #                 req.http.X-Forwarded-For + ", " + client.ip;
> #         } else {
> #             set req.http.X-Forwarded-For = client.ip;
> #         }
> #     }
> #     if (req.request != "GET" &&
> #         req.request != "HEAD" &&
> #         req.request != "PUT" &&
> #         req.request != "POST" &&
> #         req.request != "TRACE" &&
> #         req.request != "OPTIONS" &&
> #         req.request != "DELETE") {
> #         /* Non-RFC2616 or CONNECT which is weird. */
> #         return (pipe);
> #     }
> #     if (req.request != "GET" && req.request != "HEAD") {
> #         /* We only deal with GET and HEAD by default */
> #         return (pass);
> #     }
> #     if (req.http.Authorization || req.http.Cookie) {
> #         /* Not cacheable by default */
> #         return (pass);
> #     }
> #     return (lookup);
> # }
> #
> # sub vcl_pipe {
> #     # Note that only the first request to the backend will have
> #     # X-Forwarded-For set. If you use X-Forwarded-For and want to
> #     # have it set for all requests, make sure to have:
> #     # set bereq.http.connection = "close";
> #     # here. It is not set by default as it might break some broken web
> #     # applications, like IIS with NTLM authentication.
> # return (pipe);
> # }
> #
> # sub vcl_pass {
> #     return (pass);
> # }
> #
> # sub vcl_hash {
> #     hash_data(req.url);
> #     if (req.http.host) {
> #         hash_data(req.http.host);
> #     } else {
> #         hash_data(server.ip);
> #     }
> #     return (hash);
> # }
> #
> # sub vcl_hit {
> #     return (deliver);
> # }
> #
> # sub vcl_miss {
> #     return (fetch);
> # }
> #
> # sub vcl_fetch {
> #     if (beresp.ttl <= 0s ||
> #         beresp.http.Set-Cookie ||
> #         beresp.http.Vary == "*") {
> #         /*
> #          * Mark as "Hit-For-Pass" for the next 2 minutes
> #          */
> #         set beresp.ttl = 120 s;
> #         return (hit_for_pass);
> #     }
> #     return (deliver);
> # }
> #
> # sub vcl_deliver {
> #     return (deliver);
> # }
> #
> # sub vcl_error {
> #     set obj.http.Content-Type = "text/html; charset=utf-8";
> #     set obj.http.Retry-After = "5";
> #     synthetic {"
> # <?xml version="1.0" encoding="utf-8"?>
> # <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
> #  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
> # <html>
> #   <head>
> #     <title>"} + obj.status + " " + obj.response + {"</title>
> #   </head>
> #   <body>
> #     <h1>Error "} + obj.status + " " + obj.response + {"</h1>
> #     <p>"} + obj.response + {"</p>
> #     <h3>Guru Meditation:</h3>
> #     <p>XID: "} + req.xid + {"</p>
> #     <hr>
> #     <p>Varnish cache server</p>
> #   </body>
> # </html>
> # > # > # "}; > # return (deliver); > # } > # > # sub vcl_init { > # return (ok); > # } > # > # sub vcl_fini { > # return (ok); > # } > ========= > > Hoping you experts can help! > > -- > Regards, > > Drew Morris > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From tharanga.abeyseela at gmail.com Tue Feb 12 09:16:18 2013 From: tharanga.abeyseela at gmail.com (Tharanga Abeyseela) Date: Tue, 12 Feb 2013 20:16:18 +1100 Subject: Varnish not listening to /etc/sysconfig/varnish on CentOS 6.2 In-Reply-To: References: Message-ID: change the VARNISH_LISTEN_PORT to 80. On Tue, Feb 12, 2013 at 8:11 PM, Dridi Boukelmoune wrote: > Hi, > > Your DAEMON_OPTS variable is defined twice. Once with raw values, and > once with variables like VARNISH_LISTEN_PORT. > > Best Regards, > Dridi > > On Tue, Feb 12, 2013 at 10:04 AM, Drew Morris wrote: >> Hi Guys, >> >> I've just setup Varnish on CentOS 6.2 via the official RPM and I'm hoping >> someone can help me to get it working! >> >> It seems Varnish doesn't reference DAEMON_OPTS in /etc/sysconfig/varnish, >> because it will always bind to ports 6081 and 6082, as seen in the netstat >> -tulpn results below: >> >> tcp 0 0 0.0.0.0:6081 0.0.0.0:* >> LISTEN 22844/varnishd >> tcp 0 0 127.0.0.1:6082 0.0.0.0:* >> LISTEN 22843/varnishd >> tcp 0 0 :::6081 :::* >> LISTEN 22844/varnishd >> >> Even after continuously restarting / reloading varnish, killing the process >> etc, it wont bind on the desired port (80) >> >> I am starting varnish by running "service varnish start" >> >> Here is my config for /etc/sysconfig/varnish: >> >> ========= >> # Configuration file for varnish >> # >> # /etc/init.d/varnish expects the variable $DAEMON_OPTS to be set from this >> # shell script fragment. 
>> # >> >> # Maximum number of open files (for ulimit -n) >> NFILES=131072 >> >> # Locked shared memory (for ulimit -l) >> # Default log size is 82MB + header >> MEMLOCK=82000 >> >> # Maximum size of corefile (for ulimit -c). Default in Fedora is 0 >> # DAEMON_COREFILE_LIMIT="unlimited" >> >> # Set this to 1 to make init script reload try to switch vcl without >> restart. >> # To make this work, you need to set the following variables >> # explicit: VARNISH_VCL_CONF, VARNISH_ADMIN_LISTEN_ADDRESS, >> # VARNISH_ADMIN_LISTEN_PORT, VARNISH_SECRET_FILE, or in short, >> # use Alternative 3, Advanced configuration, below >> RELOAD_VCL=1 >> >> # This file contains 4 alternatives, please use only one. >> >> ## Alternative 1, Minimal configuration, no VCL >> # >> # Listen on port 6081, administration on localhost:6082, and forward to >> # content server on localhost:8080. Use a fixed-size cache file. >> # >> >> ## Alternative 2, Configuration with VCL >> # >> # Listen on port 6081, administration on localhost:6082, and forward to >> # one content server selected by the vcl file, based on the request. Use a >> # fixed-size cache file. >> # >> >> DAEMON_OPTS="-a :80 \ >> -T localhost:6081 \ >> -f /etc/varnish/default.vcl \ >> -u varnish -g varnish \ >> -S /etc/varnish/secret \ >> -s file,/var/lib/varnish/varnish_storage.bin,1G" >> >> >> ## Alternative 3, Advanced configuration >> # >> # See varnishd(1) for more information. >> # >> # # Main configuration file. You probably want to change it :) >> VARNISH_VCL_CONF=/etc/varnish/default.vcl >> # >> # # Default address and port to bind to >> # # Blank address means all IPv4 and IPv6 interfaces, otherwise specify >> # # a host name, an IPv4 dotted quad, or an IPv6 address in brackets. 
>> # VARNISH_LISTEN_ADDRESS= >> VARNISH_LISTEN_PORT=6081 >> # >> # # Telnet admin interface listen address and port >> VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1 >> VARNISH_ADMIN_LISTEN_PORT=6082 >> # >> # # Shared secret file for admin interface >> VARNISH_SECRET_FILE=/etc/varnish/secret >> # >> # # The minimum number of worker threads to start >> VARNISH_MIN_THREADS=1 >> # >> # # The Maximum number of worker threads to start >> VARNISH_MAX_THREADS=1000 >> # >> # # Idle timeout for worker threads >> VARNISH_THREAD_TIMEOUT=120 >> # >> # # Cache file location >> VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin >> # >> # # Cache file size: in bytes, optionally using k / M / G / T suffix, >> # # or in percentage of available disk space using the % suffix. >> VARNISH_STORAGE_SIZE=1G >> # >> # # Backend storage specification >> VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}" >> # >> # # Default TTL used when the backend does not specify one >> VARNISH_TTL=120 >> # >> # # DAEMON_OPTS is used by the init script. If you add or remove options, >> make >> # # sure you update this section, too. >> DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ >> -f ${VARNISH_VCL_CONF} \ >> -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} >> \ >> -t ${VARNISH_TTL} \ >> -w >> ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \ >> -u varnish -g varnish \ >> -S ${VARNISH_SECRET_FILE} \ >> -s ${VARNISH_STORAGE}" >> # >> >> >> ## Alternative 4, Do It Yourself. See varnishd(1) for more information. >> # >> # DAEMON_OPTS="" >> ========= >> >> & Here is my VCL : /etc/varnish/default.vcl >> >> ========= >> # This is a basic VCL configuration file for varnish. See the vcl(7) >> # man page for details on VCL syntax and semantics. >> # >> # Default backend definition. Set this to point to your content >> # server. 
>> # >> backend default { >> .host = "127.0.0.1"; >> .port = "8080"; >> } >> # >> # Below is a commented-out copy of the default VCL logic. If you >> # redefine any of these subroutines, the built-in logic will be >> # appended to your code. >> # sub vcl_recv { >> # if (req.restarts == 0) { >> # if (req.http.x-forwarded-for) { >> # set req.http.X-Forwarded-For = >> # req.http.X-Forwarded-For + ", " + client.ip; >> # } else { >> # set req.http.X-Forwarded-For = client.ip; >> # } >> # } >> # if (req.request != "GET" && >> # req.request != "HEAD" && >> # req.request != "PUT" && >> # req.request != "POST" && >> # req.request != "TRACE" && >> # req.request != "OPTIONS" && >> # req.request != "DELETE") { >> # /* Non-RFC2616 or CONNECT which is weird. */ >> # return (pipe); >> # } >> # if (req.request != "GET" && req.request != "HEAD") { >> # /* We only deal with GET and HEAD by default */ >> # return (pass); >> # } >> # if (req.http.Authorization || req.http.Cookie) { >> # /* Not cacheable by default */ >> # return (pass); >> # } >> # return (lookup); >> # } >> # >> # sub vcl_pipe { >> # # Note that only the first request to the backend will have >> # # X-Forwarded-For set. If you use X-Forwarded-For and want to >> # # have it set for all requests, make sure to have: >> # # set bereq.http.connection = "close"; >> # # here. It is not set by default as it might break some broken web >> # # applications, like IIS with NTLM authentication. 
>> # return (pipe);
>> # }
>> #
>> # sub vcl_pass {
>> #     return (pass);
>> # }
>> #
>> # sub vcl_hash {
>> #     hash_data(req.url);
>> #     if (req.http.host) {
>> #         hash_data(req.http.host);
>> #     } else {
>> #         hash_data(server.ip);
>> #     }
>> #     return (hash);
>> # }
>> #
>> # sub vcl_hit {
>> #     return (deliver);
>> # }
>> #
>> # sub vcl_miss {
>> #     return (fetch);
>> # }
>> #
>> # sub vcl_fetch {
>> #     if (beresp.ttl <= 0s ||
>> #         beresp.http.Set-Cookie ||
>> #         beresp.http.Vary == "*") {
>> #         /*
>> #          * Mark as "Hit-For-Pass" for the next 2 minutes
>> #          */
>> #         set beresp.ttl = 120 s;
>> #         return (hit_for_pass);
>> #     }
>> #     return (deliver);
>> # }
>> #
>> # sub vcl_deliver {
>> #     return (deliver);
>> # }
>> #
>> # sub vcl_error {
>> #     set obj.http.Content-Type = "text/html; charset=utf-8";
>> #     set obj.http.Retry-After = "5";
>> #     synthetic {"
>> # <?xml version="1.0" encoding="utf-8"?>
>> # <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
>> #  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
>> # <html>
>> #   <head>
>> #     <title>"} + obj.status + " " + obj.response + {"</title>
>> #   </head>
>> #   <body>
>> #     <h1>Error "} + obj.status + " " + obj.response + {"</h1>
>> #     <p>"} + obj.response + {"</p>
>> #     <h3>Guru Meditation:</h3>
>> #     <p>XID: "} + req.xid + {"</p>
>> #     <hr>
>> #     <p>Varnish cache server</p>
>> #   </body>
>> # </html>
>> # "};
>> #     return (deliver);
>> # }
>> #
>> # sub vcl_init {
>> #     return (ok);
>> # }
>> #
>> # sub vcl_fini {
>> #     return (ok);
>> # }
>> =========
>>
>> Hoping you experts can help!
>>
>> --
>> Regards,
>>
>> Drew Morris
>>
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc at varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From tobias.eichelbroenner at lamp-solutions.de Tue Feb 12 10:47:54 2013
From: tobias.eichelbroenner at lamp-solutions.de (Tobias Eichelbrönner)
Date: Tue, 12 Feb 2013 11:47:54 +0100
Subject: Speed up build process
Message-ID: <511A1DDA.8010406@lamp-solutions.de>

Hi,

I have plenty of VCL code, mainly containing redirect information for hundreds of domains.

All together there are about 34 MB of VCL text files containing 70,000 redirection rules. Varnish generates about 158 MB of C code out of it, which is compiled into a 90 MB shared object.

My problem is that it takes more than an hour and a half to parse the VCL into C code and then compile it on an Intel Xeon W3530.

Does anyone have an idea how to speed up the process? Is there, for example, a way to tell the parser to skip any optimization steps while generating the C code?

I could store all the information in a database and query it at request time, but that would cause heavy IO on the database.
Sincerely,

Tobias

From dridi.boukelmoune at zenika.com Wed Feb 13 21:32:47 2013
From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune)
Date: Wed, 13 Feb 2013 22:32:47 +0100
Subject: add req.http.x-forwarded-for header
In-Reply-To:
References: <2465AAEEC8B8A242B26ED5F44BCA805F2609D265EB@SM-CALA-VXMB04A.swna.wdpr.disney.com>
Message-ID:

Hi,

Try this vmod instead: https://github.com/dridi/libvmod-logger

Best Regards,
Dridi

On Thu, Feb 7, 2013 at 9:03 AM, Andreas Götzfried wrote:
> Good Morning,
>
>> I think varnishncsa reads the information in the shared memory log
>> *before* your modification in the VCL. I can't test right now, but if
>> you follow both varnishlog and varnishncsa you might understand what
>> happens.
>
> Hm, but then the desired manipulation within vcl_recv (setting
> x-forwarded-for and writing it into the varnish log instead of client.ip)
> won't work at all, as the logs are written before vcl_recv. Well, why
> should it work: varnishncsa reads the shared memory and writes the
> log, and varnish itself reads shared memory and then starts with
> vcl_recv.
>
> https://www.varnish-cache.org/docs/trunk/reference/varnishncsa.html
>
> Changing the parameters of the log daemon won't help either, as an
> external request doesn't set x-forwarded-for.
>
>> I have no idea if it would actually work, but you could try to issue a
>> restart to see whether it affects varnishncsa.
>>
>> if (!req.http.X-Forwarded-For) {
>>     set req.http.X-Forwarded-For = client.ip;
>>     return(restart);
>> }
>
> Message from VCC-compiler:
> Invalid return "restart"
> ('input' Line 97 Pos 24)
> return(restart);
> -----------------------#######--
>
> ...in subroutine "vcl_recv"
> ('input' Line 77 Pos 5)
> sub vcl_recv {
> ----########--
>
> ...which is the "vcl_recv" method
> Legal returns are: "error" "lookup" "pass" "pipe"
>
> Seems illegal ;-)
>
> thx
> Andreas
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From dridi.boukelmoune at zenika.com Wed Feb 13 21:39:41 2013
From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune)
Date: Wed, 13 Feb 2013 22:39:41 +0100
Subject: add req.http.x-forwarded-for header
In-Reply-To:
References: <2465AAEEC8B8A242B26ED5F44BCA805F2609D265EB@SM-CALA-VXMB04A.swna.wdpr.disney.com>
Message-ID:

I should also mention that I've tested it with varnish 3.0.3, so you should probably try it on this version first to ensure it actually does what you want. If it doesn't work with varnish 3.0.2 I'll gladly fix that. The client.ip fallback probably belongs more in varnishncsa itself, and I chose the easy path on this one...

Best Regards,
Dridi

On Wed, Feb 13, 2013 at 10:32 PM, Dridi Boukelmoune wrote:
> Hi,
>
> Try this vmod instead: https://github.com/dridi/libvmod-logger
>
> Best Regards,
> Dridi
>
> On Thu, Feb 7, 2013 at 9:03 AM, Andreas Götzfried wrote:
>> Good Morning,
>>
>>> I think varnishncsa reads the information in the shared memory log
>>> *before* your modification in the VCL. I can't test right now, but if
>>> you follow both varnishlog and varnishncsa you might understand what
>>> happens.
>>
>> Hm, but then the desired manipulation within vcl_recv (setting
>> x-forwarded-for and writing it into the varnish log instead of client.ip)
>> won't work at all, as the logs are written before vcl_recv. Well, why
>> should it work: varnishncsa reads the shared memory and writes the
>> log, and varnish itself reads shared memory and then starts with
>> vcl_recv.
>>
>> https://www.varnish-cache.org/docs/trunk/reference/varnishncsa.html
>>
>> Changing the parameters of the log daemon won't help either, as an
>> external request doesn't set x-forwarded-for.
>>
>>> I have no idea if it would actually work, but you could try to issue a
>>> restart to see whether it affects varnishncsa.
>>>
>>> if (!req.http.X-Forwarded-For) {
>>>     set req.http.X-Forwarded-For = client.ip;
>>>     return(restart);
>>> }
>>
>> Message from VCC-compiler:
>> Invalid return "restart"
>> ('input' Line 97 Pos 24)
>> return(restart);
>> -----------------------#######--
>>
>> ...in subroutine "vcl_recv"
>> ('input' Line 77 Pos 5)
>> sub vcl_recv {
>> ----########--
>>
>> ...which is the "vcl_recv" method
>> Legal returns are: "error" "lookup" "pass" "pipe"
>>
>> Seems illegal ;-)
>>
>> thx
>> Andreas
>>
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc at varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From damon at huddler-inc.com Thu Feb 14 18:37:14 2013
From: damon at huddler-inc.com (Damon Snyder)
Date: Thu, 14 Feb 2013 10:37:14 -0800
Subject: Does the poll() on shared pipe() high context switching issue affect varnish? :: was Feedback on session_linger (high load/n_wrk_overflow issues)
In-Reply-To:
References:
Message-ID:

My apologies for spamming the mailing list. I hope I save someone some time by providing this monologue. I'm not sure exactly what is causing the high load and context switching.
Adding s-maxage=0 to the Cache-Control header on multiple esi tags in 100s or 1000s of high-traffic pages that are cached appears to introduce it.

The apparent fix for this issue is to return (pass) in vcl_fetch when Cache-Control contains s-maxage=0. Alternatively, you can match the specific URLs of the esi includes that you need to bypass. We have been doing the latter for a few days in production without issue. I'm testing the former now. Adding this eliminated the persistently high 100k context switches per second and reduced the csw/s to a steady state of less than 4k csw/s. The load is also a lot more stable. n_wrk_overflow is also now stable and very low. Previously, when the csw/s would exceed ~100k we would start seeing n_wrk_overflow increment and timeouts.

As part of this process, I created a simple varnish testbed to experiment with different varnish and VCL configurations, to try to drive up the csw in a controlled environment and test the changes that might bring it back down. It's not fun (or profitable) trying to experiment with the varnish configuration in production on a high-traffic site. The testbed is here: https://github.com/drsnyder/varnish-testbed. I used sinatra + unicorn for the backend and siege to generate the traffic. For the test, I generate a medium 10k+ page and insert three esi tags that generate text. I cache the page but not the esi includes. I also introduce a random sleep to try to simulate a "real" backend request. Other backends or experiments should be easy to generate. YMMV.

Damon

On Thu, Feb 7, 2013 at 6:09 PM, Damon Snyder wrote:
> I think the issue we have been seeing *may* be related to the polling on a
> shared pipe() issue that was first mentioned here:
> http://lkml.indiana.edu/hypermail/linux/kernel/1012.1/03515.html.
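[Editorial aside] Damon's workaround keys off the backend's Cache-Control header. A minimal Python sketch of that check; the regex mirrors an unanchored VCL `~` match, and the header values here are hypothetical:

```python
import re

# Match "s-maxage=0" as a standalone directive (bounded by start/end,
# comma, or whitespace) so that "s-maxage=20" is not a false positive.
S_MAXAGE_ZERO = re.compile(r"(?:^|[,\s])s-maxage\s*=\s*0(?:$|[,\s])")

def should_pass(cache_control):
    """Return True when the backend asked for s-maxage=0 (do not cache)."""
    return bool(cache_control and S_MAXAGE_ZERO.search(cache_control))

print(should_pass("public, s-maxage=0"))   # True
print(should_pass("public, s-maxage=20"))  # False
print(should_pass(None))                   # False
```

The same unanchored match is what a VCL condition like `beresp.http.Cache-Control ~ "s-maxage\s*=\s*0"` would perform before returning pass.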
>
> I think I narrowed the cause of the high csw/s down to multiple esi
> includes (2 most of the time) with s-maxage=0 on a 1M+ pageview-per-month
> site (~60 conn/s according to the first line of varnishstat). Most of the
> page views have these esi includes that aren't being cached (not all of the
> 60/s). If we set s-maxage to something > 0 (say 20s) in the headers for the
> esi module coming from the backend to varnish, the context switching issue
> goes away; the csw/s degrades linearly back to the steady state at about
> 10k/s.
>
> I created a threaded version of the example code from the lkml email
> (https://gist.github.com/drsnyder/4735889). When I compile and run this
> code on a 2.6.32-279 or below kernel, the csw/s goes to ~200k/s, which defies
> any attempt I have made to reason about why that would be the case. If I
> run the same code on a 3.0+ Linux kernel, I see about 12k csw/s. Still
> high, but much more reasonable.
>
> We see the same csw/s behavior when we set s-maxage=0 for the esi
> modules on the production site with about the same number of threads (400).
>
> So does this polling on a shared pipe() issue affect varnish 2.1.5? It
> would appear so, but I'm looking for confirmation. I saw a commit
> referencing context switching here
> (https://www.varnish-cache.org/trac/changeset/05f816345a329ef52644a0239892f8be6314a3fa)
> but I also can't imagine that this would trigger 200k/s in our environment.
> If it is an issue, is there anything we can do to mitigate it in the
> varnish configuration? Are there versions where this has been fixed?
>
> We have tested sending production traffic with s-maxage=0 to an Ubuntu
> system running 3.2.0-27 and saw a spike in context switching, but it peaked
> at about 20k/s with no observed impact on stability or delivery. So,
> switching from CentOS to Ubuntu is a possible option.
>
> Thanks,
> Damon
>
> ---------- Forwarded message ----------
> From: Damon Snyder
> Date: Thu, Jan 31, 2013 at 11:10 PM
> Subject: Re: Feedback on session_linger (high load/n_wrk_overflow issues)
> To: "varnish-misc at varnish-cache.org"
>
> Well, I spoke too soon. Bumping session_linger brought us stability for
> about a day, and then the high context switching returned. See the attached
> plot to illustrate the problem. The context switching (pulled from dstat
> and added to varnish) starts to ramp up. As soon as it crosses the ~50k mark
> we start seeing stability and latency issues (10k/s is more "normal"). So
> now I'm at a loss as to how to proceed.
>
> Below is the current varnishstat -1 output when the system is well
> behaved. I wasn't able to capture it when the context switching peaked.
> n_wrk_overflow spiked to about 1500 at the time, and the load average over
> the past 5 min was ~15.
>
> This is running on a dual hex-core Intel Xeon E5-2620 (24 contexts or
> cores) with 64GB of memory. The hitrate is about 0.65 and we are nuking
> (n_lru_nuked incrementing) once the system has been running for a few
> hours. We have long-tail objects in that we have a lot of content, but only
> some of it is hot, so it's difficult to predict our precise cache size
> requirements at any given time. We are using varnish-2.1.5 on CentOS 5.6
> with kernel 2.6.18-238.
>
> I haven't found anything in syslog that would be of interest.
> Thanks,
> Damon
>
> # currently running command line
> /usr/local/varnish-2.1.5/sbin/varnishd -P /var/run/varnish.pid -a
> 10.16.50.150:6081,127.0.0.1:6081 -f /etc/varnish/default.vcl -T
> 127.0.0.1:6082 -t 120 -w 100,2000,120 -u varnish -g varnish -s malloc,48G
> -p thread_pools 8 -p thread_pool_add_delay 2 -p listen_depth 1024 -p
> session_linger 110 -p lru_interval 60 -p sess_workspace 524288
>
> # varnishstat -1
> client_conn 5021009 442.11 Client connections accepted
> client_drop 0 0.00 Connection dropped, no sess/wrk
> client_req 5020366 442.05 Client requests received
> cache_hit 4597754 404.84 Cache hits
> cache_hitpass 24775 2.18 Cache hits for pass
> cache_miss 2664516 234.61 Cache misses
> backend_conn 3101882 273.13 Backend conn. success
> backend_unhealthy 0 0.00 Backend conn. not attempted
> backend_busy 0 0.00 Backend conn. too many
> backend_fail 0 0.00 Backend conn. failures
> backend_reuse 99 0.01 Backend conn. reuses
> backend_toolate 11777 1.04 Backend conn. was closed
> backend_recycle 11877 1.05 Backend conn. recycles
> backend_unused 0 0.00 Backend conn. unused
> fetch_head 33 0.00 Fetch head
> fetch_length 2134117 187.91 Fetch with Length
> fetch_chunked 811172 71.42 Fetch chunked
> fetch_eof 0 0.00 Fetch EOF
> fetch_bad 0 0.00 Fetch had bad headers
> fetch_close 156333 13.77 Fetch wanted close
> fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed
> fetch_zero 0 0.00 Fetch zero len
> fetch_failed 1 0.00 Fetch failed
> n_sess_mem 3867 . N struct sess_mem
> n_sess 3841 . N struct sess
> n_object 757929 . N struct object
> n_vampireobject 0 . N unresurrected objects
> n_objectcore 758158 . N struct objectcore
> n_objecthead 760076 . N struct objecthead
> n_smf 0 . N struct smf
> n_smf_frag 0 . N small free smf
> n_smf_large 0 . N large free smf
> n_vbe_conn 69 . N struct vbe_conn
> n_wrk 800 . N worker threads
> n_wrk_create 800 0.07 N worker threads created
> n_wrk_failed 0 0.00 N worker threads not created
> n_wrk_max 0 0.00 N worker threads limited
> n_wrk_queue 0 0.00 N queued work requests
> n_wrk_overflow 171 0.02 N overflowed work requests
> n_wrk_drop 0 0.00 N dropped work requests
> n_backend 21 . N backends
> n_expired 1879027 . N expired objects
> n_lru_nuked 0 . N LRU nuked objects
> n_lru_saved 0 . N LRU saved objects
> n_lru_moved 813472 . N LRU moved objects
> n_deathrow 0 . N objects on deathrow
> losthdr 0 0.00 HTTP header overflows
> n_objsendfile 0 0.00 Objects sent with sendfile
> n_objwrite 5195402 457.46 Objects sent with write
> n_objoverflow 0 0.00 Objects overflowing workspace
> s_sess 5020819 442.09 Total Sessions
> s_req 5020366 442.05 Total Requests
> s_pipe 0 0.00 Total pipe
> s_pass 437300 38.50 Total pass
> s_fetch 3101668 273.11 Total fetch
> s_hdrbytes 2575156242 226746.17 Total header bytes
> s_bodybytes 168406835436 14828461.34 Total body bytes
> sess_closed 5020817 442.09 Session Closed
> sess_pipeline 0 0.00 Session Pipeline
> sess_readahead 0 0.00 Session Read Ahead
> sess_linger 0 0.00 Session Linger
> sess_herd 0 0.00 Session herd
> shm_records 430740934 37927.35 SHM records
> shm_writes 29264523 2576.78 SHM writes
> shm_flushes 51411 4.53 SHM flushes due to overflow
> shm_cont 78056 6.87 SHM MTX contention
> shm_cycles 186 0.02 SHM cycles through buffer
> sm_nreq 0 0.00 allocator requests
> sm_nobj 0 . outstanding allocations
> sm_balloc 0 . bytes allocated
> sm_bfree 0 . bytes free
> sma_nreq 5570153 490.46 SMA allocator requests
> sma_nobj 1810351 . SMA outstanding allocations
> sma_nbytes 35306367656 . SMA outstanding bytes
> sma_balloc 154583709612 . SMA bytes allocated
> sma_bfree 119277341956 . SMA bytes free
> sms_nreq 32127 2.83 SMS allocator requests
> sms_nobj 0 . SMS outstanding allocations
> sms_nbytes 0 . SMS outstanding bytes
> sms_balloc 29394423 . SMS bytes allocated
> sms_bfree 29394423 . SMS bytes freed
> backend_req 3101977 273.13 Backend requests made
> n_vcl 1 0.00 N vcl total
> n_vcl_avail 1 0.00 N vcl available
> n_vcl_discard 0 0.00 N vcl discarded
> n_purge 31972 . N total active purges
> n_purge_add 31972 2.82 N new purges added
> n_purge_retire 0 0.00 N old purges deleted
> n_purge_obj_test 3317282 292.09 N objects tested
> n_purge_re_test 530623465 46722.15 N regexps tested against
> n_purge_dups 28047 2.47 N duplicate purges removed
> hcb_nolock 7285152 641.47 HCB Lookups without lock
> hcb_lock 2543066 223.92 HCB Lookups with lock
> hcb_insert 2542964 223.91 HCB Inserts
> esi_parse 741690 65.31 Objects ESI parsed (unlock)
> esi_errors 0 0.00 ESI parse errors (unlock)
> accept_fail 0 0.00 Accept failures
> client_drop_late 0 0.00 Connection dropped late
> uptime 11357 1.00 Client uptime
> backend_retry 99 0.01 Backend conn. retry
> dir_dns_lookups 0 0.00 DNS director lookups
> dir_dns_failed 0 0.00 DNS director failed lookups
> dir_dns_hit 0 0.00 DNS director cached lookups hit
> dir_dns_cache_full 0 0.00 DNS director full dnscache
> fetch_1xx 0 0.00 Fetch no body (1xx)
> fetch_204 0 0.00 Fetch no body (204)
> fetch_304 14 0.00 Fetch no body (304)
>
> On Wed, Jan 30, 2013 at 3:45 PM, Damon Snyder wrote:
>> When you 'varnishadm -T localhost:port param.show session_linger' it
>> indicates at the bottom that "we don't know if this is a good idea... and
>> feedback is welcome."
>>
>> We found that setting session_linger pulled us out of a bind. I wanted to
>> add my feedback to the list in the hope that someone else might benefit
>> from what we experienced.
>>
>> We recently increased the number of esi includes on pages that get ~60-70
>> req/s on our platform. Some of those modules were being rendered with
>> s-maxage set to zero so that they would be refreshed on every page load
>> (this is so we could insert a non-cached partial into the page), which
>> further increased the request load on varnish.
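[Editorial aside] The varnishstat counters quoted above let you cross-check the hit rate Damon mentions ("about 0.65"): it is simply cache_hit divided by the total number of cache lookups. A quick Python sketch using the figures from the dump:

```python
# Figures taken from the "varnishstat -1" output in this thread.
cache_hit = 4597754
cache_miss = 2664516

# Hit rate = hits / (hits + misses); cache_hitpass is excluded here,
# which is one reason this lands slightly below the ~0.65 quoted.
hit_rate = cache_hit / (cache_hit + cache_miss)
print(f"hit rate: {hit_rate:.3f}")  # hit rate: 0.633
```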
>> What we found is that after a few hours the load on a varnish box went
>> from < 1 to > 10 or more and n_wrk_overflow started incrementing. After
>> investigating further we noticed that the context switching went from
>> ~10k/s to > 100k/s. We are running Linux, specifically CentOS.
>>
>> No adjusting of threads or thread pools had any impact on the thrashing.
>> After reading Kristian's post about high-end varnish tuning we decided to
>> try out session_linger. We started by doubling the default from 50 to 100
>> to test the theory ('varnishadm -T localhost:port param.set
>> session_linger 100'). Once we did that we saw a gradual settling of the
>> context switching (using dstat or sar -w) and a stabilizing of the load.
>>
>> It's such a great feature to be able to change this parameter via the
>> admin interface. We have 50GB malloc'ed and some nuking on our boxes, so
>> restarting varnish doesn't come without some impact to the platform.
>>
>> Intuitively, increasing session_linger makes sense. If you have several
>> esi modules rendered within a page and the gap between them is > 50ms,
>> they'll be reallocated elsewhere.
>>
>> What is not clear to me is how we should tune session_linger. We started
>> by setting it to the 3rd quartile of render times for the esi module, taken
>> from a sampling of backend requests. This turned out to be 110ms.
>>
>> Damon
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From damon at huddler-inc.com Thu Feb 14 22:33:53 2013
From: damon at huddler-inc.com (Damon Snyder)
Date: Thu, 14 Feb 2013 14:33:53 -0800
Subject: Speed up build process
In-Reply-To: <511A1DDA.8010406@lamp-solutions.de>
References: <511A1DDA.8010406@lamp-solutions.de>
Message-ID:

If all you are doing is redirects, what about the possibility of putting nginx in front of varnish to do that portion of the request? I would imagine that HUP'ing nginx would be a lot faster for this kind of thing.
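[Editorial aside] Damon's tuning rule above, picking session_linger as the 3rd quartile of sampled ESI render times, is just a percentile computation. A minimal Python sketch; the sample render times below are hypothetical, not Damon's data:

```python
import statistics

# Hypothetical ESI render times in milliseconds, sampled from backend logs.
render_ms = [12, 35, 40, 55, 60, 72, 80, 95, 110, 140, 150, 210]

# statistics.quantiles(n=4) returns the three quartile cut points;
# the last one is the 75th percentile (3rd quartile).
q1, q2, q3 = statistics.quantiles(render_ms, n=4)
print(f"suggested session_linger: {q3} ms")
```

With real data you would feed in a larger sample and round the result to an integer number of milliseconds before passing it to `param.set session_linger`.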
Damon

On Tue, Feb 12, 2013 at 2:47 AM, Tobias Eichelbrönner <
tobias.eichelbroenner at lamp-solutions.de> wrote:

> Hi,
>
> I have got plenty of vcl code, mainly containing redirect information for
> hundreds of domains.
>
> All together there are about 34 MB of vcl text files containing 70000
> rules for redirection. Varnish generates about 158 MB of C code out of
> it, which is compiled into a 90 MB shared object.
>
> My problem is that it takes more than one and a half hours to parse the
> vcl code into C code and then compile it on an Intel Xeon W3530.
>
> Does anyone have an idea how to speed up the process?
> Is there, for example, a possibility to tell the parser to skip any
> optimization steps while generating the C code?
>
> I could store all the information in a database and query it in real
> time, but that would cause heavy IO on the database.
>
> Sincerely,
>
> Tobias
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>

From pprocacci at datapipe.com  Fri Feb 15 02:08:20 2013
From: pprocacci at datapipe.com (Paul A. Procacci)
Date: Thu, 14 Feb 2013 20:08:20 -0600
Subject: Speed up build process
In-Reply-To: 
References: <511A1DDA.8010406@lamp-solutions.de>
Message-ID: <20130215020820.GF79878@nat.myhome>

You could write a vmod for it. Out of curiosity I just wrote a vmod that
enters null-terminated strings into a critbit tree. The strings are
essentially `key=value\0'. I added/compiled 100,000 unique strings in
103.76 seconds on an 800MHz processor.

From there, it's just a little vcl magic, i.e.:

############################
import critbit;

backend test {
    /** Your backend definitions.
    */
}

sub vcl_recv {
    if (req.http.host) {
        set req.http.X-REDIRECT = critbit.get(req.http.host + "=");
        if (req.http.X-REDIRECT) {
            /* Perform redirect stuff */
        }
    }
}
############################

Hope this helps, and if anything, gives you some ideas. If you want my
vmod code, I'll certainly provide it, but I must warn that it only had the
two seconds of testing it took to come up with the concept and isn't very
dev friendly. ;)

~Paul

On Thu, Feb 14, 2013 at 02:33:53PM -0800, Damon Snyder wrote:
> If all you are doing is redirects, what about the possibility of putting
> nginx in front of varnish to do that portion of the request? I would
> imagine that HUP'ing nginx would be a lot faster for this kind of thing.
>
> Damon
>
>
> On Tue, Feb 12, 2013 at 2:47 AM, Tobias Eichelbrönner <
> tobias.eichelbroenner at lamp-solutions.de> wrote:
>
> > Hi,
> >
> > I have got plenty of vcl code, mainly containing redirect information
> > for hundreds of domains.
> >
> > All together there are about 34 MB of vcl text files containing 70000
> > rules for redirection. Varnish generates about 158 MB of C code out of
> > it, which is compiled into a 90 MB shared object.
> >
> > My problem is that it takes more than one and a half hours to parse the
> > vcl code into C code and then compile it on an Intel Xeon W3530.
> >
> > Does anyone have an idea how to speed up the process?
> > Is there, for example, a possibility to tell the parser to skip any
> > optimization steps while generating the C code?
> >
> > I could store all the information in a database and query it in real
> > time, but that would cause heavy IO on the database.
> >
> > Sincerely,
> >
> > Tobias
> >
> > _______________________________________________
> > varnish-misc mailing list
> > varnish-misc at varnish-cache.org
> > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
> >

________________________________
This message may contain confidential or privileged information.
If you are not the intended recipient, please advise us immediately and
delete this message. See http://www.datapipe.com/legal/email_disclaimer/
for further information on confidentiality and the risks of non-secure
electronic communication. If you cannot access these links, please notify
us by reply message and we will send the contents to you.

From tousif1988 at gmail.com  Fri Feb 15 06:49:09 2013
From: tousif1988 at gmail.com (tousif baig)
Date: Fri, 15 Feb 2013 12:19:09 +0530
Subject: Blank Referrer with Varnish
Message-ID: 

Hi,

I was wondering if it's possible to blank referrers with varnish and then
send the request to the server for further processing.

I tried this with req.http.referer and then set req.http.referer in
varnish 2.1 on a centos 32-bit machine, but it didn't work when I checked
the results with the command varnishtop -i TxHeader -I Referer.

Any idea about this, or any other way to blank referrers, would be great.

Regards

From james at ifixit.com  Sat Feb 16 02:57:10 2013
From: james at ifixit.com (James Pearson)
Date: Fri, 15 Feb 2013 18:57:10 -0800
Subject: Blank Referrer with Varnish
In-Reply-To: 
References: 
Message-ID: <1360982990-sup-5162@geror.local>

Excerpts from tousif baig's message of 2013-02-14 22:49:09 -0800:
> I was wondering if it's possible to blank referrers with varnish and then
> send the request to the server for further processing.
>
> I tried this with req.http.referer and then set req.http.referer in
> varnish 2.1 on a centos 32-bit machine, but it didn't work when I checked
> the results with the command varnishtop -i TxHeader -I Referer.

If I was testing a rule like that, I'd do logging inside my backend
application, not Varnish. I believe that if you look at incoming requests,
you'll see the headers before they are modified by your VCL, but I may be
mistaken on that.
- P

From tousif1988 at gmail.com  Sat Feb 16 03:10:46 2013
From: tousif1988 at gmail.com (tousif baig)
Date: Sat, 16 Feb 2013 08:40:46 +0530
Subject: Blank Referrer with Varnish
In-Reply-To: <1360982990-sup-5162@geror.local>
References: <1360982990-sup-5162@geror.local>
Message-ID: 

I think TxHeader means the header being sent to the backend server or
application, so if the VCL is working then it should send the referrer I
want and not what it is receiving. So you think it is possible with this
directive in varnish vcl to change or blank the referrer of the incoming
traffic? Are there any other ways to do it in vcl?

On Sat, Feb 16, 2013 at 8:27 AM, James Pearson wrote:

> Excerpts from tousif baig's message of 2013-02-14 22:49:09 -0800:
> > I was wondering if it's possible to blank referrers with varnish and
> > then send the request to the server for further processing.
> >
> > I tried this with req.http.referer and then set req.http.referer in
> > varnish 2.1 on a centos 32-bit machine, but it didn't work when I
> > checked the results with the command varnishtop -i TxHeader -I Referer.
>
> If I was testing a rule like that, I'd do logging inside my backend
> application, not Varnish. I believe that if you look at incoming
> requests, you'll see the headers before they are modified by your VCL,
> but I may be mistaken on that.
>
> - P
>
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>

From ottolski at web.de  Mon Feb 18 12:30:54 2013
From: ottolski at web.de (Sascha Ottolski)
Date: Mon, 18 Feb 2013 12:30:54 +0000
Subject: How to calculate cache file usage (for Nagios monitoring) in varnish 3?
In-Reply-To: 
References: 
Message-ID: <14504318.9HBkG0OQ61@sottolsi-vaio>

Am Montag, 19.
November 2012, 19:57:39 schrieben Sie:
> Hi there,
>
> I'm wondering how I can monitor the actual usage of the cache "space".
> Previous versions of the Nagios plugin had a "usage" parameter that
> would spit out a fill level as a percentage. It seems to have gone
> away. I tried to roll my own by simply dividing
>
> SMF.s0.c_bytes (Bytes allocated) by SMF.s0.g_space (Bytes available)
>
> However, I misunderstood the semantics; the latter value gives me
> the amount of free space, not the total size of the cache file.
>
> I could get the configured size from the config file
>
> VARNISH_STORAGE_SIZE=512676693.9k
>
> or the commandline that is constructed from it
>
> -s file,/var/cache/varnish/store.bin,512676693.9k
>
> but that feels a bit hacky to me. Any way to get that information from
> the running instance?
>
> Thanks in advance for any pointers
>
> Sascha

Please allow me to try again. I would expect that I'm not the only one
wanting to monitor the cache usage ratio, am I?

Cheers

Sascha

From radecki.rafal at gmail.com  Mon Feb 18 12:37:46 2013
From: radecki.rafal at gmail.com (Rafał Radecki)
Date: Mon, 18 Feb 2013 13:37:46 +0100
Subject: Varnish replication?
Message-ID: 

Hi all.

I have a two-node pacemaker cluster. If the primary node goes down then
the secondary node takes its role.
I use varnish on both nodes. When I switch to the secondary node the
cache needs to be regenerated (as expected). Is there a way to replicate
the varnish cache between two servers? Something like conntrackd for
iptables?

Best regards,
Rafal Radecki.

From daghf at varnish-software.com  Mon Feb 18 13:14:41 2013
From: daghf at varnish-software.com (Dag Haavi Finstad)
Date: Mon, 18 Feb 2013 14:14:41 +0100
Subject: How to calculate cache file usage (for Nagios monitoring) in varnish 3?
In-Reply-To: <14504318.9HBkG0OQ61@sottolsi-vaio>
References: <14504318.9HBkG0OQ61@sottolsi-vaio>
Message-ID: 

Hi,

You can accomplish this by looking at SMF.s0.g_bytes and SMF.s0.g_space.
g_bytes is the amount of memory currently in use, g_space is what's
available. The sum of the two will equal the configured storage size.

To get "fill level" as a percentage, do g_bytes / (g_bytes + g_space).

Regards,
Dag

On Mon, Feb 18, 2013 at 1:30 PM, Sascha Ottolski wrote:

> Am Montag, 19. November 2012, 19:57:39 schrieben Sie:
> > Hi there,
> >
> > I'm wondering how I can monitor the actual usage of the cache "space".
> > Previous versions of the Nagios plugin had a "usage" parameter that
> > would spit out a fill level as a percentage. It seems to have gone
> > away. I tried to roll my own by simply dividing
> >
> > SMF.s0.c_bytes (Bytes allocated) by SMF.s0.g_space (Bytes available)
> >
> > However, I misunderstood the semantics; the latter value gives me
> > the amount of free space, not the total size of the cache file.
> >
> > I could get the configured size from the config file
> >
> > VARNISH_STORAGE_SIZE=512676693.9k
> >
> > or the commandline that is constructed from it
> >
> > -s file,/var/cache/varnish/store.bin,512676693.9k
> >
> > but that feels a bit hacky to me. Any way to get that information from
> > the running instance?
> >
> > Thanks in advance for any pointers
> >
> > Sascha
>
> Please allow me to try again. I would expect that I'm not the only one
> wanting to monitor the cache usage ratio, am I?
>
> Cheers
>
> Sascha
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>

--
Dag Haavi Finstad
Developer | Varnish Software AS
Phone: +47 21 98 92 60

We Make Websites Fly!

From dridi.boukelmoune at zenika.com  Mon Feb 18 13:25:39 2013
From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune)
Date: Mon, 18 Feb 2013 14:25:39 +0100
Subject: Varnish replication?
In-Reply-To: 
References: 
Message-ID: 

Hi,

Does this help?
https://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy

Best Regards,
Dridi

On Mon, Feb 18, 2013 at 1:37 PM, Rafał Radecki wrote:
> Hi all.
>
> I have a two-node pacemaker cluster. If the primary node goes down then
> the secondary node takes its role.
> I use varnish on both nodes. When I switch to the secondary node the
> cache needs to be regenerated (as expected).
> Is there a way to replicate the varnish cache between two servers?
> Something like conntrackd for iptables?
>
> Best regards,
> Rafal Radecki.

I would recommend an active/active setup instead. Put a loadbalancer in
front, and let it distribute the requests to both nodes. If one goes down,
the other is obviously warm. No special "tricks" required, and you make
better use of your hardware when both nodes are fine.

Cheers

Sascha

From radecki.rafal at gmail.com  Mon Feb 18 13:46:54 2013
From: radecki.rafal at gmail.com (Rafał Radecki)
Date: Mon, 18 Feb 2013 14:46:54 +0100
Subject: Varnish replication?
In-Reply-To: 
References: 
Message-ID: 

Interesting, what if one of the servers goes down? Should the backends
be defined in order? For example:
- varnish1 asks varnish2
- if varnish2 does not answer, it asks the backend directly
?

2013/2/18 Dridi Boukelmoune :
> Hi,
>
> Does this help?
> https://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy
>
> Best Regards,
> Dridi
>
> On Mon, Feb 18, 2013 at 1:37 PM, Rafał Radecki wrote:
>> Hi all.
>>
>> I have a two-node pacemaker cluster. If the primary node goes down then
>> the secondary node takes its role.
>> I use varnish on both nodes. When I switch to the secondary node the
>> cache needs to be regenerated (as expected). Is there a way to
>> replicate the varnish cache between two servers? Something like
>> conntrackd for iptables?
>>
>> Best regards,
>> Rafal Radecki.
>>
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc at varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From radecki.rafal at gmail.com  Mon Feb 18 13:49:27 2013
From: radecki.rafal at gmail.com (Rafał Radecki)
Date: Mon, 18 Feb 2013 14:49:27 +0100
Subject: Varnish replication?
In-Reply-To: <3478376.Ijoys3jO6k@sottolsi-vaio>
References: <3478376.Ijoys3jO6k@sottolsi-vaio>
Message-ID: 

I cannot switch from active/passive to active/active at the moment.
2013/2/18 Sascha Ottolski :
> Am Montag, 18. Februar 2013, 13:37:46 schrieb Rafał Radecki:
>> Hi all.
>>
>> I have a two-node pacemaker cluster. If the primary node goes down then
>> the secondary node takes its role.
>> I use varnish on both nodes. When I switch to the secondary node the
>> cache needs to be regenerated (as expected). Is there a way to
>> replicate the varnish cache between two servers? Something like
>> conntrackd for iptables?
>>
>> Best regards,
>> Rafal Radecki.
>
> I would recommend an active/active setup instead. Put a loadbalancer in
> front, and let it distribute the requests to both nodes. If one goes
> down, the other is obviously warm. No special "tricks" required, and you
> make better use of your hardware when both nodes are fine.
>
> Cheers
>
> Sascha
>
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From dridi.boukelmoune at zenika.com  Mon Feb 18 14:18:37 2013
From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune)
Date: Mon, 18 Feb 2013 15:18:37 +0100
Subject: Varnish replication?
In-Reply-To: 
References: 
Message-ID: 

You can use a director to have the other varnish as a fallback of your
real backends, and maybe play with the weights and retries to keep your
varnishes warm.

Best Regards,
Dridi

On Mon, Feb 18, 2013 at 2:46 PM, Rafał Radecki wrote:
> Interesting, what if one of the servers goes down? Should the backends
> be defined in order? For example:
> - varnish1 asks varnish2
> - if varnish2 does not answer, it asks the backend directly
> ?
>
> 2013/2/18 Dridi Boukelmoune :
>> Hi,
>>
>> Does this help?
>> https://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy
>>
>> Best Regards,
>> Dridi
>>
>> On Mon, Feb 18, 2013 at 1:37 PM, Rafał Radecki wrote:
>>> Hi all.
>>>
>>> I have a two-node pacemaker cluster. If the primary node goes down then
>>> the secondary node takes its role.
>>> I use varnish on both nodes.
>>> When I switch to the secondary node the cache needs to be regenerated
>>> (as expected). Is there a way to replicate the varnish cache between
>>> two servers? Something like conntrackd for iptables?
>>>
>>> _______________________________________________
>>> varnish-misc mailing list
>>> varnish-misc at varnish-cache.org
>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From programacao at maxmeio.com  Mon Feb 18 20:10:44 2013
From: programacao at maxmeio.com (Programação-Fabrício)
Date: Mon, 18 Feb 2013 17:10:44 -0300
Subject: Security
Message-ID: 

I was thinking: if I install varnish on a SERVER X and host my files on
another server, SERVER Y, will I have more security? Or does it not make
sense? I have so many problems with malware in WordPress.

client --------> SERVER X [Varnish] -------> SERVER Y [my site]

Thanks

--
Att,
-----------------------------------
Fabrício Costa
Programador
Maxmeio

From james at ifixit.com  Tue Feb 19 01:13:46 2013
From: james at ifixit.com (James Pearson)
Date: Mon, 18 Feb 2013 17:13:46 -0800
Subject: Security
In-Reply-To: 
References: 
Message-ID: <1361236170-sup-5984@geror.local>

Excerpts from Programação-Fabrício's message of 2013-02-18 12:10:44 -0800:
> I was thinking: if I install varnish on a SERVER X and host my files on
> another server, SERVER Y, will I have more security? Or does it not make
> sense? I have so many problems with malware in WordPress.

WordPress has lots of issues because the people writing WordPress plugins
are usually not terribly educated in web security. Varnish, otoh, has very
smart and experienced people working on it. It's also naturally less
vulnerable due to the kinds of things it does - SQL injection and XSS (by
far the two most common web vulnerabilities) just don't apply.
Varnish is only pulling minimal information out of requests, and doesn't
execute them directly or any such nonsense.

Separating out services will always lead to some additional level of
security (after all, someone *could* find a bug in Varnish that leads to
arbitrary code execution), but I wouldn't (and don't) worry about it.

- P

From wxz19861013 at gmail.com  Tue Feb 19 11:44:01 2013
From: wxz19861013 at gmail.com (Xianzhe Wang)
Date: Tue, 19 Feb 2013 19:44:01 +0800
Subject: Has anybody used the "persistent storage" in a production environment?
Message-ID: 

As far as I know, "persistent storage" is experimental. Has anybody used
the "persistent storage" in a production environment? Are there any
matters that need attention?

Thanks for the help.

Shawn

From wxz19861013 at gmail.com  Tue Feb 19 11:44:53 2013
From: wxz19861013 at gmail.com (Xianzhe Wang)
Date: Tue, 19 Feb 2013 19:44:53 +0800
Subject: How to duplicate cached data on all Varnish instance?
Message-ID: 

Here I use nginx as a load balancer and it connects 2 varnish servers.
I want to share the cache objects between the 2 varnishes.
When one varnish is down, the remaining one will keep working, and the
cached objects will still work. Is there anything I can do for this?

I also saw an example something like this:
https://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy
But I think it will increase network delay, so I don't want to do it like
this.

Can someone share their experience? Thanks a lot.

Shawn

From ottolski at web.de  Tue Feb 19 16:58:50 2013
From: ottolski at web.de (Sascha Ottolski)
Date: Tue, 19 Feb 2013 16:58:50 +0000
Subject: How to duplicate cached data on all Varnish instance?
In-Reply-To: 
References: 
Message-ID: <4008738.RadVvLCgW5@sottolsi-vaio>

Am Dienstag, 19.
Februar 2013, 19:44:53 schrieb Xianzhe Wang:
> Here I use nginx as a load balancer and it connects 2 varnish servers.
> I want to share the cache objects between the 2 varnishes.
> When one varnish is down, the remaining one will keep working, and the
> cached objects will still work.
> Is there anything I can do for this?
>
> I also saw an example something like this:
> https://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy
> But I think it will increase network delay, so I don't want to do it
> like this.
>
> Can someone share their experience? Thanks a lot.
>
> Shawn

I would say you already have your solution. If nginx sends the requests
randomly to either of the two servers, each will obviously fill its cache;
so if one goes down, the other is still there. The two caches may not be
completely identical, depending on the size of your cacheable content,
but each should be "warm" enough to serve most requests from its cache.

And you're not limited to two varnish servers, of course. The more you
put into your loadbalanced cluster, the lower the impact if one fails.

Cheers

Sascha

From wxz19861013 at gmail.com  Wed Feb 20 02:55:05 2013
From: wxz19861013 at gmail.com (Xianzhe Wang)
Date: Wed, 20 Feb 2013 10:55:05 +0800
Subject: How to duplicate cached data on all Varnish instance?
In-Reply-To: <4008738.RadVvLCgW5@sottolsi-vaio>
References: <4008738.RadVvLCgW5@sottolsi-vaio>
Message-ID: 

Thank you for the help. What you said is all correct. I think I wasn't
clear. What I want exactly is this: the 2 varnishes share only one cache
file (or cache memory). For any one request, the 2 varnishes share only
one cache object. Something like this:

varnish1 --> cache file <-- varnish2

Do you have any suggestions or experience? Everything will be appreciated.

Regards

Shawn

2013/2/20 Sascha Ottolski

> Am Dienstag, 19. Februar 2013, 19:44:53 schrieb Xianzhe Wang:
> > Here I use nginx as a load balancer and it connects 2 varnish servers.
> > I want to share the cache objects between the 2 varnishes.
> > When one varnish is down, the remaining one will keep working, and the
> > cached objects will still work.
> > Is there anything I can do for this?
> >
> > I also saw an example something like this:
> > https://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy
> > But I think it will increase network delay, so I don't want to do it
> > like this.
> >
> > Can someone share their experience? Thanks a lot.
> >
> > Shawn
>
> I would say you already have your solution. If nginx sends the requests
> randomly to either of the two servers, each will obviously fill its
> cache; so if one goes down, the other is still there. The two caches may
> not be completely identical, depending on the size of your cacheable
> content, but each should be "warm" enough to serve most requests from
> its cache.
>
> And you're not limited to two varnish servers, of course. The more you
> put into your loadbalanced cluster, the lower the impact if one fails.
>
> Cheers
>
> Sascha
>

From wxz19861013 at gmail.com  Wed Feb 20 03:44:20 2013
From: wxz19861013 at gmail.com (Xianzhe Wang)
Date: Wed, 20 Feb 2013 11:44:20 +0800
Subject: Fwd: How to duplicate cached data on all Varnish instance?
In-Reply-To: 
References: <4008738.RadVvLCgW5@sottolsi-vaio>
Message-ID: 

I have seen the same question asked before, and someone answered:
https://www.varnish-cache.org/lists/pipermail/varnish-misc/2012-April/021957.html

"I don't think what you have in mind would work. Varnish requires an
explicit lock on the files it manages. Sharing a cache between Varnish
instances won't ever work.

What I would recommend you do is to hash incoming requests based on URL so
each time the same URL is hit it is served from the same server. That way
you don't duplicate the content between caches. Varnish can do this, F5's
can do it, haproxy should be able to do this as well."

So I don't think what I have in mind would work.
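The URL-hashing scheme recommended in that quoted answer can be sketched in a few lines. This is a hypothetical illustration only: the server names and the choice of SHA-1 are assumptions, not anything from the thread.

```python
import hashlib

# Hypothetical pool of Varnish servers sitting behind the load balancer.
SERVERS = ["varnish1", "varnish2"]

def pick_server(url: str) -> str:
    """Map a URL to one server, so repeated hits on the same URL always
    land on the same cache and content is not duplicated between caches."""
    digest = hashlib.sha1(url.encode("utf-8")).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

# The mapping is deterministic: the same URL always picks the same cache.
assert pick_server("/foo.jpg") == pick_server("/foo.jpg")
```

The same idea is what nginx's consistent-hash balancing or a hash director does for you; the point is only that the mapping is a pure function of the URL.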
---------- Forwarded message ----------
From: Xianzhe Wang
Date: 2013/2/20
Subject: Re: How to duplicate cached data on all Varnish instance?
To: Sascha Ottolski
Cc: Varnish misc

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wxz19861013 at gmail.com  Thu Feb 21 03:21:08 2013
From: wxz19861013 at gmail.com (Xianzhe Wang)
Date: Thu, 21 Feb 2013 11:21:08 +0800
Subject: how to examine the already usage amount of varnish cache memory or file
Message-ID: 

In order to test my varnish server, I want to know: how many bytes have
the total cache objects (alive, not out of date) taken? Is my cache
storage file empty or full? Are there any commands that can show this
information?

Thanks a lot for the help.

Regards,
Shawn

From pprocacci at datapipe.com  Thu Feb 21 04:24:14 2013
From: pprocacci at datapipe.com (Paul A. Procacci)
Date: Wed, 20 Feb 2013 22:24:14 -0600
Subject: how to examine the already usage amount of varnish cache memory or file
In-Reply-To: 
References: 
Message-ID: <20130221042414.GE89154@nat.myhome>

On Thu, Feb 21, 2013 at 11:21:08AM +0800, Xianzhe Wang wrote:
> In order to test my varnish server, I want to know: how many bytes have
> the total cache objects (alive, not out of date) taken? Is my cache
> storage file empty or full? Are there any commands that can show this
> information?

varnishstat(1) is what you are looking for.

~Paul
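Dag's g_bytes / (g_bytes + g_space) formula from the Nagios thread above is easy to script against varnishstat output. A minimal sketch, assuming the counters have already been parsed into a dict; the SMF.s0 prefix and the sample numbers are illustrative, not taken from the thread:

```python
def fill_level(stats, prefix="SMF.s0"):
    """Fill level as a fraction, per g_bytes / (g_bytes + g_space)."""
    g_bytes = stats[prefix + ".g_bytes"]
    g_space = stats[prefix + ".g_space"]
    return g_bytes / (g_bytes + g_space)

# Illustrative counter values, as if parsed from `varnishstat -1` output:
sample = {"SMF.s0.g_bytes": 750_000_000, "SMF.s0.g_space": 250_000_000}
print("cache fill: {:.0%}".format(fill_level(sample)))  # cache fill: 75%
```

For a malloc storage backend the same counters live under an SMA prefix instead of SMF, as Hauke's munin plugin below suggests.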
From wxz19861013 at gmail.com  Thu Feb 21 06:39:51 2013
From: wxz19861013 at gmail.com (Xianzhe Wang)
Date: Thu, 21 Feb 2013 14:39:51 +0800
Subject: how to examine the already usage amount of varnish cache memory or file
In-Reply-To: <20130221042414.GE89154@nat.myhome>
References: <20130221042414.GE89154@nat.myhome>
Message-ID: 

Thank you for the help.

I have used ./varnishstat -1 to show the varnish statistics, but I didn't
see that there. It shows

s_bodybytes 62818 28.53 Total body bytes

but "s_bodybytes is bytes of object body sent to the clients":
https://www.varnish-cache.org/lists/pipermail/varnish-misc/2010-October/019296.html

I am trying to get the bytes of cache objects (alive, not expired) in the
varnish cache, in order to know how much space is idle so I can insert
more cache objects. Could you tell me which field is exactly what I'm
looking for?

Thank you again.

Regards,
Shawn

2013/2/21 Paul A. Procacci

> On Thu, Feb 21, 2013 at 11:21:08AM +0800, Xianzhe Wang wrote:
> > In order to test my varnish server, I want to know: how many bytes have
> > the total cache objects (alive, not out of date) taken? Is my cache
> > storage file empty or full? Are there any commands that can show this
> > information?
>
> varnishstat(1) is what you are looking for.
>
> ~Paul
>

From straightflush at gmail.com  Thu Feb 21 18:55:43 2013
From: straightflush at gmail.com (AD)
Date: Thu, 21 Feb 2013 13:55:43 -0500
Subject: What doesn't varnishncsa log?
Message-ID: 

Hi,

Looking to use varnishncsa for billing purposes, but I read that
varnishncsa won't log pipe() requests or requests where varnish does not
issue an additional backend request. Is this true? Is there a safer way
to log ALL requests if not?

Thanks
-AD

From lampe at hauke-lampe.de  Fri Feb 22 08:43:59 2013
From: lampe at hauke-lampe.de (Hauke Lampe)
Date: Fri, 22 Feb 2013 09:43:59 +0100
Subject: how to examine the already usage amount of varnish cache memory or file
In-Reply-To: 
References: <20130221042414.GE89154@nat.myhome>
Message-ID: <51272FCF.2050705@hauke-lampe.de>

On 21.02.2013 07:39, Xianzhe Wang wrote:
> I am trying to get the bytes of cache objects (alive, not expired) in the
> varnish cache, in order to know how much space is idle so I can insert
> more cache objects.

I graph "SMA.*.g_bytes" (bytes allocated in caches) and "sms_nbytes" with
munin:

https://github.com/lampeh/varnish-munin/blob/master/varnishstat_

Hauke.

From jefe78 at gmail.com  Mon Feb 25 20:08:23 2013
From: jefe78 at gmail.com (Jeffrey Taylor)
Date: Mon, 25 Feb 2013 15:08:23 -0500
Subject: Pass 200 on 404
Message-ID: 

Hi all,

We're using Amazon CloudFront (CF). The issue we've run into is that when
CF gets a 404, it caches that for ~5 minutes. What we want to do is have
Varnish (already part of our stack) pass a 200 to CF on a 404 event, but
only cache that '404' for ~1 minute. That gives us 1 minute to recover
from the 404 event on our stack, so we can keep serving content 1 minute
after, instead of 5 minutes because of CF.

How should I get started on this?

Thanks,
Jeff

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From hugo.cisneiros at gmail.com Mon Feb 25 20:17:49 2013 From: hugo.cisneiros at gmail.com (Hugo Cisneiros (Eitch)) Date: Mon, 25 Feb 2013 17:17:49 -0300 Subject: Pass 200 on 404 In-Reply-To: References: Message-ID: On Mon, Feb 25, 2013 at 5:08 PM, Jeffrey Taylor wrote: > We're using Amazon Cloudfront(CF). The issue we've run into is that when > CF gets a 404, it caches that for ~5 minutes. What we want to do is have > Varnish(already part of our stack) pass a 200 to CF on a 404 event, but > only cache that '404' for ~1 minute. In doing so, we give ourselves 1 > minute to recover from the 404 event on our stack, in doing so, we can keep > serving content 1 minute after, instead of 5 minutes because of CF. > On vcl_fetch, you can test if the response status from backend request is 404 and return a 200 to the client (CloudFrond) using the vcl_error, like this: sub vcl_fetch { if (beresp.status == 404) { set beresp.ttl = 1m; error 200 "Not Found"; } [...] } The default vcl_error should return the correct HTTP status code. But IMHO this isn't a good thing to do with every content from the backend :-) Isn't there a cache TTL setting on CloudFront? []'s Hugo www.devin.com.br -------------- next part -------------- An HTML attachment was scrubbed... URL: From jefe78 at gmail.com Mon Feb 25 20:22:40 2013 From: jefe78 at gmail.com (Jeffrey Taylor) Date: Mon, 25 Feb 2013 15:22:40 -0500 Subject: Pass 200 on 404 In-Reply-To: References: Message-ID: Hey Hugo, We only actually want to do this with our video segments. The reason is, if our streaming box or our ingress box crash, we start generating 404's to CF. The second CF gets a 404, we're screwed for ~5 minutes. That's a lifetime for live video. CF allows TTLs on all types of files/rules, but a 404 is a solid 5 minutes cached by them. I'm open to other ideas! 
Thanks, Jeff On Mon, Feb 25, 2013 at 3:17 PM, Hugo Cisneiros (Eitch) < hugo.cisneiros at gmail.com> wrote: > On Mon, Feb 25, 2013 at 5:08 PM, Jeffrey Taylor wrote: > >> We're using Amazon Cloudfront(CF). The issue we've run into is that when >> CF gets a 404, it caches that for ~5 minutes. What we want to do is have >> Varnish(already part of our stack) pass a 200 to CF on a 404 event, but >> only cache that '404' for ~1 minute. In doing so, we give ourselves 1 >> minute to recover from the 404 event on our stack, in doing so, we can keep >> serving content 1 minute after, instead of 5 minutes because of CF. >> > > On vcl_fetch, you can test if the response status from backend request is > 404 and return a 200 to the client (CloudFrond) using the vcl_error, like > this: > > sub vcl_fetch { > if (beresp.status == 404) { > set beresp.ttl = 1m; > error 200 "Not Found"; > } > > [...] > } > > The default vcl_error should return the correct HTTP status code. > > But IMHO this isn't a good thing to do with every content from the backend > :-) Isn't there a cache TTL setting on CloudFront? > > []'s > Hugo > www.devin.com.br > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hugo.cisneiros at gmail.com Mon Feb 25 20:28:26 2013 From: hugo.cisneiros at gmail.com (Hugo Cisneiros (Eitch)) Date: Mon, 25 Feb 2013 17:28:26 -0300 Subject: Pass 200 on 404 In-Reply-To: References: Message-ID: On Mon, Feb 25, 2013 at 5:22 PM, Jeffrey Taylor wrote: > We only actually want to do this with our video segments. The reason is, > if our streaming box or our ingress box crash, we start generating 404's to > CF. The second CF gets a 404, we're screwed for ~5 minutes. That's a > lifetime for live video. 
> > CF allows TTLs on all types of files/rules, but a 404 is a solid 5 minutes > cached by them. > > I'm open to other ideas! > Ah, if it's only these items, I think it's OK. I thought that replying 200 to everything should confuse crawlers, robots, users, and so on :-) Give the beresp.status and vcl_error a try. I used this a lot to generate temporary redirects (301) instead of 200 or 404. -- []'s Hugo www.devin.com.br -------------- next part -------------- An HTML attachment was scrubbed... URL: From jefe78 at gmail.com Mon Feb 25 20:31:24 2013 From: jefe78 at gmail.com (Jeffrey Taylor) Date: Mon, 25 Feb 2013 15:31:24 -0500 Subject: Pass 200 on 404 In-Reply-To: References: Message-ID: Ya, passing 200's on everything would be fun :) Thanks again Hugo, I'll set that up. Jeff On Mon, Feb 25, 2013 at 3:28 PM, Hugo Cisneiros (Eitch) < hugo.cisneiros at gmail.com> wrote: > On Mon, Feb 25, 2013 at 5:22 PM, Jeffrey Taylor wrote: > >> We only actually want to do this with our video segments. The reason is, >> if our streaming box or our ingress box crash, we start generating 404's to >> CF. The second CF gets a 404, we're screwed for ~5 minutes. That's a >> lifetime for live video. >> >> CF allows TTLs on all types of files/rules, but a 404 is a solid 5 >> minutes cached by them. >> >> I'm open to other ideas! >> > > Ah, if it's only these items, I think it's OK. I thought that replying 200 > to everything should confuse crawlers, robots, users, and so on :-) > > Give the beresp.status and vcl_error a try. I used this a lot to generate > temporary redirects (301) instead of 200 or 404. > > -- > []'s > Hugo > www.devin.com.br > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... 
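Pulling the thread's pieces together: a sketch of the vcl_fetch/vcl_error approach suggested above, restricted to video-segment URLs only (Varnish 3 syntax; the URL pattern is an assumption, and whether the synthesized 200 really honors beresp.ttl should be verified on your Varnish version before relying on it):

```vcl
sub vcl_fetch {
    # Only rewrite 404s for video segments; everything else keeps
    # its real status so crawlers and regular users are not confused.
    if (beresp.status == 404 && req.url ~ "\.(ts|m3u8|mp4)$") {
        set beresp.ttl = 1m;   # give the stack ~1 minute to recover
        error 200 "OK";
    }
}

sub vcl_error {
    if (obj.status == 200) {
        set obj.http.Content-Type = "text/plain";
        synthetic {""};
        return (deliver);
    }
}
```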
URL: From dridi.boukelmoune at zenika.com Mon Feb 25 22:10:03 2013 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Mon, 25 Feb 2013 23:10:03 +0100 Subject: Pass 200 on 404 In-Reply-To: References: Message-ID: Hi, Have you tried the grace mechanism ? It allows an additional TTL when a backend is sick. If you want to mark only specific URLs as sick instead of the whole backend you can play with the saint mode. It will definitely be tricky the way you described your issue. I'd also search CF's documentation to make sure whether caching can be tuned. Best Regards, Dridi Envoy? de mon smartphone Le 25 f?vr. 2013 21:31, "Jeffrey Taylor" a ?crit : > Ya, passing 200's on everything would be fun :) > > Thanks again Hugo, I'll set that up. > > Jeff > > > On Mon, Feb 25, 2013 at 3:28 PM, Hugo Cisneiros (Eitch) < > hugo.cisneiros at gmail.com> wrote: > >> On Mon, Feb 25, 2013 at 5:22 PM, Jeffrey Taylor wrote: >> >>> We only actually want to do this with our video segments. The reason is, >>> if our streaming box or our ingress box crash, we start generating 404's to >>> CF. The second CF gets a 404, we're screwed for ~5 minutes. That's a >>> lifetime for live video. >>> >>> CF allows TTLs on all types of files/rules, but a 404 is a solid 5 >>> minutes cached by them. >>> >>> I'm open to other ideas! >>> >> >> Ah, if it's only these items, I think it's OK. I thought that replying >> 200 to everything should confuse crawlers, robots, users, and so on :-) >> >> Give the beresp.status and vcl_error a try. I used this a lot to generate >> temporary redirects (301) instead of 200 or 404. 
>> >> -- >> []'s >> Hugo >> www.devin.com.br >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From programacao at maxmeio.com Tue Feb 26 16:06:10 2013 From: programacao at maxmeio.com (=?ISO-8859-1?B?UHJvZ3JhbWHn428tRmFicu1jaW8=?=) Date: Tue, 26 Feb 2013 13:06:10 -0300 Subject: Shared hosting Message-ID: It is possible to have a backend on a shared hosting? When I put in the url of the site, it tries to access the server root. backend default { .host = "mydomain.com"; .port = "80"; } mydomain.com is on a shared hosting. -- Att, ----------------------------------- Fabr?cio Costa Programador Maxmeio -------------- next part -------------- An HTML attachment was scrubbed... URL: From kuba at ovh.net Tue Feb 26 16:13:16 2013 From: kuba at ovh.net (Jakub =?utf-8?B?U8WCb2NpxYRza2k=?=) Date: Tue, 26 Feb 2013 17:13:16 +0100 Subject: Varnish replication? In-Reply-To: References: <3478376.Ijoys3jO6k@sottolsi-vaio> Message-ID: <20130226161316.GD16295@kuba> Rafa? Radecki napisa?(a): > I cannot switch from active/passive to active/active at the moment. > Hi Rafa?, in such a situation you may try to use varnishreplay to warm up second node before first one goes down. You may consider using first one as a temporary backend if real backend is far or slow. -- Jakub S?oci?ski > 2013/2/18 Sascha Ottolski : > > Am Montag, 18. Februar 2013, 13:37:46 schrieb Rafa? Radecki: > >> Hi all. > >> > >> I have a two node pacemaker cluster. If primary node goes down then > >> secondary node takes its role. > >> I use varnish on both nodes. 
When I switch to the secondary node the > >> cache needs to be regenerated (as expected). Is there a way to > >> replicate varnish cache between two servers? Something like conntrackd > >> for iptables? > >> > >> Best regards, > >> Rafal Radecki. > > > > I would recommend an active/active setup instead. Put a loadbalancer in > > front, and let it distribute the request to both nodes. If one goes > > down, the other is obviously warm. No special "tricks" required, and you > > make better use of you're hardware when both nodes are fine. > > > > Cheers > > > > Sascha > > > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From hugo.cisneiros at gmail.com Tue Feb 26 17:02:59 2013 From: hugo.cisneiros at gmail.com (Hugo Cisneiros (Eitch)) Date: Tue, 26 Feb 2013 14:02:59 -0300 Subject: Shared hosting In-Reply-To: References: Message-ID: On Tue, Feb 26, 2013 at 1:06 PM, Programa??o-Fabr?cio < programacao at maxmeio.com> wrote: > It is possible to have a backend on a shared hosting? When I put in the > url of the site, it tries to access the server root. > > backend default { > .host = "mydomain.com"; > .port = "80"; > } > > mydomain.com is on a shared hosting. > Yes, it is possible. Instead of the ".url" on the backend configuration, you can use the ".request" to generate a custom request. For example: backend default { .host = "shared-server-ip"; .port = "80"; .request = "GET / HTTP/1.1" "Host: www.my-vhost-domain.com" "Connection: close"; } []'s Hugo www.devin.com.br -------------- next part -------------- An HTML attachment was scrubbed... 
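One caveat on the `.request` shown above: in the 2.x/3.x syntax a hand-written `.request` normally lives inside a `.probe` block, so it only affects health checks, not client traffic. For the actual requests, an alternative sketch is to pin the Host header in VCL so the shared host picks the right vhost instead of the server root (the domain below is a placeholder):

```vcl
backend default {
    .host = "shared-server-ip";
    .port = "80";
}

sub vcl_recv {
    # Force the Host header the shared-hosting vhost expects.
    set req.http.Host = "www.my-vhost-domain.com";
}
```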
URL: From webmaster at serviidb.com Tue Feb 26 23:30:33 2013 From: webmaster at serviidb.com (Stephen Strickland) Date: Tue, 26 Feb 2013 18:30:33 -0500 Subject: Varnish with mod prefork vs mpm worker with mod-fcgid Message-ID: <000001ce1479$46d18e60$d474ab20$@serviidb.com> I running a site with Drupal 7 and with apache2 installed initially I was using mod prefork, I was getting varnish hit rates at least high 80's or middle 90s. I changed to mpm worker with mod_fcgid and now I am now getting an average of 25%. Anyone out there know of anything that could be causing this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nick.tailor at gmail.com Wed Feb 27 00:28:18 2013 From: nick.tailor at gmail.com (nick tailor) Date: Tue, 26 Feb 2013 16:28:18 -0800 Subject: Varnish with mod prefork vs mpm worker with mod-fcgid In-Reply-To: <000001ce1479$46d18e60$d474ab20$@serviidb.com> References: <000001ce1479$46d18e60$d474ab20$@serviidb.com> Message-ID: I believe it has to do with the new processes being spawned. You might need to tweak the fastCGI processes to allow more request per process in one of your confs MaxRequestsPerProcess 1000 MaxProcessCount 5 IPCCommTimeout 600 IdleTimeout 600 Hope that helps Cheers Nick Tailor On Tue, Feb 26, 2013 at 3:30 PM, Stephen Strickland wrote: > I running a site with Drupal 7 and with apache2 installed initially I was > using mod prefork, I was getting varnish hit rates at least high 80?s or > middle 90s. I changed to mpm worker with mod_fcgid and now I am now > getting an average of 25%.**** > > ** ** > > Anyone out there know of anything that could be causing this?**** > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... 
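The hit rates discussed in this thread can be computed from varnishstat counters. A minimal sketch, parsing sample `varnishstat -1` output (the counter values are made up; on a live box, replace the heredoc with a call to `varnishstat -1`):

```shell
# Cache hit ratio = cache_hit / (cache_hit + cache_miss).
# Sample output is inlined so the pipeline is self-contained.
stats=$(cat <<'EOF'
cache_hit                180000        12.00 Cache hits
cache_miss                20000         1.33 Cache misses
EOF
)
hits=$(printf '%s\n' "$stats" | awk '$1 == "cache_hit" {print $2}')
misses=$(printf '%s\n' "$stats" | awk '$1 == "cache_miss" {print $2}')
echo "hit rate: $((100 * hits / (hits + misses)))%"
# -> hit rate: 90%
```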
URL: From smsmail at roadrunner.com Wed Feb 27 00:37:33 2013 From: smsmail at roadrunner.com (Mark Strickland) Date: Tue, 26 Feb 2013 19:37:33 -0500 Subject: Varnish with mod prefork vs mpm worker with mod-fcgid In-Reply-To: References: <000001ce1479$46d18e60$d474ab20$@serviidb.com> Message-ID: <001b01ce1482$a27a9690$e76fc3b0$@roadrunner.com> I have this in m php-fcgid.conf AddHandler fcgid-script .fcgi .php # Where to look for the php.ini file? DefaultInitEnv PHPRC "/etc/php5/cgi" # Maximum requests a process handles before it is terminated MaxRequestsPerProcess 1000 # Maximum number of PHP processes MaxProcessCount 10 # Number of seconds of idle time before a process is terminated IPCCommTimeout 300 IdleTimeout 240 #Or use this if you use the file above FCGIWrapper /usr/bin/php-cgi .php ServerLimit 500 StartServers 3 MinSpareThreads 3 MaxSpareThreads 10 ThreadsPerChild 10 MaxClients 300 MaxRequestsPerChild 1000 From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of nick tailor Sent: Tuesday, February 26, 2013 7:28 PM To: Stephen Strickland Cc: varnish-misc at varnish-cache.org Subject: Re: Varnish with mod prefork vs mpm worker with mod-fcgid I believe it has to do with the new processes being spawned. You might need to tweak the fastCGI processes to allow more request per process in one of your confs MaxRequestsPerProcess 1000 MaxProcessCount 5 IPCCommTimeout 600 IdleTimeout 600 Hope that helps Cheers Nick Tailor On Tue, Feb 26, 2013 at 3:30 PM, Stephen Strickland wrote: I running a site with Drupal 7 and with apache2 installed initially I was using mod prefork, I was getting varnish hit rates at least high 80's or middle 90s. I changed to mpm worker with mod_fcgid and now I am now getting an average of 25%. Anyone out there know of anything that could be causing this? 
_______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc _____ No virus found in this message. Checked by AVG - www.avg.com Version: 2013.0.2899 / Virus Database: 2641/6135 - Release Date: 02/26/13 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nick.tailor at gmail.com Wed Feb 27 00:49:53 2013 From: nick.tailor at gmail.com (nick tailor) Date: Tue, 26 Feb 2013 16:49:53 -0800 Subject: Varnish with mod prefork vs mpm worker with mod-fcgid In-Reply-To: <001b01ce1482$a27a9690$e76fc3b0$@roadrunner.com> References: <000001ce1479$46d18e60$d474ab20$@serviidb.com> <001b01ce1482$a27a9690$e76fc3b0$@roadrunner.com> Message-ID: I have heard of others having a similar issue with the same setup. Generally they use mpm-prefork or mod_fcgid with varnish; I have heard using Nginx with varnish is the way to go. What I would do is disable mod_fcgid and see if it changes. If it does, you know the problem lies in the settings. 
Cheers Nick Tailor On Tue, Feb 26, 2013 at 4:37 PM, Mark Strickland wrote: > I have this in m php-fcgid.conf**** > > ** ** > > AddHandler fcgid-script .fcgi .php**** > > # Where to look for the php.ini file?**** > > DefaultInitEnv PHPRC "/etc/php5/cgi"**** > > # Maximum requests a process handles before it is terminated**** > > MaxRequestsPerProcess 1000**** > > # Maximum number of PHP processes**** > > MaxProcessCount 10**** > > # Number of seconds of idle time before a process is terminated**** > > IPCCommTimeout 300**** > > IdleTimeout 240**** > > #Or use this if you use the file above**** > > FCGIWrapper /usr/bin/php-cgi .php**** > > ** ** > > ** ** > > ** ** > > ServerLimit 500**** > > StartServers 3**** > > MinSpareThreads 3**** > > MaxSpareThreads 10**** > > ThreadsPerChild 10**** > > MaxClients 300**** > > MaxRequestsPerChild 1000**** > > ** ** > > *From:* varnish-misc-bounces at varnish-cache.org [mailto: > varnish-misc-bounces at varnish-cache.org] *On Behalf Of *nick tailor > *Sent:* Tuesday, February 26, 2013 7:28 PM > *To:* Stephen Strickland > *Cc:* varnish-misc at varnish-cache.org > *Subject:* Re: Varnish with mod prefork vs mpm worker with mod-fcgid**** > > ** ** > > I believe it has to do with the new processes being spawned. You might > need to tweak the fastCGI processes to allow more request per process in > one of your confs**** > > ** ** > > ** ** > > MaxRequestsPerProcess 1000**** > > MaxProcessCount 5**** > > IPCCommTimeout 600**** > > IdleTimeout 600**** > > ** ** > > Hope that helps**** > > ** ** > > Cheers**** > > ** ** > > Nick Tailor**** > > ** ** > > On Tue, Feb 26, 2013 at 3:30 PM, Stephen Strickland < > webmaster at serviidb.com> wrote:**** > > I running a site with Drupal 7 and with apache2 installed initially I was > using mod prefork, I was getting varnish hit rates at least high 80?s or > middle 90s. 
I changed to mpm worker with mod_fcgid and now I am now > getting an average of 25%.**** > > **** > > Anyone out there know of anything that could be causing this?**** > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc**** > > ** ** > ------------------------------ > > No virus found in this message. > Checked by AVG - www.avg.com > Version: 2013.0.2899 / Virus Database: 2641/6135 - Release Date: 02/26/13* > *** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From webmaster at serviidb.com Wed Feb 27 00:52:46 2013 From: webmaster at serviidb.com (Stephen Strickland) Date: Tue, 26 Feb 2013 19:52:46 -0500 Subject: Varnish with mod prefork vs mpm worker with mod-fcgid In-Reply-To: References: <000001ce1479$46d18e60$d474ab20$@serviidb.com> <001b01ce1482$a27a9690$e76fc3b0$@roadrunner.com> Message-ID: <003501ce1484$c2c87a00$48596e00$@serviidb.com> When I was using mpm-preform varnish worked great with a high hit rate, but the server kept getting oom errors. From: nick tailor [mailto:nick.tailor at gmail.com] Sent: Tuesday, February 26, 2013 7:50 PM To: Mark Strickland Cc: varnish-misc at varnish-cache.org Subject: Re: Varnish with mod prefork vs mpm worker with mod-fcgid I have heard of others having similar issue with same setup. Generally they use mpm-prefork or mod fcgi with varnish. I have heard using Nginx with varnish is the way to go. What I would do, is disable modfcgi and see if it changes. If it does you know the problems lies in the settings. Cheers Nick Tailor On Tue, Feb 26, 2013 at 4:37 PM, Mark Strickland wrote: I have this in m php-fcgid.conf AddHandler fcgid-script .fcgi .php # Where to look for the php.ini file? 
DefaultInitEnv PHPRC "/etc/php5/cgi" # Maximum requests a process handles before it is terminated MaxRequestsPerProcess 1000 # Maximum number of PHP processes MaxProcessCount 10 # Number of seconds of idle time before a process is terminated IPCCommTimeout 300 IdleTimeout 240 #Or use this if you use the file above FCGIWrapper /usr/bin/php-cgi .php ServerLimit 500 StartServers 3 MinSpareThreads 3 MaxSpareThreads 10 ThreadsPerChild 10 MaxClients 300 MaxRequestsPerChild 1000 From: varnish-misc-bounces at varnish-cache.org [mailto:varnish-misc-bounces at varnish-cache.org] On Behalf Of nick tailor Sent: Tuesday, February 26, 2013 7:28 PM To: Stephen Strickland Cc: varnish-misc at varnish-cache.org Subject: Re: Varnish with mod prefork vs mpm worker with mod-fcgid I believe it has to do with the new processes being spawned. You might need to tweak the fastCGI processes to allow more request per process in one of your confs MaxRequestsPerProcess 1000 MaxProcessCount 5 IPCCommTimeout 600 IdleTimeout 600 Hope that helps Cheers Nick Tailor On Tue, Feb 26, 2013 at 3:30 PM, Stephen Strickland wrote: I running a site with Drupal 7 and with apache2 installed initially I was using mod prefork, I was getting varnish hit rates at least high 80's or middle 90s. I changed to mpm worker with mod_fcgid and now I am now getting an average of 25%. Anyone out there know of anything that could be causing this? _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc _____ No virus found in this message. Checked by AVG - www.avg.com Version: 2013.0.2899 / Virus Database: 2641/6135 - Release Date: 02/26/13 No virus found in this message. Checked by AVG - www.avg.com Version: 2013.0.2899 / Virus Database: 2641/6128 - Release Date: 02/24/13 -------------- next part -------------- An HTML attachment was scrubbed... 
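When the hit rate collapses after a server change like this, it helps to look at what the backend is returning before tuning Apache; a sketch using varnishlog's backend mode (Varnish 3-era options, to be run against the live instance):

```shell
# Backend responses carrying cookies or no-cache directives commonly
# turn cacheable pages into passes / hit_for_pass objects.
varnishlog -b -m "RxHeader:Set-Cookie"
varnishlog -b -m "RxHeader:Cache-Control.*(no-cache|private)"
```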
URL: From varnish at ds.schledermann.net Wed Feb 27 07:51:16 2013 From: varnish at ds.schledermann.net (Daniel Schledermann) Date: Wed, 27 Feb 2013 08:51:16 +0100 Subject: Varnish with mod prefork vs mpm worker with mod-fcgid In-Reply-To: <003501ce1484$c2c87a00$48596e00$@serviidb.com> References: <000001ce1479$46d18e60$d474ab20$@serviidb.com> <001b01ce1482$a27a9690$e76fc3b0$@roadrunner.com> <003501ce1484$c2c87a00$48596e00$@serviidb.com> Message-ID: <512DBAF4.90707@ds.schledermann.net> Den 27-02-2013 01:52, Stephen Strickland skrev: > > When I was using mpm-preform varnish worked great with a high hit > rate, but the server kept getting oom errors. > Yes, mpm_prefork can be pretty memory intensive with modern CMS'es. > *From:*nick tailor [mailto:nick.tailor at gmail.com] > *Sent:* Tuesday, February 26, 2013 7:50 PM > *To:* Mark Strickland > *Cc:* varnish-misc at varnish-cache.org > *Subject:* Re: Varnish with mod prefork vs mpm worker with mod-fcgid > > I have heard of others having similar issue with same setup. > > Generally they use mpm-prefork or mod fcgi with varnish. I have heard > using Nginx with varnish is the way to go. > > What I would do, is disable modfcgi and see if it changes. If it does > you know the problems lies in the settings. > > It sounds like you should sanitize the output headers from Apache. You might have a high number of hit_for_pass. That is the only reasonable way that the low level server setup should be able to influence caching performance. But a better advide might be to use both Apache with mpm_prefork and NGINX on the site. Configure varnish to split the traffic and use NGINX for static files and Apache mpm_prefork for PHP requests only. That way you can configure the prefork with real conservative settings to only have a limited number of apache-processes, and maybe set MaxRequestsPerChild to avoid excessive ballooning of PHP memory. The majority of the request will go to NGINX, which do not use much memory in any case. 
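The static/PHP split described above can be sketched in VCL (Varnish 2/3-era syntax; the backend names, ports and the file pattern are assumptions):

```vcl
backend nginx_static {
    .host = "127.0.0.1";
    .port = "8081";
}

backend apache_php {
    .host = "127.0.0.1";
    .port = "8082";
}

sub vcl_recv {
    # Static assets go to NGINX; everything else (PHP) to Apache.
    if (req.url ~ "\.(css|js|png|jpe?g|gif|ico)(\?.*)?$") {
        set req.backend = nginx_static;
    } else {
        set req.backend = apache_php;
    }
}
```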
That way you can keep maximum compatibility with PHP-code and at the same time avoid oom problems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi.boukelmoune at zenika.com Wed Feb 27 10:08:39 2013 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Wed, 27 Feb 2013 11:08:39 +0100 Subject: Varnish with mod prefork vs mpm worker with mod-fcgid In-Reply-To: <512DBAF4.90707@ds.schledermann.net> References: <000001ce1479$46d18e60$d474ab20$@serviidb.com> <001b01ce1482$a27a9690$e76fc3b0$@roadrunner.com> <003501ce1484$c2c87a00$48596e00$@serviidb.com> <512DBAF4.90707@ds.schledermann.net> Message-ID: Hi, I have no idea whether this could help but I had trouble with CGI in the past because of headers being renamed or rewritten. An RFC sample: The header data may be presented as sent by the client, or may be rewritten in ways which do not change its semantics. If multiple headers with the same field-name are received then they must be rewritten as a single header having the same semantics. http://tools.ietf.org/html/draft-robinson-www-interface-00 So basically, I had trouble with headers not being seen because they were renamed, and thus ignored by the application which changed its behavior. Best Regards, Dridi On Wed, Feb 27, 2013 at 8:51 AM, Daniel Schledermann wrote: > Den 27-02-2013 01:52, Stephen Strickland skrev: > > When I was using mpm-preform varnish worked great with a high hit rate, but > the server kept getting oom errors. > > > Yes, mpm_prefork can be pretty memory intensive with modern CMS'es. > > > > > From: nick tailor [mailto:nick.tailor at gmail.com] > Sent: Tuesday, February 26, 2013 7:50 PM > To: Mark Strickland > Cc: varnish-misc at varnish-cache.org > Subject: Re: Varnish with mod prefork vs mpm worker with mod-fcgid > > > > I have heard of others having similar issue with same setup. > > > > Generally they use mpm-prefork or mod fcgi with varnish. 
I have heard using > Nginx with varnish is the way to go. > > > > What I would do, is disable modfcgi and see if it changes. If it does you > know the problems lies in the settings. > > > > > > It sounds like you should sanitize the output headers from Apache. You might > have a high number of hit_for_pass. That is the only reasonable way that the > low level server setup should be able to influence caching performance. > > But a better advide might be to use both Apache with mpm_prefork and NGINX > on the site. Configure varnish to split the traffic and use NGINX for static > files and Apache mpm_prefork for PHP requests only. That way you can > configure the prefork with real conservative settings to only have a limited > number of apache-processes, and maybe set MaxRequestsPerChild to avoid > excessive ballooning of PHP memory. The majority of the request will go to > NGINX, which do not use much memory in any case. That way you can keep > maximum compatibility with PHP-code and at the same time avoid oom problems. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From anands at rediff.co.in Wed Feb 27 13:44:00 2013 From: anands at rediff.co.in (Anand) Date: 27 Feb 2013 13:44:00 -0000 Subject: =?utf-8?B?MjAgQnl0ZSBzZXJ2ZWQ=?= Message-ID: <20130227134400.32504.qmail@pro237-208.mxout.rediffmailpro.com> Hi, Recently a weird problem surfaced while serving content from varnish. A 20-byte response is seen in the varnish logs whereas no genuine Apache request log is seen on the origin server. Our suspicion is that we receive a first 20-byte response from the origin servers and then the connection gets closed, but that response is cached as 200 OK, and thereafter all requests fail because of the incorrectly cached content. Below is a trace of such an event; can someone go through it to help me understand the cause?
136 SessionOpen  c 10.50.252.5 4319 :80
136 ReqStart     c 10.50.252.5 4319 1560877350
136 RxRequest    c GET
136 RxURL        c /vi/ent/gi60s-4-/230987
136 RxProtocol   c HTTP/1.1
136 RxHeader     c Host: rest.anand.com
136 RxHeader     c Pragma: no-cache
136 RxHeader     c Accept: */*
136 RxHeader     c Accept-Encoding: deflate, gzip
136 RxHeader     c Referer: http://world.anand.com
136 RxHeader     c Connection: Close
136 RxHeader     c Cipher-Suite:
136 RxHeader     c SSL-Version: 0
136 RxHeader     c Cipher-Bits: 0
136 VCL_call     c recv
136 VCL_acl      c MATCH purge 10.50.0.0/16
136 VCL_return   c lookup
136 VCL_call     c hash
136 Hash         c /vi/ent/gi60s-4-/230987
136 Hash         c rest.is.anand.com
136 VCL_return   c hash
136 VCL_call     c miss fetch
136 Backend      c 42 isharerest isharerest[1]
136 TTL          c 1560877350 RFC 120 -1 -1 1360686521 0 1360686464 0 0
136 VCL_call     c fetch
136 TTL          c 1560877350 VCL 2592000 -1 -1 1360686521 -0
136 VCL_return   c deliver
136 ObjProtocol  c HTTP/1.1
136 ObjResponse  c OK
136 ObjHeader    c Date: Tue, 12 Feb 2013 16:27:44 GMT
136 ObjHeader    c Server: Apache
136 ObjHeader    c X-Powered-By: PHP/5.1.4
136 ObjHeader    c Content-Encoding: gzip
136 ObjHeader    c Last-Modified: Tue, 12 Feb 2013 16:28:40 GMT
136 ObjHeader    c cache-control: max-age=2592000
136 ObjHeader    c Content-Type: text/html; charset=utf-8
136 Gzip         c u F - 20 0 80 80 90
136 VCL_call     c deliver deliver
136 TxProtocol   c HTTP/1.1
136 TxStatus     c 200
136 TxResponse   c OK
136 TxHeader     c Content-Encoding: gzip
136 TxHeader     c Last-Modified: Tue, 12 Feb 2013 16:28:40 GMT
136 TxHeader     c cache-control: max-age=2592000
136 TxHeader     c Content-Type: text/html; charset=utf-8
136 TxHeader     c Content-Length: 20
136 TxHeader     c Accept-Ranges: bytes
136 TxHeader     c Date: Tue, 12 Feb 2013 16:28:40 GMT
136 TxHeader     c Connection: close
136 TxHeader     c X-Served-By: cdnserver
136 TxHeader     c Server: Rediff/3.0.2
136 TxHeader     c X-Cache: TCP_MISS
136 Length       c 20
136 ReqEnd       c 1560877350 1360686520.826237917 1360686520.841027975 0.000053883 0.004394054 0.010396004
Regards, Anand Shah A pessimist sees the difficulty in every opportunity; an optimist sees the opportunity in every difficulty. -------------- next part -------------- An HTML attachment was scrubbed... URL: From oddur at ccpgames.com Thu Feb 28 11:56:26 2013 From: oddur at ccpgames.com (=?iso-8859-1?Q?Oddur_Sn=E6r_Magn=FAsson?=) Date: Thu, 28 Feb 2013 11:56:26 +0000 Subject: Initial streaming request blocking other requests to same url Message-ID: I have a scenario where I'm caching really big objects (multiple gigabytes) and delivering them using HTTP streaming with varnish-3.0.3. When 2 clients make a request for an object, it successfully streams to the client that made the connection first, but the second client hangs until the object is fully delivered. From what I gather, the object is put in "busy" state and the second request blocks on that, rather than collapsing the requests. The 3.0 official documentation (https://www.varnish-cache.org/docs/3.0/reference/vcl.html) says: "As of Varnish Cache 3.0 the object will marked as busy as it is delivered so only client can access the object." Best regards, Oddur Snær Magnússon Service Infrastructure Tech Lead | Reykjavík | Iceland :wq From r at roze.lv Thu Feb 28 12:37:34 2013 From: r at roze.lv (Reinis Rozitis) Date: Thu, 28 Feb 2013 14:37:34 +0200 Subject: Initial streaming request blocking other requests to same url In-Reply-To: References: Message-ID: <8C49830EFED6449F8FF43261A3E61552@MasterPC> > The 3.0 official > documentation(https://www.varnish-cache.org/docs/3.0/reference/vcl.html) > says: > "As of Varnish Cache 3.0 the object will marked as busy as it is delivered > so only client can access the object." 
Try the streaming fork: http://repo.varnish-cache.org/source/varnish-3.0.2-streaming.tar.gz or https://github.com/mbgrydeland/varnish-cache-streaming/ iirc the git version had some extra fixes since 3.0.2 release. And for the particular objects in vcl_fetch set set beresp.do_stream = true; For example: sub vcl_fetch { # whatever else if (req.url ~ "\.mp4") { set beresp.do_stream = true; } return (deliver); } p.s. if this works want ISKs in Eve :) rr From oddur at ccpgames.com Thu Feb 28 13:58:39 2013 From: oddur at ccpgames.com (=?utf-8?B?T2RkdXIgU27DpnIgTWFnbsO6c3Nvbg==?=) Date: Thu, 28 Feb 2013 13:58:39 +0000 Subject: Initial streaming request blocking other requests to same url In-Reply-To: <8C49830EFED6449F8FF43261A3E61552@MasterPC> References: <8C49830EFED6449F8FF43261A3E61552@MasterPC> Message-ID: Reverting to 3.0.2streaming fixed this. I had previously used version and I thought it had been rolled into 3.0.3, but apparently not. Are there any plans to integrate these streaming fixes into the trunk release ? - Oddur -----Original Message----- From: Reinis Rozitis [mailto:r at roze.lv] Sent: Thursday, February 28, 2013 12:38 PM To: varnish-misc at varnish-cache.org Cc: Oddur Sn?r Magn?sson Subject: Re: Initial streaming request blocking other requests to same url > The 3.0 official > documentation(https://www.varnish-cache.org/docs/3.0/reference/vcl.htm > l) > says: > "As of Varnish Cache 3.0 the object will marked as busy as it is > delivered so only client can access the object." Try the streaming fork: http://repo.varnish-cache.org/source/varnish-3.0.2-streaming.tar.gz or https://github.com/mbgrydeland/varnish-cache-streaming/ iirc the git version had some extra fixes since 3.0.2 release. And for the particular objects in vcl_fetch set set beresp.do_stream = true; For example: sub vcl_fetch { # whatever else if (req.url ~ "\.mp4") { set beresp.do_stream = true; } return (deliver); } p.s. 
if this works I want ISKs in Eve :)

rr

From dheianevans at gmail.com Thu Feb 28 23:33:23 2013
From: dheianevans at gmail.com (Ian Evans)
Date: Thu, 28 Feb 2013 18:33:23 -0500
Subject: Rewriting served URLs dependent on user agent
Message-ID:

I've been looking at this site's discussion of how they're handling the traffic loss caused by Google's redesign of their image search:

http://pixabay.com/en/blog/posts/hotlinking-protection-and-watermarking-for-google-32/

One of the ways they handle it is by serving img URLs that end in ?i to humans, while robots like Googlebot just see the img URL in the source without the ?i.

They said their solution doesn't scale too well, and I began wondering whether Varnish could be used in the process. I've just started reading about Varnish and VCL, but from the articles I've read, it sounded like it might be a great fit.

Let's say the backend serves all the img src URLs already ending in "?i", as in:

img src="http://www.example.com/test.jpg?i"

Is there a way that Varnish could cache two versions of the page? One where human visitors get the cached page with the ?i, and one where robot user agents get a cached version from which Varnish has stripped all the ?i suffixes.

Is that possible? Thanks for any pointers.

From squeeks619 at gmail.com Sun Feb 3 21:17:49 2013
From: squeeks619 at gmail.com (Piper Sponaas)
Date: Sun, 03 Feb 2013 21:17:49 -0000
Subject: Error 413 Request Entity Too Large
Message-ID:

How do I get rid of it? It's ruining my life!
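Back to Ian's user-agent question above: Varnish itself cannot rewrite the ?i markers inside cached HTML bodies (VCL only sees headers and the URL), but it can keep two cached variants of the same page, keyed on a normalized bot/human flag, and let the backend emit the right markup for each. A minimal Varnish 3 sketch; the X-UA-Class header name and the bot pattern are illustrative assumptions, not anything from the original posts:

```vcl
sub vcl_recv {
    # Collapse all user agents into exactly two cache classes so the
    # cache is not fragmented per browser string. Header name and bot
    # regex are assumptions -- adjust to taste.
    if (req.http.User-Agent ~ "(?i)googlebot|bingbot|slurp") {
        set req.http.X-UA-Class = "bot";
    } else {
        set req.http.X-UA-Class = "human";
    }
}

sub vcl_hash {
    hash_data(req.url);
    hash_data(req.http.host);
    # Two cached variants per URL: one for bots, one for humans.
    hash_data(req.http.X-UA-Class);
    return (hash);
}
```

The backend would then inspect X-UA-Class and render the img URLs with or without the ?i suffix; Varnish serves each class its own cached copy of the page.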
From thomas.thrainer at jambit.com Mon Feb 4 09:49:22 2013
From: thomas.thrainer at jambit.com (Thomas Thrainer)
Date: Mon, 04 Feb 2013 09:49:22 -0000
Subject: Expect-Header is not passed to backend
Message-ID: <636217675D4F0B4591F132971A1EB3262FB7CCFA@exc1.jambit.com>

Hi,

I'm using Varnish 3.0.2 and have some trouble with the HTTP Expect header. My backend processes PUT requests that carry an Expect: 100-Continue header in a special way: instead of sending the 100 Continue, it replies with a redirect to the actual storage location for that specific PUT request.

Varnish, however, sends the 100 Continue automatically (in cache_center.c:1526, as far as I can see). It even removes the header from the request, so I can't check for it in VCL and switch to pipe mode in that case. The effect is that the backend consumes (and ignores) all the data sent by the client, then generates the redirect, which causes the client to upload the data a second time.

Is there any way to prevent Varnish from sending a 100 Continue on its own? Or could I find out in VCL whether such a continue was already sent?

Thanks,
Thomas

________________________________
Dipl.-Ing. Thomas Thrainer, Senior Software Architect
Phone: +49.89.45 23 47 - 331
Fingerprint: DD1D 0EA2 FD83 EA0C F135 3C01 BD00 43B9 D5FB 8DDD

jambit GmbH
Erika-Mann-Str. 63, 80636 München
Phone: +49.89.45 23 47-0
Fax: +49.89.45 23 47-70
http://www.jambit.com
where innovation works

Geschäftsführer: Peter F. Fellinger, Markus Hartinger
Sitz: München; Registergericht: München, HRB 129139

From charbonnier at crealya.com Sun Feb 10 14:36:09 2013
From: charbonnier at crealya.com (Tristan CHARBONNIER @ Crealya)
Date: Sun, 10 Feb 2013 14:36:09 -0000
Subject: Varnish 2.1 - Error 503 during Logrotate
Message-ID: <01d601ce079b$f3218a70$d9649f50$@crealya.com>

Hello,

First, I want to thank everyone who is contributing to this great piece of software!
Indeed, it's working very well and it's very efficient. I've just run into a small problem.

I use varnish/squeeze v2.1.3-8 on a Debian Linux 6.0 web server with kernel v2.6.32-5-amd64. I use Virtualmin to manage this web server, and I set up Varnish on top of apache2/squeeze v2.2.16-6+squeeze10.

During the logrotate process, Varnish gives 503 errors. I traced the problem to the multiple Apache graceful restarts that happen during this process. I host plenty of websites (more than 100) on my web server and, for each one, I have a block like this in my '/etc/logrotate.conf' file:

/var/log/virtualmin/mywebsite.fr_access_log /var/log/virtualmin/mywebsite.fr_error_log {
    rotate 5
    weekly
    compress
    postrotate
        /usr/sbin/apache2ctl graceful ; sleep 5
    endscript
}

So, during the logrotate process (which lasts about 20 minutes), there are 100+ graceful restarts of Apache. A graceful restart is not supposed to terminate existing connections, so I don't understand why Varnish fails to work: probably a mistake on my side!

In my VCL file, I just have a "backend apache { .host = "88.190.19.00"; .port = "8080"; }" block along with the usual ones (acl purge, vcl_fetch, etc.). I use the default Varnish configuration file, except that I raised the memory to 4G and use RAM to store the cache.

Any clue on how I can solve this small problem?

Thanks in advance,
Tristan CHARBONNIER

From jain.rahul2490 at gmail.com Mon Feb 25 16:27:37 2013
From: jain.rahul2490 at gmail.com (rahul jain)
Date: Mon, 25 Feb 2013 16:27:37 -0000
Subject: Problem in varnish
Message-ID:

Hi,

I'm having trouble figuring out how to send a PURGE request over HTTP, and how to match it with if (req.request == "PURGE"). What is PURGE, and where is the PURGE variable set? Please help me out.

Thanks & regards,
Rahul Jain
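On Rahul's PURGE question just above: PURGE is not a standard HTTP method or a Varnish variable — it is simply a request method name you choose yourself and then recognize in VCL. The usual Varnish 3 pattern looks roughly like this (a sketch; the acl entries are placeholders for your own admin addresses):

```vcl
# Who may purge; replace with your own admin addresses.
acl purge {
    "localhost";
    "127.0.0.1";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}
```

You send the request with any HTTP client that lets you set the method, e.g. curl -X PURGE http://yourhost/some/path.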
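And on Tristan's logrotate question further above: one way to cut the 100+ graceful restarts down to a single one is to collapse the per-site stanzas into one glob stanza and use logrotate's sharedscripts directive, which runs postrotate once for all matching logs instead of once per log. A sketch, assuming the Virtualmin log naming shown in the original config:

```
/var/log/virtualmin/*_access_log /var/log/virtualmin/*_error_log {
    rotate 5
    weekly
    compress
    sharedscripts
    postrotate
        /usr/sbin/apache2ctl graceful ; sleep 5
    endscript
}
```

Whether Virtualmin tolerates its generated stanzas being replaced by a hand-written one is a separate question, but one restart per week is far gentler on Varnish than one hundred.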
From nlbessay at yahoo.fr Tue Feb 5 10:29:11 2013
From: nlbessay at yahoo.fr (nicée lenny bessay)
Date: Tue, 05 Feb 2013 10:29:11 -0000
Subject: url rewrite - varnish
Message-ID: <1360060143.25270.YahooMailNeo@web171505.mail.ir2.yahoo.com>

Hi Sir,

I'm having a problem rewriting the request from Varnish to query the backend. Here is what Varnish receives:

http://cmsfoqua:8180/vgnExtTemplating/stresource?SecurityKey=lVPNsLcf&SiteName=axabanque&ServiceName=DetailLAB&Language=fr&ResourceName=Entete&TTL=3600&CIBLAGE=

And I would like Varnish to rewrite the URL like this (http://cmsfoqua/xml/DetailLABEntete.xml), then call the backend. Here is my rule in the vcl_recv subroutine:

set req.url = regsub(req.url, "^/vgnExtTemplating/stresource.([&?]SecurityKey=([a-zA-Z0-9])|[&?]SiteName=([a-zA-Z])|[&?]ServiceName=([a-zA-Z])|[&?]Language=([a-zA-Z])|[&?]ResourceName=([a-zA-Z])|[&?]TTL=([0-9])|[&?]CIBLAGE=([a-zA-Z]))*", "/xml/\1\2\4.xml");

But what the backend actually gets is: /xml/.xml

Please, help!

Regards,
Lenny BESSAY
tel: 0665454628

From jnerin at gmail.com Tue Feb 26 13:48:48 2013
From: jnerin at gmail.com (Jorge)
Date: Tue, 26 Feb 2013 13:48:48 -0000
Subject: [PATCH] varnish_ munin plugin
Message-ID:

Hello,

The varnish_ munin plugin included with the official munin package and hosted at http://munin-monitoring.org/browser/munin/plugins/node.d/varnish_.in says this list is the official way to send patches. We have been using a modified varnish_ plugin for some time, and I have finally brought our code in sync with the current plugin in order to send our modifications upstream.

Attached is a patch that adds a few values to some graphs:

* Add client_drop to the request_rate graph.
* Add s_pass and s_pipe to the hit_rate graph (the values now sum much closer to 100%, as we have ~3% of s_pass that wasn't shown previously).
* Add backend_toolate and backend_retry to the backend_traffic graph.
* Add n_wrk_lqueue and n_wrk_queued to the threads graph.
* Add Transient memory to the memory_usage graph.

Our old plugin also has a graph that I'm not sure how to reproduce now, and I think it's useful. It shows the average object size, measured as:

'rpn' => [ 'SMF_s0_g_bytes', 'SMA_s0_g_bytes', '+', 'n_object', '/' ]

We sum the values of the two storages because at one point we switched from the file storage to the malloc storage and lost the graphs; usually one will be zero and the other will hold the memory currently in use. I know it's not the exact average object size, as there are more things stored, but at least for our cache contents it seems a valuable metric — we are in the 15-20k range. In the current plugin version, though, I'm not sure how to replicate this graph.

Regards.
--
Jorge Nerín

-------------- next part --------------
A non-text attachment was scrubbed...
Name: varnish_.in.patch
Type: application/octet-stream
Size: 2114 bytes
Desc: not available
URL:
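Going back to Lenny's URL-rewrite question earlier in this digest: the rule there captures almost nothing because each class like ([a-zA-Z]) matches a single character and a repeated alternation group only retains its last match. A hedged alternative is to pull each needed parameter out with its own regsub call, assuming the parameter values never contain '&':

```vcl
sub vcl_recv {
    if (req.url ~ "^/vgnExtTemplating/stresource\?") {
        # ([^&]*) grabs the whole parameter value up to the next '&'.
        set req.url = "/xml/"
            + regsub(req.url, ".*[?&]ServiceName=([^&]*).*", "\1")
            + regsub(req.url, ".*[?&]ResourceName=([^&]*).*", "\1")
            + ".xml";
    }
}
```

For the example request this would yield /xml/DetailLABEntete.xml regardless of the order in which the query parameters appear.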
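Jorge's rpn expression above can also be computed outside munin, straight from varnishstat output: sum the file- and malloc-storage byte counters and divide by the object count. A rough shell equivalent; the counter names follow Varnish 3's varnishstat -1 naming, and the sample numbers below are invented purely for illustration:

```shell
# Fake varnishstat -1 excerpt (a real deployment would pipe
# `varnishstat -1` in directly); the numbers are made up.
cat > /tmp/vstat.txt <<'EOF'
SMA.s0.g_bytes            1048576000          .   Bytes outstanding
SMF.s0.g_bytes                     0          .   Bytes outstanding
n_object                       65536          .   N struct object
EOF

# Average object size = (file bytes + malloc bytes) / object count.
awk '/^SM[AF]\.s0\.g_bytes/ { bytes += $2 }
     /^n_object /           { objects = $2 }
     END                    { print int(bytes / objects) }' /tmp/vstat.txt
# prints 16000
```

As Jorge notes, this overstates nothing dramatic but is only approximate, since the storages also hold things other than object bodies.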