From jpotter-varnish at codepuppy.com Wed Apr 2 15:53:44 2014 From: jpotter-varnish at codepuppy.com (Jeff Potter) Date: Wed, 2 Apr 2014 11:53:44 -0400 Subject: vcl_error and first_byte_timeout? Message-ID: Hi List, Is there a way to figure out from inside vcl_error if the error was raised because of a backend timeout? I'm trying to return a synthetically generated doc to clients that lets us see that the error was due to a backend taking too long to reply, so that the client can react differently in this case (as opposed to other errors). Thanks! -Jeff From raymond.jennings at nytimes.com Wed Apr 2 16:06:29 2014 From: raymond.jennings at nytimes.com (Jennings, Raymond) Date: Wed, 2 Apr 2014 12:06:29 -0400 Subject: vcl_error and first_byte_timeout? In-Reply-To: References: Message-ID: That would be awesome to have if it does not exist. I would even settle for something making its way to the varnishncsa log file via something like a %{Varnish... parameter. On Wed, Apr 2, 2014 at 11:53 AM, Jeff Potter wrote: > > Hi List, > > Is there a way to figure out from inside vcl_error if the error was raised > because of a backend timeout? > > I'm trying to return a synthetically generated doc to clients that lets us > see that the error was due to a backend taking too long to reply, so that > the client can react differently in this case (as opposed to other errors). > > Thanks! > > -Jeff > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From puneet.arora at insticator.com Thu Apr 3 16:46:12 2014 From: puneet.arora at insticator.com (Puneet Arora) Date: Thu, 3 Apr 2014 12:46:12 -0400 Subject: 400 error returned by varnish Message-ID: Hi All, I have this crazy issue that happens sometimes. 
Varnish returns a 400 error response and does not send the request to the backend. I am using varnish for caching static elements along with the security module of varnish security.vcl. It has a LoadBalancer (haproxy) in front of it which also serves SSL termination. Firstly, the error happens sometimes (I am not able to figure out any pattern). And once it happens I have to clear my browser cache to get rid of it, and after that it works as usual. Secondly, there is a LostHeader c Cookie: entry in the logs. But I am unable to figure out what is the issue and how to resolve it.

0 CLI - Rd ping
0 CLI - Wr 200 19 PONG 1396541072 1.0
11 SessionOpen c 10.151.90.119 37020 :3002
11 ReqStart c 10.151.90.119 37020 1943004300
11 RxRequest c GET
11 RxURL c /
11 RxProtocol c HTTP/1.1
11 RxHeader c Host: mywebsite.com
11 RxHeader c User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:28.0) Gecko/20100101 Firefox/28.0
11 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
11 RxHeader c Accept-Language: en-US,en;q=0.5
11 RxHeader c Accept-Encoding: gzip, deflate
11 LostHeader c Cookie: optimizelySegments=%7B%22536291792%22%3A%22direct%22%2C%22545421312%22%3A%22ff%22%2C%22549961054%22%3A%22false%22%7D; optimizelyEndUserId=oeu1395411026003r0.796982480060219; optimizelyBuckets=%7B%7D; __zlcmid=NydchMMELhVZZz; gs_u_GSN-985801-P=133
11 HttpGarbage c GET
11 VCL_call c error deliver
11 VCL_call c deliver deliver
11 TxProtocol c HTTP/1.1
11 TxStatus c 400
11 TxResponse c Bad Request
11 TxHeader c Accept-Ranges: bytes
11 TxHeader c Age: 0
11 TxHeader c Date: Thu, 03 Apr 2014 16:04:34 GMT
11 TxHeader c Content-Length: 409
11 TxHeader c Content-Type: text/html; charset=utf-8
11 TxHeader c X-Cache: MISS
11 Length c 409
11 ReqEnd c 1943004300 1396541074.854442120 1396541074.854650736 0.000062227 0.000128031 0.000080585
11 SessionClose c error
11 StatSess c 10.151.90.119 37020 0 1 1 0 0 0 171 409
0 CLI - Rd ping
0 CLI - Wr 200 19 PONG 1396541075 1.0

Looking for any help to 
figure out the issue and suggestions to resolve it. Thanks *--* *Puneet Arora, Lead Developer* ______________________________________________________________ *Insticator featured as the top Super Bowl companion app! - CNET * *Insticator is the #1 gamification tool the competition has missed! - Business 2 Community* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpotter-varnish at codepuppy.com Fri Apr 4 13:40:00 2014 From: jpotter-varnish at codepuppy.com (Jeff Potter) Date: Fri, 4 Apr 2014 09:40:00 -0400 Subject: vcl_error and first_byte_timeout? In-Reply-To: References: Message-ID: Hi List, Just checking if anyone has any ideas ... the list has been awfully quiet in general! Thanks, Jeff On Apr 2, 2014, at 11:53 AM, Jeff Potter wrote: > Hi List, > > Is there a way to figure out from inside vcl_error if the error was raised because of a backend timeout? > > I'm trying to return a synthetically generated doc to clients that lets us see that the error was due to a backend taking too long to reply, so that the client can react differently in this case (as opposed to other errors). > > Thanks! > > -Jeff From perbu at varnish-software.com Fri Apr 4 16:29:55 2014 From: perbu at varnish-software.com (Per Buer) Date: Fri, 4 Apr 2014 18:29:55 +0200 Subject: vcl_error and first_byte_timeout? In-Reply-To: References: Message-ID: I think the answer is no. Varnishlog has some info, but not something you can pump into varnishncsa. There are, of course, a number of ideas floating around to address this, but nothing has solidified yet. And in general we're all pretty busy with 4.0, which looks to be in pretty good shape. :-) Per. On Fri, Apr 4, 2014 at 3:40 PM, Jeff Potter wrote: > > Hi List, > > Just checking if anyone has any ideas ... the list has been awfully quiet in > general! 
> > Thanks, > Jeff > > > On Apr 2, 2014, at 11:53 AM, Jeff Potter > wrote: > > > Hi List, > > > > Is there a way to figure out from inside vcl_error if the error was > raised because of a backend timeout? > > > > I'm trying to return a synthetically generated doc to clients that lets > us see that the error was due to a backend taking too long to reply, so > that the client can react differently in this case (as opposed to other > errors). > > > > Thanks! > > > > -Jeff > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- *Per Buer* CTO | Varnish Software Phone: +47 958 39 117 | Skype: per.buer We Make Websites Fly! Winner of the Red Herring Top 100 Global Award 2013 -------------- next part -------------- An HTML attachment was scrubbed... URL: From devel at jasonwoods.me.uk Mon Apr 7 09:01:36 2014 From: devel at jasonwoods.me.uk (Jason Woods) Date: Mon, 7 Apr 2014 10:01:36 +0100 Subject: Handling of Urlencoded string in URL in Varnish Message-ID: <26E961FF-F831-4F8A-B70E-F77389362FCB@jasonwoods.me.uk> Hi all, I apologise if this is the wrong place for a general question / request for advice about Varnish. If there's a better place, please do let me know! I have a URL (that I've modified) that has a URL-encoded sequence, e.g. /block/jason%E2%80%99s-first-blog That sequence is a fancy apostrophe. It must have snuck into the title through a copy and paste or something, but it brought up some caching issue with the encoding. We notice that when we printed the URL links in HTML, it was printed as above, %E2%80%99. But when some browsers request the URL, they appear to lowercase the encoding to %e2%80%99. So we ended up with two instances of this page cached. Presumably the browser decoded the URL and then re-encoded it for the request. 
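[Editor's note: the two spellings can be collapsed into one cache object by normalizing the percent-encoding before Varnish hashes the URL. A minimal sketch for Varnish 3, using the url-code VMOD that comes up later in this thread; the decode()/encode() function names are assumptions here, so check the VMOD's documentation for the real ones.]

```vcl
# Sketch only -- normalize percent-encoding case before hashing, so
# /block/jason%E2%80%99s-first-blog and /block/jason%e2%80%99s-first-blog
# end up as one cache object. Assumes the url-code VMOD exposes
# decode() and encode(); the exact names are not verified here.
import urlcode;

sub vcl_recv {
    # Decode and re-encode: the encoder emits one canonical case for
    # the hex digits, so both spellings produce an identical req.url.
    set req.url = urlcode.encode(urlcode.decode(req.url));
}
```

[A blanket decode/encode pass can also re-encode reserved characters such as `/` and `?`, so this needs testing against real URLs, and the same normalization has to run on the PURGE path so purges hit the normalized object.]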
This meant that when we modified the blog post we sent a PURGE for %E2%80%99 - but it only updated one instance of the cache, and there were many people seeing the old version. We ended up having to manually PURGE the other cache. Would I be right in saying that the following two URLs are identical, and should share a single cache entry? I'm not sure if this is accounted for in an RFC/ISO anywhere. URLs are generally case-sensitive IIRC but is it true also for encodings? /block/jason%E2%80%99s-first-blog /block/jason%e2%80%99s-first-blog Is there a way for Varnish to decode the URL before hashing for the cache? Or is there a better approach? We are using varnish-3.0.3 revision 9e6a70f, which may be quite old, maybe this is fixed in a recent version? Thanks! Jason From perbu at varnish-software.com Mon Apr 7 09:25:35 2014 From: perbu at varnish-software.com (Per Buer) Date: Mon, 7 Apr 2014 11:25:35 +0200 Subject: Handling of Urlencoded string in URL in Varnish In-Reply-To: <26E961FF-F831-4F8A-B70E-F77389362FCB@jasonwoods.me.uk> References: <26E961FF-F831-4F8A-B70E-F77389362FCB@jasonwoods.me.uk> Message-ID: Hi Jason. Docwilco from Fastly has written a URL encoder/decoder VMOD that you can use. You could run it through it twice or patch it to uppercase/lowercase the encoding. https://www.varnish-cache.org/vmod/url-code Varnish itself doesn't try to interpret the URL much. Per. On Mon, Apr 7, 2014 at 11:01 AM, Jason Woods wrote: > Hi all, > > I apologise if this is the wrong place for a general question / request > for advice about Varnish. If there's a better place, please do let me know! > > I have a URL (that I've modified) that has a URL-encoded sequence, e.g. > /block/jason%E2%80%99s-first-blog > That sequence is a fancy apostrophe. It must have snuck into the title > through a copy and paste or something, but it brought up some caching issue > with the encoding. > > We notice that when we printed the URL links in HTML, it was printed as 
But when some browsers appear to request the URL, it > seems to lowercase the encoding to %e2%80%99. So we ended up with two > instances of this page cached. Presumedly the browser decoded the URL and > then re-encoded it for the request. > This meant that when we modified the blog post we sent a PURGE for > %E2%80%99 - but it only updated one instance of the cache, and there were > many people seeing the old version. We ended up having to manually PURGE > the other cache. > > Would I be right in saying that the following two URL are identical, and > should have one instance of cache? I'm not sure if this is accounted for in > RFC/ISO of anything. URLs are generally case-sensitive IIRC but is it true > also for encodings? > /block/jason%E2%80%99s-first-blog > /block/jason%e2%80%99s-first-blog > > Is there a way for Varnish to decode the URL before hashing for the cache? > Or is there a better approach? > We are using varnish-3.0.3 revision 9e6a70f, which may be quite old, maybe > this is fixed in a recent version? > > Thanks! > > Jason > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- *Per Buer* CTO | Varnish Software Phone: +47 958 39 117 | Skype: per.buer We Make Websites Fly! Winner of the Red Herring Top 100 Global Award 2013 -------------- next part -------------- An HTML attachment was scrubbed... URL: From devel at jasonwoods.me.uk Mon Apr 7 14:04:21 2014 From: devel at jasonwoods.me.uk (Jason Woods) Date: Mon, 7 Apr 2014 15:04:21 +0100 Subject: Handling of Urlencoded string in URL in Varnish In-Reply-To: References: <26E961FF-F831-4F8A-B70E-F77389362FCB@jasonwoods.me.uk> Message-ID: <984A675C-B5FB-48E2-A098-87506954A44B@jasonwoods.me.uk> Hi On 7 Apr 2014, at 10.25, Per Buer wrote: > Hi Jason. > > Docwilco from Fastly has written an URL encoder/decorder VMOD that you can use. 
You could run it through it twice or patch it to uppercase/lowercase the encoding. > > https://www.varnish-cache.org/vmod/url-code > > Varnish itself doesn't try to interpret the URL much. > > Per. Thanks Per, that looks great! Would you agree this would be better resolved in varnish itself? It looks as though in default VCL it uses hash_data(req.url) - but I question the intention. If the intention is to cache distinct URLs then it needs to use hash_data(urldecode.from.core(req.url)) or hash_data(req.urldecoded). In using hash_data(req.url) it appears to say that it wants to cache distinct binary representations of a URL, which to me is not the intention. For reference, RFC 3986 (I don't know if this means much though) says in section 2.1: > The uppercase hexadecimal digits 'A' through 'F' are equivalent to > the lowercase digits 'a' through 'f', respectively. If two URIs > differ only in the case of hexadecimal digits used in percent-encoded > octets, they are equivalent. For consistency, URI producers and > normalizers should use uppercase hexadecimal digits for all percent-encodings. So I wonder if the varnish core really should decode before it hashes? I guess this is a very edge scenario though, so not likely to be touched, since it only affects non-Latin characters, and most places will use a friendlier URL or Latin characters where it doesn't have any issue. What are your thoughts? Do you think it worth raising or best to just leave it be and work around it elsewhere? Regards, Jason -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From devel at jasonwoods.me.uk Mon Apr 7 14:17:14 2014 From: devel at jasonwoods.me.uk (Jason Woods) Date: Mon, 7 Apr 2014 15:17:14 +0100 Subject: Handling of Urlencoded string in URL in Varnish In-Reply-To: <984A675C-B5FB-48E2-A098-87506954A44B@jasonwoods.me.uk> References: <26E961FF-F831-4F8A-B70E-F77389362FCB@jasonwoods.me.uk> <984A675C-B5FB-48E2-A098-87506954A44B@jasonwoods.me.uk> Message-ID: <1102BF35-7844-4C82-AF30-5B2CB984EBB7@jasonwoods.me.uk> > So I wonder if really the varnish core should decode before it hashes? Maybe even simply "normalise" the URL, or provide a builtin function to do so which converts the url encodings to upper case (as mentioned in the RFC). From ticktockhouse at gmail.com Mon Apr 7 14:26:57 2014 From: ticktockhouse at gmail.com (Jerry Steele) Date: Mon, 7 Apr 2014 15:26:57 +0100 Subject: Multiple backends on same server (cPanel) In-Reply-To: References: <5D103CE839D50E4CBC62C9FD7B83287C8FB4BD10@EXCN015.encara.local.ads> Message-ID: Hello, As I stated, the above seems to work, but now, I have a requirement to be able to point development.domain.com to the "default" backend. I have tried this:

backend default {
    .host = "11.22.33.208";
    .port = "8081";
}
backend default2 {
    .host = "11.22.33.210";
    .port = "8081";
}
sub vcl_recv {
    if (req.http.host == "development.domain.com" ) {
        set.req.backend = default;
    }
    if (req.http.host ~ "domain3.com$";) {
        set req.backend = default2;
    }

but I get this error:

Message from VCC-compiler:
Expected an action, 'if', '{' or '}'
('input' Line 21 Pos 9)
        set.req.backend = default;
--------###############-----------

...can anyone suggest why this isn't working? Is it some appallingly bad syntax on my part? Thanks -- --- Jerry Steele Telephone: +44 (0)7920 237105 http://ticktockhouse.co.uk -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From guillaume.quintard at smartjog.com Mon Apr 7 14:29:33 2014 From: guillaume.quintard at smartjog.com (Guillaume Quintard) Date: Mon, 7 Apr 2014 16:29:33 +0200 Subject: Multiple backends on same server (cPanel) In-Reply-To: References: <5D103CE839D50E4CBC62C9FD7B83287C8FB4BD10@EXCN015.encara.local.ads> Message-ID: <5342B64D.2090702@smartjog.com> On 04/07/2014 04:26 PM, Jerry Steele wrote: > set.req.backend = default; Try: set req.backend = default; (no dot) -- Guillaume Quintard From ticktockhouse at gmail.com Mon Apr 7 14:32:17 2014 From: ticktockhouse at gmail.com (Jerry Steele) Date: Mon, 7 Apr 2014 15:32:17 +0100 Subject: Multiple backends on same server (cPanel) In-Reply-To: <5342B64D.2090702@smartjog.com> References: <5D103CE839D50E4CBC62C9FD7B83287C8FB4BD10@EXCN015.encara.local.ads> <5342B64D.2090702@smartjog.com> Message-ID: Thank you! *slaps forehead* The syntax check now works; I'll have to wait till tonight to reload the config to test it. Thanks again. -- --- Jerry Steele Telephone: +44 (0)7920 237105 http://ticktockhouse.co.uk -------------- next part -------------- An HTML attachment was scrubbed... URL: From edward at alatest.com Tue Apr 8 15:19:36 2014 From: edward at alatest.com (Edward Zambrano) Date: Tue, 8 Apr 2014 17:19:36 +0200 Subject: Size of the url for the varnish hash Message-ID: Hello, Reading the logs from the varnishncsa daemon I realized that the logged URL is capped at 255 characters, even though the URL reaching the backend is larger (it has many GET parameters). Also, checking the Hash entries from the varnishlog I see that it is only using the first 255 characters.... 
I'm looking into how to increase the size. I found a related issue here https://www.varnish-cache.org/trac/ticket/1016 but I'm reading the description of those parameters in https://www.varnish-cache.org/docs/3.0/reference/varnishd.html and it doesn't look related to my problem (in the apache logs of my backend servers I can see that the URL is arriving correctly, with more than 255 characters). Could you please tell me how to increase that limitation? thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From raymond.jennings at nytimes.com Tue Apr 8 15:31:47 2014 From: raymond.jennings at nytimes.com (Jennings, Raymond) Date: Tue, 8 Apr 2014 11:31:47 -0400 Subject: Size of the url for the varnish hash In-Reply-To: References: Message-ID: I think what you want is this: SHM_RECLEN=1024 I have this value in my /etc/sysconfig/varnish and it is used at the bottom to start varnishd:

DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
             -f ${VARNISH_VCL_CONF} \
             -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
             -t ${VARNISH_TTL} \
             -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
             -p shm_reclen=${SHM_RECLEN} \
             -p http_req_hdr_len=${HTTP_REQ_HDR_LEN} \
             -p idle_send_timeout=${IDLE_SEND_TIMEOUT} \
             -u varnish -g varnish \
             -S ${VARNISH_SECRET_FILE} \
             -s ${VARNISH_STORAGE}"

When varnishd is running you should see something like (note the shm_reclen parameter): root 9095 1 0 11:19 ? 
00:00:00 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 10.242.217.220:6082 -t 2400 -w 1,1000,120 -p shm_reclen=1024 -p http_req_hdr_len=17408 -p idle_send_timeout=40 -u varnish -g varnish -S /etc/varnish/secret -s file,/var/lib/varnish/varnish_storage.bin,9G On Tue, Apr 8, 2014 at 11:19 AM, Edward Zambrano wrote: > Hello, > > Reading the logs from the varnishncsa daemon I realized that the url top > size is 255, even though that the url reaching the backend is larger (it > has many GET parameters). also checking the Hash entries from the > barnishlog I see that is only using the first 255 characters.... I'm > looking how to increase the size and I found some related issue here > https://www.varnish-cache.org/trac/ticket/1016 but I'm reading the > description of those parameters in > https://www.varnish-cache.org/docs/3.0/reference/varnishd.html and it > doesn't look related to my problem (in the apache logs of my backend > servers I can see that the URL is arriving correctly, with more than 255 > characters) > > Could you please tell me how to increase that limitation? > > thanks! > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From edward at alatest.com Tue Apr 8 16:13:00 2014 From: edward at alatest.com (Edward Zambrano) Date: Tue, 8 Apr 2014 18:13:00 +0200 Subject: Size of the url for the varnish hash In-Reply-To: References: Message-ID: works like a charm, thanks! 
On Tue, Apr 8, 2014 at 5:31 PM, Jennings, Raymond < raymond.jennings at nytimes.com> wrote: > I think what you want is this: > > SHM_RECLEN=1024 > > I have this value in my /etc/sysconfig/varnish and is used at the bottom > to start varnishd > > DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ > -f ${VARNISH_VCL_CONF} \ > -T > ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \ > -t ${VARNISH_TTL} \ > -w > ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \ > -p shm_reclen=${SHM_RECLEN} \ > -p http_req_hdr_len=${HTTP_REQ_HDR_LEN} \ > -p idle_send_timeout=${IDLE_SEND_TIMEOUT} \ > -u varnish -g varnish \ > -S ${VARNISH_SECRET_FILE} \ > -s ${VARNISH_STORAGE}" > > > > When varnishd is running you should something like (see the shm_reclen > parameter): > > root 9095 1 0 11:19 ? 00:00:00 /usr/sbin/varnishd -P > /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T > 10.242.217.220:6082 -t 2400 -w 1,1000,120 -p shm_reclen=1024 -p > http_req_hdr_len=17408 -p idle_send_timeout=40 -u varnish -g varnish -S > /etc/varnish/secret -s file,/var/lib/varnish/varnish_storage.bin,9G > > > > > > > On Tue, Apr 8, 2014 at 11:19 AM, Edward Zambrano wrote: > >> Hello, >> >> Reading the logs from the varnishncsa daemon I realized that the url top >> size is 255, even though that the url reaching the backend is larger (it >> has many GET parameters). also checking the Hash entries from the >> barnishlog I see that is only using the first 255 characters.... I'm >> looking how to increase the size and I found some related issue here >> https://www.varnish-cache.org/trac/ticket/1016 but I'm reading the >> description of those parameters in >> https://www.varnish-cache.org/docs/3.0/reference/varnishd.html and it >> doesn't look related to my problem (in the apache logs of my backend >> servers I can see that the URL is arriving correctly, with more than 255 >> characters) >> >> Could you please tell me how to increase that limitation? 
>> >> thanks! >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From royf442 at gmail.com Tue Apr 8 18:08:08 2014 From: royf442 at gmail.com (Roy Forster) Date: Tue, 8 Apr 2014 19:08:08 +0100 Subject: The max-age value appears to be: 0 Message-ID: Drupal 7 isvarnishworking.com gives me this "Varnish appears to be responding at that url, but the Cache-Control header's "max-age" value is less than 1, which means that Varnish will never serve content from cache at this url." When I serve http pages from port 80 setting the cache control for anonymous users works as expected. Yet when I set apache to listen on 8080 and start varnish on port 80 the cache control sets max-age = 0, and this is despite adding; "$conf['page_cache_maximum_age '] = 21600;" to settings.php Hope someone can help me. Kind regards, Roy Forster -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Tue Apr 8 20:50:36 2014 From: perbu at varnish-software.com (Per Buer) Date: Tue, 8 Apr 2014 22:50:36 +0200 Subject: The max-age value appears to be: 0 In-Reply-To: References: Message-ID: Hi Roy. This list is mostly concerned with Varnish. This sounds like something Drupal or Apache does, so you might have better luck seeking help there. You could potentially work around the issue by forcing the TTL in Varnish. Please see https://www.varnish-cache.org/docs/3.0/tutorial/increasing_your_hitrate.html#overriding-the-time-to-live-ttl Regards, Per. 
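[Editor's note: the TTL override Per links to can be as small as a vcl_fetch stanza. A minimal sketch for Varnish 3; the 6h value and the status/cookie conditions are assumptions to adjust to the site.]

```vcl
# Minimal sketch of forcing a TTL in Varnish 3 when the backend sends
# max-age=0, per the linked "overriding the time-to-live" docs.
# The 6h value and the conditions below are assumptions -- tune them.
sub vcl_fetch {
    if (beresp.status == 200 && !req.http.Cookie) {
        # Cache anonymous page views for six hours regardless of what
        # Cache-Control the backend sent.
        set beresp.ttl = 6h;
    }
}
```

[Note this only changes how long Varnish keeps the object; the Cache-Control header the client sees is still whatever Drupal/Apache emitted, so isvarnishworking.com may keep reporting max-age=0 even while Varnish serves from cache.]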
On Tue, Apr 8, 2014 at 8:08 PM, Roy Forster wrote: > Drupal 7 > isvarnishworking.com gives me this > "Varnish appears to be responding at that url, but the Cache-Control > header's > "max-age" value is less than 1, which means that Varnish will never serve > content from cache at this url." > > When I serve http pages from port 80 setting the cache control for > anonymous > users works as expected. > > Yet when I set apache to listen on 8080 and start varnish on port 80 the > cache control sets max-age = 0, and this is despite adding; > > "$conf['page_cache_maximum_age > '] = 21600;" to settings.php > > Hope someone can help me. > > Kind regards, > Roy Forster > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- *Per Buer* CTO | Varnish Software Phone: +47 958 39 117 | Skype: per.buer We Make Websites Fly! Winner of the Red Herring Top 100 Global Award 2013 -------------- next part -------------- An HTML attachment was scrubbed... URL: From royf442 at gmail.com Thu Apr 10 00:38:14 2014 From: royf442 at gmail.com (Roy Forster) Date: Thu, 10 Apr 2014 01:38:14 +0100 Subject: What else can I try Message-ID: It's great when something works and of course not when it doesn't. When I access my site using :8080 the site loads but there is no mention of varnish in the header. Without :8080 appended I get the default website page but varnish is mentioned in the header. A port scan reveals both 80 and 8080 are reachable. I've also tried stopping iptables. I have rebuilt the server (a CentOS VPS) with just the basic modules needed to get Drupal 7 up and running (disabling pagespeed etc.) in an attempt to resolve any server misconfiguration. isvarnishworking.com reckons that varnish is working. The site is Drupal 7. Is there anything else that I could try? 
I've searched the internet and seen one possible solution is to try things the other way round, that is, to have varnish listen on port 8080 instead and have port 80 as the backend. What else can I try, check, or change on the server to have varnish serve pages from :? I wonder how to move forward. The annoying thing is that I did, at one stage, have things working correctly. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pprocacci at datapipe.com Thu Apr 10 00:42:59 2014 From: pprocacci at datapipe.com (Paul A. Procacci) Date: Wed, 9 Apr 2014 19:42:59 -0500 Subject: What else can I try In-Reply-To: References: Message-ID: <20140410004259.GP46686@nat.myhome> On Thu, Apr 10, 2014 at 01:38:14AM +0100, Roy Forster wrote: > When I access my site using :8080 the site loads but there is no mention of > varnish in the header. Without :8080 appended I get the default website > page but varnish is mentioned in the header. You'll have to forgive me, but this sounds like it's functioning just fine. From pprocacci at datapipe.com Thu Apr 10 00:45:55 2014 From: pprocacci at datapipe.com (Paul A. Procacci) Date: Wed, 9 Apr 2014 19:45:55 -0500 Subject: What else can I try In-Reply-To: <20140410004259.GP46686@nat.myhome> References: <20140410004259.GP46686@nat.myhome> Message-ID: <20140410004555.GQ46686@nat.myhome> On Wed, Apr 09, 2014 at 07:42:59PM -0500, Paul A. Procacci wrote: > On Thu, Apr 10, 2014 at 01:38:14AM +0100, Roy Forster wrote: > > When I access my site using :8080 the site loads but there is no mention of > > varnish in the header. Without :8080 appended I get the default website > > page but varnish is mentioned in the header. > > You'll have to forgive me, but this sounds like it's functioning just fine. I'm sorry (again) ... you mean the default apache page? If that's the case, make sure your backend configuration matches the way you are accessing it. 
For instance, if you are accessing it via IP address, you need to ensure that IP address is defined in the vhost container. That should be about it. From pprocacci at datapipe.com Thu Apr 10 01:01:37 2014 From: pprocacci at datapipe.com (Paul A. Procacci) Date: Wed, 9 Apr 2014 20:01:37 -0500 Subject: What else can I try In-Reply-To: <20140410004555.GQ46686@nat.myhome> References: <20140410004259.GP46686@nat.myhome> <20140410004555.GQ46686@nat.myhome> Message-ID: <20140410010137.GR46686@nat.myhome> Copied the list for completeness...... On Thu, Apr 10, 2014 at 01:50:46AM +0100, Roy Forster wrote: > Thanks Paul not yet sure how to do that but at least I feel hopeful Let me try to be a bit descriptive: Let's say you have apache listening on 127.0.0.1:8080 and 123.45.67.89:8080 (randomly selected). The host you're interested in, as defined in your apache config, is sitting on 123.45.67.89:8080. Varnish listens on 123.45.67.89:80. In your varnish config you define the backend as 127.0.0.1:8080. The problem I believe you are running into is you are connecting to apache from varnish over an interface that apache can't make a match on. You've connected to 127.0.0.1:8080 but the host is sitting at 123.45.67.89:8080. Since you didn't provide this information this is a stab in the dark, but something like this or similar would give you the default page you are describing. ~Paul > > On 10 April 2014 01:45, Paul A. Procacci wrote: > > > On Wed, Apr 09, 2014 at 07:42:59PM -0500, Paul A. Procacci wrote: > > > On Thu, Apr 10, 2014 at 01:38:14AM +0100, Roy Forster wrote: > > > > When I access my site using :8080 the site loads but there is no > > mention of > > > > varnish in the header. Without :8080 appended I get the default website > > > > page but varnish is mentioned in the header. > > > > > > You'll have to forgive me, but this sounds like it's functioning just > > fine. > > > > I'm sorry (again) ... you mean the default apache page? 
> > > > If that's the case, make sure your backend configuration matches the way > > you > > are accessing it. For instance, if you are accessing it via IP address, > > you need to ensure that IP address is defined in the vhost container. > > > > That should be about it. > > From ruben at varnish-software.com Thu Apr 10 06:06:53 2014 From: ruben at varnish-software.com (Rubén Romero) Date: Thu, 10 Apr 2014 08:06:53 +0200 Subject: Varnish Cache 4.0 Release Parties on April 29th, 2014 Message-ID: Hello everyone, Every few years we have a major Varnish Cache release and that needs to be celebrated properly. So let's again gather locally to learn about the goodness the 4.0.0 release brings in the good company of fellow Varnishers. As the release is now imminent we have set the date to April 29th, 2014 for the Varnish 4.0 Release Parties. Our site is now up and you can organize or join a party already now: * Party Site with Map and General info > http://v4party.varnish-cache.org/ * Your own party + party pack? > http://v4party.varnish-cache.org/help-us As usual, make sure to tag your blogs, tweets, pictures and videos with the #vr4p hashtag so that we can all be part of the fun both during and after the parties. If you are busy on work rotation that day or have no party nearby, no worries, you can join us online on the Varnish Cache 4.0 Release Party - Live Stream (Hangout on Air) from Copenhagen, Oslo and London. You can RSVP already now > https://plus.google.com/events/cvuqnm8cof58paogkc2uj0giru0 If you have any questions related to the Release party (i.e. you want to set up your own party and don't know where to start, or want to broadcast from your local party) feel free to reply to this email or reach out to me on Skype, Twitter or IRC and I will do what I can to give you a hand. Hope you all can join us! 
On behalf of the Varnish Cache team, -- *Rubén Romero* Community & Sales | Varnish Software AS Cell: +47 95964088 / Office: +47 21989260 Skype, Twitter & IRC: ruben_varnish We Make Websites Fly! Winner of the 2013 Red Herring Top 100 Global Awards -------------- next part -------------- An HTML attachment was scrubbed... URL: From StephenG at glam.com Thu Apr 10 08:55:12 2014 From: StephenG at glam.com (Stephen Gazard) Date: Thu, 10 Apr 2014 08:55:12 +0000 Subject: What else can I try In-Reply-To: <20140410010137.GR46686@nat.myhome> References: <20140410004259.GP46686@nat.myhome> <20140410004555.GQ46686@nat.myhome> <20140410010137.GR46686@nat.myhome> Message-ID: Roy, I agree this is the case. I've given more details in the thread I responded to last year. That the linked article is WordPress does not matter. You need RPAF in apache, configured to pass on the server's external IP address. See https://www.varnish-cache.org/lists/pipermail/varnish-misc/2013-March/022940.html Stephen On Thu, Apr 10, 2014 at 01:50:46AM +0100, Roy Forster wrote: > Thanks Paul not yet sure how to do that but at least I feel hopeful Let me try to be a bit descriptive: Let's say you have apache listening on 127.0.0.1:8080 and 123.45.67.89:8080 (randomly selected). The host you're interested in, as defined in your apache config, is sitting on 123.45.67.89:8080. Varnish listens on 123.45.67.89:80. In your varnish config you define the backend as 127.0.0.1:8080. The problem I believe you are running into is you are connecting to apache from varnish over an interface that apache can't make a match on. You've connected to 127.0.0.1:8080 but the host is sitting at 123.45.67.89:8080. Since you didn't provide this information this is a stab in the dark, but something like this or similar would give you the default page you are describing. 
~Paul _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From jpeltier at sfu.ca Fri Apr 11 06:26:40 2014 From: jpeltier at sfu.ca (James A. Peltier) Date: Thu, 10 Apr 2014 23:26:40 -0700 (PDT) Subject: Varnish Cache 4.0 Release Parties on April 29th, 2014 In-Reply-To: Message-ID: <1474196505.47951096.1397197600704.JavaMail.root@sfu.ca> ----- Original Message ----- | Hello everyone, | Every few years we have a major Varnish Cache release and that needs | to be celebrated properly. So let's again gather locally to learn | about the goodness the 4.0.0 release brings in the good company of | fellow Varnishers. | As the release is now imminent we have set the date to April 29th, | 2014 for the Varnish 4.0 Release Parties. Our site is now up and you | can organize or join a party already now: | * Party Site with Map and General info > | http://v4party.varnish-cache.org/ | * Your own party + party pack? > | http://v4party.varnish-cache.org/help-us | As usual, make sure to tag your blogs, tweets, pictures and videos | with the #vr4p hashtag so that we can all be part of the fun both | during and after the parties. | If you are busy on work rotation that day or have no party nearby, no | worries, you can join us online on the Varnish Cache 4.0 Release | Party - Live Stream (Hangout on Air) from Copenhagen, Oslo and | London. You can RSVP already now > | https://plus.google.com/events/cvuqnm8cof58paogkc2uj0giru0 | If you have any questions related to the Release party (i.e. want to | set up your own party and don't know where to start or want to | broadcast from your local party) feel free to reply to this email or | reach out to me on Skype, Twitter or IRC and I will do what i can to | give you a hand. | Hope you all can join us! 
| On behalf of the Varnish Cache team, It would seem that Varnish 4 RHEL 6 repos have a requirement for jemalloc, however there are no instructions on where to get it and it is not provided as part of the repository. Is it an error that it is not listed as a dependency to the RHEL/CentOS 6 build requirements or is the RPM wrong? -- James A. Peltier Manager, IT Services - Research Computing Group Simon Fraser University - Burnaby Campus Phone : 778-782-6573 Fax : 778-782-3045 E-Mail : jpeltier at sfu.ca Website : http://www.sfu.ca/itservices "Around here, however, we don't look backwards for very long. We KEEP MOVING FORWARD, opening up new doors and doing things because we're curious and curiosity keeps leading us down new paths." - Walt Disney -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi.boukelmoune at zenika.com Fri Apr 11 07:11:14 2014 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Fri, 11 Apr 2014 09:11:14 +0200 Subject: Varnish Cache 4.0 Release Parties on April 29th, 2014 In-Reply-To: <1474196505.47951096.1397197600704.JavaMail.root@sfu.ca> References: <1474196505.47951096.1397197600704.JavaMail.root@sfu.ca> Message-ID: On Fri, Apr 11, 2014 at 8:26 AM, James A. Peltier wrote: > ________________________________ > > Hello everyone, > > Every few years we have a major Varnish Cache release and that needs to be > celebrated properly. So let's again gather locally to learn about the > goodness the 4.0.0 release brings in the good company of fellow Varnishers. > > As the release is now imminent we have set the date to April 29th, 2014 for > the Varnish 4.0 Release Parties. Our site is now up and you can organize or > join a party already now: > > * Party Site with Map and General info > http://v4party.varnish-cache.org/ > * Your own party + party pack?
> http://v4party.varnish-cache.org/help-us > > As usual, make sure to tag your blogs, tweets, pictures and videos with the > #vr4p hashtag so that we can all be part of the fun both during and after > the parties. > > If you are busy on work rotation that day or have no party nearby, no > worries, you can join us online on the Varnish Cache 4.0 Release Party - > Live Stream (Hangout on Air) from Copenhagen, Oslo and London. You can RSVP > already now > https://plus.google.com/events/cvuqnm8cof58paogkc2uj0giru0 > > If you have any questions related to the Release party (i.e. want to set up > your own party and don't know where to start or want to broadcast from your > local party) feel free to reply to this email or reach out to me on Skype, > Twitter or IRC and I will do what I can to give you a hand. > > Hope you all can join us! > > On behalf of the Varnish Cache team, > > > It would seem that Varnish 4 RHEL 6 repos have a requirement for jemalloc, > however there are no instructions on where to get it and it is not provided > as part of the repository. Is it an error that it is not listed as a > dependency to the RHEL/CentOS 6 build requirements or is the RPM wrong? Hi James, jemalloc is available through the EPEL repo for RHEL and its rebuilds like CentOS. Cheers, Dridi > -- > James A. Peltier > Manager, IT Services - Research Computing Group > Simon Fraser University - Burnaby Campus > Phone : 778-782-6573 > Fax : 778-782-3045 > E-Mail : jpeltier at sfu.ca > Website : http://www.sfu.ca/itservices > > "Around here, however, we don't look backwards for very long. We KEEP > MOVING FORWARD, opening up new doors and doing things because we're curious > and curiosity keeps leading us down new paths."
- Walt Disney > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From jpeltier at sfu.ca Fri Apr 11 07:51:49 2014 From: jpeltier at sfu.ca (James A. Peltier) Date: Fri, 11 Apr 2014 00:51:49 -0700 (PDT) Subject: Varnish Cache 4.0 Release Parties on April 29th, 2014 In-Reply-To: Message-ID: <459011850.47972477.1397202709993.JavaMail.root@sfu.ca> ----- Original Message ----- | On Fri, Apr 11, 2014 at 8:26 AM, James A. Peltier | wrote: | > ________________________________ | > | > Hello everyone, | > | > Every few years we have a major Varnish Cache release and that | > needs to be | > celebrated properly. So let's again gather locally to learn about | > the | > goodness the 4.0.0 release brings in the good company of fellow | > Varnishers. | > | > As the release is now imminent we have set the date to April 29th, | > 2014 for | > the Varnish 4.0 Release Parties. Our site is now up and you can | > organize or | > join a party already now: | > | > * Party Site with Map and General info > | > http://v4party.varnish-cache.org/ | > * Your own party + party pack? > | > http://v4party.varnish-cache.org/help-us | > | > As usual, make sure to tag your blogs, tweets, pictures and videos | > with the | > #vr4p hashtag so that we can all be part of the fun both during and | > after | > the parties. | > | > If you are busy on work rotation that day or have no party nearby, | > no | > worries, you can join us online on the Varnish Cache 4.0 Release | > Party - | > Live Stream (Hangout on Air) from Copenhagen, Oslo and London. You | > can RSVP | > already now > | > https://plus.google.com/events/cvuqnm8cof58paogkc2uj0giru0 | > | > If you have any questions related to the Release party (i.e. 
want | > to set up | > your own party and don't know where to start or want to broadcast | > from your | > local party) feel free to reply to this email or reach out to me on | > Skype, | > Twitter or IRC and I will do what I can to give you a hand. | > | > Hope you all can join us! | > | > On behalf of the Varnish Cache team, | > | > | > It would seem that Varnish 4 RHEL 6 repos have a requirement for | > jemalloc, | > however there are no instructions on where to get it and it is not | > provided | > as part of the repository. Is it an error that it is not listed as | > a | > dependency to the RHEL/CentOS 6 build requirements or is the RPM | > wrong? | | Hi James, | | jemalloc is available through the EPEL repo for RHEL and its | rebuilds | like CentOS. | | Cheers, | Dridi Yes, that I understand. Moreover, the documentation for compiling and for installing the RPM does not state that EPEL is a requirement for Varnish 4. https://www.varnish-cache.org/installation/redhat does not include Varnish 4 documentation (yet), but is linked to from the Varnish 4 document https://www.varnish-cache.org/docs/4.0/installation/install.html#red-hat-centos; nor is it listed as a dependency for compiling. I assume this is a "bug" and I'm reporting it. -- James A. Peltier Manager, IT Services - Research Computing Group Simon Fraser University - Burnaby Campus Phone : 778-782-6573 Fax : 778-782-3045 E-Mail : jpeltier at sfu.ca Website : http://www.sfu.ca/itservices "Around here, however, we don't look backwards for very long. We KEEP MOVING FORWARD, opening up new doors and doing things because we're curious and curiosity keeps leading us down new paths."
- Walt Disney From dridi.boukelmoune at zenika.com Fri Apr 11 08:23:12 2014 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Fri, 11 Apr 2014 10:23:12 +0200 Subject: Varnish Cache 4.0 Release Parties on April 29th, 2014 In-Reply-To: <459011850.47972477.1397202709993.JavaMail.root@sfu.ca> References: <459011850.47972477.1397202709993.JavaMail.root@sfu.ca> Message-ID: On Fri, Apr 11, 2014 at 9:51 AM, James A. Peltier wrote: > ----- Original Message ----- > | On Fri, Apr 11, 2014 at 8:26 AM, James A. Peltier > | wrote: > | > ________________________________ > | > > | > Hello everyone, > | > > | > Every few years we have a major Varnish Cache release and that > | > needs to be > | > celebrated properly. So let's again gather locally to learn about > | > the > | > goodness the 4.0.0 release brings in the good company of fellow > | > Varnishers. > | > > | > As the release is now imminent we have set the date to April 29th, > | > 2014 for > | > the Varnish 4.0 Release Parties. Our site is now up and you can > | > organize or > | > join a party already now: > | > > | > * Party Site with Map and General info > > | > http://v4party.varnish-cache.org/ > | > * Your own party + party pack? > > | > http://v4party.varnish-cache.org/help-us > | > > | > As usual, make sure to tag your blogs, tweets, pictures and videos > | > with the > | > #vr4p hashtag so that we can all be part of the fun both during and > | > after > | > the parties. > | > > | > If you are busy on work rotation that day or have no party nearby, > | > no > | > worries, you can join us online on the Varnish Cache 4.0 Release > | > Party - > | > Live Stream (Hangout on Air) from Copenhagen, Oslo and London. You > | > can RSVP > | > already now > > | > https://plus.google.com/events/cvuqnm8cof58paogkc2uj0giru0 > | > > | > If you have any questions related to the Release party (i.e. 
want > | > to set up > | > your own party and don't know where to start or want to broadcast > | > from your > | > local party) feel free to reply to this email or reach out to me on > | > Skype, > | > Twitter or IRC and I will do what I can to give you a hand. > | > > | > Hope you all can join us! > | > > | > On behalf of the Varnish Cache team, > | > > | > > | > It would seem that Varnish 4 RHEL 6 repos have a requirement for > | > jemalloc, > | > however there are no instructions on where to get it and it is not > | > provided > | > as part of the repository. Is it an error that it is not listed as > | > a > | > dependency to the RHEL/CentOS 6 build requirements or is the RPM > | > wrong? > | > | Hi James, > | > | jemalloc is available through the EPEL repo for RHEL and its > | rebuilds > | like CentOS. > | > | Cheers, > | Dridi > > Yes, that I understand. Moreover, the documentation for compiling and for installing the RPM does not state that EPEL is a requirement for Varnish 4. > > https://www.varnish-cache.org/installation/redhat > > Does not include Varnish 4 documentation (yet), but is linked to from the Varnish 4 document > > https://www.varnish-cache.org/docs/4.0/installation/install.html#red-hat-centos > > nor is it listed as a dependency for compiling. I assume this is a "bug" and I'm reporting it. Good catch! If my memory serves me well, Varnish used to embed its own copy of jemalloc; someone forgot to update the documentation. It still mentions el5 too, which is not supported for Varnish 4. Dridi > -- > James A. Peltier > Manager, IT Services - Research Computing Group > Simon Fraser University - Burnaby Campus > Phone : 778-782-6573 > Fax : 778-782-3045 > E-Mail : jpeltier at sfu.ca > Website : http://www.sfu.ca/itservices > > "Around here, however, we don't look backwards for very long. We KEEP MOVING FORWARD, opening up new doors and doing things because we're curious > and curiosity keeps leading us down new paths."
- Walt Disney From vkjk89 at gmail.com Fri Apr 11 08:55:50 2014 From: vkjk89 at gmail.com (Vimal Jain) Date: Fri, 11 Apr 2014 14:25:50 +0530 Subject: Is HTTPS supported by Varnish ? Message-ID: Hi, I am planning to use Varnish as my front end accelerator in production. I am using HTTPS. Is it supported by Varnish ? -- Thanks and Regards, Vimal Jain -------------- next part -------------- An HTML attachment was scrubbed... URL: From stef at scaleengine.com Fri Apr 11 09:02:06 2014 From: stef at scaleengine.com (Stefan Caunter) Date: Fri, 11 Apr 2014 05:02:06 -0400 Subject: Is HTTPS supported by Varnish ? In-Reply-To: References: Message-ID: On Fri, Apr 11, 2014 at 4:55 AM, Vimal Jain wrote: > Hi, > I am planning to use Varnish as my front end accelerator in production. > I am using HTTPS. Is it supported by Varnish ? > > -- > Thanks and Regards, > Vimal Jain To save you the trouble of googling this question: https://www.google.com/search?q=varnish+https&pws=0 Nope. Use nginx or some other endpoint. Have fun From vkjk89 at gmail.com Fri Apr 11 09:05:39 2014 From: vkjk89 at gmail.com (Vimal Jain) Date: Fri, 11 Apr 2014 14:35:39 +0530 Subject: Is HTTPS supported by Varnish ? In-Reply-To: References: Message-ID: Thanks Stefan. On Fri, Apr 11, 2014 at 2:32 PM, Stefan Caunter wrote: > On Fri, Apr 11, 2014 at 4:55 AM, Vimal Jain wrote: > > Hi, > > I am planning to use Varnish as my front end accelerator in production. > > I am using HTTPS. Is it supported by Varnish ? > > > > -- > > Thanks and Regards, > > Vimal Jain > > To save you the trouble of googling this question: > https://www.google.com/search?q=varnish+https&pws=0 > > Nope. > > Use nginx or some other endpoint. > > Have fun > -- Thanks and Regards, Vimal Jain -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon at darkmere.gen.nz Fri Apr 11 09:12:14 2014 From: simon at darkmere.gen.nz (Simon Lyall) Date: Fri, 11 Apr 2014 21:12:14 +1200 (NZST) Subject: Is HTTPS supported by Varnish ?
In-Reply-To: References: Message-ID: On Fri, 11 Apr 2014, Vimal Jain wrote: > I am planning to use Varnish as my front end accelerator in production. > I am using HTTPS. Is it supported by Varnish ? No, see: https://www.varnish-cache.org/docs/trunk/phk/ssl.html Good alternatives include (in rough order of recommendation): Haproxy: http://haproxy.1wt.eu Stud: https://github.com/bumptech/stud Pound: http://www.apsis.ch/pound Stunnel: https://www.stunnel.org/index.html Note: haproxy only supports termination fully in the 1.5 dev branch, but this is a lot more stable than most dev software. You can also combine it with stud, pound or stunnel -- Simon Lyall | Very Busy | Web: http://www.simonlyall.com/ "To stay awake all night adds a day to your life" - Stilgar From vkjk89 at gmail.com Fri Apr 11 09:48:55 2014 From: vkjk89 at gmail.com (Vimal Jain) Date: Fri, 11 Apr 2014 15:18:55 +0530 Subject: Is HTTPS supported by Varnish ? In-Reply-To: References: Message-ID: Thanks Simon. On Fri, Apr 11, 2014 at 2:42 PM, Simon Lyall wrote: > On Fri, 11 Apr 2014, Vimal Jain wrote: > >> I am planning to use Varnish as my front end accelerator in production. >> I am using HTTPS. Is it supported by Varnish ? >> > > No, see: > > https://www.varnish-cache.org/docs/trunk/phk/ssl.html > > Good alternatives include (in rough order of recommendation): > > Haproxy: http://haproxy.1wt.eu > Stud: https://github.com/bumptech/stud > Pound: http://www.apsis.ch/pound > Stunnel: https://www.stunnel.org/index.html > > Note: haproxy only supports termination fully in the 1.5 dev branch, but > this is a lot more stable than most dev software. You can also > combine it with stud, pound or stunnel > > -- > Simon Lyall | Very Busy | Web: http://www.simonlyall.com/ > "To stay awake all night adds a day to your life" - Stilgar > > -- Thanks and Regards, Vimal Jain -------------- next part -------------- An HTML attachment was scrubbed...
URL: From raymond.jennings at nytimes.com Fri Apr 11 11:47:19 2014 From: raymond.jennings at nytimes.com (Raymond Jennings) Date: Fri, 11 Apr 2014 07:47:19 -0400 Subject: Is HTTPS supported by Varnish ? In-Reply-To: References: Message-ID: <-9077112942333174424@unknownmsgid> I would put nginx at the top of that list for many reasons, including having it up and running in minutes. > On Apr 11, 2014, at 5:33 AM, Simon Lyall wrote: > >> On Fri, 11 Apr 2014, Vimal Jain wrote: >> I am planning to use Varnish as my front end accelerator in production. >> I am using HTTPS. Is it supported by Varnish ? > > No, see: > > https://www.varnish-cache.org/docs/trunk/phk/ssl.html > > Good alternatives include (in rough order of recommendation): > > Haproxy: http://haproxy.1wt.eu > Stud: https://github.com/bumptech/stud > Pound: http://www.apsis.ch/pound > Stunnel: https://www.stunnel.org/index.html > > Note: haproxy only supports termination fully in the 1.5 dev branch, but > this is a lot more stable than most dev software. You can also > combine it with stud, pound or stunnel > > -- > Simon Lyall | Very Busy | Web: http://www.simonlyall.com/ > "To stay awake all night adds a day to your life" - Stilgar > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From guery.b at gmail.com Fri Apr 11 11:57:33 2014 From: guery.b at gmail.com (=?ISO-8859-1?Q?Boris_Gu=E9ry?=) Date: Fri, 11 Apr 2014 13:57:33 +0200 Subject: Is HTTPS supported by Varnish ? In-Reply-To: <-9077112942333174424@unknownmsgid> References: <-9077112942333174424@unknownmsgid> Message-ID: <5347D8AD.1050507@gmail.com> Le 11/04/2014 13:47, Raymond Jennings a écrit : > I would put nginx at the top of that list for many reasons > including having it up and running in minutes. Indeed, I successfully use it as an SSL terminator; the configuration is simple and it does the job.
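As a concrete illustration of the nginx-in-front approach discussed here, a minimal SSL-termination sketch follows. The server name, certificate paths, and the Varnish listen address 127.0.0.1:6081 are all assumptions for the example, not details taken from this thread:

```nginx
server {
    listen 443 ssl;
    server_name www.example.com;                  # hypothetical site

    ssl_certificate     /etc/nginx/ssl/site.crt;  # hypothetical paths
    ssl_certificate_key /etc/nginx/ssl/site.key;

    location / {
        # Hand the decrypted traffic to Varnish (assumed to listen here)
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Lets the backend or VCL distinguish HTTPS from plain HTTP
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

With this arrangement Varnish itself only ever sees plain HTTP on the loopback interface, which is exactly the division of labour the thread recommends.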
>> On Apr 11, 2014, at 5:33 AM, Simon Lyall >> wrote: >> >>> On Fri, 11 Apr 2014, Vimal Jain wrote: I am planning to use >>> Varnish as my front end accelerator in production. I am using >>> HTTPS. Is it supported by Varnish ? >> >> No, see: >> >> https://www.varnish-cache.org/docs/trunk/phk/ssl.html >> >> Good alternatives include (in rough order of recommendation): >> >> Haproxy: http://haproxy.1wt.eu Stud: >> https://github.com/bumptech/stud Pound: >> http://www.apsis.ch/pound Stunnel: >> https://www.stunnel.org/index.html >> >> Note: haproxy only supports termination fully in the 1.5 dev >> branch, but this is a lot more stable than most dev software. >> You can also combine it with stud, pound or stunnel >> >> -- Simon Lyall | Very Busy | Web: http://www.simonlyall.com/ >> "To stay awake all night adds a day to your life" - Stilgar >> >> >> _______________________________________________ varnish-misc >> mailing list varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > >> > _______________________________________________ varnish-misc > mailing list varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Boris Guéry mobile: +33 6 86 83 03 12 skype: borisguery pgp: 0x034C6265 From royf442 at gmail.com Fri Apr 11 23:18:39 2014 From: royf442 at gmail.com (Roy Forster) Date: Sat, 12 Apr 2014 00:18:39 +0100 Subject: restarting varnish Message-ID: Hello, I tried making some changes that didn't work. So I changed it back to how Varnish was running previously, and now I can't start the actual service. Running the varnish cache CLI gives me child(xxx) said child start. But when I try "varnish service restart" it fails. What do I do next? I wonder if anyone can help me. Kind regards, Roy Forster -------------- next part -------------- An HTML attachment was scrubbed...
URL: From fabio at dataspace.com.br Fri Apr 11 23:37:12 2014 From: fabio at dataspace.com.br (Fabio Fraga [DS]) Date: Fri, 11 Apr 2014 20:37:12 -0300 Subject: restarting varnish In-Reply-To: References: Message-ID: Hello, Roy Did you try #varnishd -C -f /pathvarnishconfig ? I hope it helps Regards, Fabio On Apr 11, 2014 8:19 PM, "Roy Forster" wrote: > Hello > I tried making some changes that didn't work. > So I changed it back to how varnish was running previously and now I can't > start the actual service. > Running the varnish cache CLI gives me child(xxx) said child start. > But when I try "varnish service restart" it fails. What do I do next? I > wonder if anyone can help me. > Kind regards, > Roy Forster > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smwood4 at gmail.com Fri Apr 11 23:39:18 2014 From: smwood4 at gmail.com (Stephen Wood) Date: Fri, 11 Apr 2014 16:39:18 -0700 Subject: restarting varnish In-Reply-To: References: Message-ID: The first thing I would do is test your vcl: varnishd -C -f foo.vcl If that's good, then make sure nothing is listening on varnish's port. If so, kill it: netstat -an (look for varnish ports) On Fri, Apr 11, 2014 at 4:18 PM, Roy Forster wrote: > Hello > I tried making some changes that didn't work. > So I changed it back to how varnish was running previously and now I can't > start the actual service. > Running the varnish cache CLI gives me child(xxx) said child start. > But when I try "varnish service restart" it fails. What do I do next? I > wonder if anyone can help me.
> Kind regards, > Roy Forster > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Stephen Wood www.heystephenwood.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From bluethundr at gmail.com Sat Apr 12 20:00:30 2014 From: bluethundr at gmail.com (Tim Dunphy) Date: Sat, 12 Apr 2014 16:00:30 -0400 Subject: best node placement for varnish acceleration Message-ID: Hello list, I'm writing to you today about a job I've been asked to do which utilizes varnish and memcached to accelerate the site. I just realized something about the way that my colleague set this up that makes me question whether the site will actually benefit from ANY acceleration. My guess is no, but I'd like to see what you think and maybe have someone offer suggestions for optimal host placement on the network.
Or maybe the load balancer can just intercept the VIP (10.10.40.42) and load balance the two caching nodes as its back end and have the varnish setup round robin the web servers? Our current setup is similar to the second option above, except the load balancer is looking at the web servers as it's back end and not the varnish hosts. In our current default.vcl we have this: backend web1 { .host = "10.10.40.42"; .port = "80"; .connect_timeout = 30s; .first_byte_timeout = 30s; .between_bytes_timeout = 30s; .max_connections = 70; .probe = { .url = "/healthcheck.php"; .timeout = 5s; .interval = 30s; .window = 10; .threshold = 1; } } backend web2 { .host = "10.10.40.10"; .port = "80"; .connect_timeout = 30s; .first_byte_timeout = 30s; .between_bytes_timeout = 30s; .max_connections = 70; .probe = { .url = "/healthcheck.php"; .timeout = 5s; .interval = 30s; .window = 10; .threshold = 1; } } backend web2 { .host = "10.10.40.11"; .port = "80"; .connect_timeout = 30s; .first_byte_timeout = 30s; .between_bytes_timeout = 30s; .max_connections = 70; .probe = { .url = "/healthcheck.php"; .timeout = 5s; .interval = 30s; .window = 10; .threshold = 1; } } backend web3 { .host = "10.10.40.12"; .port = "80"; .connect_timeout = 30s; .first_byte_timeout = 30s; .between_bytes_timeout = 30s; .max_connections = 70; .probe = { .url = "/healthcheck.php"; .timeout = 5s; .interval = 30s; .window = 10; .threshold = 1; } } backend varnish1 { .host = "10.10.40.8"; .port = "80"; .connect_timeout = 5s; .first_byte_timeout = 30s; .between_bytes_timeout = 30s; .max_connections = 1000; } backend varnish2 { .host = "10.10.40.9"; .port = "80"; .connect_timeout = 5s; .first_byte_timeout = 30s; .between_bytes_timeout = 30s; .max_connections = 1000; } acl purge { "localhost"; "127.0.0.1"; "10.10.40.8"; "10.10.40.9"; } director www round-robin { { .backend = web1; } { .backend = web2; } { .backend = web3; } } director cache round-robin { { .backend = varnish1; } { .backend = varnish2; } } if (req.restarts 
== 0) { if (client.ip == "10.10.40.8" || client.ip == "10.10.40.9") { set req.backend = www; } elsif (server.ip == "10.10.40.8") { set req.backend = varnish2; } else { set req.backend = varnish1; } } elsif (req.restarts >= 2) { return (pass); There's actually a bit more to that vcl file. However I believe that what I've just presented to you are the most salient parts that will illustrate what we're doing, here. Also in the config I've inherited, that last stanza (if (req.restarts=0)) is the same on both varnish nodes. Would you want to vary that stanza so that it would say this on the second varnish node: if (client.ip == "10.10.40.8" || client.ip == "10.10.40.9") { set req.backend = www; } elsif (server.ip == "10.10.40.9") { set req.backend = varnish1; } else { set req.backend = varnish2; } And to be honest I'm not really clear on the purpose of this section. If someone could enlighten me on that point that'd be great! Thanks in advance, Tim -- GPG me!! gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B -------------- next part -------------- An HTML attachment was scrubbed... URL: From moseleymark at gmail.com Sun Apr 13 05:51:08 2014 From: moseleymark at gmail.com (Mark Moseley) Date: Sat, 12 Apr 2014 22:51:08 -0700 Subject: Varnish 4.0 and IMS oddity Message-ID: Hi. I'm testing out Varnish 4.0 finally. I'm super excited about the IMS stuff and am dying to use it. And a big congratulations on the 4.0 release. I was playing with porting our shared hosting configuration to 4.0 and ran into a slight weirdness with IMS. Keeping in mind that we do a number of things to make things play nicely with the fact that we have no idea what our customers might be doing (and therefore have to jump through a bunch of crazy hoops to make sure we don't return things like authenticated content to unauthenticated users), this could very easily be something weird with our particular setup. I've re-run the test a bunch of times and seen the same thing each time. 
Here's the scenario:

* I have IMS up and running and working. Other than this one particular oddity, everything else IMS-related seems to be working great and I'm greatly looking forward to using it. The test page I'm using deliberately returns a TTL of 1 second to make testing easier.
* As a mockup of a customer doing something like cookie-based authentication, or IP-based .htaccess authentication, I wrote up a simple rewrite rule to return a 403 if a certain cookie was missing.
* I turn off the rewrite rule
* Do a request to that page a few times with the expected 200 from the backend. On the 2nd and subsequent reqs, the IMS stuff kicks in. BTW, is the client getting a "HTTP/1.1 200 Not Modified" the expected behavior? I know the strings after the status code are completely arbitrary but it looked a bit odd.
* I turn the rewrite rule back *on*
* Do the request again. Here's where it gets odd.
* Varnish does an IMS request to the backend
* The backend responds with a 403 as expected.
* Varnish replies to the client with a HTTP/1.1 200 Not Modified

I would expect an error status (or really anything that's not a 304) to fail the IMS test on Varnish's side and that it would then return the 403 to the client. Something weird about what I'm doing/abusing? -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilio at antispam.it Sun Apr 13 08:15:23 2014 From: emilio at antispam.it (emilio brambilla) Date: Sun, 13 Apr 2014 10:15:23 +0200 Subject: best node placement for varnish acceleration In-Reply-To: References: Message-ID: <534A479B.2000006@antispam.it> hello, On 2014/04/12 22:00, Tim Dunphy wrote: > > So if the load balancer is handling all the traffic into the site and > the caching hosts are not referenced in the load balancer, don't > things need to be structured differently in order for the site to > benefit from the acceleration they are trying to use?
I cannot quite understand what you want to accomplish with your vcl, but in your situation a classic configuration is: the F5 balances the VIP across the two caches, and the caches balance across the 3 web servers as backends. -- bye, emilio From jc.bedier at gmail.com Sun Apr 13 08:35:20 2014 From: jc.bedier at gmail.com (Jean-Christian BEDIER) Date: Sun, 13 Apr 2014 10:35:20 +0200 Subject: best node placement for varnish acceleration In-Reply-To: References: Message-ID: Hello Tim, In my opinion your load balancer should balance traffic across your Varnish servers using URI hashing and backend polling of your Varnish instances. Then your Varnish instances load balance requests across your web servers. (step 1: VIP on F5) 10.10.40.42 -> (step 2: URI hash + backend polling) -> 10.10.40.8/10.10.40.9 -> (step 3: backend polling) -> 10.10.40.10/10.10.40.11/10.10.40.12 Step 2 ensures that /plop will always hit the cache on 10.10.40.8 and /plip will always hit the cache on 10.10.40.9. If one of your Varnish servers goes down, your load balancer will just hit an empty cache on the other one and remove the failed Varnish from VIP 10.10.40.42. This method lets you build up cache on each Varnish without needing redundancy between them (better web server offload). Step 3 avoids reusing the F5 LB (except if you need it for rewrites or URI hashing for local cache on your web servers); as you know, Varnish is able to health-check backends at layer 7. If I'm unclear feel free to ask. Regards, On Sat, Apr 12, 2014 at 10:00 PM, Tim Dunphy wrote: > Hello list, > > I'm writing to you today about a job I've been asked to do which utilizes > varnish and memcached to accelerate the site. > > I just realized something about the way that my colleague set this up > that makes me question whether the site will actually benefit from ANY > acceleration. My guess is no, but I'd like to see what you think and maybe > have someone offer suggestions for optimal host placement on the network.
>
> We have an F5 load balancer creating a VIP which points to 3 web servers.
> Let's say the VIP in question is 10.10.40.42 for illustration purposes.
>
> The traffic hits the VIP on the load balancer and gets distributed to the
> 3 web servers in the VIP pool. Let's say the web servers are 10.10.40.10,
> .11 and .12.
>
> However, on the same subnet as the web servers, and not being referenced by
> the load balancer, are our Varnish / Memcached nodes. We have two cache nodes
> running both varnish and memcached at 10.10.40.8 and 10.10.40.9.
>
> So if the load balancer is handling all the traffic into the site and the
> caching hosts are not referenced in the load balancer, don't things need to
> be structured differently in order for the site to benefit from the
> acceleration they are trying to use?
>
> For instance, don't the caching nodes need to intercept the VIP address
> (10.10.40.42) and pass the VIP traffic onto the load balancer and have the
> load balancer distribute the load in a round-robin fashion to the web
> servers? Or maybe the load balancer can just intercept the VIP
> (10.10.40.42) and load balance the two caching nodes as its back end and
> have the Varnish setup round-robin the web servers?
>
> Our current setup is similar to the second option above, except the load
> balancer is looking at the web servers as its back end and not the varnish
> hosts.
>
> In our current default.vcl we have this (note: the original posting defined
> "backend web2" twice; renamed here to web2/web3/web4 to match the full
> config below):
>
> backend web1 {
>     .host = "10.10.40.42";
>     .port = "80";
>     .connect_timeout = 30s;
>     .first_byte_timeout = 30s;
>     .between_bytes_timeout = 30s;
>     .max_connections = 70;
>     .probe = {
>         .url = "/healthcheck.php";
>         .timeout = 5s;
>         .interval = 30s;
>         .window = 10;
>         .threshold = 1;
>     }
> }
>
> backend web2 {
>     .host = "10.10.40.10";
>     .port = "80";
>     .connect_timeout = 30s;
>     .first_byte_timeout = 30s;
>     .between_bytes_timeout = 30s;
>     .max_connections = 70;
>     .probe = {
>         .url = "/healthcheck.php";
>         .timeout = 5s;
>         .interval = 30s;
>         .window = 10;
>         .threshold = 1;
>     }
> }
>
> backend web3 {
>     .host = "10.10.40.11";
>     .port = "80";
>     .connect_timeout = 30s;
>     .first_byte_timeout = 30s;
>     .between_bytes_timeout = 30s;
>     .max_connections = 70;
>     .probe = {
>         .url = "/healthcheck.php";
>         .timeout = 5s;
>         .interval = 30s;
>         .window = 10;
>         .threshold = 1;
>     }
> }
>
> backend web4 {
>     .host = "10.10.40.12";
>     .port = "80";
>     .connect_timeout = 30s;
>     .first_byte_timeout = 30s;
>     .between_bytes_timeout = 30s;
>     .max_connections = 70;
>     .probe = {
>         .url = "/healthcheck.php";
>         .timeout = 5s;
>         .interval = 30s;
>         .window = 10;
>         .threshold = 1;
>     }
> }
>
> backend varnish1 {
>     .host = "10.10.40.8";
>     .port = "80";
>     .connect_timeout = 5s;
>     .first_byte_timeout = 30s;
>     .between_bytes_timeout = 30s;
>     .max_connections = 1000;
> }
>
> backend varnish2 {
>     .host = "10.10.40.9";
>     .port = "80";
>     .connect_timeout = 5s;
>     .first_byte_timeout = 30s;
>     .between_bytes_timeout = 30s;
>     .max_connections = 1000;
> }
>
> acl purge {
>     "localhost";
>     "127.0.0.1";
>     "10.10.40.8";
>     "10.10.40.9";
> }
>
> director www round-robin {
>     { .backend = web1; }
>     { .backend = web2; }
>     { .backend = web3; }
>     { .backend = web4; }
> }
>
> director cache round-robin {
>     { .backend = varnish1; }
>     { .backend = varnish2; }
> }
>
> if (req.restarts == 0) {
>     if (client.ip == "10.10.40.8" || client.ip == "10.10.40.9") {
>         set req.backend = www;
>     } elsif (server.ip ==
"10.10.40.8") { > set req.backend = varnish2; > } else { > set req.backend = varnish1; > } > } elsif (req.restarts >= 2) { > return (pass); > > > > There's actually a bit more to that vcl file. However I believe that what > I've just presented to you are the most salient parts that will illustrate > what we're doing, here. > > > Also in the config I've inherited, that last stanza (if (req.restarts=0)) > is the same on both varnish nodes. Would you want to vary that stanza so > that it would say this on the second varnish node: > > if (client.ip == "10.10.40.8" || client.ip == "10.10.40.9") { > set req.backend = www; > } elsif (server.ip == "10.10.40.9") { > set req.backend = varnish1; > } else { > set req.backend = varnish2; > } > > And to be honest I'm not really clear on the purpose of this section. If > someone could enlighten me on that point that'd be great! > > Thanks in advance, > Tim > > -- > GPG me!! > > gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bluethundr at gmail.com Mon Apr 14 01:21:03 2014 From: bluethundr at gmail.com (Tim Dunphy) Date: Sun, 13 Apr 2014 21:21:03 -0400 Subject: best node placement for varnish accelaration In-Reply-To: <534A479B.2000006@antispam.it> References: <534A479B.2000006@antispam.it> Message-ID: > > I cannot understand what you want to accomplish with your vcl, but in your > conditions a classic configuration is: > - F5 balance the vip on the two cache > - the cache balance on the 3 web servers as backend Hello, Thanks for your input. That was exactly what needed to confirm what I was thinking we'd ought to do. 
I'm going to go ahead and recommend that we take the web servers out of the VIP pool and instead point the VIP at the two varnish cache nodes. I'm thinking we'll need a heartbeat established between the two (something like keepalived) to enable the failover, so that each node can assume the identity of the VIP.

But I'm sorry if what I posted from my config was unclear. All I am really still curious about at this point is whether I should put this section on my first node:

if (req.restarts == 0) {
    if (client.ip == "10.10.40.8" || client.ip == "10.10.40.9") {
        set req.backend = www;
    } elsif (server.ip == "10.10.40.8") {
        set req.backend = varnish2;
    } else {
        set req.backend = varnish1;
    }
} elsif (req.restarts >= 2) {
    return (pass);

And this configuration on the second node:

if (client.ip == "10.10.40.8" || client.ip == "10.10.40.9") {
    set req.backend = www;
} elsif (server.ip == "10.10.40.9") {
    set req.backend = varnish1;
} else {
    set req.backend = varnish2;
}

And I am wondering what the purpose of that stanza is. In the config I am inheriting, the first version of the stanza that I show (the one on top) is present in exactly the same way on both varnish nodes. However, I am thinking it needs to be varied from one machine to the other, the way I am demonstrating here. In the demo I'm showing, the first varnish node is 10.10.40.8 and the second varnish node is 10.10.40.9. I'd really love to be clear on what that stanza is trying to accomplish.

Here's the full config to provide some context.
backend web1 {
    .host = "10.10.40.42";
    .port = "80";
    .connect_timeout = 45s;
    .first_byte_timeout = 45s;
    .between_bytes_timeout = 45s;
    .max_connections = 70;
    .probe = {
        .url = "/healthcheck.php";
        .timeout = 5s;
        .interval = 30s;
        .window = 10;
        .threshold = 1;
    }
}

backend web2 {
    .host = "10.10.40.10";
    .port = "80";
    .connect_timeout = 45s;
    .first_byte_timeout = 45s;
    .between_bytes_timeout = 45s;
    .max_connections = 70;
    .probe = {
        .url = "/healthcheck.php";
        .timeout = 5s;
        .interval = 30s;
        .window = 10;
        .threshold = 1;
    }
}

backend web3 {
    .host = "10.10.40.11";
    .port = "80";
    .connect_timeout = 45s;
    .first_byte_timeout = 45s;
    .between_bytes_timeout = 45s;
    .max_connections = 70;
    .probe = {
        .url = "/healthcheck.php";
        .timeout = 5s;
        .interval = 30s;
        .window = 10;
        .threshold = 1;
    }
}

backend web4 {
    .host = "10.10.40.12";
    .port = "80";
    .connect_timeout = 45s;
    .first_byte_timeout = 45s;
    .between_bytes_timeout = 45s;
    .max_connections = 70;
    .probe = {
        .url = "/healthcheck.php";
        .timeout = 5s;
        .interval = 30s;
        .window = 10;
        .threshold = 1;
    }
}

acl purge {
    "localhost";
    "127.0.0.1";
    "10.10.40.8";
    "10.10.40.9";
}

director www round-robin {
    { .backend = web1; }
    { .backend = web2; }
    { .backend = web3; }
    { .backend = web4; }
}

sub vcl_recv {
    set req.backend = www;
    set req.grace = 6h;
    if (!req.backend.healthy) {
        set req.grace = 24h;
    }

    set req.http.X-Forwarded-For = req.http.X-Forwarded-For ", " client.ip;

    if (req.http.host ~ "^origin\.(.+\.|)my_site_tv\.com$") {
        return (pass);
    }
    if (req.http.host ~ ".*\.my_site_tv.com|my_site_tv.com") {
        /* allow (origin.)stage.m.my_site_tv.com to be a separate host */
        if (req.http.host != "stage.m.my_site_tv.com") {
            set req.http.host = "stage.my_site_tv.com";
        }
    } else {
        return (pass);
    }

    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }

    if (req.request != "GET" && req.request != "HEAD" &&
        req.request != "PUT" && req.request != "POST" &&
        req.request != "TRACE" && req.request != "OPTIONS" &&
        req.request != "DELETE") {
        return (pipe);
    }
    if (req.request != "GET" && req.request != "HEAD") {
        return (pass);
    }

    if (req.url ~ "sites/all/modules/custom/my_site__ad/ads.html\?.*") {
        set req.url = "/sites/all/modules/custom/my_site__ad/ads.html";
    }
    if (req.url ~ "eyeblaster/addineyeV2.html\?.*") {
        set req.url = "/eyeblaster/addineyeV2.html";
    }
    if (req.url ~ "ahah_helper\.php|my_site__points\.php|install\.php|update\.php|cron\.php|/json(:?\?.*)?$") {
        return (pass);
    }
    if (req.http.Authorization) {
        return (pass);
    }
    if (req.url ~ "login" || req.url ~ "logout") {
        return (pass);
    }
    if (req.url ~ "^/admin/" || req.url ~ "^/node/add/") {
        return (pass);
    }
    if (req.http.Cache-Control ~ "no-cache") {
        // return (pass);
    }

    if (req.http.Cookie ~ "(VARNISH|DRUPAL_UID|LOGGED_IN|SESS|_twitter_sess)") {
        set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");
        set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
    } else {
        unset req.http.Cookie;
    }

    /* removed varnish cache backend logic */
    if (req.restarts == 0) {
        set req.backend = www;
    } elsif (req.restarts >= 2) {
        return (pass);
    }

    if (req.url ~ "\.(ico|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|ICO|JPG|JPEG|PNG|GIF|GZ|TGZ|BZ2|TBZ|MP3|OGG|SWF)") {
        unset req.http.Accept-Encoding;
    }
    if (req.url ~ "^/(sites/all/modules/my_site_tv_admanager/includes/ads.php|doubleclick/DARTIframe.html)(\?.*|)$") {
        set req.url = regsub(req.url, "\?.*$", "");
    }
    if (req.http.Accept-Encoding ~ "gzip") {
        set req.http.Accept-Encoding = "gzip";
    } elsif (req.http.Accept-Encoding ~ "deflate") {
        set req.http.Accept-Encoding = "deflate";
    } else {
        unset req.http.Accept-Encoding;
    }
    return (lookup);
}

sub vcl_pipe {
    set bereq.http.connection = "close";
    return (pipe);
}

sub vcl_pass {
    return (pass);
}

sub vcl_hash {
    set req.hash += req.url;
    set req.hash += req.http.host;
    if (req.http.Cookie ~ "VARNISH|DRUPAL_UID|LOGGED_IN") {
        set req.hash += req.http.Cookie;
    }
    return (hash);
}

sub vcl_hit {
    if (req.request == "PURGE") {
        set obj.ttl = 0s;
        error 200 "Purged.";
    }
}

sub vcl_fetch {
    if (beresp.status == 500) {
        set req.http.X-Varnish-Error = "1";
        restart;
    }
    set beresp.grace = 6h;

    # Set a short-circuit cache lifetime for resp codes above 302
    if (beresp.status > 302) {
        set beresp.ttl = 60s;
        set beresp.http.Cache-Control = "max-age=60";
    }
    if (beresp.http.Edge-control ~ "no-store") {
        set beresp.http.storage = "1";
        set beresp.cacheable = false;
        return (pass);
    }
    if (beresp.status >= 300 || !beresp.cacheable) {
        set beresp.http.Varnish-X-Cacheable = "Not Cacheable";
        set beresp.http.storage = "1";
        return (pass);
    }
    if (beresp.http.Set-Cookie) {
        return (pass);
    }
    if (beresp.cacheable) {
        unset beresp.http.expires;
        set beresp.ttl = 600s;
        set beresp.http.Cache-Control = "max-age=600";
        if (req.url ~ "\.(ico|jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|ICO|JPG|JPEG|PNG|GIF|GZ|TGZ|BZ2|TBZ|MP3|OGG|SWF)") {
            set beresp.ttl = 43829m;
            set beresp.http.Cache-Control = "max-age=1000000";
        }
    }
    return (deliver);
}

sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.Varnish-X-Cache = "HIT";
        set resp.http.Varnish-X-Cache-Hits = obj.hits;
    } else {
        set resp.http.Varnish-X-Cache = "MISS";
    }
    return (deliver);
}

sub vcl_error {
    if (req.restarts == 0) {
        return (restart);
    }
    if (req.http.X-Varnish-Error != "1") {
        set req.http.X-Varnish-Error = "1";
        return (restart);
    }
    set obj.http.Content-Type = "text/html; charset=utf-8";
    synthetic {" my_site_tv.com "} obj.status " " obj.response {"
"};
    return (deliver);
}

Thanks
Tim

On Sun, Apr 13, 2014 at 4:15 AM, emilio brambilla wrote:

> hello,
>
> On 2014/04/12 22:00, Tim Dunphy wrote:
>
>> So if the load balancer is handling all the traffic into the site and
>> the caching hosts are not referenced in the load balancer, don't things
>> need to be structured differently in order for the site to benefit from the
>> acceleration they are trying to use?
>
> I cannot understand what you want to accomplish with your vcl, but in your
> conditions a classic configuration is:
>
> - F5 balance the vip on the two cache
> - the cache balance on the 3 web servers as backend
>
> --
> bye,
> emilio
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

--
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From james at ifixit.com Mon Apr 14 19:12:32 2014
From: james at ifixit.com (James Pearson)
Date: Mon, 14 Apr 2014 12:12:32 -0700
Subject: restarting varnish
In-Reply-To: 
References: 
Message-ID: <1397502598-sup-3002@geror.local>

Excerpts from Roy Forster's message of 2014-04-11 16:18:39 -0700:
> Running varnish cache cli gives me child(xxx) said child start.
> But when I try "varnish service restart" it fails.

Do you mean `service varnish restart`? service(8) is a wrapper around your system's initscripts (likely in /etc/init.d), and is generally the preferred way of starting and stopping services.

Sometimes, depending on the initscript, I find myself explicitly running the stop and start commands.

What *nix distro are you using?

- P

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: 

From emilio at antispam.it Mon Apr 14 20:09:23 2014
From: emilio at antispam.it (emilio brambilla)
Date: Mon, 14 Apr 2014 22:09:23 +0200
Subject: best node placement for varnish accelaration
In-Reply-To: 
References: <534A479B.2000006@antispam.it>
Message-ID: <534C4073.9050807@antispam.it>

hello,

On 2014/04/14 03:21, Tim Dunphy wrote:
>
> Thanks for your input. That was exactly what I needed to confirm what I
> was thinking we ought to do. I'm going to go ahead and recommend
> that we take the web servers out of the vip pool and instead point the
> vip at the two varnish cache nodes. I'm thinking we'll need a
> heartbeat established between the two (something like keepalived) to
> enable the failover so that each node can assume the identity of the
> VIP ip.
>
you have at least two ways:

- active-active, with the F5 balancing the two varnish nodes (the varnish nodes then balance the 4 web servers directly as backends, without the F5)
- active-standby, without the F5 and with keepalived on the two varnish nodes (I have some deployments like this where I also put the two varnish nodes, with keepalived for the HA, outside the firewall, since the firewall may be one of the bottlenecks on high-traffic sites)

of course the active-standby version with keepalived will NOT use the F5 balancer, neither in the frontend nor in the backend.

>
> All I am really still curious about at this point is whether I should
> post this section on my first node:
>
> if (req.restarts == 0) {
>     if (client.ip == "10.10.40.8" || client.ip == "10.10.40.9") {
>         set req.backend = www;
>     } elsif (server.ip == "10.10.40.8") {
>         set req.backend = varnish2;
>     } else {
>         set req.backend = varnish1;
>     }
> } elsif (req.restarts >= 2) {
>     return (pass);

I really cannot understand this...
you will have the same VCL on both varnish nodes, and you will have 4 backends (the 4 web servers) on them. The F5 VIP and the cache IPs should NOT be backends in your VCL.

From guery.b at gmail.com Tue Apr 15 08:24:54 2014
From: guery.b at gmail.com (Boris Guéry)
Date: Tue, 15 Apr 2014 10:24:54 +0200
Subject: restarting varnish
In-Reply-To: <1397502598-sup-3002@geror.local>
References: <1397502598-sup-3002@geror.local>
Message-ID: <534CECD6.6070007@gmail.com>

Try running `sh -vx /etc/init.d/varnish restart`, assuming you're using a Debian-based distro. It will verbosely run your init.d service script.

Le 14/04/14 21:12, James Pearson a écrit :
> Excerpts from Roy Forster's message of 2014-04-11 16:18:39 -0700:
>> Running varnish cache cli gives me child(xxx) said child start.
>> But when I try "varnish service restart" it fails.
> Do you mean `service varnish restart`? service(8) is a wrapper around your
> system's initscripts (likely in /etc/init.d), and is generally the preferred
> way of starting and stopping services.
>
> Sometimes, depending on the initscript, I find myself explicitly running the
> stop and start commands.
>
> What *nix distro are you using?
> - P
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

--
Boris Guéry
twitter: @borisguery
skype: borisguery
mobile: +33 6 86 83 03 12
pgp: 0x034C6265

-------------- next part --------------
An HTML attachment was scrubbed...
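The "same VCL on both nodes" layout recommended earlier in this thread can be sketched in VCL. This is a minimal illustration only (Varnish 2/3 syntax; the backend names, ports, and the omission of probes are assumptions on my part, while the IPs are the ones used in the thread):

```vcl
# Identical file on both varnish nodes (10.10.40.8 and 10.10.40.9).
# Only the web servers appear as backends; the F5 VIP (10.10.40.42)
# now points at the varnish nodes and is never referenced here.

backend web1 { .host = "10.10.40.10"; .port = "80"; }
backend web2 { .host = "10.10.40.11"; .port = "80"; }
backend web3 { .host = "10.10.40.12"; .port = "80"; }

director www round-robin {
    { .backend = web1; }
    { .backend = web2; }
    { .backend = web3; }
}

sub vcl_recv {
    # Every request goes straight to the web-server pool;
    # no varnish-to-varnish hops, so no server.ip/client.ip stanza needed.
    set req.backend = www;
}
```

With this shape, the stanza that routes between varnish1/varnish2 simply disappears, which is what makes the identical-VCL deployment possible.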
URL: 

From bluethundr at gmail.com Tue Apr 15 14:01:10 2014
From: bluethundr at gmail.com (Tim Dunphy)
Date: Tue, 15 Apr 2014 10:01:10 -0400
Subject: best node placement for varnish accelaration
In-Reply-To: <534C4073.9050807@antispam.it>
References: <534A479B.2000006@antispam.it> <534C4073.9050807@antispam.it>
Message-ID: 

Hi Emilio,

> you have at least two ways:
> - active-active with the F5 balancing the varnish nodes (then the varnish
> nodes will balance the 4 web servers directly as backends without the F5)
> - active-standby without the F5 and with keepalived on the two varnish
> nodes (I have some deployments like this where I also put the two varnish
> nodes, with keepalived for the HA, outside the firewall; the firewall may be
> one of the bottlenecks on high-traffic sites)

Thank you, this is great clarification. So it looks like whichever way we go, we still need to assign the VIP (10.10.40.42) to the two Varnish nodes, not the web servers, and in either case have Varnish load balance the web servers. Whether we want to set up active/standby with keepalived or active/active with the F5 is something I'll have to talk over with the team.

However, in the config I've inherited from the other datacenter, as I demonstrated, the first IP in that config is the VIP. That's what gave me the idea that the load balancer needs to be looking at the Varnish nodes and not the web servers. But what you're saying is that whichever way we go, we shouldn't define the VIP anywhere in the VCL?? That's really the only question I still have. Otherwise I think I have a clear idea of how we should go about this.

Thanks!
Tim

On Mon, Apr 14, 2014 at 4:09 PM, emilio brambilla wrote:

> hello,
>
> On 2014/04/14 03:21, Tim Dunphy wrote:
>
>> Thanks for your input. That was exactly what I needed to confirm what I
>> was thinking we ought to do.
I'm going to go ahead and recommend that we >> take the web servers out of the vip pool and instead point the vip at the >> two varnish cache nodes. I'm thinking we'll need a heartbeat established >> between the two (something like keepalived) to enable the failover so that >> each node can assume the identity of the VIP ip. >> >> you have at least two way: > - active-active with the F5 balancing the varnish node (then the varnish > will balance the 4 web server directly as backends without the F5) > - active-standby without the F5 and with keepalived on the two varnish > node (I have some deployments like this where I also put the two varnish > node, with keepalived for the HA, outside the firewall (the firewall may be > one of the bottleneck on hi traffic sites) > > of course the active-standby version with keepalived will NOT use the F5 > balancer neither in the frontend nor in the backend. > > >> All I am really still curious about at this point is whether I should >> post this section on my first node: >> >> if (req.restarts == 0) { >> if (client.ip == "10.10.40.8" || client.ip == "10.10.40.9") { >> set req.backend = www; >> } elsif (server.ip == "10.10.40.8") { >> set req.backend = varnish2; >> } else { >> set req.backend = varnish1; >> } >> } elsif (req.restarts >= 2) { >> return (pass); >> > I really cannot undestand this... you will have the same vcl on both the > varnish nodes, and you will have 4 backend (the 4 web server) on them the > F5 vip and the cache ip shoud NOT be backend on your vcl > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- GPG me!! gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From lcruzero at gmail.com Wed Apr 16 12:37:58 2014
From: lcruzero at gmail.com (L Cruzero)
Date: Wed, 16 Apr 2014 08:37:58 -0400
Subject: intermittent 503's
Message-ID: 

Hi, in our varnish setup (varnish-3.0.3-1.el5), using a load-balanced, genuinely healthy VIP as the only backend, we seem to be getting intermittent 503's.

stdout captured with varnishlog -c -m TxStatus:503

   35 ReqStart     c 10.33.13.254 24582 1166630723
   35 RxRequest    c POST
   35 RxURL        c /index.php?option=com_jce&task=plugin&plugin=imgmanager&file=imgmanager&method=form&cid=20&6bc427c8a7981f4fe1f5ac65c1246b5f=cf6dd3cf1923c950586d0dd595c8e20b
   35 RxProtocol   c HTTP/1.1
   35 RxHeader     c User-Agent: BOT/0.1 (BOT for JCE)
   35 RxHeader     c Content-Type: multipart/form-data; boundary=---------------------------41184676334
   35 RxHeader     c Content-Length: 5000
   35 RxHeader     c True-Client-IP: 202.80.119.178
   35 RxHeader     c X-Akamai-CONFIG-LOG-DETAIL: true
   35 RxHeader     c TE: chunked;q=1.0
   35 RxHeader     c Connection: TE
   35 RxHeader     c Akamai-Origin-Hop: 2
   35 RxHeader     c Via: 1.1 v1-akamaitech.net(ghost) (AkamaiGHost), 1.1 akamai.net(ghost) (AkamaiGHost)
   35 RxHeader     c X-Forwarded-For: 202.80.119.178, 114.4.39.206
   35 RxHeader     c Host: www.somedomain.com
   35 RxHeader     c Cache-Control: max-age=120
   35 RxHeader     c Connection: keep-alive
   35 RxHeader     c X-Forwarded-For: 23.73.180.223
   35 RxHeader     c Accept-Encoding: identity
   35 VCL_call     c recv pass
   35 VCL_call     c hash
   35 Hash         c /index.php?option=com_jce&task=plugin&plugin=imgmanager&file=imgmanager&method=form&cid=20&6bc427c8a7981f4fe1f5ac65c1246b5f=cf6dd3cf1923c950586d0dd595c8e20b
   35 Hash         c www.somedomain.com
   35 VCL_return   c hash
   35 VCL_call     c pass pass
   35 Backend      c 29 www_prod www_prod
   35 FetchError   c backend write error: 104 (Connection reset by peer)
   35 Backend      c 42 www_prod www_prod
   35 FetchError   c backend write error: 104 (Connection reset by peer)
   35 VCL_call     c error deliver
   35 VCL_call     c deliver deliver
   35 TxProtocol   c HTTP/1.1
   35 TxStatus     c 503
   35 TxResponse   c Service Unavailable
   35 TxHeader     c Server: Varnish
   35 TxHeader     c Content-Type: text/html; charset=utf-8
   35 TxHeader     c Retry-After: 5
   35 TxHeader     c Content-Length: 419
   35 TxHeader     c Accept-Ranges: bytes
   35 TxHeader     c Date: Tue, 15 Apr 2014 19:31:22 GMT
   35 TxHeader     c X-Varnish: 1166630723
   35 TxHeader     c Age: 0
   35 TxHeader     c Via: 1.1 varnish
   35 TxHeader     c Connection: close
   35 Length       c 419
   35 ReqEnd       c 1166630723 1397590282.467632055 1397590282.899372101 0.033779144 0.431685925 0.0000541

here is the super simple pass-through, no-cache config:

backend www_prod {
    .host = "cs-****.*****.*****.net";
    .port = "80";
    .probe = {
        .url = "/";
        .timeout = 5s;
        .interval = 1s;
        .window = 5;
        .threshold = 2;
    }
}

sub vcl_recv {
    if (req.http.X-ADI-MISS) {
        # Force a cache miss
        set req.hash_always_miss = true;
    }

    if (req.url == "/varnish-health/" || req.url ~ "^/stack-check*") {
        error 200 "Varnish is responding";
        set req.http.Connection = "close";
    }

    if (req.http.host ~ "^(origin|www)") {
        set req.http.host = regsub(req.http.host, "^origin\.", "www.");
        set req.backend = www_prod;
        return(pass);
    }

    ## we tried with and w/o these conditions; the intermittent 503's persisted
    # if (req.backend.healthy) {
    #     set req.grace = 30s;
    # } else {
    #     set req.grace = 24h;
    # }
}

sub vcl_fetch {
    # set beresp.do_esi = true;
    set beresp.ttl = 1m;
    # set beresp.grace = 24h;
}

Any helpful thoughts and/or possible leads on this will be much appreciated.

kind regards,

-LC

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From raymond.jennings at nytimes.com Wed Apr 16 20:12:45 2014
From: raymond.jennings at nytimes.com (Jennings, Raymond)
Date: Wed, 16 Apr 2014 16:12:45 -0400
Subject: intermittent 503's
In-Reply-To: 
References: 
Message-ID: 

I can't give a definitive answer, but I have seen cases where Varnish times out before Apache finishes the response. I see this when a request causes a heavy database transaction on the backend. Does looking at your backend servers provide any more details?
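If the 503's do turn out to be Varnish giving up before Apache finishes, the relevant knobs are the per-backend timeouts. This is a sketch only (Varnish 3.x syntax; the hostname is a placeholder for the redacted one in this thread, and the values are illustrative, not recommendations):

```vcl
backend www_prod {
    .host = "backend.example.net";     # placeholder for the redacted LB hostname
    .port = "80";
    .connect_timeout = 5s;             # time allowed to establish the TCP connection
    .first_byte_timeout = 300s;        # time allowed until the first response byte arrives
    .between_bytes_timeout = 60s;      # longest allowed pause mid-response
}
```

Raising first_byte_timeout is the usual fix when a slow backend transaction is the cause; it would not help if the error is a connection reset rather than a timeout.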
On Wed, Apr 16, 2014 at 8:37 AM, L Cruzero wrote: > Hi, in our varnish setup(varnish-3.0.3-1.el5) using a loadbalanced really > Healthy VIP as the only backend, we seem to be getting intermittent 503's > > > stdout captured with varnishlog -c -m TxStatus:503 > > 35 ReqStart c 10.33.13.254 24582 1166630723 > 35 RxRequest c POST > 35 RxURL c > /index.php?option=com_jce&task=plugin&plugin=imgmanager&file=imgmanager&method=form&cid=20&6bc427c8a7981f4fe1f5ac65c1246b5f=cf6dd3cf1923c950586d0dd595c8e20b > 35 RxProtocol c HTTP/1.1 > 35 RxHeader c User-Agent: BOT/0.1 (BOT for JCE) > 35 RxHeader c Content-Type: multipart/form-data; > boundary=---------------------------41184676334 > 35 RxHeader c Content-Length: 5000 > 35 RxHeader c True-Client-IP: 202.80.119.178 > 35 RxHeader c X-Akamai-CONFIG-LOG-DETAIL: true > 35 RxHeader c TE: chunked;q=1.0 > 35 RxHeader c Connection: TE > 35 RxHeader c Akamai-Origin-Hop: 2 > 35 RxHeader c Via: 1.1 v1-akamaitech.net(ghost) (AkamaiGHost), 1.1 > akamai.net(ghost) (AkamaiGHost) > 35 RxHeader c X-Forwarded-For: 202.80.119.178, 114.4.39.206 > 35 RxHeader c Host: www.somedomain.com > 35 RxHeader c Cache-Control: max-age=120 > 35 RxHeader c Connection: keep-alive > 35 RxHeader c X-Forwarded-For: 23.73.180.223 > 35 RxHeader c Accept-Encoding: identity > 35 VCL_call c recv pass > 35 VCL_call c hash > 35 Hash c > /index.php?option=com_jce&task=plugin&plugin=imgmanager&file=imgmanager&method=form&cid=20&6bc427c8a7981f4fe1f5ac65c1246b5f=cf6dd3cf1923c950586d0dd595c8e20b > 35 Hash c www.somedomain.com > 35 VCL_return c hash > 35 VCL_call c pass pass > 35 Backend c 29 www_prod www_prod > 35 FetchError c backend write error: 104 (Connection reset by peer) > 35 Backend c 42 www_prod www_prod > 35 FetchError c backend write error: 104 (Connection reset by peer) > 35 VCL_call c error deliver > 35 VCL_call c deliver deliver > 35 TxProtocol c HTTP/1.1 > 35 TxStatus c 503 > 35 TxResponse c Service Unavailable > 35 TxHeader c Server: Varnish > 35 TxHeader c 
Content-Type: text/html; charset=utf-8 > 35 TxHeader c Retry-After: 5 > 35 TxHeader c Content-Length: 419 > 35 TxHeader c Accept-Ranges: bytes > 35 TxHeader c Date: Tue, 15 Apr 2014 19:31:22 GMT > 35 TxHeader c X-Varnish: 1166630723 > 35 TxHeader c Age: 0 > 35 TxHeader c Via: 1.1 varnish > 35 TxHeader c Connection: close > 35 Length c 419 > 35 ReqEnd c 1166630723 1397590282.467632055 1397590282.899372101 > 0.033779144 0.431685925 0.0000541 > > > here is super simple pass thru NO cache config : > > > backend www_prod { > .host = "cs-****.*****.*****.net"; > .port = "80"; > .probe = { > .url = "/"; > .timeout = 5s; > .interval = 1s; > .window = 5; > .threshold = 2; > } > } > > sub vcl_recv { > if (req.http.X-ADI-MISS) { > # Force a cache miss > set req.hash_always_miss = true; > } > > if (req.url == "/varnish-health/" || req.url ~ "^/stack-check*") { > error 200 "Varnish is responding"; > set req.http.Connection = "close"; > } > > > if (req.http.host ~ "^(origin|www)") { > set req.http.host = regsub(req.http.host, "^origin\.", "www."); > set req.backend = www_prod; > return(pass); > > } > > > ## we tried with and w/o these conditions intermittent 503's persisted > > # if (req.backend.healthy) { > # set req.grace = 30s; > # } else { > # set req.grace = 24h; > # } > > } > > > sub vcl_fetch { > # set beresp.do_esi = true; > set beresp.ttl = 1m; > > # set beresp.grace = 24h; > } > > > > Any helpful thoughts and or possible leads on this will be much appreciated. > > > kind regards, > > > -LC > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From guery.b at gmail.com Wed Apr 16 20:36:28 2014 From: guery.b at gmail.com (=?ISO-8859-1?Q?Boris_Gu=E9ry?=) Date: Wed, 16 Apr 2014 22:36:28 +0200 Subject: intermittent 503's In-Reply-To: References: Message-ID: <534EE9CC.8070004@gmail.com> Le 16/04/2014 22:12, Jennings, Raymond a ?crit : > I can't say with any definitive response but I have seen cases > where Varnish times out before Apache finishes the response. I see > this when there is a request that causes a heavy database > transaction on the backend. Does looking at your backend servers > provide any more details? > Well I don't think it is some sort of timeout, the `backend write error: 104 (Connection reset by peer)` error is straight forward. Still, that's probably on the backend side. Try to check your apache log and processes through `mod_status` to make sure you're not running out of processes/threads, whatever you use. > > On Wed, Apr 16, 2014 at 8:37 AM, L Cruzero > wrote: > > Hi, in our varnish setup(varnish-3.0.3-1.el5) using a > loadbalanced really Healthy VIP as the only backend, we seem to be > getting intermittent 503's > > > stdout captured with varnishlog -c -m TxStatus:503 > > 35 ReqStart c 10.33.13.254 24582 1166630723 35 RxRequest c > POST 35 RxURL c > /index.php?option=com_jce&task=plugin&plugin=imgmanager&file=imgmanager&method=form&cid=20&6bc427c8a7981f4fe1f5ac65c1246b5f=cf6dd3cf1923c950586d0dd595c8e20b > > 35 RxProtocol c HTTP/1.1 > 35 RxHeader c User-Agent: BOT/0.1 (BOT for JCE) 35 RxHeader > c Content-Type: multipart/form-data; > boundary=---------------------------41184676334 35 RxHeader c > Content-Length: 5000 35 RxHeader c True-Client-IP: > 202.80.119.178 35 RxHeader c X-Akamai-CONFIG-LOG-DETAIL: true > 35 RxHeader c TE: chunked;q=1.0 35 RxHeader c Connection: > TE 35 RxHeader c Akamai-Origin-Hop: 2 35 RxHeader c Via: > 1.1 v1-akamaitech.net (ghost) > (AkamaiGHost), 1.1 akamai.net (ghost) > (AkamaiGHost) 35 RxHeader c X-Forwarded-For: 202.80.119.178, > 114.4.39.206 35 
RxHeader c Host: www.somedomain.com > 35 RxHeader c Cache-Control: > max-age=120 35 RxHeader c Connection: keep-alive 35 RxHeader > c X-Forwarded-For: 23.73.180.223 35 RxHeader c Accept-Encoding: > identity 35 VCL_call c recv pass 35 VCL_call c hash 35 Hash > c > /index.php?option=com_jce&task=plugin&plugin=imgmanager&file=imgmanager&method=form&cid=20&6bc427c8a7981f4fe1f5ac65c1246b5f=cf6dd3cf1923c950586d0dd595c8e20b > > 35 Hash c www.somedomain.com > 35 VCL_return c hash 35 VCL_call c pass pass 35 Backend > c 29 www_prod www_prod 35 FetchError c backend write error: 104 > (Connection reset by peer) 35 Backend c 42 www_prod www_prod > 35 FetchError c backend write error: 104 (Connection reset by > peer) 35 VCL_call c error deliver 35 VCL_call c deliver > deliver 35 TxProtocol c HTTP/1.1 35 TxStatus c 503 35 > TxResponse c Service Unavailable 35 TxHeader c Server: > Varnish 35 TxHeader c Content-Type: text/html; charset=utf-8 35 > TxHeader c Retry-After: 5 35 TxHeader c Content-Length: > 419 35 TxHeader c Accept-Ranges: bytes 35 TxHeader c Date: > Tue, 15 Apr 2014 19:31:22 GMT 35 TxHeader c X-Varnish: > 1166630723 35 TxHeader c Age: 0 35 TxHeader c Via: 1.1 > varnish 35 TxHeader c Connection: close 35 Length c 419 > 35 ReqEnd c 1166630723 1397590282.467632055 > 1397590282.899372101 0.033779144 0.431685925 0 > .0000541 > > > here is super simple pass thru NO cache config : > > > backend www_prod { .host = "cs-****.*****.*****.net"; .port = > "80"; .probe = { .url = "/"; .timeout = 5s; .interval = 1s; .window > = 5; .threshold = 2; } } > > sub vcl_recv { if (req.http.X-ADI-MISS) { # Force a cache miss set > req.hash_always_miss = true; } > > if (req.url == "/varnish-health/" || req.url ~ "^/stack-check*") { > error 200 "Varnish is responding"; set req.http.Connection = > "close"; } > > > if (req.http.host ~ "^(origin|www)") { set req.http.host = > regsub(req.http.host, "^origin\.", "www."); set req.backend = > www_prod; return(pass); > > } > > > ## we tried with and 
w/o these conditions intermittent 503's > persisted > > # if (req.backend.healthy) { # set req.grace = 30s; # > } else { # set req.grace = 24h; # } > > } > > > sub vcl_fetch { # set beresp.do_esi = true; set > beresp.ttl = 1m; > > # set beresp.grace = 24h; } > > > > Any helpful thoughts and or possible leads on this will be much > appreciated. > > > kind regards, > > > -LC > > > _______________________________________________ varnish-misc > mailing list varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > _______________________________________________ varnish-misc > mailing list varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Boris Guéry mobile: +33 6 86 83 03 12 skype: borisguery pgp: 0x034C6265 From geoff at uplex.de Thu Apr 17 05:46:21 2014 From: geoff at uplex.de (Geoffrey Simmons) Date: Thu, 17 Apr 2014 17:46:21 +1200 Subject: intermittent 503's In-Reply-To: References: Message-ID: > 35 FetchError c backend write error: 104 (Connection reset by peer) "backend write error" in versions up to at least 3.0.3 can be a very misleading error message -- you may in fact have had a connection reset on the *client* connection while reading the request body of your POST request. In my experience, that's much more common than an error writing to the backend connection. Sending the request body to the backend is part of the fetch operation, which is why an error in that phase is logged as a FetchError. Varnish reads the body from the client and writes it to the backend in a loop, so if any error occurs then, either on the client or backend side, it's logged as "FetchError: backend write error". There was a fix that made the error message distinguish the problem more clearly, might have made it into 3.0.4. Anyway if I were you I'd look for a connection reset on a read of the POST body from the client connection. 
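The read/write loop described above means a fetch failure of either kind surfaces as a synthetic 503. One low-effort way to make such failures easier to pick out of access logs is to tag them in VCL. A minimal sketch for Varnish 3.x; the X-Fetch-Failed header name is an invented marker for illustration, not a standard header:

```vcl
sub vcl_error {
    # A failed fetch -- including an error while relaying a POST body
    # from client to backend -- lands here as a synthetic 503.
    # Tag it so these responses can be filtered in varnishncsa output.
    if (obj.status == 503) {
        set obj.http.X-Fetch-Failed = "1";
    }
    # No return here, so the built-in vcl_error still renders
    # the default error page.
}
```

With this in place, `varnishncsa` can log the marker via `%{X-Fetch-Failed}o` and the synthetic 503s become countable separately from backend-generated ones.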
HTH, Geoff Sent from my iPad > On Apr 17, 2014, at 12:37 AM, L Cruzero wrote: > > Hi, in our varnish setup(varnish-3.0.3-1.el5) using a loadbalanced really Healthy VIP as the only backend, we seem to be getting intermittent 503's > > > stdout captured with varnishlog -c -m TxStatus:503 > > 35 ReqStart c 10.33.13.254 24582 1166630723 > 35 RxRequest c POST > 35 RxURL c /index.php?option=com_jce&task=plugin&plugin=imgmanager&file=imgmanager&method=form&cid=20&6bc427c8a7981f4fe1f5ac65c1246b5f=cf6dd3cf1923c950586d0dd595c8e20b > 35 RxProtocol c HTTP/1.1 > 35 RxHeader c User-Agent: BOT/0.1 (BOT for JCE) > 35 RxHeader c Content-Type: multipart/form-data; boundary=---------------------------41184676334 > 35 RxHeader c Content-Length: 5000 > 35 RxHeader c True-Client-IP: 202.80.119.178 > 35 RxHeader c X-Akamai-CONFIG-LOG-DETAIL: true > 35 RxHeader c TE: chunked;q=1.0 > 35 RxHeader c Connection: TE > 35 RxHeader c Akamai-Origin-Hop: 2 > 35 RxHeader c Via: 1.1 v1-akamaitech.net(ghost) (AkamaiGHost), 1.1 akamai.net(ghost) (AkamaiGHost) > 35 RxHeader c X-Forwarded-For: 202.80.119.178, 114.4.39.206 > 35 RxHeader c Host: www.somedomain.com > 35 RxHeader c Cache-Control: max-age=120 > 35 RxHeader c Connection: keep-alive > 35 RxHeader c X-Forwarded-For: 23.73.180.223 > 35 RxHeader c Accept-Encoding: identity > 35 VCL_call c recv pass > 35 VCL_call c hash > 35 Hash c /index.php?option=com_jce&task=plugin&plugin=imgmanager&file=imgmanager&method=form&cid=20&6bc427c8a7981f4fe1f5ac65c1246b5f=cf6dd3cf1923c950586d0dd595c8e20b > 35 Hash c www.somedomain.com > 35 VCL_return c hash > 35 VCL_call c pass pass > 35 Backend c 29 www_prod www_prod > 35 FetchError c backend write error: 104 (Connection reset by peer) > 35 Backend c 42 www_prod www_prod > 35 FetchError c backend write error: 104 (Connection reset by peer) > 35 VCL_call c error deliver > 35 VCL_call c deliver deliver > 35 TxProtocol c HTTP/1.1 > 35 TxStatus c 503 > 35 TxResponse c Service Unavailable > 35 TxHeader c Server: Varnish 
> 35 TxHeader c Content-Type: text/html; charset=utf-8 > 35 TxHeader c Retry-After: 5 > 35 TxHeader c Content-Length: 419 > 35 TxHeader c Accept-Ranges: bytes > 35 TxHeader c Date: Tue, 15 Apr 2014 19:31:22 GMT > 35 TxHeader c X-Varnish: 1166630723 > 35 TxHeader c Age: 0 > 35 TxHeader c Via: 1.1 varnish > 35 TxHeader c Connection: close > 35 Length c 419 > 35 ReqEnd c 1166630723 1397590282.467632055 1397590282.899372101 0.033779144 0.431685925 0.0000541 > > > here is super simple pass thru NO cache config : > > > backend www_prod { > .host = "cs-****.*****.*****.net"; > .port = "80"; > .probe = { > .url = "/"; > .timeout = 5s; > .interval = 1s; > .window = 5; > .threshold = 2; > } > } > > sub vcl_recv { > if (req.http.X-ADI-MISS) { > # Force a cache miss > set req.hash_always_miss = true; > } > > if (req.url == "/varnish-health/" || req.url ~ "^/stack-check*") { > error 200 "Varnish is responding"; > set req.http.Connection = "close"; > } > > > if (req.http.host ~ "^(origin|www)") { > set req.http.host = regsub(req.http.host, "^origin\.", "www."); > set req.backend = www_prod; > return(pass); > > } > > > ## we tried with and w/o these conditions intermittent 503's persisted > > # if (req.backend.healthy) { > # set req.grace = 30s; > # } else { > # set req.grace = 24h; > # } > > } > > > sub vcl_fetch { > # set beresp.do_esi = true; > set beresp.ttl = 1m; > > # set beresp.grace = 24h; > } > > > Any helpful thoughts and or possible leads on this will be much appreciated. > kind regards, > -LC > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lcruzero at gmail.com Thu Apr 17 11:51:48 2014 From: lcruzero at gmail.com (L Cruzero) Date: Thu, 17 Apr 2014 07:51:48 -0400 Subject: varnish-misc Digest, Vol 97, Issue 16 In-Reply-To: References: Message-ID: Geoff, et al, thanks for your replies. I should have mentioned that the 503's are not unique to POST requests; they also happen, though very rarely (on 0.0038% of requests), on GETs. I've been thinking of testing something along the lines of this VCL snippet I found as a possible solution:

sub vcl_fetch {
    if (obj.status == 500 || obj.status == 503 || obj.status == 504) {
        return(restart);
    }
}

174 ReqStart c 10.33.13.254 38233 1505576793 174 RxRequest c GET 174 RxURL c /auburnfootball/modules/most_read.html 174 RxProtocol c HTTP/1.1 174 RxHeader c If-Modified-Since: Wed, 16 Apr 2014 14:23:34 GMT 174 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 174 RxHeader c Accept-Charset: utf-8, iso-8859-1, utf-16, *;q=0.7 174 RxHeader c Accept-Language: en-US 174 RxHeader c Cookie: adv_lid=04ea3e9b86834130c4cb3be359f7b220; OAX=RsRvNFEDhgkAAuO2; __qca=P0-1959933173-1359185426964; __atuvc=0%7C27%2C0%7C28%2C0%7C29%2C0%7C30%2C1%7C31; mp_948b359d7741fa246376d95399585cc3_mixpanel=%7B%7D; __utma=130511327.2000905843.1375913741.1375 174 RxHeader c User-Agent: Mozilla/5.0 (Linux; U; Android 4.1.2; en-us; SCH-I200 Build/JZO54K) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30 174 RxHeader c X-UIDH: MTQ5ODI0NDA2MAA/58mlyaG6h8c9ATsT9BtivrzyjC/Gl0mdhNCPRIOC1Q== 174 RxHeader c x-wap-profile: http://uaprof.vtext.com/sam/SCH-I200/SCH-I200.xml 174 RxHeader c X-Akamai-Esi-Fragment-Suffix: f.6 174 RxHeader c True-Client-IP: 127.0.0.1 174 RxHeader c X-Akamai-CONFIG-LOG-DETAIL: true 174 RxHeader c TE: chunked;q=1.0 174 RxHeader c Connection: TE 174 RxHeader c Akamai-Origin-Hop: 2 174 RxHeader c Via: 4.0 asi_server, 1.0 v1-akamaitech.net(ghost) (AkamaiGHost), 1.1 akamai.net(ghost) 
(AkamaiGHost) 174 RxHeader c Accept-ESI: 1.0 174 RxHeader c X-Forwarded-For: 127.0.0.1, 96.17.202.197 174 RxHeader c Host: www.al.com 174 RxHeader c Cache-Control: max-age=180 174 RxHeader c Connection: keep-alive 174 RxHeader c X-Forwarded-For: 23.73.180.223 174 RxHeader c Accept-Encoding: identity 174 VCL_call c recv pass 174 VCL_call c hash 174 Hash c /auburnfootball/modules/most_read.html 174 Hash c www.**.com 174 VCL_return c hash 174 VCL_call c pass pass 174 Backend c 101 www_prod www_prod 174 FetchError c http first read error: -1 104 (Connection reset by peer) 174 Backend c 56 www_prod www_prod 174 FetchError c http first read error: -1 104 (Connection reset by peer) 174 VCL_call c error deliver 174 VCL_call c deliver deliver 174 TxProtocol c HTTP/1.1 174 TxStatus c 503 174 TxResponse c Service Unavailable 174 TxHeader c Server: Varnish 174 TxHeader c Content-Type: text/html; charset=utf-8 174 TxHeader c Retry-After: 5 174 TxHeader c Content-Length: 419 174 TxHeader c Accept-Ranges: bytes 174 TxHeader c Date: Wed, 16 Apr 2014 16:30:29 GMT 174 TxHeader c X-Varnish: 1505576793 174 TxHeader c Age: 0 174 TxHeader c Via: 1.1 varnish 174 TxHeader c Connection: close 174 Length c 419 174 ReqEnd c 1505576793 1397665829.285193920 1397665829.298522949 0.022354841 0.013305187 0.000023842 On Thu, Apr 17, 2014 at 6:00 AM, wrote: > Send varnish-misc mailing list submissions to > varnish-misc at varnish-cache.org > > To subscribe or unsubscribe via the World Wide Web, visit > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > or, via email, send a message with subject or body 'help' to > varnish-misc-request at varnish-cache.org > > You can reach the person managing the list at > varnish-misc-owner at varnish-cache.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of varnish-misc digest..." > > > Today's Topics: > > 1. 
Re: intermittent 503's (Geoffrey Simmons) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Thu, 17 Apr 2014 17:46:21 +1200 > From: Geoffrey Simmons > To: L Cruzero > Cc: "varnish-misc at varnish-cache.org" > Subject: Re: intermittent 503's > Message-ID: > Content-Type: text/plain; charset="us-ascii" > > > 35 FetchError c backend write error: 104 (Connection reset by peer) > > "backend write error" in versions up to at least 3.0.3 can be a very > misleading error message -- you may in fact have had a connection reset on > the *client* connection while reading the request body of your POST > request. In my experience, that's much more common than an error writing to > the backend connection. > > Sending the request body to the backend is part of the fetch operation, > which is why an error in that phase is logged as a FetchError. Varnish > reads the body from the client and writes it to the backend in a loop, so > if any error occurs then, either on the client or backend side, it's logged > as "FetchError: backend write error". > > There was a fix that made the error message distinguish the problem more > clearly, might have made it into 3.0.4. > > Anyway if I were you I'd look for a connection reset on a read of the POST > body from the client connection. 
> > > HTH, > Geoff > > Sent from my iPad > > > On Apr 17, 2014, at 12:37 AM, L Cruzero wrote: > > > > Hi, in our varnish setup(varnish-3.0.3-1.el5) using a loadbalanced > really Healthy VIP as the only backend, we seem to be getting intermittent > 503's > > > > > > stdout captured with varnishlog -c -m TxStatus:503 > > > > 35 ReqStart c 10.33.13.254 24582 1166630723 > > 35 RxRequest c POST > > 35 RxURL c > /index.php?option=com_jce&task=plugin&plugin=imgmanager&file=imgmanager&method=form&cid=20&6bc427c8a7981f4fe1f5ac65c1246b5f=cf6dd3cf1923c950586d0dd595c8e20b > > 35 RxProtocol c HTTP/1.1 > > 35 RxHeader c User-Agent: BOT/0.1 (BOT for JCE) > > 35 RxHeader c Content-Type: multipart/form-data; > boundary=---------------------------41184676334 > > 35 RxHeader c Content-Length: 5000 > > 35 RxHeader c True-Client-IP: 202.80.119.178 > > 35 RxHeader c X-Akamai-CONFIG-LOG-DETAIL: true > > 35 RxHeader c TE: chunked;q=1.0 > > 35 RxHeader c Connection: TE > > 35 RxHeader c Akamai-Origin-Hop: 2 > > 35 RxHeader c Via: 1.1 v1-akamaitech.net(ghost) (AkamaiGHost), > 1.1 akamai.net(ghost) (AkamaiGHost) > > 35 RxHeader c X-Forwarded-For: 202.80.119.178, 114.4.39.206 > > 35 RxHeader c Host: www.somedomain.com > > 35 RxHeader c Cache-Control: max-age=120 > > 35 RxHeader c Connection: keep-alive > > 35 RxHeader c X-Forwarded-For: 23.73.180.223 > > 35 RxHeader c Accept-Encoding: identity > > 35 VCL_call c recv pass > > 35 VCL_call c hash > > 35 Hash c > /index.php?option=com_jce&task=plugin&plugin=imgmanager&file=imgmanager&method=form&cid=20&6bc427c8a7981f4fe1f5ac65c1246b5f=cf6dd3cf1923c950586d0dd595c8e20b > > 35 Hash c www.somedomain.com > > 35 VCL_return c hash > > 35 VCL_call c pass pass > > 35 Backend c 29 www_prod www_prod > > 35 FetchError c backend write error: 104 (Connection reset by peer) > > 35 Backend c 42 www_prod www_prod > > 35 FetchError c backend write error: 104 (Connection reset by peer) > > 35 VCL_call c error deliver > > 35 VCL_call c deliver deliver > > 35 
TxProtocol c HTTP/1.1 > > 35 TxStatus c 503 > > 35 TxResponse c Service Unavailable > > 35 TxHeader c Server: Varnish > > 35 TxHeader c Content-Type: text/html; charset=utf-8 > > 35 TxHeader c Retry-After: 5 > > 35 TxHeader c Content-Length: 419 > > 35 TxHeader c Accept-Ranges: bytes > > 35 TxHeader c Date: Tue, 15 Apr 2014 19:31:22 GMT > > 35 TxHeader c X-Varnish: 1166630723 > > 35 TxHeader c Age: 0 > > 35 TxHeader c Via: 1.1 varnish > > 35 TxHeader c Connection: close > > 35 Length c 419 > > 35 ReqEnd c 1166630723 1397590282.467632055 > 1397590282.899372101 0.033779144 0.431685925 0.0000541 > > > > > > here is super simple pass thru NO cache config : > > > > > > backend www_prod { > > .host = "cs-****.*****.*****.net"; > > .port = "80"; > > .probe = { > > .url = "/"; > > .timeout = 5s; > > .interval = 1s; > > .window = 5; > > .threshold = 2; > > } > > } > > > > sub vcl_recv { > > if (req.http.X-ADI-MISS) { > > # Force a cache miss > > set req.hash_always_miss = true; > > } > > > > if (req.url == "/varnish-health/" || req.url ~ "^/stack-check*") > { > > error 200 "Varnish is responding"; > > set req.http.Connection = "close"; > > } > > > > > > if (req.http.host ~ "^(origin|www)") { > > set req.http.host = regsub(req.http.host, "^origin\.", "www."); > > set req.backend = www_prod; > > return(pass); > > > > } > > > > > > ## we tried with and w/o these conditions intermittent 503's persisted > > > > # if (req.backend.healthy) { > > # set req.grace = 30s; > > # } else { > > # set req.grace = 24h; > > # } > > > > } > > > > > > sub vcl_fetch { > > # set beresp.do_esi = true; > > set beresp.ttl = 1m; > > > > # set beresp.grace = 24h; > > } > > > > > > Any helpful thoughts and or possible leads on this will be much > appreciated. 
> > kind regards, > > -LC > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > https://www.varnish-cache.org/lists/pipermail/varnish-misc/attachments/20140417/cf00e773/attachment-0001.html > > > > ------------------------------ > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > End of varnish-misc Digest, Vol 97, Issue 16 > ******************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Anton.Kornexl at Uni-Passau.De Thu Apr 17 14:38:58 2014 From: Anton.Kornexl at Uni-Passau.De (Anton Kornexl) Date: Thu, 17 Apr 2014 16:38:58 +0200 Subject: varnish cache gets randomly cleared Message-ID: <535003A2020000940007F07E@smtp1.gw.uni-passau.de> Hello, how do i find out why varnish cache gets cleared randomly. I have searched the varnishlog, but did not find a PURGE or BAN with a regular expression which would match all urls The varnish version is: varnishd (varnish-3.0.5 revision 1a89b1f) -- Kind regards Anton Kornexl Rechenzentrum Universität Passau Innstr. 33 D-94032 Passau Tel.: 0851/509-1812 Fax: 0851/509-1802 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Kornexl, Anton.vcf Type: application/octet-stream Size: 244 bytes Desc: not available URL: From raymond.jennings at nytimes.com Thu Apr 17 14:51:10 2014 From: raymond.jennings at nytimes.com (Jennings, Raymond) Date: Thu, 17 Apr 2014 10:51:10 -0400 Subject: varnish cache gets randomly cleared In-Reply-To: <535003A2020000940007F07E@smtp1.gw.uni-passau.de> References: <535003A2020000940007F07E@smtp1.gw.uni-passau.de> Message-ID: Also check your syslog to see if varnishd is restarting for some reason. I had a strange case where something external caused varnish to restart multiple times per day (and thereby clearing the cache.) On Thu, Apr 17, 2014 at 10:38 AM, Anton Kornexl wrote: > Hello, > > how do i find out why varnish cache gets cleared randomly. > > I have searched the varnishlog, but did not find a PURGE or BAN with a > regular expression which would match all urls > > The varnish version is: varnishd (varnish-3.0.5 revision 1a89b1f) > > -- > Kind regards > Anton Kornexl > > Rechenzentrum Universität Passau > Innstr. 33 > D-94032 Passau > Tel.: 0851/509-1812 > Fax: 0851/509-1802 > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi.boukelmoune at zenika.com Thu Apr 17 14:55:31 2014 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Thu, 17 Apr 2014 16:55:31 +0200 Subject: varnish cache gets randomly cleared In-Reply-To: <535003A2020000940007F07E@smtp1.gw.uni-passau.de> References: <535003A2020000940007F07E@smtp1.gw.uni-passau.de> Message-ID: Hi, The cache process might crash, and be restarted automatically by the management process. Any panics in varnishadm? Dridi On Thu, Apr 17, 2014 at 4:38 PM, Anton Kornexl wrote: > Hello, > > how do i find out why varnish cache gets cleared randomly. 
> > I have searched the varnishlog, but did not find a PURGE or BAN with a > regular expression which would match all urls > > The varnish version is: varnishd (varnish-3.0.5 revision 1a89b1f) > > -- > Kind regards > Anton Kornexl > > Rechenzentrum Universität Passau > Innstr. 33 > D-94032 Passau > Tel.: 0851/509-1812 > Fax: 0851/509-1802 > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From thomas.lecomte at virtual-expo.com Thu Apr 17 14:58:39 2014 From: thomas.lecomte at virtual-expo.com (Thomas Lecomte) Date: Thu, 17 Apr 2014 16:58:39 +0200 Subject: varnish cache gets randomly cleared In-Reply-To: <535003A2020000940007F07E@smtp1.gw.uni-passau.de> References: <535003A2020000940007F07E@smtp1.gw.uni-passau.de> Message-ID: <20140417145839.GE18211@wks140.directindustry.com> On Thu, Apr 17, 2014 at 04:38:58PM +0200, Anton Kornexl wrote: > Hello, > > how do i find out why varnish cache gets cleared randomly. Maybe have a look at the n_lru_nuked metric. If your cache gets full, varnish will start nuking the least used objects to make room for the new ones. -- Thomas Lecomte / Administrateur Système & Réseau +33 4 86 13 48 65 / Virtual Expo / Marseille From geoff at uplex.de Thu Apr 17 17:17:30 2014 From: geoff at uplex.de (Geoffrey Simmons) Date: Fri, 18 Apr 2014 05:17:30 +1200 Subject: varnish-misc Digest, Vol 97, Issue 16 In-Reply-To: References: Message-ID: > On Apr 17, 2014, at 11:51 PM, L Cruzero wrote: > > Geoff, et al, thanks for your replies. I should have mentioned that the 503's are not unique to POST requests; they also happen, though very rarely > (on 0.0038% of requests), on GETs. 
But in your example with the GET, the FetchError is connection reset reading the backend response: > 174 FetchError c http first read error: -1 104 (Connection reset by peer) That's pretty straightforward -- your backend broke off the connection before Varnish could read the response. Nothing wrong with the client connection that time. But your previous "FetchError: backend write error", which in my experience has almost always really been an error on the client connection, is always about request bodies, and hence always about POST requests. Understanding your "503's" will mean that you'll have to look at the details of the FetchError in each individual case, they're all different. > i've been thinking of testing within my config something along this FOUND vcl code snipped as a possible solution. > sub vcl_fetch { > if (obj.status == 500 || obj.status == 503 || obj.status == > 504) { > restart; > } Well, that's retrying on those error codes, which is more of a workaround than a solution. Retries are not a bad idea (why not just obj.status >= 500 ?), but they don't solve whatever it is that's causing the problem, they just give you another chance when there is a problem. And if the trouble keeps happening, then you'll hit the restart maximum and end up with 503 anyway. HTH, Geoff -------------- next part -------------- An HTML attachment was scrubbed... URL: From moseleymark at gmail.com Sat Apr 19 16:11:38 2014 From: moseleymark at gmail.com (Mark Moseley) Date: Sat, 19 Apr 2014 09:11:38 -0700 Subject: Varnish 4.0 and IMS oddity In-Reply-To: References: Message-ID: Finally got a chance to play around with this some more. It seems like if I turn off grace, it works like I'd expect it to (i.e. non-304 backend requests override whatever is stored in varnish by 'keep'). It sort of makes sense in a grace way but could be a surprise if someone wasn't expecting it. That is, that grace would trump the non-304 in the IMS request for a request or two. 
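For anyone wanting to reproduce the comparison above, grace can be disabled per-object so that no stale copy is delivered past its TTL, while keep stays on for the If-Modified-Since revalidation machinery. A sketch in Varnish 4 VCL; the 10m keep value is arbitrary:

```vcl
sub vcl_backend_response {
    # No grace: never serve a stale object once its TTL has expired.
    set beresp.grace = 0s;
    # Keep the expired object around so Varnish can still issue
    # conditional (If-Modified-Since) fetches to the backend.
    set beresp.keep = 10m;
}
```

This is only a test knob, not a recommendation: grace is what lets Varnish absorb a sick backend, so turning it off globally trades resilience for strict freshness.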
Actually, maybe it's just a grace thing. Even with IMS/keep off, I'm still seeing at least one stale response, which sounds very much like grace, though I'd only expect it to kick in for a sick backend or multiple clients. This is a test box, so I'm the only one hitting the server (and using curl, so only a single request is hitting it at a time). I've got grace on in 3.x and things work like I'd expect there. Perhaps just a behavior change in 4.x? I've not tested it super extensively yet, so I could be wrong. I've switched it back and forth a couple of times and with grace on, it does the same as above; with grace off, it acts like I'd expect. Though as always, my setup is probably more than a bit wonky. On Sat, Apr 12, 2014 at 10:51 PM, Mark Moseley wrote: > Hi. I'm testing out Varnish 4.0 finally. I'm super excited about the IMS > stuff and am dying to use it. And a big congratulations on the 4.0 release. > > I was playing with porting our shared hosting configuration to 4.0 and ran > into a slight weirdness with IMS. Keeping in mind that we do a number of > things to make things play nicely with the fact that we have no idea what > our customers might be doing (and therefore have to jump through a bunch of > crazy hoops to make sure we don't return things like authenticated content > to unauthenticated users), this could very easily be something weird with > our particular setup. I've re-run the test a bunch of times and seen the > same thing each time. > > Here's the scenario: > > * I have IMS up and running and working. Other than this one particular > oddity, everything else IMS-related seems to be working great and I'm > greatly looking forward to using it. The test page I'm using deliberately > returns a TTL of 1 second to make testing easier. > > * As a mockup of a customer doing something like cookie-base > authentication, or IP-based .htaccess authentication, I wrote up a simple > rewrite rule to return a 403 if a certain cookie was missing. 
> > * I turn off the rewrite rule > > * Do a request to that page a few times with the expected 200 from the > backend. On the 2nd and subsequent reqs, the IMS stuff kicks in. BTW, is > the client getting a "HTTP/1.1 200 Not Modified" the expected behavior? I > know the strings after the status code are completely arbitrary but it > looked a bit odd. > > * I turn the rewrite rule back *on* > > * Do the request again. Here's where it gets odd. > > * Varnish does an IMS request to the backend > > * The backend responds with a 403 as expected. > > * Varnish replies to the client with a HTTP/1.1 200 Not Modified > > I would expect an error status (or really anything that's not a 304) to > fail the IMS test on Varnish's side and that it would then return the 403 > to the client. Something weird about what I'm doing/abusing? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moseleymark at gmail.com Mon Apr 21 04:27:11 2014 From: moseleymark at gmail.com (Mark Moseley) Date: Sun, 20 Apr 2014 21:27:11 -0700 Subject: Varnish 4.0 and IMS oddity In-Reply-To: References: Message-ID: I guess this is probably the asynchronous fetches in varnish 4, so presumably expected behavior. I really love the IMS feature btw :) On Sat, Apr 19, 2014 at 9:11 AM, Mark Moseley wrote: > Finally got a chance to play around with this some more. > > It seems like if I turn off grace, it works like I'd expect it to (i.e. > non-304 backend requests override whatever is stored in varnish by 'keep'). > It sort of makes sense in a grace way but could be a surprise if someone > wasn't expecting it. That is, that grace would trump the non-304 in the IMS > request for a request or two. > > Actually, maybe it's just a grace thing. Even with IMS/keep off, I'm still > seeing at least one stale response, which sounds very much like grace, > though I'd only expect it to kick in for a sick backend or multiple > clients. 
This is a test box, so I'm the only one hitting the server (and > using curl, so only a single request is hitting it at a time). I've got > grace on in 3.x and things work like I'd expect there. Perhaps just a > behavior change in 4.x? > > I've not tested it super extensively yet, so I could be wrong. I've > switched it back and forth a couple of times and with grace on, it does the > same as above; with grace off, it acts like I'd expect. Though as always, > my setup is probably more than a bit wonky. > > > On Sat, Apr 12, 2014 at 10:51 PM, Mark Moseley wrote: > >> Hi. I'm testing out Varnish 4.0 finally. I'm super excited about the IMS >> stuff and am dying to use it. And a big congratulations on the 4.0 release. >> >> I was playing with porting our shared hosting configuration to 4.0 and >> ran into a slight weirdness with IMS. Keeping in mind that we do a number >> of things to make things play nicely with the fact that we have no idea >> what our customers might be doing (and therefore have to jump through a >> bunch of crazy hoops to make sure we don't return things like authenticated >> content to unauthenticated users), this could very easily be something >> weird with our particular setup. I've re-run the test a bunch of times and >> seen the same thing each time. >> >> Here's the scenario: >> >> * I have IMS up and running and working. Other than this one particular >> oddity, everything else IMS-related seems to be working great and I'm >> greatly looking forward to using it. The test page I'm using deliberately >> returns a TTL of 1 second to make testing easier. >> >> * As a mockup of a customer doing something like cookie-base >> authentication, or IP-based .htaccess authentication, I wrote up a simple >> rewrite rule to return a 403 if a certain cookie was missing. >> >> * I turn off the rewrite rule >> >> * Do a request to that page a few times with the expected 200 from the >> backend. On the 2nd and subsequent reqs, the IMS stuff kicks in. 
BTW, is >> the client getting a "HTTP/1.1 200 Not Modified" the expected behavior? I >> know the strings after the status code are completely arbitrary but it >> looked a bit odd. >> >> * I turn the rewrite rule back *on* >> >> * Do the request again. Here's where it gets odd. >> >> * Varnish does an IMS request to the backend >> >> * The backend responds with a 403 as expected. >> >> * Varnish replies to the client with a HTTP/1.1 200 Not Modified >> >> I would expect an error status (or really anything that's not a 304) to >> fail the IMS test on Varnish's side and that it would then return the 403 >> to the client. Something weird about what I'm doing/abusing? >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Tue Apr 22 08:04:39 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Tue, 22 Apr 2014 13:34:39 +0530 Subject: varnish reports [CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101 Unknown request] Message-ID: Hello all, My setting works well through nginx->apache but not through nginx->varnish->apache apache is configured to listen to port 8080 . when nginx uses proxy_pass http://127.0.0.1:8080 the sites are running fine. If I introduce varnish after nginx by [proxy_pass http://127.0.0.1:6082] the nginx starts throwing following error and browser also shows "*Zero Sized Reply"* [error] 17147#0: *207 upstream sent no valid HTTP/1.0 header while reading response header from upstream and /var/log/messages shows varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101 Unknown request.#012Type 'help' for more info.#012all commands are in lower-case. varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Rd Cache-Control: max-age=0 obviously varnish is configured to listen to apache backend default { .host = "127.0.0.1"; .port = "8080"; } Can anyone please suggest the possible reason which is causing the problem ? 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi.boukelmoune at zenika.com Tue Apr 22 10:03:23 2014 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Tue, 22 Apr 2014 12:03:23 +0200 Subject: varnish reports [CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101 Unknown request] In-Reply-To: References: Message-ID: On Tue, Apr 22, 2014 at 10:04 AM, Joydeep Bakshi wrote: > Hello all, > > My setting works well through nginx->apache but not through > nginx->varnish->apache > > apache is configured to listen to port 8080 . when nginx uses > > proxy_pass http://127.0.0.1:8080 > > the sites are running fine. > > If I introduce varnish after nginx by [proxy_pass http://127.0.0.1:6082] > the nginx starts throwing following error and browser also shows "Zero Sized > Reply" Hi, Varnish listens to the port 6081 by default for HTTP. The port 6082 is the default port for the process administration (eg. varnishadm). Cheers, Dridi > [error] 17147#0: *207 upstream sent no valid HTTP/1.0 header while reading > response header from upstream > > and /var/log/messages shows > > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101 Unknown > request.#012Type 'help' for more info.#012all commands are in lower-case. > > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Rd Cache-Control: > max-age=0 > > obviously varnish is configured to listen to apache > > backend default { > .host = "127.0.0.1"; > .port = "8080"; > } > > Can anyone please suggest the possible reason which is causing the problem ? 
> > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From joydeep.bakshi at netzrezepte.de Tue Apr 22 10:12:28 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Tue, 22 Apr 2014 15:42:28 +0530 Subject: varnish reports [CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101 Unknown request] In-Reply-To: References: Message-ID: Thanks for the info, but after starting varnish only the following ports are open # netstat -nat | grep 60 tcp 0 0 0.0.0.0:6082 0.0.0.0:* LISTEN tcp 0 0 :::6082 :::* LISTEN On Tue, Apr 22, 2014 at 3:33 PM, Dridi Boukelmoune < dridi.boukelmoune at zenika.com> wrote: > On Tue, Apr 22, 2014 at 10:04 AM, Joydeep Bakshi > wrote: > > Hello all, > > > > My setting works well through nginx->apache but not through > > nginx->varnish->apache > > > > apache is configured to listen to port 8080 . when nginx uses > > > > proxy_pass http://127.0.0.1:8080 > > > > the sites are running fine. > > > > If I introduce varnish after nginx by [proxy_pass > http://127.0.0.1:6082] > > the nginx starts throwing following error and browser also shows "Zero > Sized > > Reply" > > Hi, > > Varnish listens to the port 6081 by default for HTTP. The port 6082 is > the default port for the process administration (eg. varnishadm). > > Cheers, > Dridi > > > [error] 17147#0: *207 upstream sent no valid HTTP/1.0 header while > reading > > response header from upstream > > > > and /var/log/messages shows > > > > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101 Unknown > > request.#012Type 'help' for more info.#012all commands are in lower-case. 
> > > > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Rd > Cache-Control: > > max-age=0 > > > > obviously varnish is configured to listen to apache > > > > backend default { > > .host = "127.0.0.1"; > > .port = "8080"; > > } > > > > Can anyone please suggest the possible reason which is causing the > problem ? > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Tue Apr 22 10:16:41 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Tue, 22 Apr 2014 15:46:41 +0530 Subject: varnish reports [CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101 Unknown request] In-Reply-To: References: Message-ID: and even with 6081 the log reports followings [error] 17648#0: *8 connect() failed (111: Connection refused) while connecting to upstream, client: 88.198.185.226, server: www.dustri.bookopt.de, request: "GET / HTTP/1.1", upstream: " http://127.0.0.1:6081/" On Tue, Apr 22, 2014 at 3:42 PM, Joydeep Bakshi < joydeep.bakshi at netzrezepte.de> wrote: > Thanks for the info, but after starting varnish only the following ports > are open > > # netstat -nat | grep 60 > tcp 0 0 0.0.0.0:6082 0.0.0.0:* LISTEN > tcp 0 0 :::6082 :::* LISTEN > > > On Tue, Apr 22, 2014 at 3:33 PM, Dridi Boukelmoune < > dridi.boukelmoune at zenika.com> wrote: > >> On Tue, Apr 22, 2014 at 10:04 AM, Joydeep Bakshi >> wrote: >> > Hello all, >> > >> > My setting works well through nginx->apache but not through >> > nginx->varnish->apache >> > >> > apache is configured to listen to port 8080 . when nginx uses >> > >> > proxy_pass http://127.0.0.1:8080 >> > >> > the sites are running fine. 
>> > >> > If I introduce varnish after nginx by [proxy_pass >> http://127.0.0.1:6082] >> > the nginx starts throwing following error and browser also shows "Zero >> Sized >> > Reply" >> >> Hi, >> >> Varnish listens to the port 6081 by default for HTTP. The port 6082 is >> the default port for the process administration (eg. varnishadm). >> >> Cheers, >> Dridi >> >> > [error] 17147#0: *207 upstream sent no valid HTTP/1.0 header while >> reading >> > response header from upstream >> > >> > and /var/log/messages shows >> > >> > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101 >> Unknown >> > request.#012Type 'help' for more info.#012all commands are in >> lower-case. >> > >> > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Rd >> Cache-Control: >> > max-age=0 >> > >> > obviously varnish is configured to listen to apache >> > >> > backend default { >> > .host = "127.0.0.1"; >> > .port = "8080"; >> > } >> > >> > Can anyone please suggest the possible reason which is causing the >> problem ? >> > >> > _______________________________________________ >> > varnish-misc mailing list >> > varnish-misc at varnish-cache.org >> > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi.boukelmoune at zenika.com Tue Apr 22 12:11:04 2014 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Tue, 22 Apr 2014 14:11:04 +0200 Subject: varnish reports [CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101 Unknown request] In-Reply-To: References: Message-ID: On Tue, Apr 22, 2014 at 12:16 PM, Joydeep Bakshi wrote: > and even with 6081 the log reports followings > > [error] 17648#0: *8 connect() failed (111: Connection refused) while > connecting to upstream, client: 88.198.185.226, server: > www.dustri.bookopt.de, request: "GET / HTTP/1.1", upstream: > "http://127.0.0.1:6081/" 6081 is the default port, maybe it was changed to something else. 
Try this: sudo lsof -i TCP | grep varnishd Dridi > On Tue, Apr 22, 2014 at 3:42 PM, Joydeep Bakshi > wrote: >> >> Thanks for the info, but after starting varnish only the following ports >> are open >> >> # netstat -nat | grep 60 >> tcp 0 0 0.0.0.0:6082 0.0.0.0:* LISTEN >> tcp 0 0 :::6082 :::* LISTEN >> >> >> On Tue, Apr 22, 2014 at 3:33 PM, Dridi Boukelmoune >> wrote: >>> >>> On Tue, Apr 22, 2014 at 10:04 AM, Joydeep Bakshi >>> wrote: >>> > Hello all, >>> > >>> > My setting works well through nginx->apache but not through >>> > nginx->varnish->apache >>> > >>> > apache is configured to listen to port 8080 . when nginx uses >>> > >>> > proxy_pass http://127.0.0.1:8080 >>> > >>> > the sites are running fine. >>> > >>> > If I introduce varnish after nginx by [proxy_pass >>> > http://127.0.0.1:6082] >>> > the nginx starts throwing following error and browser also shows "Zero >>> > Sized >>> > Reply" >>> >>> Hi, >>> >>> Varnish listens to the port 6081 by default for HTTP. The port 6082 is >>> the default port for the process administration (eg. varnishadm). >>> >>> Cheers, >>> Dridi >>> >>> > [error] 17147#0: *207 upstream sent no valid HTTP/1.0 header while >>> > reading >>> > response header from upstream >>> > >>> > and /var/log/messages shows >>> > >>> > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101 >>> > Unknown >>> > request.#012Type 'help' for more info.#012all commands are in >>> > lower-case. >>> > >>> > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Rd >>> > Cache-Control: >>> > max-age=0 >>> > >>> > obviously varnish is configured to listen to apache >>> > >>> > backend default { >>> > .host = "127.0.0.1"; >>> > .port = "8080"; >>> > } >>> > >>> > Can anyone please suggest the possible reason which is causing the >>> > problem ? 
>>> > >>> > _______________________________________________ >>> > varnish-misc mailing list >>> > varnish-misc at varnish-cache.org >>> > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> > From Anton.Kornexl at Uni-Passau.De Tue Apr 22 14:30:23 2014 From: Anton.Kornexl at Uni-Passau.De (Anton Kornexl) Date: Tue, 22 Apr 2014 16:30:23 +0200 Subject: Antw: Re: varnish cache gets randomly cleared In-Reply-To: <20140417145839.GE18211@wks140.directindustry.com> References: <535003A2020000940007F07E@smtp1.gw.uni-passau.de> <20140417145839.GE18211@wks140.directindustry.com> Message-ID: <5356991F020000940007F0F6@smtp1.gw.uni-passau.de> > On Thu, Apr 17, 2014 at 04:38:58PM +0200, Anton Kornexl wrote: >> Hello, >> >> how do i find out why varnish cache gets cleared randomly. > Hello, no nuked objects n_expired 1461928 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_moved 4774281 . N LRU moved objects no panic: varnish> panic.show 300 Child has not panicked or panic has been cleared varnishstat shows: 14+07:39:55 in the first line but twice a cleared cache in this period (14 days) -- Kind regards Anton Kornexl Rechenzentrum Universit?t Passau Innstr. 33 D-94032 Passau Tel.: 0851/509-1812 Fax: 0851/509-1802 -------------- next part -------------- A non-text attachment was scrubbed... Name: Kornexl, Anton.vcf Type: application/octet-stream Size: 244 bytes Desc: not available URL: From rainer at ultra-secure.de Tue Apr 22 22:33:55 2014 From: rainer at ultra-secure.de (Rainer Duffner) Date: Wed, 23 Apr 2014 00:33:55 +0200 Subject: where did trac go? Message-ID: Hi, I just received a mail from trac about one of the bugs I opened or commented on (long ago). I clicked on the link and it said ?file not found?. On https://www.varnish-cache.org clicking on project -> trace leads to the same 404. Are you guys rebuilding the website? Not really an important issue, I admit. 
Best Regards, Rainer From james at ifixit.com Tue Apr 22 23:01:03 2014 From: james at ifixit.com (James Pearson) Date: Tue, 22 Apr 2014 16:01:03 -0700 Subject: Antw: Re: varnish cache gets randomly cleared In-Reply-To: <5356991F020000940007F0F6@smtp1.gw.uni-passau.de> References: <535003A2020000940007F07E@smtp1.gw.uni-passau.de> <20140417145839.GE18211@wks140.directindustry.com> <5356991F020000940007F0F6@smtp1.gw.uni-passau.de> Message-ID: <1398207610-sup-1311@geror.local> Excerpts from Anton Kornexl's message of 2014-04-22 07:30:23 -0700: > > On Thu, Apr 17, 2014 at 04:38:58PM +0200, Anton Kornexl wrote: > >> Hello, > >> > >> how do i find out why varnish cache gets cleared randomly. > > no nuked objects > n_expired 1461928 . N expired objects > n_lru_nuked 0 . N LRU nuked objects > n_lru_moved 4774281 . N LRU moved objects > > no panic: > varnish> panic.show > 300 > Child has not panicked or panic has been cleared > > varnishstat shows: > > 14+07:39:55 in the first line > > but twice a cleared cache in this period (14 days) How did you determine the cache is cleared? - P -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From phk at phk.freebsd.dk Wed Apr 23 05:06:00 2014 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 23 Apr 2014 05:06:00 +0000 Subject: where did trac go? In-Reply-To: References: Message-ID: <41907.1398229560@critter.freebsd.dk> In message , Rainer Duffn er writes: >I just received a mail from trac about one of the bugs I opened or >commented on (long ago). We had a spammer deface all closed tickets yesterday evening, well be restoring access once we have cleaned up. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
From Anton.Kornexl at Uni-Passau.De Wed Apr 23 07:56:14 2014 From: Anton.Kornexl at Uni-Passau.De (Anton Kornexl) Date: Wed, 23 Apr 2014 09:56:14 +0200 Subject: Antw: Re: varnish cache gets randomly cleared In-Reply-To: <1398207610-sup-1311@geror.local> References: <535003A2020000940007F07E@smtp1.gw.uni-passau.de> <20140417145839.GE18211@wks140.directindustry.com> <5356991F020000940007F0F6@smtp1.gw.uni-passau.de> <1398207610-sup-1311@geror.local> Message-ID: <53578E3E020000940007F109@smtp1.gw.uni-passau.de> >>> James Pearson schrieb am Mittwoch, 23. April 2014 um 01:01 in Nachricht <1398207610-sup-1311 at geror.local>: > Excerpts from Anton Kornexl's message of 2014-04-22 07:30:23 -0700: >> > On Thu, Apr 17, 2014 at 04:38:58PM +0200, Anton Kornexl wrote: >> >> Hello, >> >> >> >> how do i find out why varnish cache gets cleared randomly. >> >> no nuked objects >> n_expired 1461928 . N expired objects >> n_lru_nuked 0 . N LRU nuked objects >> n_lru_moved 4774281 . N LRU moved objects >> >> no panic: >> varnish> panic.show >> 300 >> Child has not panicked or panic has been cleared >> >> varnishstat shows: >> >> 14+07:39:55 in the first line >> >> but twice a cleared cache in this period (14 days) > > How did you determine the cache is cleared? > - P Hello, i'm monitoring the number of storage objects / the nuked objects (n_object/n_lru_nuked) and the free / busy storage (SMF.s0.g_space/SMF.s0.g_bytes) with mrtg (values from varnishstat) Both graphs indicate a cleared cache at the same time the number of objects drop from about 20 000 to nearly zero the busy storage drops from one GB to about zero bytes (the free stroage goes up from some intermediate value to 100%) -- Kind regards Anton Kornexl Rechenzentrum Universit?t Passau Innstr. 33 D-94032 Passau Tel.: 0851/509-1812 Fax: 0851/509-1802 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Kornexl, Anton.vcf Type: application/octet-stream Size: 244 bytes Desc: not available URL: From joydeep.bakshi at netzrezepte.de Wed Apr 23 08:13:14 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Wed, 23 Apr 2014 13:43:14 +0530 Subject: varnish reduces hit rate Message-ID: Dear list, When use nginx+apache the web hit rates maximize dramatically. But when introduce varnish in between as nginx->varnish->apache the hit rates reduce. How to optimize varnish to get higher hit rates ? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume.quintard at smartjog.com Wed Apr 23 08:19:57 2014 From: guillaume.quintard at smartjog.com (Guillaume Quintard) Date: Wed, 23 Apr 2014 10:19:57 +0200 Subject: varnish reduces hit rate In-Reply-To: References: Message-ID: <535777AD.9080100@smartjog.com> On 04/23/2014 10:13 AM, Joydeep Bakshi wrote: > Dear list, > > When use nginx+apache the web hit rates maximize dramatically. But > when introduce varnish in between as nginx->varnish->apache the hit > rates reduce. > How to optimize varnish to get higher hit rates ? > Did you give the same storage space to varnish as you did to nginx? This can be of interest: https://www.varnish-cache.org/docs/3.0/tutorial/increasing_your_hitrate.html -- Guillaume Quintard -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Wed Apr 23 08:30:55 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Wed, 23 Apr 2014 14:00:55 +0530 Subject: varnish reduces hit rate In-Reply-To: <535777AD.9080100@smartjog.com> References: <535777AD.9080100@smartjog.com> Message-ID: sorry but I failed to understand "same storage space". Even I have not done any such configuration tewak. 
Though I like to share first few lines of varnishstat which shows the poor cache_hit 0+00:57:24 Hitrate ratio: 2 2 2 Hitrate avg: 0.0149 0.0149 0.0149 35062 0.00 10.18 client_conn - Client connections accepted 35062 0.00 10.18 client_req - Client requests received 2 0.00 0.00 cache_hit - Cache hits 34994 0.00 10.16 cache_hitpass - Cache hits for pass 66 0.00 0.02 cache_miss - Cache misses 34975 0.00 10.16 backend_conn - Backend conn. success On Wed, Apr 23, 2014 at 1:49 PM, Guillaume Quintard < guillaume.quintard at smartjog.com> wrote: > On 04/23/2014 10:13 AM, Joydeep Bakshi wrote: > > Dear list, > > When use nginx+apache the web hit rates maximize dramatically. But when > introduce varnish in between as nginx->varnish->apache the hit rates reduce. > How to optimize varnish to get higher hit rates ? > > > Did you give the same storage space to varnish as you did to nginx? > This can be of interest: > https://www.varnish-cache.org/docs/3.0/tutorial/increasing_your_hitrate.html > > -- > Guillaume Quintard > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume.quintard at smartjog.com Wed Apr 23 08:33:33 2014 From: guillaume.quintard at smartjog.com (Guillaume Quintard) Date: Wed, 23 Apr 2014 10:33:33 +0200 Subject: varnish reduces hit rate In-Reply-To: References: <535777AD.9080100@smartjog.com> Message-ID: <53577ADD.7090003@smartjog.com> On 04/23/2014 10:30 AM, Joydeep Bakshi wrote: > sorry but I failed to understand "same storage space". Even I have > not done any such configuration tewak. 
> > Though I like to share first few lines of varnishstat which shows the > poor cache_hit > > 0+00:57:24 > Hitrate ratio: 2 2 2 > Hitrate avg: 0.0149 0.0149 0.0149 > > 35062 0.00 10.18 client_conn - Client > connections accepted > 35062 0.00 10.18 client_req - Client requests > received > 2 0.00 0.00 cache_hit - Cache hits > 34994 0.00 10.16 cache_hitpass - Cache hits for pass > 66 0.00 0.02 cache_miss - Cache misses > 34975 0.00 10.16 backend_conn - Backend conn. > success > You have a high hitpass ratio, your backend may not be sending TTL/Age information, causing varnish to avoid caching. -- Guillaume Quintard -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Wed Apr 23 08:57:50 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Wed, 23 Apr 2014 14:27:50 +0530 Subject: varnish reduces hit rate In-Reply-To: <53577ADD.7090003@smartjog.com> References: <535777AD.9080100@smartjog.com> <53577ADD.7090003@smartjog.com> Message-ID: can we do something to improve ? On Wed, Apr 23, 2014 at 2:03 PM, Guillaume Quintard < guillaume.quintard at smartjog.com> wrote: > On 04/23/2014 10:30 AM, Joydeep Bakshi wrote: > > sorry but I failed to understand "same storage space". Even I have not > done any such configuration tewak. > > Though I like to share first few lines of varnishstat which shows the > poor cache_hit > > 0+00:57:24 > Hitrate ratio: 2 2 2 > Hitrate avg: 0.0149 0.0149 0.0149 > > 35062 0.00 10.18 client_conn - Client connections > accepted > 35062 0.00 10.18 client_req - Client requests > received > 2 0.00 0.00 cache_hit - Cache hits > 34994 0.00 10.16 cache_hitpass - Cache hits for pass > 66 0.00 0.02 cache_miss - Cache misses > 34975 0.00 10.16 backend_conn - Backend conn. success > > You have a high hitpass ratio, your backend may not be sending TTL/Age > information, causing varnish to avoid caching. 
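[Guillaume's diagnosis can be checked and acted on directly. A sketch only: the 120s floor is an illustrative value, and the log check assumes Varnish 3's varnishlog tag names.]

```vcl
# Confirm the diagnosis first, on the cache box:
#     varnishlog -b -i RxHeader | egrep -i 'cache-control|expires|set-cookie'
# If the backend sends no Cache-Control/Expires, beresp.ttl falls back to
# default_ttl; if it sends Set-Cookie or no-cache, Varnish creates
# hit-for-pass objects, which is what the cache_hitpass counter counts.
sub vcl_fetch {
    if (beresp.ttl <= 0s) {
        # Illustrative 120s floor; pick a value that is safe for the content.
        set beresp.ttl = 120s;
    }
}
```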
> > -- Guillaume Quintard > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume.quintard at smartjog.com Wed Apr 23 09:12:17 2014 From: guillaume.quintard at smartjog.com (Guillaume Quintard) Date: Wed, 23 Apr 2014 11:12:17 +0200 Subject: varnish reduces hit rate In-Reply-To: References: <535777AD.9080100@smartjog.com> <53577ADD.7090003@smartjog.com> Message-ID: <535783F1.1010006@smartjog.com> On 04/23/2014 10:57 AM, Joydeep Bakshi wrote: > can we do something to improve ? > You can force the TTL to something positive if it's <= 0s on the varnish side. And on the backend side, you can configure the server to send the ttl headers. -- Guillaume Quintard -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Wed Apr 23 09:55:32 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Wed, 23 Apr 2014 15:25:32 +0530 Subject: varnish reduces hit rate In-Reply-To: <535783F1.1010006@smartjog.com> References: <535777AD.9080100@smartjog.com> <53577ADD.7090003@smartjog.com> <535783F1.1010006@smartjog.com> Message-ID: Since I am completely new to varnish, may I request to share the knowledge doing this ? Thanks On Wed, Apr 23, 2014 at 2:42 PM, Guillaume Quintard < guillaume.quintard at smartjog.com> wrote: > On 04/23/2014 10:57 AM, Joydeep Bakshi wrote: > > can we do something to improve ? > > You can force the TTL to something positive if it's <= 0s on the varnish > side. And on the backend side, you can configure the server to send the ttl > headers. > > -- > Guillaume Quintard > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From guillaume.quintard at smartjog.com Wed Apr 23 10:07:52 2014 From: guillaume.quintard at smartjog.com (Guillaume Quintard) Date: Wed, 23 Apr 2014 12:07:52 +0200 Subject: varnish reduces hit rate In-Reply-To: References: <535777AD.9080100@smartjog.com> <53577ADD.7090003@smartjog.com> <535783F1.1010006@smartjog.com> Message-ID: <535790F8.9080708@smartjog.com> On 04/23/2014 11:55 AM, Joydeep Bakshi wrote: > Since I am completely new to varnish, may I request to share the > knowledge doing this ? I recommend reading the documentation first, alot of your answers are in them. -- Guillaume Quintard -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Wed Apr 23 10:14:15 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Wed, 23 Apr 2014 15:44:15 +0530 Subject: varnish reduces hit rate In-Reply-To: <535790F8.9080708@smartjog.com> References: <535777AD.9080100@smartjog.com> <53577ADD.7090003@smartjog.com> <535783F1.1010006@smartjog.com> <535790F8.9080708@smartjog.com> Message-ID: ok, thanks On Wed, Apr 23, 2014 at 3:37 PM, Guillaume Quintard < guillaume.quintard at smartjog.com> wrote: > On 04/23/2014 11:55 AM, Joydeep Bakshi wrote: > > Since I am completely new to varnish, may I request to share the knowledge > doing this ? > > I recommend reading the documentation first, alot of your answers are in > them. > > -- > Guillaume Quintard > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Wed Apr 23 13:40:32 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Wed, 23 Apr 2014 19:10:32 +0530 Subject: varnish reduces hit rate In-Reply-To: References: <535777AD.9080100@smartjog.com> <53577ADD.7090003@smartjog.com> <535783F1.1010006@smartjog.com> <535790F8.9080708@smartjog.com> Message-ID: Thanks for your TTL related suggestion. I have gone through the documentation and add following for TTL [.....] 
import std; # needed for std.log

sub vcl_fetch {
    if (beresp.ttl < 120s) {
        std.log("Adjusting TTL");
        set beresp.ttl = 120s;
    }
}

[...] and found improvement in hit rates

Hitrate ratio:       10       19       19
Hitrate avg:     0.9998   0.9997   0.9997

       45101         0.00        26.56 client_conn      - Client connections accepted
       45101         0.00        26.56 client_req       - Client requests received
       45005         0.00        26.50 cache_hit        - Cache hits
          40         0.00         0.02 cache_hitpass    - Cache hits for pass
          56         0.00         0.03 cache_miss       - Cache misses
          59         0.00         0.03 backend_conn     - Backend conn. success
          37         0.00         0.02 backend_reuse    - Backend conn. reuses
          15         0.00         0.01 backend_toolate  - Backend conn. was closed
          53         0.00         0.03 backend_recycle  - Backend conn. recycles

wonder if there is any other option for improvements. Any opinion / suggestion is very much welcome.

many many thanks

On Wed, Apr 23, 2014 at 3:44 PM, Joydeep Bakshi < joydeep.bakshi at netzrezepte.de> wrote: > ok, thanks > > On Wed, Apr 23, 2014 at 3:37 PM, Guillaume Quintard < > guillaume.quintard at smartjog.com> wrote: >> On 04/23/2014 11:55 AM, Joydeep Bakshi wrote: >> Since I am completely new to varnish, may I request to share the >> knowledge doing this ? >> I recommend reading the documentation first, alot of your answers are in >> them. >> -- >> Guillaume Quintard >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From japrice at gmail.com Sat Apr 26 02:10:45 2014 From: japrice at gmail.com (Jason Price) Date: Fri, 25 Apr 2014 22:10:45 -0400 Subject: System can't take more than 5k req /sec Message-ID: Note: this isn't a varnish question, per se, because varnish indicates it isn't the problem. What OS parameters has the community found valuable to tweak? Varnish 3.0.5, running on a c3.4xlarge amazon instance. 16 'cores', 30gb ram, amazon linux (aka RHEL 6ish) and half of a 10gb card to play with. Very recent linux kernel. We see approximately 50% cache hit. Payload is approximately 3.5kBytes compressed.
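[A further lever that was not explored in this thread, sketched with an assumed extension list: the earlier stats showed almost all traffic as cache_hitpass, and in Varnish 3 a common cause besides missing TTLs is cookies, since by default requests carrying Cookie are passed and responses carrying Set-Cookie become hit-for-pass objects. Stripping both on static assets keeps those objects cacheable.]

```vcl
sub vcl_recv {
    # Hypothetical static-asset pattern; adjust for the actual site.
    if (req.url ~ "\.(css|js|png|jpe?g|gif|ico|woff)(\?.*)?$") {
        unset req.http.Cookie;
    }
}

sub vcl_fetch {
    if (req.url ~ "\.(css|js|png|jpe?g|gif|ico|woff)(\?.*)?$") {
        unset beresp.http.Set-Cookie;
    }
}
```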
Response times are averaging 10 milliseconds... Backend response time is around 17-20 milliseconds. We aren't seeing any lru_nuked objects, so the cache is sized appropriately for the cachability of the data (we can only cache for 15ish minutes due to the nature of the data). Threads set to 4k per thread pool. Two thread pools, increased sess-workspace. I see queued messages in varnish stat, but I never see dropped messages, and I don't see the thread count getting near max of 8k (usually hovers around 3300-3500). We have a very high requests to connections ratio (approximately 100:1). Vmod for url coding is in play, and occasionally some syslog messages. Tcp socket reuse and recycle got us some improvement. I've also played with the txqueuelen on the network interface, and a couple other related parameters to little avail. Anything I can do? -Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From jnerin at gmail.com Sat Apr 26 10:33:24 2014 From: jnerin at gmail.com (Jorge) Date: Sat, 26 Apr 2014 12:33:24 +0200 Subject: System can't take more than 5k req /sec In-Reply-To: References: Message-ID: 5k req/s is very low, I was able to achieve ~12-16k with a vm with 2 cores and 8Gb, albeit our hit ratio was higher, around 90% (and it was a stress test). One of the things I had to disable for our test was an iptables rule for max new connections per second that we had to avoid some ddos, check to see if you have some external limiting factor like that. On Sat, Apr 26, 2014 at 4:10 AM, Jason Price wrote: > Note: this isn't a varnish question, per se, because varnish indicates it > isn't the problem. > > What OS parameters has the community found valuable to tweak? > > Varnish 3.0.5, running on a c3.4xlarge amazon instance. 16 'cores', 30gb > ram, amazon linux (aka RHEL 6ish) and half of a 10gb card to play with. > Very recent linux kernel. > > We see approximately 50% cache hit. Payload is approximately 3.5kBytes > compressed. 
Response times are averaging 10 milliseconds... Backend > response time is around 17-20 milliseconds. We aren't seeing any lru_nuked > objects, so the cache is sized appropriately for the cachability of the > data (we can only cache for 15ish minutes due to the nature of the data). > > Threads set to 4k per thread pool. Two thread pools, increased > sess-workspace. I see queued messages in varnish stat, but I never see > dropped messages, and I don't see the thread count getting near max of 8k > (usually hovers around 3300-3500). We have a very high requests to > connections ratio (approximately 100:1). > > Vmod for url coding is in play, and occasionally some syslog messages. > > Tcp socket reuse and recycle got us some improvement. I've also played > with the txqueuelen on the network interface, and a couple other related > parameters to little avail. > > Anything I can do? > > -Jason > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- Jorge Ner?n -------------- next part -------------- An HTML attachment was scrubbed... URL: From james at ifixit.com Mon Apr 28 19:09:14 2014 From: james at ifixit.com (James Pearson) Date: Mon, 28 Apr 2014 12:09:14 -0700 Subject: System can't take more than 5k req /sec In-Reply-To: References: Message-ID: <1398712042-sup-837@geror.local> Excerpts from Jorge's message of 2014-04-26 03:33:24 -0700: > 5k req/s is very low, I was able to achieve ~12-16k with a vm with 2 cores > and 8Gb, albeit our hit ratio was higher, around 90% (and it was a stress > test). We did some load-testing recently after a French tv special took us down :( , and we found that cache misses are *vastly* more expensive, resource-wise, than cache hits. I don't recall the numbers, but if I were you I'd focus on upping the hit rate as much as possible. 
- P -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From lnsano at bol.com.br Mon Apr 28 20:37:41 2014 From: lnsano at bol.com.br (lnsano at bol.com.br) Date: Mon, 28 Apr 2014 17:37:41 -0300 Subject: System can't take more than 5k req /sec In-Reply-To: CAChvjRDW9nLOHYNHhaOnZvCPZ3nSdip09U+a+5jqxsv=xyX_KQ@mail.gmail.com References: CAChvjRDW9nLOHYNHhaOnZvCPZ3nSdip09U+a+5jqxsv=xyX_KQ@mail.gmail.com Message-ID: <535ebc159bd10_5163583282c5656c@a4-wakko5.mail> An HTML attachment was scrubbed... URL: From ruben at varnish-software.com Tue Apr 29 07:25:51 2014 From: ruben at varnish-software.com (=?UTF-8?Q?Rub=C3=A9n_Romero?=) Date: Tue, 29 Apr 2014 09:25:51 +0200 Subject: Varnish 4.0 Release Party Today! > http://v4party.varnish-cache.org/ Message-ID: Hello there, In behalf of the Varnish team, I would like to thank everyone that has joined to make all the Varnish 4.0 release parties today! Thank to you, Varnishers all around the world will be celebrating and raising a Varnishini toast (HOW-TO Video) for all the hard work that has been put since June 2011 into making this release. It will likely be remembered 4-ever! :-D In less than three hours *Tokyo*, as usual, will be up partying first. *L.A.* will close up fashionably late when we all have probably given up already. In between *we have 27 other parties that you can join IRL. Check them out and register here: http://v4party.varnish-cache.org/ * *In case you cannot make it, you can always follow the Live-Stream on Youtube . It starts at 18.00 CEST. See your local time here . You can see the Agenda here .* Really looking forward to celebrate the release like there is no Varnish 5! ;-) See you tonight! 
Best regards, -- *Rub?n Romero* Self-Appointed Varnish Community Cheerleader | Varnish Software AS Cell: +47 95964088 / Office: +47 21989260 Skype, Twitter & IRC: ruben_varnish We Make Websites Fly!Winner of the 2013 Red Herring Top 100 Global Awards *RSVP to the Global Varnish 4.0 Release Party on April 29th: http://v4party.varnish-cache.org/ * -------------- next part -------------- An HTML attachment was scrubbed... URL: From Marius.Storm-Olsen at student.bi.no Tue Apr 29 13:18:32 2014 From: Marius.Storm-Olsen at student.bi.no (Storm-Olsen, Marius) Date: Tue, 29 Apr 2014 13:18:32 +0000 Subject: Open Source Organizational Culture Message-ID: <1398777513261.21125@student.bi.no> Hi, I would like to request your participation in a survey on Open Source Organizational Culture, which will provide valuable insight into how Open Source projects are run, how their participants act, how they might change going forward, and how particular Open Source projects compare with one another and with traditional business cultures. The survey shouldn't take more than 10-15 minutes to complete. Some of the projects already participating in the survey include Qt, KDE, Chromium, OpenStack, OpenDaylight, FFmpeg, Go, Git, Subversion, Bazaar, LibreOffice, Perl, Python, Ruby on Rails, U-Boot and Zanata http://bit.ly/OSOCAS2014 Who? ---- My name is Marius Storm-Olsen, and I am currently working on a thesis on Open Source Organizational Culture. I've been an active part of Open Source for years, most notably on the Qt and Git/MsysGit projects. Although I have my own experiences to draw on, they do not qualify for the Open Source community at large. Why? ---- The survey will be used as part of my thesis on Open Source Organizational Culture at BI Norwegian Business School (www.bi.no/en, or www.bi.edu), but in true Open Source spirit the raw - but anonymized - results will be open for all. 
So, your Open Source project will be able to massage and dissect the results any way you wish, and see how you compare with other projects out there. Up until now, most research in Open Source culture has been based on mining mailing lists to find out how people act, who they interact with, and how projects organize themselves. In this research we would rather ask the participants directly about how a project is managed and what should change for the project to be spectacularly successful. When? ----- The survey is open now through May 5th. Where? ------ The bit.ly address above brings you to the following page https://www.surveygizmo.com/s3/1587798/osocas-2014 You can save your progress at any time and come back to the survey at a later point when you have time to finish it. How to help? ------------ If you want to help, feel free to send me an email with the name and website of the project, and I will go through the appropriate channels to request permission to send the survey out on their mailing list(s). If you are a prominent member of said project, feel free to forward the survey invitation directly, but please let me know so I can update that survey request overview below. You'll find a wiki containing the current survey request status here: https://github.com/mstormo/OSOCAS/wiki I do hope you can participate, and thank you for your consideration! Best regards, Marius Storm-Olsen From japrice at gmail.com Tue Apr 29 18:34:19 2014 From: japrice at gmail.com (Jason Price) Date: Tue, 29 Apr 2014 14:34:19 -0400 Subject: System can't take more than 5k req /sec In-Reply-To: <535ebc159bd10_5163583282c5656c@a4-wakko5.mail> References: <535ebc159bd10_5163583282c5656c@a4-wakko5.mail> Message-ID: (And Jorge: iptables isn't in play at all. lsmod | grep iptables shows nothing.) On Mon, Apr 28, 2014 at 4:37 PM, wrote: > Could you print the output for? 
> $ ss -s > $ sudo sysctl -a|egrep > "ip_local_port_range|tcp_max_tw_buckets|backlog|somaxconn" [root at XXXXXXXXX ~]# ss -s Total: 853 (kernel 6127) TCP: 695 (estab 292, closed 94, orphaned 0, synrecv 0, timewait 94/0), ports 0 Transport Total IP IPv6 * 6127 - - RAW 0 0 0 UDP 8 5 3 TCP 601 598 3 INET 609 603 6 FRAG 0 0 0 [root at XXXXXXXXX ~]# sysctl -a|egrep "ip_local_port_range|tcp_max_tw_buckets|backlog|somaxconn" net.core.netdev_max_backlog = 5000 net.core.somaxconn = 512 net.ipv4.ip_local_port_range = 32768 61000 net.ipv4.tcp_max_syn_backlog = 4096 net.ipv4.tcp_max_tw_buckets = 131072 This system isn't under super heavy load currently (about 400 req/sec) but it has been at max load. From ragnar at waalaskala.com Tue Apr 29 19:13:55 2014 From: ragnar at waalaskala.com (Ragnar Kurm) Date: Tue, 29 Apr 2014 22:13:55 +0300 Subject: Varnish 503 error Message-ID: <535FF9F3.7040704@waalaskala.com> > 12 FetchError c http first read error: -1 0 (No error recorded) Jean, Got into same situation myself and replying mostly for documentation purposes, since your original post was quite some time ago. Found that Apache was closing keep-alive connection improperly in some cases (at least this is how I understand keep-alive). Varnish sent request during keep-alive session, Apache closed connection in return. Apache configuration "KeepAlive Off" solved it for me. For complete details I posted my findings here: https://www.varnish-cache.org/forum/topic/1468 Ragnar From numard at gmail.com Tue Apr 29 23:00:20 2014 From: numard at gmail.com (Norberto Meijome) Date: Wed, 30 Apr 2014 09:00:20 +1000 Subject: System can't take more than 5k req /sec In-Reply-To: References: <535ebc159bd10_5163583282c5656c@a4-wakko5.mail> Message-ID: Have you ruled out AWS limits? Putting varnish aside for a minute, can you handle 5k/sec TCP conns with something like nginx +static files.? On 30/04/2014 4:35 am, "Jason Price" wrote: > (And Jorge: iptables isn't in play at all. 
lsmod | grep iptables > shows nothing.) > > On Mon, Apr 28, 2014 at 4:37 PM, wrote: > > Could you print the output for? > > $ ss -s > > $ sudo sysctl -a|egrep > > "ip_local_port_range|tcp_max_tw_buckets|backlog|somaxconn" > > [root at XXXXXXXXX ~]# ss -s > Total: 853 (kernel 6127) > TCP: 695 (estab 292, closed 94, orphaned 0, synrecv 0, timewait 94/0), > ports 0 > > Transport Total IP IPv6 > * 6127 - - > RAW 0 0 0 > UDP 8 5 3 > TCP 601 598 3 > INET 609 603 6 > FRAG 0 0 0 > > [root at XXXXXXXXX ~]# sysctl -a|egrep > "ip_local_port_range|tcp_max_tw_buckets|backlog|somaxconn" > net.core.netdev_max_backlog = 5000 > net.core.somaxconn = 512 > net.ipv4.ip_local_port_range = 32768 61000 > net.ipv4.tcp_max_syn_backlog = 4096 > net.ipv4.tcp_max_tw_buckets = 131072 > > This system isn't under super heavy load currently (about 400 req/sec) > but it has been at max load. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From japrice at gmail.com Wed Apr 30 02:41:06 2014 From: japrice at gmail.com (Jason Price) Date: Tue, 29 Apr 2014 22:41:06 -0400 Subject: System can't take more than 5k req /sec In-Reply-To: References: <535ebc159bd10_5163583282c5656c@a4-wakko5.mail> Message-ID: On Tuesday, April 29, 2014, Norberto Meijome wrote: > Have you ruled out AWS limits? Putting varnish aside for a minute, can you > handle 5k/sec TCP conns with something like nginx +static files.? > This is an excellent question. I'll see what kind of answer I'll get to it. -Jason -------------- next part -------------- An HTML attachment was scrubbed... URL:
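[For readers hitting the same ceiling: the sysctl output earlier in the thread shows a small accept backlog (net.core.somaxconn = 512) and the stock ephemeral port range. A commonly tried set of adjustments looks like this; the numbers are illustrative starting points, not values confirmed to fix this particular system.]

```
# /etc/sysctl.conf additions (illustrative values)
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 16384
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_local_port_range = 10240 65000
```

[Note that Varnish also caps its own accept queue with the listen_depth parameter (varnishd -p listen_depth=..., default 1024 in 3.0), so raising somaxconn alone may not help; both are worth checking together.]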