From huglester at gmail.com Sun Nov 1 09:57:42 2015 From: huglester at gmail.com (Jaroslav) Date: Sun, 1 Nov 2015 11:57:42 +0200 Subject: Varnish4 cache not always working Message-ID: Hello, I have successfully set up Varnish4 to cache the entire site for a specific time. It is working with no problem. Since these are our specific customers' sites, we can safely remove headers like 'Set-Cookie' and 'Vary', because those sites do not have a user login, only a backend. The problem is that later we wanted to connect one more site; it returns nearly the same headers, but the dynamic part is not cached. Only the assets like CSS, JS etc. are cached. The only possible problem we could spot is that, for example, these response headers differ when we call the problematic site two times in a row:
```
HTTP/1.1 200 OK
Age: 0
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 3354
Content-Type: text/html; charset=UTF-8
Date: Sun, 01 Nov 2015 09:45:18 GMT
Vary: Accept-Encoding
X-Cache-Hits: 0
X-Varnish-Cache: MISS

HTTP/1.1 200 OK
Age: 0
Connection: keep-alive
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Date: Sun, 01 Nov 2015 09:45:33 GMT
X-Cache-Hits: 0
X-Varnish-Cache: MISS
```
As you can see, there is no 'Content-Encoding' header, and 'Content-Length' is set to 0 when I run the headers check the second time. Maybe some of you have an idea what could be wrong? Thank you, Jaroslav -------------- next part -------------- An HTML attachment was scrubbed... URL: From colas.delmas at gmail.com Mon Nov 2 15:23:08 2015 From: colas.delmas at gmail.com (Nicolas Delmas) Date: Mon, 2 Nov 2015 16:23:08 +0100 Subject: FetchError : req.body read error / backend write error Message-ID: Hello, Sometimes I get backend_fetch_failed for some URLs. The URL is called by AJAX with the POST method. The response of this call is empty because it is just logging, and the content-length = 20.
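An aside on the Vary question in the first message above: one response carries "Vary: Accept-Encoding" and the next does not, which suggests the two requests sent different Accept-Encoding headers and hit different cache variants. Varnish 3 and later already normalize Accept-Encoding internally when http_gzip_support is on, but an explicit normalization in vcl_recv is a common debugging step. A minimal sketch (Varnish 4 syntax; this is an assumption about the cause, not a confirmed diagnosis):

```vcl
sub vcl_recv {
    # Collapse Accept-Encoding to a single well-known value so that a
    # backend "Vary: Accept-Encoding" cannot split the cache into many
    # variants. Varnish handles gzip itself, so gzip is all we keep.
    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } else {
            # No encoding Varnish cares about; drop the header entirely.
            unset req.http.Accept-Encoding;
        }
    }
}
```

It would also be worth checking why the second backend response has Content-Length: 0; if an empty response was cached, it would look exactly like the symptom described.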
And the backend response status code is 200. I don't get the error all the time; for example, I can't reproduce it from my browser. I see this error through std.syslog:
- BereqMethod POST
- BereqURL /proxy.php?xdp_path=http%3A%2F%2Fmonsite.com%2Fcf_log.php
- BereqProtocol HTTP/1.1
- BereqHeader Accept: application/json, text/javascript, */*; q=0.01
- BereqHeader Content-Type: application/x-www-form-urlencoded; charset=UTF-8
- BereqHeader X-Requested-With: XMLHttpRequest
- BereqHeader Referer:
- BereqHeader Accept-Language: fr-FR,fr;q=0.5
- BereqHeader Accept-Encoding: gzip, deflate
- BereqHeader User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; LCJB; rv:11.0) like Gecko
- BereqHeader Content-Length: 635
- BereqHeader DNT: 1
- BereqHeader Cache-Control: no-cache
- BereqHeader Cookie: fofirdId=c652b193-cbff-4a0e-abdf-836087b20393; xtvrn=$390974$; xtan390974=-; xtant390974=1; CF_SESSION_SYNC=1446135592
- BereqHeader X-UA-Device: pc
- BereqHeader X-UA-Device-Simplified: desktop
- BereqHeader Via: 1.1 varnish-v4
- BereqHeader grace: none
- BereqHeader X-Forwarded-For: xx.xx.xx.xx, xx.xx.xx.xx
- BereqHeader Host: monsite.com
- BereqHeader X-Pass: POST request
- BereqHeader X-Pass-D: POST request - hash - Pass
- BereqHeader X-Varnish: 2031880
- VCL_call BACKEND_FETCH
- VCL_return fetch
- BackendOpen 258 backend1(xx.xx.xx.xx,,80) xx.xx.xx.xx 40582
- Backend 258 backend backedn1(xx.xx.xx.xx,,80)
- FetchError req.body read error: 104 (Connection reset by peer)
- FetchError backend write error: 104 (Connection reset by peer)
Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Mon Nov 2 16:31:46 2015 From: perbu at varnish-software.com (Per Buer) Date: Mon, 2 Nov 2015 17:31:46 +0100 Subject: FetchError : req.body read error / backend write error In-Reply-To: References: Message-ID: Hi, Your connection is being reset by the backend (the peer). You need to go debug the backend. Per.
On Mon, Nov 2, 2015 at 4:23 PM, Nicolas Delmas wrote: > Hello, > > Sometimes I get backend_fetch_failed for some URL. > This URL is called by AJAX in POST method. > > The response of this call is empty because it's just log and the > content-length = 20. > And backend response status code = 200 > > > I don't get the error all time, for exemple I can't reproduce from my > browser. > > I get this error in through std.syslog : > > - BereqMethod POST > - BereqURL /proxy.php?xdp_path=http%3A%2F%2Fmonsite.com%2Fcf_log.php > - BereqProtocol HTTP/1.1 > - BereqHeader Accept: application/json, text/javascript, */*; q=0.01 > - BereqHeader Content-Type: application/x-www-form-urlencoded; charset=UTF-8 > - BereqHeader X-Requested-With: XMLHttpRequest > - BereqHeader Referer: > - BereqHeader Accept-Language: fr-FR,fr;q=0.5 > - BereqHeader Accept-Encoding: gzip, deflate > - BereqHeader User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; LCJB; rv:11.0) like Gecko > - BereqHeader Content-Length: 635 > - BereqHeader DNT: 1 > - BereqHeader Cache-Control: no-cache > - BereqHeader Cookie: fofirdId=c652b193-cbff-4a0e-abdf-836087b20393; xtvrn=$390974$; xtan390974=-; xtant390974=1; CF_SESSION_SYNC=1446135592 > - BereqHeader X-UA-Device: pc > - BereqHeader X-UA-Device-Simplified: desktop > - BereqHeader Via: 1.1 varnish-v4 > - BereqHeader grace: none > - BereqHeader X-Forwarded-For: xx.xx.xx.xx, xx.xx.xx.xx > - BereqHeader Host: monsite.com > - BereqHeader X-Pass: POST request > - BereqHeader X-Pass-D: POST request - hash - Pass > - BereqHeader X-Varnish: 2031880 > - VCL_call BACKEND_FETCH > - VCL_return fetch > - BackendOpen 258 backend1(xx.xx.xx.xx,,80) xx.xx.xx.xx 40582 > - Backend 258 backend backedn1(xx.xx.xx.xx,,80) > - FetchError req.body read error: 104 (Connection reset by peer) > - FetchError backend write error: 104 (Connection reset by peer) > > Thank you. 
> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- *Per Buer* CTO | Varnish Software AS Cell: +47 95839117 We Make Websites Fly! www.varnish-software.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From uxbod at splatnix.net Mon Nov 2 17:03:20 2015 From: uxbod at splatnix.net (Phil Daws) Date: Mon, 2 Nov 2015 17:03:20 +0000 (GMT) Subject: Varnish, NGINX SSL and Wordpress Message-ID: <1486838860.467704.1446483800802.JavaMail.zimbra@innovot.com> Hello, Are any of you running Varnish in front of an SSL WordPress site? I have tried using NGINX as the SSL termination point and proxying back to Varnish on port 80, but you end up with mixed-content errors. If you tell WordPress to use https exclusively and you proxy with http, then you get into a 301 redirect loop. Any thoughts, please? Thanks, Phil From cfernand at sju.edu Mon Nov 2 17:37:50 2015 From: cfernand at sju.edu (Carlos M. Fernández) Date: Mon, 2 Nov 2015 12:37:50 -0500 (EST) Subject: Varnish, NGINX SSL and Wordpress In-Reply-To: <1486838860.467704.1446483800802.JavaMail.zimbra@innovot.com> References: <1486838860.467704.1446483800802.JavaMail.zimbra@innovot.com> Message-ID: <003b01d11595$318adeb0$94a09c10$@sju.edu> Hi, Phil, We don't use Nginx but do SSL termination at a hardware load balancer, with most of the work to support that setup done in the VCL, and something similar could possibly apply to your scenario. Our load balancer can use different backend ports depending on which protocol the client requests; e.g., if the client connects to port 80 for HTTP, then the load balancer proxies that to Varnish on port 80, while if the client connects to 443 for HTTPS the load balancer proxies to Varnish on port 8008.
The choice of Varnish port numbers doesn't matter, just the fact that Varnish listens on both ports and that the load balancer uses one or the other based on the SSL status with the client (using the command line option "-a :80,8008" in this case). Then, in vcl_recv, we have the following to inform the backend when an SSL request has arrived: if ( std.port( server.ip ) == 8008 ) { set req.http.X-Forwarded-Proto = "https"; } We also have the following in vcl_hash to cache HTTP and HTTPS requests separately and avoid redirection loops: if ( req.http.X-Forwarded-Proto ) { hash_data( req.http.X-Forwarded-Proto ); } The backend then can look for that header and respond accordingly. For example, in Apache we set the HTTPS environment variable to "on": SetEnvIf X_FORWARDED_PROTO https HTTPS=on I have no knowledge of Nginx, but if it can be configured to use different backend ports then you should be able to use the above. Best regards, -- Carlos. -----Original Message----- From: varnish-misc-bounces+cfernand=sju.edu at varnish-cache.org [mailto:varnish-misc-bounces+cfernand=sju.edu at varnish-cache.org] On Behalf Of Phil Daws Sent: Monday, 02 November, 2015 12:03 To: varnish-misc at varnish-cache.org Subject: Varnish, NGINX SSL and Wordpress Hello, Are any of you running Varnish in-front of a SSL Wordpress site ? I have tried using NGINX as the SSL termination point and proxying back to Varnish on port 80 but you end up with mixed content errors. If you tell Wordpress to use https exclusively, and you are proxy with http, then you get into 301 perm loop. Any thoughts please ? 
Thanks, Phil _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From ksorensen at nordija.com Mon Nov 2 18:04:36 2015 From: ksorensen at nordija.com (Kristian Grønfeldt Sørensen) Date: Mon, 2 Nov 2015 19:04:36 +0100 Subject: varnishncsa -- please support a config file In-Reply-To: References: <1446223339.1984798.424778313.4C3BB196@webmail.messagingengine.com> Message-ID: On 30 October 2015 at 17:54, Raymond Jennings III < raymond.jennings.iii at gmail.com> wrote: > I'll second that request. > > On Fri, Oct 30, 2015 at 12:42 PM, Mark Felder wrote: > >> >> If we could read settings from a config file it would solve this problem >> cleanly... >> >> Hi, I submitted a patch last year that made it possible to read the log format from a file, which solves the issue with quoting: https://www.varnish-cache.org/patchwork/patch/244/ I haven't tried applying and compiling it on 4.1.0, but it's pretty simple, so I assume it will be trivial to apply it to 4.1.0. Hopefully we can convince somebody to accept it into 4.1.1. BR Kristian Sørensen -------------- next part -------------- An HTML attachment was scrubbed... URL: From feld at feld.me Mon Nov 2 18:09:40 2015 From: feld at feld.me (Mark Felder) Date: Mon, 02 Nov 2015 12:09:40 -0600 Subject: varnishncsa -- please support a config file In-Reply-To: References: <1446223339.1984798.424778313.4C3BB196@webmail.messagingengine.com> Message-ID: <1446487780.3073293.426976745.23A0BCE3@webmail.messagingengine.com> On Mon, Nov 2, 2015, at 12:04, Kristian Grønfeldt Sørensen wrote: > On 30 October 2015 at 17:54, Raymond Jennings III < > raymond.jennings.iii at gmail.com> wrote: > > > I'll second that request. > > > > On Fri, Oct 30, 2015 at 12:42 PM, Mark Felder wrote: > > > >> > >> If we could read settings from a config file it would solve this problem
> >> > >> > Hi, > > I submitted a patch last year that made it possible to read the log > format > from a file, which solves the issue with qouting : > https://www.varnish-cache.org/patchwork/patch/244/ > I haven't tried applying and compiling it on 4.1.0, but' it's pretty > simple, so assume it will be trivial to apply it on 4.1.0. > Hopefully we can convince somebody to accept it in to 4.1.1 > > BR > > Kristian Sørensen Confirmed it still compiles with 4.1.0, but I have not actually tested it yet. -- Mark Felder feld at feld.me From rmorgan at greenmtd.com Tue Nov 3 03:22:10 2015 From: rmorgan at greenmtd.com (Ryan Morgan) Date: Mon, 2 Nov 2015 22:22:10 -0500 Subject: change ttl based on request url Message-ID: <2F59220D-65AE-48B6-8EEA-374620E40A78@yonder.it> I have multiple subdomains pointing to one Varnish instance. I read in the documentation that PCRE regex should be used. I believe the regex I have below should return true when the request URL is "http://internal.my.com/any/thing" and the 15s TTL should be set. I've tried just (req.url ~ "internal.my.com") as well, because I read that it should match any part of the request URL. Help is appreciated! Thanks! -Ryan
# Cache for a longer time if the internal.my.com URL isn't used
sub vcl_fetch {
    if (req.url ~ "^[(http:\/\/)|(https:\/\/)]*internal\.my\.com.*") {
        set beresp.ttl = 15 s;
    } else {
        set beresp.ttl = 300 s;
    }
}
-------------- next part -------------- An HTML attachment was scrubbed... URL: From viktor.villafuerte at optusnet.com.au Tue Nov 3 07:07:16 2015 From: viktor.villafuerte at optusnet.com.au (Viktor Villafuerte) Date: Tue, 3 Nov 2015 18:07:16 +1100 Subject: change ttl based on request url In-Reply-To: <2F59220D-65AE-48B6-8EEA-374620E40A78@yonder.it> References: <2F59220D-65AE-48B6-8EEA-374620E40A78@yonder.it> Message-ID: <20151103070716.GC17755@optusnet.com.au> Hi Ryan, you're mixing req.url with req.http.host. Also be aware of changes between Varnish 3 and 4.
You didn't specify which version you're using, so I won't go into details. Those can be found here: https://www.varnish-cache.org/docs/4.0/whats-new/upgrading.html On Mon 02 Nov 2015 22:22:10, Ryan Morgan wrote: > I have multiple subdomains pointing to one varnish instance. I read in the documentation that PCRE regex should be used. I believe the regex I have below should return true when the request url is "http://internal.my.com/any/thing" and the 15s ttl should be set. I've tried just (req.url ~ "internal.my.com") as well because I read that it should match if any part of the request url. Help is appreciated! Thanks! > > -Ryan > > # Cache for a longer time if the internal.my.com URL isn't used > sub vcl_fetch { > if (req.url ~ "^[(http:\/\/)|(https:\/\/)]*internal\.my\.com.*"){ what you're trying to match here is req.http.host, so the statement never returns true, because req.url never matches it. Also, I don't like the way the regex looks. I have not tested it, so quite possibly it works, but it's a bit ugly. I'm sure a simpler and cleaner regex could/should be found. v > set beresp.ttl = 15 s; > } else { > set beresp.ttl = 300 s; > } > } > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -- Regards Viktor Villafuerte Optus Internet Engineering t: +61 2 80825265 From rmorgan at greenmtd.com Tue Nov 3 12:36:24 2015 From: rmorgan at greenmtd.com (Ryan Morgan) Date: Tue, 3 Nov 2015 07:36:24 -0500 Subject: change ttl based on request url In-Reply-To: <20151103070716.GC17755@optusnet.com.au> References: <2F59220D-65AE-48B6-8EEA-374620E40A78@yonder.it> <20151103070716.GC17755@optusnet.com.au> Message-ID: <751fc2f5-4d08-4ba2-bbe5-64458404525b@Spark> I'm using Varnish 3. I switched it to req.http.host and simplified the regex to just "internal.my.com" and it worked perfectly. Thank you!
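The fix discussed in this thread can be sketched in VCL. This is a sketch for Varnish 3 (hence `vcl_fetch`; in Varnish 4 the equivalent hook is `vcl_backend_response` with `bereq.http.host`), matching the Host header instead of `req.url`:

```vcl
# Shorter TTL for the internal hostname, longer for everything else.
# req.url contains only the path and query string, so the hostname
# must be matched against req.http.host (which may also carry a port).
sub vcl_fetch {
    if (req.http.host ~ "(?i)^internal\.my\.com") {
        set beresp.ttl = 15s;
    } else {
        set beresp.ttl = 300s;
    }
}
```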
-Ryan On Nov 3, 2015, 2:08 AM -0500, Viktor Villafuerte, wrote: > Hi Ryan, > > you're mixing req.url with req.http.host > > Also be aware of changes between Varnish 3 and 4. You didn't specified > which version you're using so I won't go into details. Those can be > found here > https://www.varnish-cache.org/docs/4.0/whats-new/upgrading.html > > > On Mon 02 Nov 2015 22:22:10, Ryan Morgan wrote: > > I have multiple subdomains pointing to one varnish instance. I read in the documentation that PCRE regex should be used. I believe the regex I have below should return true when the request url is ?http://internal.my.com/any/thing? and the 15s ttl should be set. I?ve tried just (req.url ~ ?internal.my.com?) as well because I read that it should match if any part of the request url. Help is appreciated! Thanks! > > > > -Ryan > > > > # Cache for a longer time if the internal.my.com URL isn't used > > sub vcl_fetch { > > if (req.url ~ "^[(http:\/\/)|(https:\/\/)]*internal\.my\.com.*"){ > > what you're matching here is req.http.host and so the statement never > returns true because it never matches > > Also I don't like the way the regex looks. I have not tested this so > quite possibly it works but it's bit ugly. I'm sure simpler and cleaner > regex could/should be found. > > > v > > > > set beresp.ttl = 15 s; > > } else { > > set beresp.ttl = 300 s; > > } > > } > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > -- > Regards > > Viktor Villafuerte > Optus Internet Engineering > t: +61 2 80825265 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cairesvs at gmail.com Tue Nov 3 12:42:52 2015 From: cairesvs at gmail.com (Caires Vinicius) Date: Tue, 03 Nov 2015 12:42:52 +0000 Subject: 100% CPU IOwait Message-ID: Hi guys! 
We've started to use Varnish 4 on Amazon Linux with a 40GB EBS SSD and 7.5GB of memory. We use file storage with 20G allocated, a TTL of 11 minutes and a grace of 5 hours; all the other configs are standard. Sometimes, when we have a lot of requests that result in cache misses, we notice that our request latency grows and the iowait stays at 100%, something similar to this: https://www.varnish-cache.org/lists/pipermail/varnish-misc/2008-April/01... . And our threads reach the maximum (1000). Do you guys have any idea why that is? Cheers, Caires -------------- next part -------------- An HTML attachment was scrubbed... URL: From cairesvs at gmail.com Tue Nov 3 12:54:55 2015 From: cairesvs at gmail.com (Caires Vinicius) Date: Tue, 03 Nov 2015 12:54:55 +0000 Subject: 100% CPU IOwait In-Reply-To: References: Message-ID: My /etc/fstab is *tmpfs /var/lib/varnish tmpfs defaults,noatime,nodiratime,size=300M 0 0* On Tue, Nov 3, 2015 at 10:42 AM Caires Vinicius wrote: > Hi guys! > > We've started to use Varnish 4 with Amazon Linux with EBS SSD of 40GB, > memory of 7.5GB. We use the file storage with 20G allocated with ttl of 11 > minutes and grace of 5 hours, all the other configs are standard. > > Sometimes when we have a lot of request that result into cache miss we > started to notice that our request latency grows and the iowait stays at > 100%, something similar to this > https://www.varnish-cache.org/lists/pipermail/varnish-misc/2008-April/01... > . > And our threads reaches the maximum (1000). > > Do you guys have any idea why is that? > > Cheers, > > Caires > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cairesvs at gmail.com Tue Nov 3 12:55:57 2015 From: cairesvs at gmail.com (Caires Vinicius) Date: Tue, 03 Nov 2015 12:55:57 +0000 Subject: 100% CPU IOwait In-Reply-To: References: Message-ID: And my storage is locate here /var/lib/varnish-cache/varnish_storage.bin On Tue, Nov 3, 2015 at 10:42 AM Caires Vinicius wrote: > Hi guys! > > We've started to use Varnish 4 with Amazon Linux with EBS SSD of 40GB, > memory of 7.5GB. We use the file storage with 20G allocated with ttl of 11 > minutes and grace of 5 hours, all the other configs are standard. > > Sometimes when we have a lot of request that result into cache miss we > started to notice that our request latency grows and the iowait stays at > 100%, something similar to this > https://www.varnish-cache.org/lists/pipermail/varnish-misc/2008-April/01... > . > And our threads reaches the maximum (1000). > > Do you guys have any idea why is that? > > Cheers, > > Caires > -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Tue Nov 3 12:56:00 2015 From: perbu at varnish-software.com (Per Buer) Date: Tue, 3 Nov 2015 13:56:00 +0100 Subject: 100% CPU IOwait In-Reply-To: References: Message-ID: Hi, On Tue, Nov 3, 2015 at 1:42 PM, Caires Vinicius wrote: > We've started to use Varnish 4 with Amazon Linux with EBS SSD of 40GB, > memory of 7.5GB. We use the file storage with 20G allocated with ttl of 11 > minutes and grace of 5 hours, all the other configs are standard. > I would, on a general basis, recommend against using the file backend. It will start to struggle with fragmentation relatively quickly and the performance isn't all that great (lots of unnecessary synchronous reads). > Sometimes when we have a lot of request that result into cache miss we > started to notice that our request latency grows and the iowait stays at > 100%, something similar to this > https://www.varnish-cache.org/lists/pipermail/varnish-misc/2008-April/01... > . 
> And our threads reaches the maximum (1000). > > Do you guys have any idea why is that? > Yeah. New objects get assigned a piece of memory; Varnish starts writing, which triggers a page fault; the kernel takes over and reads/merges the underlying page; Varnish then overwrites that page, which then gets written back to disk. This naturally slows down delivery, so Varnish spawns new threads. Try malloc. You should start with -s malloc,30G or thereabouts - if you have lots of small objects you might need to go a bit down to avoid swapping. Not related: you should also move /var/lib/varnish onto tmpfs. Linux will do a lot of writing if the shared memory segment is visible on a filesystem that is backed by a disk. -- *Per Buer* CTO | Varnish Software AS Cell: +47 95839117 We Make Websites Fly! www.varnish-software.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdh132 at psu.edu Tue Nov 3 14:17:59 2015 From: jdh132 at psu.edu (Jason Heffner) Date: Tue, 3 Nov 2015 09:17:59 -0500 Subject: Varnish, NGINX SSL and Wordpress In-Reply-To: <003b01d11595$318adeb0$94a09c10$@sju.edu> References: <1486838860.467704.1446483800802.JavaMail.zimbra@innovot.com> <003b01d11595$318adeb0$94a09c10$@sju.edu> Message-ID: <3255A052-5EC0-4786-BDFA-E16A44519F85@psu.edu> We run Varnish in between an F5 and Apache, as well as Nginx for SSL and load balancing in development, in conjunction with WordPress backends. You have to tell WordPress that you are behind SSL and it will function properly. To accomplish this I'd use the following code in wp-config.php:
if ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') {
    $_SERVER['HTTPS'] = 'on';
}
You can then also set FORCE_SSL_ADMIN and FORCE_SSL_LOGIN however you see fit and it should work. I saw some updates not that long ago to support proxy headers but don't believe they are fully supported yet. Jason > On Nov 2, 2015, at 12:37 PM, Carlos M.
Fern?ndez wrote: > > Hi, Phil, > > We don't use Nginx but do SSL termination at a hardware load balancer, > with most of the work to support that setup done in the VCL, and something > similar could possibly apply to your scenario. > > Our load balancer can use different backend ports depending on which > protocol the client requests; e.g., if the client connects to port 80 for > HTTP, then the load balancer proxies that to Varnish on port 80, while if > the client connects to 443 for HTTPS the load balancer proxies to Varnish > on port 8008. The choice of Varnish port numbers doesn't matter, just the > fact that Varnish listens on both ports and that the load balancer uses > one or the other based on the SSL status with the client (using the > command line option "-a :80,8008" in this case). > > Then, in vcl_recv, we have the following to inform the backend when an SSL > request has arrived: > > if ( std.port( server.ip ) == 8008 ) { > set req.http.X-Forwarded-Proto = "https"; > } > > We also have the following in vcl_hash to cache HTTP and HTTPS requests > separately and avoid redirection loops: > > if ( req.http.X-Forwarded-Proto ) { > hash_data( req.http.X-Forwarded-Proto ); > } > > The backend then can look for that header and respond accordingly. For > example, in Apache we set the HTTPS environment variable to "on": > > SetEnvIf X_FORWARDED_PROTO https HTTPS=on > > I have no knowledge of Nginx, but if it can be configured to use different > backend ports then you should be able to use the above. > > Best regards, > -- > Carlos. > > -----Original Message----- > From: varnish-misc-bounces+cfernand=sju.edu at varnish-cache.org > [mailto:varnish-misc-bounces+cfernand=sju.edu at varnish-cache.org] On Behalf > Of Phil Daws > Sent: Monday, 02 November, 2015 12:03 > To: varnish-misc at varnish-cache.org > Subject: Varnish, NGINX SSL and Wordpress > > Hello, > > Are any of you running Varnish in-front of a SSL Wordpress site ? 
> > I have tried using NGINX as the SSL termination point and proxying back to > Varnish on port 80 but you end up with mixed content errors. If you tell > Wordpress to use https exclusively, and you are proxy with http, then you > get into 301 perm loop. > > Any thoughts please ? > > Thanks, Phil > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From cairesvs at gmail.com Tue Nov 3 15:08:52 2015 From: cairesvs at gmail.com (Caires Vinicius) Date: Tue, 03 Nov 2015 15:08:52 +0000 Subject: 100% CPU IOwait In-Reply-To: References: Message-ID: We had some problems with malloc on the same kind of AWS instance with -s malloc,5.8G (80% of the total memory). The only trace of the error was a 'cannot fork: cannot allocate memory' in syslog. We're probably missing some point; maybe the instance size isn't the right fit for us. On Tue, Nov 3, 2015 at 10:56 AM Per Buer wrote: > Hi, > > > On Tue, Nov 3, 2015 at 1:42 PM, Caires Vinicius > wrote: > >> We've started to use Varnish 4 with Amazon Linux with EBS SSD of 40GB, >> memory of 7.5GB. We use the file storage with 20G allocated with ttl of 11 >> minutes and grace of 5 hours, all the other configs are standard. >> > I would, on a general basis, recommend against using the file backend. It > will start to struggle with fragmentation relatively quickly and the > performance isn't all that great (lots of unnecessary synchronous reads). > >> Sometimes when we have a lot of request that result into cache miss we >> started to notice that our request latency grows and the iowait stays at >> 100%, something similar to this >> https://www.varnish-cache.org/lists/pipermail/varnish-misc/2008-April/01...
>> And our threads reaches the maximum (1000). >> >> Do you guys have any idea why is that? >> > Yeah. New objects get assign a piece of memory, starts writing, triggers > pagefault, kernel takes over and reads/merges the underlying page, varnish > then overwrites that page which then gets written back to disk. This > naturally slows down delivery so Varnish spawns new threads. > > Try malloc. You should start with -s malloc,30G or there about - if you > have lots of small objects you might need to go a bit down to avoid > swapping. > > Not related: You should also move /var/lib/varnish onto tempfs. Linux will > do a lot of writing if the shared memory segment is visible on a filesystem > that is backed by a disk. > -- > *Per Buer* > CTO | Varnish Software AS > Cell: +47 95839117 > We Make Websites Fly! > www.varnish-software.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From uxbod at splatnix.net Tue Nov 3 15:16:17 2015 From: uxbod at splatnix.net (Phil Daws) Date: Tue, 3 Nov 2015 15:16:17 +0000 (GMT) Subject: Varnish, NGINX SSL and Wordpress In-Reply-To: <3255A052-5EC0-4786-BDFA-E16A44519F85@psu.edu> References: <1486838860.467704.1446483800802.JavaMail.zimbra@innovot.com> <003b01d11595$318adeb0$94a09c10$@sju.edu> <3255A052-5EC0-4786-BDFA-E16A44519F85@psu.edu> Message-ID: <1483600311.501567.1446563777189.JavaMail.zimbra@innovot.com> Thank you to both. Will clone my existing instance and give these suggestions a whirl. Phil. ----- On 3 Nov, 2015, at 14:17, Jason Heffner jdh132 at psu.edu wrote: > We run Varnish in between an F5 and Apache as well as use Nginx for ssl and load > balancing in development, in conjunction with Wordpress backends. You have to > tell Wordpress that you are behind SSL and it will function properly. 
To > accomplish this I?d use the following code in wp-config.php > > if ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') { > $_SERVER['HTTPS']='on'; > } > > You can then also set FORCE_SSL_ADMIN and FORCE_SSL_LOGIN however you see fit > and it should work. I saw some updates not that long ago to support proxy > headers but don?t believe they are fully supported yet. > > Jason > > >> On Nov 2, 2015, at 12:37 PM, Carlos M. Fern?ndez wrote: >> >> Hi, Phil, >> >> We don't use Nginx but do SSL termination at a hardware load balancer, >> with most of the work to support that setup done in the VCL, and something >> similar could possibly apply to your scenario. >> >> Our load balancer can use different backend ports depending on which >> protocol the client requests; e.g., if the client connects to port 80 for >> HTTP, then the load balancer proxies that to Varnish on port 80, while if >> the client connects to 443 for HTTPS the load balancer proxies to Varnish >> on port 8008. The choice of Varnish port numbers doesn't matter, just the >> fact that Varnish listens on both ports and that the load balancer uses >> one or the other based on the SSL status with the client (using the >> command line option "-a :80,8008" in this case). >> >> Then, in vcl_recv, we have the following to inform the backend when an SSL >> request has arrived: >> >> if ( std.port( server.ip ) == 8008 ) { >> set req.http.X-Forwarded-Proto = "https"; >> } >> >> We also have the following in vcl_hash to cache HTTP and HTTPS requests >> separately and avoid redirection loops: >> >> if ( req.http.X-Forwarded-Proto ) { >> hash_data( req.http.X-Forwarded-Proto ); >> } >> >> The backend then can look for that header and respond accordingly. For >> example, in Apache we set the HTTPS environment variable to "on": >> >> SetEnvIf X_FORWARDED_PROTO https HTTPS=on >> >> I have no knowledge of Nginx, but if it can be configured to use different >> backend ports then you should be able to use the above. 
>> >> Best regards, >> -- >> Carlos. >> >> -----Original Message----- >> From: varnish-misc-bounces+cfernand=sju.edu at varnish-cache.org >> [mailto:varnish-misc-bounces+cfernand=sju.edu at varnish-cache.org] On Behalf >> Of Phil Daws >> Sent: Monday, 02 November, 2015 12:03 >> To: varnish-misc at varnish-cache.org >> Subject: Varnish, NGINX SSL and Wordpress >> >> Hello, >> >> Are any of you running Varnish in-front of a SSL Wordpress site ? >> >> I have tried using NGINX as the SSL termination point and proxying back to >> Varnish on port 80 but you end up with mixed content errors. If you tell >> Wordpress to use https exclusively, and you are proxy with http, then you >> get into 301 perm loop. >> >> Any thoughts please ? >> >> Thanks, Phil >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From perbu at varnish-software.com Tue Nov 3 15:30:05 2015 From: perbu at varnish-software.com (Per Buer) Date: Tue, 3 Nov 2015 16:30:05 +0100 Subject: 100% CPU IOwait In-Reply-To: References: Message-ID: Hi, On Tue, Nov 3, 2015 at 4:08 PM, Caires Vinicius wrote: > We had some problems with malloc with the same kind of aws instance and > the -s malloc,5.8G(80% of the memory total). The only trace of the error > was a cannot fork cannot allocate memory into syslog. We're probably > missing some point, maybe the instance size ins't the right fit for us. > This sounds like you're running out of virtual memory. Maybe you're running without swap space? Per.
> > > On Tue, Nov 3, 2015 at 10:56 AM Per Buer > wrote: > >> Hi, >> >> >> On Tue, Nov 3, 2015 at 1:42 PM, Caires Vinicius >> wrote: >> >>> We've started to use Varnish 4 with Amazon Linux with EBS SSD of 40GB, >>> memory of 7.5GB. We use the file storage with 20G allocated with ttl of 11 >>> minutes and grace of 5 hours, all the other configs are standard. >>> >> I would, on a general basis, recommend against using the file backend. It >> will start to struggle with fragmentation relatively quickly and the >> performance isn't all that great (lots of unnecessary synchronous reads). >> >>> Sometimes when we have a lot of request that result into cache miss we >>> started to notice that our request latency grows and the iowait stays at >>> 100%, something similar to this >>> https://www.varnish-cache.org/lists/pipermail/varnish-misc/2008-April/01... >>> . >>> And our threads reaches the maximum (1000). >>> >>> Do you guys have any idea why is that? >>> >> Yeah. New objects get assign a piece of memory, starts writing, triggers >> pagefault, kernel takes over and reads/merges the underlying page, varnish >> then overwrites that page which then gets written back to disk. This >> naturally slows down delivery so Varnish spawns new threads. >> >> Try malloc. You should start with -s malloc,30G or there about - if you >> have lots of small objects you might need to go a bit down to avoid >> swapping. >> >> Not related: You should also move /var/lib/varnish onto tempfs. Linux >> will do a lot of writing if the shared memory segment is visible on a >> filesystem that is backed by a disk. >> -- >> *Per Buer* >> CTO | Varnish Software AS >> Cell: +47 95839117 >> We Make Websites Fly! >> www.varnish-software.com >> >> > -- *Per Buer* CTO | Varnish Software AS Cell: +47 95839117 We Make Websites Fly! www.varnish-software.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cairesvs at gmail.com Tue Nov 3 16:15:35 2015 From: cairesvs at gmail.com (Caires Vinicius) Date: Tue, 03 Nov 2015 16:15:35 +0000 Subject: 100% CPU IOwait In-Reply-To: References: Message-ID: Hi Per, Correcting: At the time the machine was Ubuntu not Amazon Linux. We didn't have the proper monitoring at the time and after that we did it some load testing with the request urls and we couldn't reproduce the error. The other machine had this line of error in the syslog. varnishd[22217]: segfault at 18 ip 00007f061ea62565 sp 00007ef8a87e8170 error 4 in libjemalloc.so.1[7f061ea57000+30000] Similar to this: https://bugs.launchpad.net/ubuntu/+source/jemalloc/+bug/1333581 Hi Paul, The long TTL would apply for grace? Size down would help the cache evict? On Tue, Nov 3, 2015 at 1:30 PM Per Buer wrote: > Hi, > > On Tue, Nov 3, 2015 at 4:08 PM, Caires Vinicius > wrote: > >> We had some problems with malloc with the same kind of aws instance and >> the -s malloc,5.8G(80% of the memory total). The only trace of the error >> was a cannot fork cannot allocate memory into syslog. We're probably >> missing some point, maybe the instance size ins't the right fit for us. >> > > > This sounds like your running out of virtual memory. Maybe you're running > without swap space? > > Per. > > >> >> >> On Tue, Nov 3, 2015 at 10:56 AM Per Buer >> wrote: >> >>> Hi, >>> >>> >>> On Tue, Nov 3, 2015 at 1:42 PM, Caires Vinicius >>> wrote: >>> >>>> We've started to use Varnish 4 with Amazon Linux with EBS SSD of 40GB, >>>> memory of 7.5GB. We use the file storage with 20G allocated with ttl of 11 >>>> minutes and grace of 5 hours, all the other configs are standard. >>>> >>> I would, on a general basis, recommend against using the file backend. >>> It will start to struggle with fragmentation relatively quickly and the >>> performance isn't all that great (lots of unnecessary synchronous reads). 
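[Editor's note: Per's two suggestions translate into configuration roughly like the following. This is a sketch: the file paths are Debian-style defaults, and the 5G arena is an assumed size for a 7.5 GB instance, leaving headroom below physical RAM to avoid swapping.]

```
# /etc/default/varnish -- switch from file storage to malloc:
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -s malloc,5G"

# /etc/fstab -- keep the shared-memory log off disk-backed storage:
tmpfs  /var/lib/varnish  tmpfs  rw,nosuid,nodev,size=256M  0  0
```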
>>> >>>> Sometimes when we have a lot of request that result into cache miss we >>>> started to notice that our request latency grows and the iowait stays at >>>> 100%, something similar to this >>>> https://www.varnish-cache.org/lists/pipermail/varnish-misc/2008-April/01... >>>> . >>>> And our threads reaches the maximum (1000). >>>> >>>> Do you guys have any idea why is that? >>>> >>> Yeah. New objects get assign a piece of memory, starts writing, triggers >>> pagefault, kernel takes over and reads/merges the underlying page, varnish >>> then overwrites that page which then gets written back to disk. This >>> naturally slows down delivery so Varnish spawns new threads. >>> >>> Try malloc. You should start with -s malloc,30G or there about - if you >>> have lots of small objects you might need to go a bit down to avoid >>> swapping. >>> >>> Not related: You should also move /var/lib/varnish onto tempfs. Linux >>> will do a lot of writing if the shared memory segment is visible on a >>> filesystem that is backed by a disk. >>> -- >>> *Per Buer* >>> CTO | Varnish Software AS >>> Cell: +47 95839117 >>> We Make Websites Fly! >>> www.varnish-software.com >>> >>> >> > > > -- > *Per Buer* > CTO | Varnish Software AS > Cell: +47 95839117 > We Make Websites Fly! > www.varnish-software.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From viktor.villafuerte at optusnet.com.au Wed Nov 4 06:57:00 2015 From: viktor.villafuerte at optusnet.com.au (Viktor Villafuerte) Date: Wed, 4 Nov 2015 17:57:00 +1100 Subject: Varnish4 + do_stream Message-ID: <20151104065700.GE17755@optusnet.com.au> Hi all, I've been trying to use do_stream in my VCL on Varnish4. But no matter what I do it does not seem to work the way I expect it :) sub vcl_backend_response { set beresp.do_stream = true; .. } >From reading around I think this is actually the default but Varnish always waits until it has the whole file? The setup is Varnish -> Nginx -> Nginx. 
If I do request directly to the first Nginx the return of the file starts immediately. If I do request to Varnish first it waits about 3 mins (file is about 3G in size) before it actually serves anything.. Please, let me know if you need more info about my setup! Any suggestions are welcome thanks v -- Regards Viktor Villafuerte Optus Internet Engineering t: +61 2 80825265 From viktor.villafuerte at optusnet.com.au Mon Nov 9 21:53:11 2015 From: viktor.villafuerte at optusnet.com.au (Viktor Villafuerte) Date: Tue, 10 Nov 2015 08:53:11 +1100 Subject: Varnish4 + do_stream In-Reply-To: <20151104065700.GE17755@optusnet.com.au> References: <20151104065700.GE17755@optusnet.com.au> Message-ID: <20151109215311.GH17755@optusnet.com.au> Just in case anybody is interested.. :) do_esi must be turned off v On Wed 04 Nov 2015 17:57:00, Viktor Villafuerte wrote: > Hi all, > > I've been trying to use do_stream in my VCL on Varnish4. But no matter > what I do it does not seem to work the way I expect it :) > > sub vcl_backend_response { > set beresp.do_stream = true; > .. > } > > >From reading around I think this is actually the default but Varnish > always waits until it has the whole file? > > The setup is Varnish -> Nginx -> Nginx. If I do request directly to the > first Nginx the return of the file starts immediately. If I do request > to Varnish first it waits about 3 mins (file is about 3G in size) before > it actually serves anything.. > > Please, let me know if you need more info about my setup!
> > > Any suggestions are welcome > > thanks > > v > > > > -- > Regards > > Viktor Villafuerte > Optus Internet Engineering > t: +61 2 80825265 > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -- Regards Viktor Villafuerte Optus Internet Engineering t: +61 2 80825265 From phk at phk.freebsd.dk Tue Nov 10 08:00:09 2015 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 10 Nov 2015 08:00:09 +0000 Subject: Varnish4 + do_stream In-Reply-To: <20151109215311.GH17755@optusnet.com.au> References: <20151104065700.GE17755@optusnet.com.au> <20151109215311.GH17755@optusnet.com.au> Message-ID: <69033.1447142409@critter.freebsd.dk> -------- In message <20151109215311.GH17755 at optusnet.com.au>, Viktor Villafuerte writes: >Just in case anybody is interested.. :) > >do_esi must be turned off Yes, we cannot do streaming ESI (yet). -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From akelly at snafu.de Tue Nov 10 11:21:52 2015 From: akelly at snafu.de (Andrew Kelly) Date: Tue, 10 Nov 2015 12:21:52 +0100 Subject: varnishncsa and logrotate Message-ID: <1447154512.3481.1.camel@localhost.localdomain> Has anybody out there gotten varnishncsa to function on a Debian system? I can't seem to get it to create a PID file, which is wreaking havoc with my logrotate scripts. Andy From zxcvbn4038 at gmail.com Tue Nov 10 21:55:31 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Tue, 10 Nov 2015 16:55:31 -0500 Subject: thread_pool_max Message-ID: I'm confused about thread_pool_max - the official documentation seems to suggest it is a per-thread pool setting, however I've seen a number of blogs and slide shares state that it is actually global setting (i.e. 
should be >= then thread_pool_min * thread_pools). If someone could clarify how this setting is used I'd appreciate it greatly! -------------- next part -------------- An HTML attachment was scrubbed... URL: From viktor.villafuerte at optusnet.com.au Tue Nov 10 22:52:52 2015 From: viktor.villafuerte at optusnet.com.au (Viktor Villafuerte) Date: Wed, 11 Nov 2015 09:52:52 +1100 Subject: Threads + thread queue length In-Reply-To: <20151012041705.GA1990@optusnet.com.au> References: <20151012041705.GA1990@optusnet.com.au> Message-ID: <20151110225252.GI17755@optusnet.com.au> I'm bumping this 'thread' up to the top since CJ Ess has just asked a question related to threads also.. I hate doing this but I'm hoping that somebody could (maybe) answer this too..? v On Mon 12 Oct 2015 15:17:05, Viktor Villafuerte wrote: > Hi all you carpenters and other Varnish using folk, > > There are couple of things in the output of varnishstat that puzzle me a > little.. > > MAIN.sess_drop 0 0.00 Sessions dropped > MAIN.sess_dropped 3809332 0.32 Sessions dropped for > thread > MAIN.fetch_no_thread 58746 0.01 Fetch failed (no > thread) > MAIN.pools 2 . Number of thread pools > MAIN.threads 1255 . Total number of > threads > > I've got 2 pools of 4000 threads set in Varnish config and > man varnish-counters says: > > sess_drop > Count of sessions silently dropped due to lack of worker thread. > > sess_dropped > Number of times session was dropped because the queue were too long > already. See also parameter queue_max. > > fetch_no_thread > beresp fetch failed, no thread available > > > This tells me that there's no lack of worker threads (good!), but the > thread queue length does get too long and subsequently sessions get > dropped (bad!). Also backend fetch failed due to no threads being > available (what?) > > > Now the puzzling bit :) > > 1) why would the thread queue get too long if there seems to be NO lack > of threads to use? 
> > 2) why would there be no threads if there seems to be NO lack of threads > > 3) 'See also the parameter queue_max' - but I cannot find any mention of > such parameter anywhere around? Where does this ellusive paramater live > then? > > > Could anybody shed bit of light on this for me? > > > > > -- > Regards > > Viktor Villafuerte > Optus Internet Engineering > t: +61 2 80825265 > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -- Regards Viktor Villafuerte Optus Internet Engineering t: +61 2 80825265 From theistian at gmx.com Wed Nov 11 18:58:34 2015 From: theistian at gmx.com (=?UTF-8?Q?Carlos_Pe=C3=B1as_San_Jos=C3=A9?=) Date: Wed, 11 Nov 2015 19:58:34 +0100 Subject: Varnish, NGINX SSL and Wordpress In-Reply-To: <1483600311.501567.1446563777189.JavaMail.zimbra@innovot.com> References: <1486838860.467704.1446483800802.JavaMail.zimbra@innovot.com> <003b01d11595$318adeb0$94a09c10$@sju.edu> <3255A052-5EC0-4786-BDFA-E16A44519F85@psu.edu> <1483600311.501567.1446563777189.JavaMail.zimbra@innovot.com> Message-ID: Also if you want to deliver different content and using the same varnish you must add x-forwarded-proto to the hashed_data I do not use wordpress, my stack is nginx(http(80), http(443)) - varnish (6080) - rails (8080) When a request is made to the 443 port I add the header x-forwarded-proto "https" to the proxied request in nginx (via proxy_set_header). then in vcl you must add that header in hash_data here: sub vcl_hash { if (req.http.X-Forwarded-Proto) { hash_data(req.http.X-Forwarded-Proto); } } Whit this You are storing two versions for the same url, one for http and other for https, so you wouldn't end getting an http cached item when accessing via https and vice-versa In the app backend the Rails app renders stuff using X-Forwarded-Proto as a hint to know whats the working protocol. 
2015-11-03 16:16 GMT+01:00 Phil Daws : > Thank you to both. > > Will clone my existing instance and give these suggestions a whirl. > > Phil. > > ----- On 3 Nov, 2015, at 14:17, Jason Heffner jdh132 at psu.edu wrote: > > > We run Varnish in between an F5 and Apache as well as use Nginx for ssl > and load > > balancing in development, in conjunction with Wordpress backends. You > have to > > tell Wordpress that you are behind SSL and it will function properly. To > > accomplish this I?d use the following code in wp-config.php > > > > if ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') { > > $_SERVER['HTTPS']='on'; > > } > > > > You can then also set FORCE_SSL_ADMIN and FORCE_SSL_LOGIN however you > see fit > > and it should work. I saw some updates not that long ago to support proxy > > headers but don?t believe they are fully supported yet. > > > > Jason > > > > > >> On Nov 2, 2015, at 12:37 PM, Carlos M. Fern?ndez > wrote: > >> > >> Hi, Phil, > >> > >> We don't use Nginx but do SSL termination at a hardware load balancer, > >> with most of the work to support that setup done in the VCL, and > something > >> similar could possibly apply to your scenario. > >> > >> Our load balancer can use different backend ports depending on which > >> protocol the client requests; e.g., if the client connects to port 80 > for > >> HTTP, then the load balancer proxies that to Varnish on port 80, while > if > >> the client connects to 443 for HTTPS the load balancer proxies to > Varnish > >> on port 8008. The choice of Varnish port numbers doesn't matter, just > the > >> fact that Varnish listens on both ports and that the load balancer uses > >> one or the other based on the SSL status with the client (using the > >> command line option "-a :80,8008" in this case). 
> >> > >> Then, in vcl_recv, we have the following to inform the backend when an > SSL > >> request has arrived: > >> > >> if ( std.port( server.ip ) == 8008 ) { > >> set req.http.X-Forwarded-Proto = "https"; > >> } > >> > >> We also have the following in vcl_hash to cache HTTP and HTTPS requests > >> separately and avoid redirection loops: > >> > >> if ( req.http.X-Forwarded-Proto ) { > >> hash_data( req.http.X-Forwarded-Proto ); > >> } > >> > >> The backend then can look for that header and respond accordingly. For > >> example, in Apache we set the HTTPS environment variable to "on": > >> > >> SetEnvIf X_FORWARDED_PROTO https HTTPS=on > >> > >> I have no knowledge of Nginx, but if it can be configured to use > different > >> backend ports then you should be able to use the above. > >> > >> Best regards, > >> -- > >> Carlos. > >> > >> -----Original Message----- > >> From: varnish-misc-bounces+cfernand=sju.edu at varnish-cache.org > >> [mailto:varnish-misc-bounces+cfernand=sju.edu at varnish-cache.org] On > Behalf > >> Of Phil Daws > >> Sent: Monday, 02 November, 2015 12:03 > >> To: varnish-misc at varnish-cache.org > >> Subject: Varnish, NGINX SSL and Wordpress > >> > >> Hello, > >> > >> Are any of you running Varnish in-front of a SSL Wordpress site ? > >> > >> I have tried using NGINX as the SSL termination point and proxying back > to > >> Varnish on port 80 but you end up with mixed content errors. If you > tell > >> Wordpress to use https exclusively, and you are proxy with http, then > you > >> get into 301 perm loop. > >> > >> Any thoughts please ? 
> >> > >> Thanks, Phil > >> > >> > >> > >> _______________________________________________ > >> varnish-misc mailing list > >> varnish-misc at varnish-cache.org > >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > >> > >> _______________________________________________ > >> varnish-misc mailing list > >> varnish-misc at varnish-cache.org > >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Wed Nov 11 20:50:34 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Wed, 11 Nov 2015 15:50:34 -0500 Subject: thread_pool_max In-Reply-To: References: Message-ID: Ok I figured it out.. I saw the info about thread_pool_max being aggregate in a blog, an e-mail archive, and an Varnishcon presentation. I noticed today that all of these were dated 2009-2010. I go back to the Varnish 2.1 documentation and thread_pool_max says: The maximum number of worker threads in all pools combined However in Varnish 3 and 4 it says: The maximum number of worker threads in each pool So the real problem is that Google is finding ancient cruft when I search for information about tuning these parameters! On Tue, Nov 10, 2015 at 4:55 PM, CJ Ess wrote: > I'm confused about thread_pool_max - the official documentation seems to > suggest it is a per-thread pool setting, however I've seen a number of > blogs and slide shares state that it is actually global setting (i.e. > should be >= then thread_pool_min * thread_pools). 
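[Editor's note: the version difference CJ Ess tracked down below can be made concrete with a worked example; the numbers are illustrative, not tuning advice.]

```python
thread_pools = 2
thread_pool_max = 5000

# Varnish 2.1: "maximum number of worker threads in all pools combined"
total_threads_v21 = thread_pool_max

# Varnish 3/4: "maximum number of worker threads in each pool"
total_threads_v34 = thread_pools * thread_pool_max

print(total_threads_v21)  # 5000
print(total_threads_v34)  # 10000
```

So the same parameter value admits twice as many threads on Varnish 3/4 with two pools, which is worth remembering when reading pre-2011 tuning advice.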
If someone could clarify > how this setting is used I'd appreciate it greatly! > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From viktor.villafuerte at optusnet.com.au Thu Nov 12 01:40:27 2015 From: viktor.villafuerte at optusnet.com.au (Viktor Villafuerte) Date: Thu, 12 Nov 2015 12:40:27 +1100 Subject: Threads + thread queue length In-Reply-To: <20151110225252.GI17755@optusnet.com.au> References: <20151012041705.GA1990@optusnet.com.au> <20151110225252.GI17755@optusnet.com.au> Message-ID: <20151112014027.GJ17755@optusnet.com.au> I think I might have found the problem here.. :/ thread_pool_add_delay the doco says that the values are in milliseconds but if I set this to 2 (which should be 2ms) it makes the server handle traffic very badly and only very gradually increases perfomance.. However if I set this to 0 then everything runs immediately and well. Furthermore in the docs it says: Reducing the add_delay lets you create threads faster which is essential - specially at startup - to avoid filling up the queue and dropping requests. which would exactly fit my (sad) scenario below.. Could anybody confirm that the value is in milliseconds and not in seconds? thanks v On Wed 11 Nov 2015 09:52:52, Viktor Villafuerte wrote: > I'm bumping this 'thread' up to the top since CJ Ess has just asked a > question related to threads also.. I hate doing this but I'm hoping that > somebody could (maybe) answer this too..? > > v > > On Mon 12 Oct 2015 15:17:05, Viktor Villafuerte wrote: > > Hi all you carpenters and other Varnish using folk, > > > > There are couple of things in the output of varnishstat that puzzle me a > > little.. > > > > MAIN.sess_drop 0 0.00 Sessions dropped > > MAIN.sess_dropped 3809332 0.32 Sessions dropped for > > thread > > MAIN.fetch_no_thread 58746 0.01 Fetch failed (no > > thread) > > MAIN.pools 2 . Number of thread pools > > MAIN.threads 1255 . 
Total number of > > threads > > > > I've got 2 pools of 4000 threads set in Varnish config and > > man varnish-counters says: > > > > sess_drop > > Count of sessions silently dropped due to lack of worker thread. > > > > sess_dropped > > Number of times session was dropped because the queue were too long > > already. See also parameter queue_max. > > > > fetch_no_thread > > beresp fetch failed, no thread available > > > > > > This tells me that there's no lack of worker threads (good!), but the > > thread queue length does get too long and subsequently sessions get > > dropped (bad!). Also backend fetch failed due to no threads being > > available (what?) > > > > > > Now the puzzling bit :) > > > > 1) why would the thread queue get too long if there seems to be NO lack > > of threads to use? > > > > 2) why would there be no threads if there seems to be NO lack of threads > > > > 3) 'See also the parameter queue_max' - but I cannot find any mention of > > such parameter anywhere around? Where does this ellusive paramater live > > then? > > > > > > Could anybody shed bit of light on this for me? 
> > > > > > > > > > -- > > Regards > > > > Viktor Villafuerte > > Optus Internet Engineering > > t: +61 2 80825265 > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > -- > Regards > > Viktor Villafuerte > Optus Internet Engineering > t: +61 2 80825265 > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -- Regards Viktor Villafuerte Optus Internet Engineering t: +61 2 80825265 From viktor.villafuerte at optusnet.com.au Thu Nov 12 02:55:01 2015 From: viktor.villafuerte at optusnet.com.au (Viktor Villafuerte) Date: Thu, 12 Nov 2015 13:55:01 +1100 Subject: Threads + thread queue length In-Reply-To: <20151112014027.GJ17755@optusnet.com.au> References: <20151012041705.GA1990@optusnet.com.au> <20151110225252.GI17755@optusnet.com.au> <20151112014027.GJ17755@optusnet.com.au> Message-ID: <20151112025501.GK17755@optusnet.com.au> >From Varnish source code: PARAM( /* name */ thread_pool_add_delay, /* tweak */ tweak_timeout, /* var */ thread_pool_add_delay, /* min */ 0.000, /* max */ none, /* default */ 0.000, /* units */ seconds, /* flags */ 0| EXPERIMENTAL, /* s-text */ "Wait at least this long after creating a thread.\n" "\n" "Some (buggy) systems may need a short (sub-second) delay between " "creating threads.\n" "Set this to a few milliseconds if you see the 'threads_failed' " "counter grow too much.\n" "Setting this too high results in insuffient worker threads.\n", /* l-text */ "", /* func */ NULL ) which would make the value SECONDS BUT >From Varnish docs: -p thread_pool_add_delay=2 (default: 20ms, default in master: 2ms) Reducing the add_delay lets you create threads faster which is essential - specially at startup - to avoid filling up the queue and dropping requests. 
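[Editor's note: given the seconds-versus-milliseconds confusion above, the running daemon itself is the authoritative reference. A sketch, assuming varnishadm can reach the management interface with its default -T/-S settings:]

```
# param.show prints the current value together with its unit
varnishadm param.show thread_pool_add_delay

# If the unit really is seconds, as the 4.x source listing says,
# then "2 ms" must be written as a fraction of a second:
varnishadm param.set thread_pool_add_delay 0.002
```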
v On Thu 12 Nov 2015 12:40:27, Viktor Villafuerte wrote: > I think I might have found the problem here.. :/ > > thread_pool_add_delay > > the doco says that the values are in milliseconds but if I set this to 2 > (which should be 2ms) it makes the server handle traffic very badly and > only very gradually increases perfomance.. However if I set this to 0 > then everything runs immediately and well. Furthermore in the docs it > says: > > Reducing the add_delay lets you create threads faster which is > essential - specially at startup - to avoid filling up the queue and > dropping requests. > > which would exactly fit my (sad) scenario below.. > > > Could anybody confirm that the value is in milliseconds and not in > seconds? > > thanks > > v > > > > On Wed 11 Nov 2015 09:52:52, Viktor Villafuerte wrote: > > I'm bumping this 'thread' up to the top since CJ Ess has just asked a > > question related to threads also.. I hate doing this but I'm hoping that > > somebody could (maybe) answer this too..? > > > > v > > > > On Mon 12 Oct 2015 15:17:05, Viktor Villafuerte wrote: > > > Hi all you carpenters and other Varnish using folk, > > > > > > There are couple of things in the output of varnishstat that puzzle me a > > > little.. > > > > > > MAIN.sess_drop 0 0.00 Sessions dropped > > > MAIN.sess_dropped 3809332 0.32 Sessions dropped for > > > thread > > > MAIN.fetch_no_thread 58746 0.01 Fetch failed (no > > > thread) > > > MAIN.pools 2 . Number of thread pools > > > MAIN.threads 1255 . Total number of > > > threads > > > > > > I've got 2 pools of 4000 threads set in Varnish config and > > > man varnish-counters says: > > > > > > sess_drop > > > Count of sessions silently dropped due to lack of worker thread. > > > > > > sess_dropped > > > Number of times session was dropped because the queue were too long > > > already. See also parameter queue_max. 
> > > > > > fetch_no_thread > > > beresp fetch failed, no thread available > > > > > > > > > This tells me that there's no lack of worker threads (good!), but the > > > thread queue length does get too long and subsequently sessions get > > > dropped (bad!). Also backend fetch failed due to no threads being > > > available (what?) > > > > > > > > > Now the puzzling bit :) > > > > > > 1) why would the thread queue get too long if there seems to be NO lack > > > of threads to use? > > > > > > 2) why would there be no threads if there seems to be NO lack of threads > > > > > > 3) 'See also the parameter queue_max' - but I cannot find any mention of > > > such parameter anywhere around? Where does this ellusive paramater live > > > then? > > > > > > > > > Could anybody shed bit of light on this for me? > > > > > > > > > > > > > > > -- > > > Regards > > > > > > Viktor Villafuerte > > > Optus Internet Engineering > > > t: +61 2 80825265 > > > > > > _______________________________________________ > > > varnish-misc mailing list > > > varnish-misc at varnish-cache.org > > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > -- > > Regards > > > > Viktor Villafuerte > > Optus Internet Engineering > > t: +61 2 80825265 > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > -- > Regards > > Viktor Villafuerte > Optus Internet Engineering > t: +61 2 80825265 > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -- Regards Viktor Villafuerte Optus Internet Engineering t: +61 2 80825265 From apj at mutt.dk Thu Nov 12 13:44:11 2015 From: apj at mutt.dk (Andreas Plesner) Date: Thu, 12 Nov 2015 14:44:11 +0100 Subject: Threads + thread queue length In-Reply-To: 
<20151112014027.GJ17755@optusnet.com.au> References: <20151012041705.GA1990@optusnet.com.au> <20151110225252.GI17755@optusnet.com.au> <20151112014027.GJ17755@optusnet.com.au> Message-ID: <20151112134411.GV2656@nerd.dk> On Thu, Nov 12, 2015 at 12:40:27PM +1100, Viktor Villafuerte wrote: > > thread_pool_add_delay > > the doco says that the values are in milliseconds but if I set this to 2 Where? -- Andreas From viktor.villafuerte at optusnet.com.au Thu Nov 12 23:07:19 2015 From: viktor.villafuerte at optusnet.com.au (Viktor Villafuerte) Date: Fri, 13 Nov 2015 10:07:19 +1100 Subject: Threads + thread queue length In-Reply-To: <20151112025501.GK17755@optusnet.com.au> References: <20151012041705.GA1990@optusnet.com.au> <20151110225252.GI17755@optusnet.com.au> <20151112014027.GJ17755@optusnet.com.au> <20151112025501.GK17755@optusnet.com.au> Message-ID: <20151112230719.GL17755@optusnet.com.au> Hi Andreas, On Thu 12 Nov 2015 13:55:01, Viktor Villafuerte wrote: > >From Varnish source code: > > PARAM( > /* name */ thread_pool_add_delay, > /* tweak */ tweak_timeout, > /* var */ thread_pool_add_delay, > /* min */ 0.000, > /* max */ none, > /* default */ 0.000, > /* units */ seconds, > /* flags */ 0| EXPERIMENTAL, > /* s-text */ > "Wait at least this long after creating a thread.\n" > "\n" > "Some (buggy) systems may need a short (sub-second) delay between " > "creating threads.\n" > "Set this to a few milliseconds if you see the 'threads_failed' " > "counter grow too much.\n" > "Setting this too high results in insuffient worker threads.\n", > /* l-text */ "", > /* func */ NULL > ) > > which would make the value SECONDS > > BUT > > >From Varnish docs: > > -p thread_pool_add_delay=2 (default: 20ms, default in master: 2ms) > Reducing the add_delay lets you create threads faster which is essential > - specially at startup - to avoid filling up the queue and dropping requests. > The above section is taken from Varnish website. 
I guess it does not actually say that the unit value is seconds or milliseconds but I do find this confusing as it strongly (IMHO) suggests that the value is milliseconds. Also possibly there are other better resources on the web that state this clearly and I didn't find.. could be my bad v > > > v > > > > > On Thu 12 Nov 2015 12:40:27, Viktor Villafuerte wrote: > > I think I might have found the problem here.. :/ > > > > thread_pool_add_delay > > > > the doco says that the values are in milliseconds but if I set this to 2 > > (which should be 2ms) it makes the server handle traffic very badly and > > only very gradually increases perfomance.. However if I set this to 0 > > then everything runs immediately and well. Furthermore in the docs it > > says: > > > > Reducing the add_delay lets you create threads faster which is > > essential - specially at startup - to avoid filling up the queue and > > dropping requests. > > > > which would exactly fit my (sad) scenario below.. > > > > > > Could anybody confirm that the value is in milliseconds and not in > > seconds? > > > > thanks > > > > v > > > > > > > > On Wed 11 Nov 2015 09:52:52, Viktor Villafuerte wrote: > > > I'm bumping this 'thread' up to the top since CJ Ess has just asked a > > > question related to threads also.. I hate doing this but I'm hoping that > > > somebody could (maybe) answer this too..? > > > > > > v > > > > > > On Mon 12 Oct 2015 15:17:05, Viktor Villafuerte wrote: > > > > Hi all you carpenters and other Varnish using folk, > > > > > > > > There are couple of things in the output of varnishstat that puzzle me a > > > > little.. > > > > > > > > MAIN.sess_drop 0 0.00 Sessions dropped > > > > MAIN.sess_dropped 3809332 0.32 Sessions dropped for > > > > thread > > > > MAIN.fetch_no_thread 58746 0.01 Fetch failed (no > > > > thread) > > > > MAIN.pools 2 . Number of thread pools > > > > MAIN.threads 1255 . 
Total number of > > > > threads > > > > > > > > I've got 2 pools of 4000 threads set in Varnish config and > > > > man varnish-counters says: > > > > > > > > sess_drop > > > > Count of sessions silently dropped due to lack of worker thread. > > > > > > > > sess_dropped > > > > Number of times session was dropped because the queue were too long > > > > already. See also parameter queue_max. > > > > > > > > fetch_no_thread > > > > beresp fetch failed, no thread available > > > > > > > > > > > > This tells me that there's no lack of worker threads (good!), but the > > > > thread queue length does get too long and subsequently sessions get > > > > dropped (bad!). Also backend fetch failed due to no threads being > > > > available (what?) > > > > > > > > > > > > Now the puzzling bit :) > > > > > > > > 1) why would the thread queue get too long if there seems to be NO lack > > > > of threads to use? > > > > > > > > 2) why would there be no threads if there seems to be NO lack of threads > > > > > > > > 3) 'See also the parameter queue_max' - but I cannot find any mention of > > > > such parameter anywhere around? Where does this ellusive paramater live > > > > then? > > > > > > > > > > > > Could anybody shed bit of light on this for me? 
-- 
Regards

Viktor Villafuerte
Optus Internet Engineering
t: +61 2 80825265

_______________________________________________
varnish-misc mailing list
varnish-misc at varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From ruben at varnish-software.com Thu Nov 19 09:14:25 2015
From: ruben at varnish-software.com (Rubén Romero)
Date: Thu, 19 Nov 2015 10:14:25 +0100
Subject: Welcome to VUG10 in Rotterdam, NL on Dec 3-4 2015
Message-ID: 

Hello,

First of all, please spread the word, especially if you are in BeNeLux or
nearby :o)

As you have probably noticed on the Varnish community site, we will be
holding the 10th Varnish User Group meeting in Rotterdam in a couple of
weeks. Agenda and details are on .
*Please grab your ticket right now for our User Day Conference on December
3rd from >* and you will enjoy talks from:

- Poul-Henning Kamp (Varnish Chief Architect)
- Phil Stanhope (Fellow @Dyn Inc)
- Lasse Karstensen (Release Manager + Tech Lead @Varnish Software)
- Nils Goroll (Owner/Performance Engineer, Uplex)
- Thijs Feryn (Evangelist @Combell + PHPBeNeLux)
- Kristian Lyngstøl (Everything Varnish @Redpill-Linpro)
- Dag Haavi Finstad (Software Developer @Varnish Software)
- And more...

We still have some slots left, so if you wish to share with us how you use
Varnish, please contact me and I will be happy to give you a hand to make
that happen (including help with slides and finding a good title for your
talk ;o) )

For those of you who hack on Varnish Cache itself, or make VMODs or
utilities for the software (integration with a CMS, stats, dashboards,
anything), please also consider joining us for the Varnish Dev/Hack Day on
Friday the 4th. Send me an email or simply add yourself to the wiki:
https://www.varnish-cache.org/trac/wiki/VDD15Q4

This meeting would not be possible without our sponsors. Many, many thanks
for making this event possible go to Floorplanner.com, Dynamic Network
Services, Inc. and Varnish Software <https://www.varnish-software.com/>

Hope to see you all in The Netherlands!

PS: Congrats to the Drupal community on the release of 8.0.0, expected
today.

All the best,

-- 
*Rubén Romero*
Community Cheerleader Hat On | Varnish Software Group
Cell: +47 95964088 / Office: +47 21989260
Skype, Twitter & IRC: ruben_varnish
We Make Websites Fly!

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From colas.delmas at gmail.com Fri Nov 20 08:33:41 2015
From: colas.delmas at gmail.com (Nicolas Delmas)
Date: Fri, 20 Nov 2015 09:33:41 +0100
Subject: Force caching with return(pass)
Message-ID: 

Hello,

I'm using Varnish 4.1. I have a little problem I don't know how to solve.
In vcl_recv I have some rules with a return(pass):

- if the cookie "connected" exists
- for some images

But some URLs, for example an image, return a 301 redirect. I want to
cache this 301 even if I have a return(pass) in vcl_recv. I want this 301
to be cached for everyone, because it's not necessary to send the request
to the backend. It's the same for 404.

Is it possible, and how do I do it?

thank you

*Nicolas Delmas*
colas.delmas at gmail.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From apj at mutt.dk Fri Nov 20 08:43:43 2015
From: apj at mutt.dk (Andreas Plesner)
Date: Fri, 20 Nov 2015 09:43:43 +0100
Subject: Force caching with return(pass)
In-Reply-To: 
References: 
Message-ID: <20151120084343.GY2656@nerd.dk>

On Fri, Nov 20, 2015 at 09:33:41AM +0100, Nicolas Delmas wrote:
>
> But some URLs, for example an image, return a 301 redirect. I want to
> cache this 301 even if I have a return(pass) in vcl_recv.
> I want this 301 to be cached for everyone, because it's not necessary to
> send the request to the backend.
> It's the same for 404.
>
> Is it possible, and how do I do it?

No. Don't pass in recv. For uncacheable items, set beresp.uncacheable and a
positive ttl in backend_response instead.

-- 
Andreas

From colas.delmas at gmail.com Fri Nov 20 12:22:21 2015
From: colas.delmas at gmail.com (Nicolas Delmas)
Date: Fri, 20 Nov 2015 13:22:21 +0100
Subject: Force caching with return(pass)
In-Reply-To: <20151120084343.GY2656@nerd.dk>
References: <20151120084343.GY2656@nerd.dk>
Message-ID: 

So I need to move all my rules from vcl_recv to vcl_backend_response? If I
do this, what could the consequences be? Errors getting cached?
Performance?

Thank you

*Nicolas Delmas*
colas.delmas at gmail.com

2015-11-20 9:43 GMT+01:00 Andreas Plesner :

> On Fri, Nov 20, 2015 at 09:33:41AM +0100, Nicolas Delmas wrote:
> >
> > But some URLs, for example an image, return a 301 redirect. I want to
> > cache this 301 even if I have a return(pass) in vcl_recv.
> > I want this 301 to be cached for everyone, because it's not necessary
> > to send the request to the backend.
> > It's the same for 404.
> >
> > Is it possible, and how do I do it?
>
> No. Don't pass in recv. For uncacheable items, set beresp.uncacheable
> and a positive ttl in backend_response instead.
>
> --
> Andreas

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From georgi.int at gmail.com Mon Nov 30 16:42:08 2015
From: georgi.int at gmail.com (georgi.int at gmail.com)
Date: Mon, 30 Nov 2015 18:42:08 +0200
Subject: Varnish and mod_deflate support
Message-ID: <565C7C60.1020502@gmail.com>

Hello,

I have been using Varnish 3.7 only as a proxy server for Apache, and I
have the following lines in default.vcl which should handle the encodings:

    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            # If the browser supports it, we'll use gzip.
            #set req.http.Accept-Encoding = "gzip";
            unset req.http.Accept-Encoding;
        } else if (req.http.Accept-Encoding ~ "deflate") {
            # Next, try deflate if it is supported.
            set req.http.Accept-Encoding = "deflate";
        } else {
            # Unknown algorithm. Remove it and send unencoded.
            unset req.http.Accept-Encoding;
        }
    }

However, customers who have mod_deflate rules in their .htaccess file
experience the problem that their sites are not compressed. If I pipe the
site to Apache, the site is compressed.

So, my question is: what is the problem with deflate and my Varnish
configuration? Is it necessary to add something else to Varnish for
deflate to work? I tried a couple of things I found on the net, but
nothing worked.

Thank you in advance for your answers!

Best regards,

Georgi
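One thing worth checking in the snippet above: the gzip branch unsets Accept-Encoding instead of setting it to "gzip" (the `set` line is commented out), so gzip-capable clients reach Apache with no Accept-Encoding header at all, and mod_deflate has nothing to negotiate against. For comparison, a sketch of the usual normalization recipe, which keeps gzip; the file-extension list is only illustrative, and this merges into an existing vcl_recv:

```
sub vcl_recv {
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|mp3|ogg)$") {
            # Already-compressed content; no point asking for encoding.
            unset req.http.Accept-Encoding;
        } else if (req.http.Accept-Encoding ~ "gzip") {
            # Keep gzip (rather than unsetting it) so the backend, or
            # Varnish itself, can compress the response.
            set req.http.Accept-Encoding = "gzip";
        } else if (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # Unknown algorithm. Remove it and request an unencoded reply.
            unset req.http.Accept-Encoding;
        }
    }
}
```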