From hazarguney at gmail.com  Sat Apr  1 11:14:44 2017
From: hazarguney at gmail.com (=?UTF-8?B?SGF6YXIgR8O8bmV5?=)
Date: Sat, 1 Apr 2017 14:14:44 +0300
Subject: =?UTF-8?Q?Re=3A_Random_=E2=80=9Chttp_first_read_error=3A_EOF=E2=80=9D_errors?=
In-Reply-To:
References: <01F55BD6-0E30-444D-9F7D-470DDA52F329@nucleus.be>
Message-ID:

We see this error a few times a day in a very busy production
environment. Unfortunately there is too much traffic on the server to keep
tcpdump/ngrep running, and we cannot reproduce it in the test
environment :(

I have started tcpdump on a test environment of another implementation and
will let you know as soon as the issue is triggered again.

On Fri, Mar 31, 2017 at 4:17 PM, Andrei wrote:

> Can you provide a tcpdump/ngrep of the requests between
> Client/Varnish/Apache along with the varnishlog entry to see if that
> uncovers anything?
>
> On Fri, Mar 31, 2017 at 7:25 AM, Hazar Güney wrote:
>
>> Any idea?
>>
>> On Thu, Mar 30, 2017 at 3:41 PM, Hazar Güney wrote:
>>
>>> It did not work either:
>>>
>>> * << BeReq >> 127418176
>>> - Begin bereq 127418175 fetch
>>> - Timestamp Start: 1490877149.450124 0.000000 0.000000
>>> - BereqMethod GET
>>> - BereqURL XXXX
>>> - BereqProtocol HTTP/1.1
>>> - BereqHeader Accept: text/css,*/*;q=0.1
>>> - BereqHeader User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 10_2 like Mac OS X) AppleWebKit/602.3.12 (KHTML, like Gecko) Version/10.0 Mobile/14C92 Safari/602.1
>>> - BereqHeader Accept-Language: tr-tr
>>> - BereqHeader Referer: XXXX
>>> - BereqHeader Host: XXXX
>>> - BereqHeader RIP: XXXX
>>> - BereqHeader X-Forwarded-For: XXXX
>>> - BereqHeader Accept-Encoding: gzip
>>> - BereqHeader X-Varnish: 127418176
>>> - VCL_call BACKEND_FETCH
>>> - BereqHeader connection: Close
>>> - VCL_return fetch
>>> - BackendOpen 25 reload_2017-03-30T14:53:46.st2 10.35.78.11 80 172.17.0.2 59152
>>> - BackendStart 10.35.78.11 80
>>> - Timestamp Bereq: 1490877149.450594 0.000470 0.000470
>>> - FetchError http first read error: EOF
>>> - BackendClose 25 reload_2017-03-30T14:53:46.st2
>>> - Timestamp Beresp: 1490877149.451184 0.001060 0.000590
>>> - Timestamp Error: 1490877149.451189 0.001065 0.000005
>>> - BerespProtocol HTTP/1.1
>>> - BerespStatus 503
>>> - BerespReason Service Unavailable
>>> - BerespReason Backend fetch failed
>>> - BerespHeader Date: Thu, 30 Mar 2017 12:32:29 GMT
>>> - BerespHeader Server: Varnish
>>> - VCL_call BACKEND_ERROR
>>> - BereqHeader X-Varnish-Backend-5xx: 1
>>> - VCL_return retry
>>> - Timestamp Retry: 1490877149.451205 0.001081 0.000016
>>> - Link bereq 127298071 retry
>>> - End
>>>
>>> On Thu, Mar 30, 2017 at 2:34 PM, Guillaume Quintard
>>> <guillaume at varnish-software.com> wrote:
>>>
>>>> It does, I'm suspecting that the connection reuse is creating some
>>>> issues, probably because Apache is doing some non-standard stuff
>>>> (protip: always blame Apache).
>>>>
>>>> --
>>>> Guillaume Quintard
>>>>
>>>> On Thu, Mar 30, 2017 at 1:17 PM, Hazar Güney wrote:
>>>>
>>>>> "Connection: close" supersedes keep-alive behavior, is that correct?
>>>>>
>>>>> On Thu, Mar 30, 2017 at 2:08 PM, Guillaume Quintard
>>>>> <guillaume at varnish-software.com> wrote:
>>>>>
>>>>>> Can you try something: add 'set bereq.http.connection = "Close";' at
>>>>>> the beginning of vcl_backend_fetch and see if that helps?
>>>>>>
>>>>>> --
>>>>>> Guillaume Quintard
>>>>>>
>>>>>> On Thu, Mar 30, 2017 at 1:04 PM, Hazar Güney wrote:
>>>>>>
>>>>>>> MaxKeepAliveRequests 20
>>>>>>> KeepAliveTimeout 2
>>>>>>>
>>>>>>> Version is "4.1.3 revision 5e3b6d2". We have also seen a "straight
>>>>>>> insufficient bytes" error with POST requests to a specific PHP
>>>>>>> script hosted by another backend and fixed it by using "pipe"
>>>>>>> instead of "pass", but this specific backend gives the "http first
>>>>>>> read error: EOF" error. Another example from today:
>>>>>>>
>>>>>>> * << BeReq >> 126635444
>>>>>>> - Begin bereq 126635443 fetch
>>>>>>> - Timestamp Start: 1490870598.921499 0.000000 0.000000
>>>>>>> - BereqMethod GET
>>>>>>> - BereqURL XXXX
>>>>>>> - BereqProtocol HTTP/1.1
>>>>>>> - BereqHeader Host: XXXX
>>>>>>> - BereqHeader User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36
>>>>>>> - BereqHeader Accept: image/webp,image/*,*/*;q=0.8
>>>>>>> - BereqHeader Referer: XXXX
>>>>>>> - BereqHeader Accept-Language: tr-TR,tr;q=0.8,en-US;q=0.6,en;q=0.4
>>>>>>> - BereqHeader RIP: XXXX
>>>>>>> - BereqHeader X-Forwarded-For: XXXX
>>>>>>> - BereqHeader Accept-Encoding: gzip
>>>>>>> - BereqHeader X-Varnish: 126635444
>>>>>>> - VCL_call BACKEND_FETCH
>>>>>>> - VCL_return fetch
>>>>>>> - BackendOpen 35 reload_2017-03-20T11:32:44.st2 10.35.78.11 80 172.17.0.2 48896
>>>>>>> - BackendStart 10.35.78.11 80
>>>>>>> - Timestamp Bereq: 1490870598.922050 0.000552 0.000552
>>>>>>> *- FetchError http first read error: EOF*
>>>>>>> - BackendClose 35 reload_2017-03-20T11:32:44.st2
>>>>>>> - Timestamp Beresp: 1490870598.922622 0.001124 0.000572
>>>>>>> - Timestamp Error: 1490870598.922627 0.001129 0.000005
>>>>>>> - BerespProtocol HTTP/1.1
>>>>>>> - BerespStatus 503
>>>>>>> - BerespReason Service Unavailable
>>>>>>> - BerespReason Backend fetch failed
>>>>>>> - BerespHeader Date: Thu, 30 Mar 2017 10:43:18 GMT
>>>>>>> - BerespHeader Server: Varnish
>>>>>>> - VCL_call BACKEND_ERROR
>>>>>>> - BereqHeader X-Varnish-Backend-5xx: 1
>>>>>>> - VCL_return retry
>>>>>>> - Timestamp Retry: 1490870598.922657 0.001159 0.000030
>>>>>>> - Link bereq 126832283 retry
>>>>>>> - End
>>>>>>>
>>>>>>> On Wed, Mar 29, 2017 at 12:03 PM, Mattias Geniar
>>>>>>> <mattias at nucleus.be> wrote:
>>>>>>>
>>>>>>>> > Backend is Apache.
>>>>>>>>
>>>>>>>> In older Varnish versions, you could sometimes see a similar error:
>>>>>>>>
>>>>>>>> > 11 FetchError c straight insufficient bytes
>>>>>>>>
>>>>>>>> The error message you're seeing might be related, as it mentions
>>>>>>>> the EOF.
>>>>>>>>
>>>>>>>> This happens when the backend sends a Content-Length header that
>>>>>>>> doesn't match the _actual_ content length it's sending. In Apache,
>>>>>>>> this was commonly caused by a mod_deflate misconfiguration.
>>>>>>>>
>>>>>>>> For testing, could you try disabling gzip in your backend, or
>>>>>>>> strip the Accept-Encoding header in Varnish to force a plain-text
>>>>>>>> response?
>>>>>>>>
>>>>>>>> Mattias
>>
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc at varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lagged at gmail.com  Sat Apr  1 18:44:09 2017
From: lagged at gmail.com (Andrei)
Date: Sat, 1 Apr 2017 21:44:09 +0300
Subject: =?UTF-8?Q?Re=3A_Random_=E2=80=9Chttp_first_read_error=3A_EOF=E2=80=9D_errors?=
In-Reply-To:
References: <01F55BD6-0E30-444D-9F7D-470DDA52F329@nucleus.be>
Message-ID:

If it's during peak hours, are you sure there aren't any rate limits being
reached? Perhaps net.ipv4.ip_local_port_range might need a bump? Are
Apache or syslog logging anything around those times? No silly periodic
(Apache) graceful restarts? Just a few thoughts :)

On Sat, Apr 1, 2017 at 2:14 PM, Hazar Güney wrote:

> We see this error a few times a day in a very busy production
> environment. Unfortunately there is too much traffic on the server to
> keep tcpdump/ngrep running, and we cannot reproduce it in the test
> environment :(
>
> I have started tcpdump on a test environment of another implementation
> and will let you know as soon as the issue is triggered again.
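The capture constraint mentioned above (too much traffic to leave tcpdump running on the production box) can be worked around with tcpdump's ring-buffer mode, which caps disk usage while waiting for an intermittent failure. A sketch, where the interface name and output path are assumptions and the filter matches the backend address shown in the BackendOpen log lines:

```shell
# Rotating capture of Varnish<->backend traffic only, with bounded disk use.
# -C 100 rotates the file every ~100 MB; -W 20 keeps at most 20 files,
# so the capture never exceeds ~2 GB while waiting for the next error.
tcpdump -i eth0 -s 0 -nn \
    -w /var/tmp/varnish-backend.pcap -C 100 -W 20 \
    'host 10.35.78.11 and tcp port 80'
```

When varnishlog next reports the FetchError, the matching TCP connection can be found in the rotated files by the ephemeral source port from the BackendOpen record (e.g. 59152 above).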
>
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hazarguney at gmail.com  Sun Apr  2 17:39:51 2017
From: hazarguney at gmail.com (=?UTF-8?B?SGF6YXIgR8O8bmV5?=)
Date: Sun, 2 Apr 2017 20:39:51 +0300
Subject: =?UTF-8?Q?Re=3A_Random_=E2=80=9Chttp_first_read_error=3A_EOF=E2=80=9D_errors?=
In-Reply-To:
References: <01F55BD6-0E30-444D-9F7D-470DDA52F329@nucleus.be>
Message-ID:

Btw, I should also note that traffic is routed to Varnish from a load
balancer:

LB -> Varnish -> LB -> Backend pool

Time does not matter; it occurs during both peak and regular hours. Even
during peak hours we do not reach the "local ports" limit. Unfortunately
there is no clue in the logs, and there is no evidence that Apache
restarts on the backend pool when the issue occurs.

On Sat, Apr 1, 2017 at 9:44 PM, Andrei wrote:

> If it's during peak hours, are you sure there aren't any rate limits
> being reached? Perhaps net.ipv4.ip_local_port_range might need a bump?
> Are Apache or syslog logging anything around those times? No silly
> periodic (Apache) graceful restarts? Just a few thoughts :)
>
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fabio at dataspace.com.br  Mon Apr  3 01:51:06 2017
From: fabio at dataspace.com.br (Fabio Fraga [DS])
Date: Sun, 2 Apr 2017 22:51:06 -0300
Subject: Issues with Varnish 3.0 + Multiple Wordpress sites
Message-ID:

Hey, folks.

I have a setup on a CentOS 6.8 server with a single IP address, running
Varnish + Nginx + php-fpm. PHP is version 7.0.

My customer had a single website and the setup worked fine so far, but he
asked to include two new websites, and my headache starts here. When I set
the backends pointing to hostname and port (in nginx), Varnish redirects
to the first site. But when I set sub vcl_recv correctly (using regexes),
I get the correct websites.

My issue is with wp-admin. I can post text content, but I can't post
images (I get an HTTP error in WordPress). If I remove the configuration
of the new backends, everything works fine. Where am I going wrong? Below
is my default.vcl.
===============================

backend default {
    .host = "w.x.y.z";
    .port = "8081";
    .connect_timeout = 60s;
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 60s;
}

backend bk1 {
    .host = "xyz.com.br";
    .port = "8081";
    .connect_timeout = 60s;
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 60s;
}

backend bk2 {
    .host = "abc.com.br";
    .port = "8084";
    .connect_timeout = 60s;
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 60s;
}

backend bk3 {
    .host = "def.com.br";
    .port = "8083";
    .connect_timeout = 60s;
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 60s;
}

acl purge {
    "localhost";
    "127.0.0.1";
    "w.x.y.z";
}

sub vcl_recv {
    if (req.http.host ~ "^(www\.)?xyz\.com\.br$") {
        set req.backend = bk1;
        return (lookup);
    }
    if (req.http.host ~ "^(www\.)?abc\.com\.br$") {
        set req.backend = bk2;
        return (lookup);
    }
    if (req.http.host ~ "^(www\.)?def\.com\.br$") {
        set req.backend = bk3;
        return (lookup);
    }

    if (req.restarts == 0) {
        if (req.http.x-forwarded-for) {
            set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }

    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
            # No point in compressing these
            remove req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # unknown algorithm
            remove req.http.Accept-Encoding;
        }
    }

    if (req.request == "PURGE") {
        if (!(client.ip ~ purge)) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }

    if (req.request != "GET" && req.request != "HEAD" &&
        req.request != "PUT" && req.request != "POST" &&
        req.request != "TRACE" && req.request != "OPTIONS" &&
        req.request != "DELETE") {
        return (pipe);
    }

    if (req.request != "GET" && req.request != "HEAD") {
        return (pass);
    }

    if (req.http.cookie ~ "wordpress_logged_in") {
        return (pass);
    }

    if (!(req.url ~ "wp-(login|admin)") && !(req.url ~ "&preview=true")) {
        unset req.http.cookie;
    }

    if (req.http.Authorization || req.http.Cookie) {
        return (pass);
    }

    if (req.url ~ "preview" || req.url ~ "nocache" ||
        req.url ~ "\.css$" || req.url ~ "\.js$" ||
        req.url ~ "\.jpg$" || req.url ~ "\.jpeg$" ||
        req.url ~ "\.gif$" || req.url ~ "\.png$") {
        return (pass);
    }

    return (lookup);
}

sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
    return (deliver);
}

sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
    return (fetch);
}

sub vcl_fetch {
    set beresp.http.Vary = "Accept-Encoding";
    if (!(req.url ~ "wp-(login|admin)") && !(req.http.cookie ~ "wordpress_logged_in")) {
        unset beresp.http.set-cookie;
        set beresp.ttl = 5m;
    }
    if (beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") {
        set beresp.ttl = 120s;
        return (hit_for_pass);
    }
    return (deliver);
}

sub vcl_hash {
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
}

sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}

=======================

Thanks for the help,

Fabio Fraga Machado
phone: (48) 4052-8300
web: www.dataspace.com.br
email: fabio at dataspace.com.br
skype: boinkbr

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lagged at gmail.com  Mon Apr  3 07:12:35 2017
From: lagged at gmail.com (Andrei)
Date: Mon, 3 Apr 2017 02:12:35 -0500
Subject: =?UTF-8?Q?Re=3A_Random_=E2=80=9Chttp_first_read_error=3A_EOF=E2=80=9D_errors?=
In-Reply-To:
References: <01F55BD6-0E30-444D-9F7D-470DDA52F329@nucleus.be>
Message-ID:

So the Varnish backend requests go through a load balancer before reaching
Apache? What about those logs? What if you cut that LB out and just use
directors to LB in Varnish directly?

On Sun, Apr 2, 2017 at 12:39 PM, Hazar Güney wrote:

> Btw, I should also note that traffic is routed to Varnish from a load
> balancer:
>
> LB -> Varnish -> LB -> Backend pool
>
> Time does not matter.
It occurs during both peak and regular hours. Even > during peak hours we do not reach the "local ports" limit. Unfortunately > there is no any clue in the logs. There is no evidence that Apache > restarts on the backend pool during occurence of the issue. > > On Sat, Apr 1, 2017 at 9:44 PM, Andrei wrote: > >> If it's during peak hours are you sure there aren't any rate limits being >> reached? Perhaps net.ipv4.ip_local_port_range might need a bump? Are >> Apache or syslog logging anything around those times? No silly periodic >> (Apache) graceful restarts? Just a few thoughts :) >> >> On Sat, Apr 1, 2017 at 2:14 PM, Hazar G?ney wrote: >> >>> We see this error a few times in a day on a highly busy production >>> environment. Unfortunately there is too much traffic on the server to keep >>> tcpdump/ngrep running and we cannot re-produce it on test environment :( >>> >>> I have started tcpdump on a test environment of another implementation >>> and will let you as soon as the issue gets triggerred again. >>> >>> On Fri, Mar 31, 2017 at 4:17 PM, Andrei wrote: >>> >>>> Can you provide a tcpdump/ngrep of the requests between >>>> Client/Varnish/Apache along with the varnishlog entry to see if that >>>> uncovers anything? >>>> >>>> On Fri, Mar 31, 2017 at 7:25 AM, Hazar G?ney >>>> wrote: >>>> >>>>> Any idea? 
>>>>> >>>>> On Thu, Mar 30, 2017 at 3:41 PM, Hazar G?ney >>>>> wrote: >>>>> >>>>>> It did not work either: >>>>>> >>>>>> * << BeReq >> 127418176 >>>>>> - Begin bereq 127418175 fetch >>>>>> - Timestamp Start: 1490877149.450124 0.000000 0.000000 >>>>>> - BereqMethod GET >>>>>> - BereqURL XXXX >>>>>> - BereqProtocol HTTP/1.1 >>>>>> - BereqHeader Accept: text/css,*/*;q=0.1 >>>>>> - BereqHeader User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS >>>>>> 10_2 like Mac OS X) AppleWebKit/602.3.12 (KHTML, like Gecko) Version/10.0 >>>>>> Mobile/14C92 Safari/602.1 >>>>>> - BereqHeader Accept-Language: tr-tr >>>>>> - BereqHeader Referer: XXXX >>>>>> - BereqHeader Host: XXXX >>>>>> - BereqHeader RIP: XXXX >>>>>> - BereqHeader X-Forwarded-For: XXXX >>>>>> - BereqHeader Accept-Encoding: gzip >>>>>> - BereqHeader X-Varnish: 127418176 >>>>>> - VCL_call BACKEND_FETCH >>>>>> - BereqHeader connection: Close >>>>>> - VCL_return fetch >>>>>> - BackendOpen 25 reload_2017-03-30T14:53:46.st2 10.35.78.11 >>>>>> 80 172.17.0.2 59152 >>>>>> - BackendStart 10.35.78.11 80 >>>>>> - Timestamp Bereq: 1490877149.450594 0.000470 0.000470 >>>>>> - FetchError http first read error: EOF >>>>>> - BackendClose 25 reload_2017-03-30T14:53:46.st2 >>>>>> - Timestamp Beresp: 1490877149.451184 0.001060 0.000590 >>>>>> - Timestamp Error: 1490877149.451189 0.001065 0.000005 >>>>>> - BerespProtocol HTTP/1.1 >>>>>> - BerespStatus 503 >>>>>> - BerespReason Service Unavailable >>>>>> - BerespReason Backend fetch failed >>>>>> - BerespHeader Date: Thu, 30 Mar 2017 12:32:29 GMT >>>>>> - BerespHeader Server: Varnish >>>>>> - VCL_call BACKEND_ERROR >>>>>> - BereqHeader X-Varnish-Backend-5xx: 1 >>>>>> - VCL_return retry >>>>>> - Timestamp Retry: 1490877149.451205 0.001081 0.000016 >>>>>> - Link bereq 127298071 retry >>>>>> - End >>>>>> >>>>>> On Thu, Mar 30, 2017 at 2:34 PM, Guillaume Quintard < >>>>>> guillaume at varnish-software.com> wrote: >>>>>> >>>>>>> It does, I'm suspecting that the connection reuse is creating some 
>>>>>>> issues, probably because Apache is doing some non-standard stuff (protip: >>>>>>> always blame Apache). >>>>>>> >>>>>>> -- >>>>>>> Guillaume Quintard >>>>>>> >>>>>>> On Thu, Mar 30, 2017 at 1:17 PM, Hazar G?ney >>>>>>> wrote: >>>>>>> >>>>>>>> "Connection: close" supersedes keep-alive behavior, is that correct? >>>>>>>> >>>>>>>> On Thu, Mar 30, 2017 at 2:08 PM, Guillaume Quintard < >>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>> >>>>>>>>> Can you try something: add 'set bereq.http.connection = "Close"; ' >>>>>>>>> at the beginning of vcl_backend_fetch and see if that helps? >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Guillaume Quintard >>>>>>>>> >>>>>>>>> On Thu, Mar 30, 2017 at 1:04 PM, Hazar G?ney >>>>>>>> > wrote: >>>>>>>>> >>>>>>>>>> MaxKeepAliveRequests 20 >>>>>>>>>> KeepAliveTimeout 2 >>>>>>>>>> >>>>>>>>>> Version is "4.1.3 revision 5e3b6d2". We have also seen "straight >>>>>>>>>> insufficient bytes" error with POST requests to a specific php script >>>>>>>>>> hosted by another backend and fixed it by using "pipe" instead of "pass" >>>>>>>>>> but this specific backend gives "http first read error: EOF" error. 
Another >>>>>>>>>> example from today: >>>>>>>>>> >>>>>>>>>> * << BeReq >> 126635444 >>>>>>>>>> - Begin bereq 126635443 fetch >>>>>>>>>> - Timestamp Start: 1490870598.921499 0.000000 0.000000 >>>>>>>>>> - BereqMethod GET >>>>>>>>>> - BereqURL XXXX >>>>>>>>>> - BereqProtocol HTTP/1.1 >>>>>>>>>> - BereqHeader Host: XXXX >>>>>>>>>> - BereqHeader User-Agent: Mozilla/5.0 (Windows NT 10.0; >>>>>>>>>> Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 >>>>>>>>>> Safari/537.36 >>>>>>>>>> - BereqHeader Accept: image/webp,image/*,*/*;q=0.8 >>>>>>>>>> - BereqHeader Referer: XXXX >>>>>>>>>> - BereqHeader Accept-Language: tr-TR,tr;q=0.8,en-US;q=0.6,en; >>>>>>>>>> q=0.4 >>>>>>>>>> - BereqHeader RIP: XXXX >>>>>>>>>> - BereqHeader X-Forwarded-For: XXXX >>>>>>>>>> - BereqHeader Accept-Encoding: gzip >>>>>>>>>> - BereqHeader X-Varnish: 126635444 >>>>>>>>>> - VCL_call BACKEND_FETCH >>>>>>>>>> - VCL_return fetch >>>>>>>>>> - BackendOpen 35 reload_2017-03-20T11:32:44.st2 10.35.78.11 >>>>>>>>>> 80 172.17.0.2 48896 >>>>>>>>>> - BackendStart 10.35.78.11 80 >>>>>>>>>> - Timestamp Bereq: 1490870598.922050 0.000552 0.000552 >>>>>>>>>> *- FetchError http first read error: EOF* >>>>>>>>>> - BackendClose 35 reload_2017-03-20T11:32:44.st2 >>>>>>>>>> - Timestamp Beresp: 1490870598.922622 0.001124 0.000572 >>>>>>>>>> - Timestamp Error: 1490870598.922627 0.001129 0.000005 >>>>>>>>>> - BerespProtocol HTTP/1.1 >>>>>>>>>> - BerespStatus 503 >>>>>>>>>> - BerespReason Service Unavailable >>>>>>>>>> - BerespReason Backend fetch failed >>>>>>>>>> - BerespHeader Date: Thu, 30 Mar 2017 10:43:18 GMT >>>>>>>>>> - BerespHeader Server: Varnish >>>>>>>>>> - VCL_call BACKEND_ERROR >>>>>>>>>> - BereqHeader X-Varnish-Backend-5xx: 1 >>>>>>>>>> - VCL_return retry >>>>>>>>>> - Timestamp Retry: 1490870598.922657 0.001159 0.000030 >>>>>>>>>> - Link bereq 126832283 retry >>>>>>>>>> - End >>>>>>>>>> >>>>>>>>>> On Wed, Mar 29, 2017 at 12:03 PM, Mattias Geniar < >>>>>>>>>> mattias at nucleus.be> wrote: 
>>>>>>>>>> >>>>>>>>>>> > Backend is Apache. >>>>>>>>>>> >>>>>>>>>>> In older Varnish versions, you could sometimes see a similar >>>>>>>>>>> error; >>>>>>>>>>> >>>>>>>>>>> > 11 FetchError c straight insufficient bytes >>>>>>>>>>> >>>>>>>>>>> The error message you're seeing might be related, as it mentions >>>>>>>>>>> the EOF. >>>>>>>>>>> >>>>>>>>>>> This happens when the backend sends a Content-Length header that >>>>>>>>>>> doesn't match the _actual_ content length it's sending. In Apache, this was >>>>>>>>>>> commonly caused by a mod_deflate misconfiguration. >>>>>>>>>>> >>>>>>>>>>> For testing, could you try disabling Gzip either in your backend >>>>>>>>>>> or strip the Accept-Encoding header in Varnish to force a plain text >>>>>>>>>>> response? >>>>>>>>>>> >>>>>>>>>>> Mattias >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> varnish-misc mailing list >>>>> varnish-misc at varnish-cache.org >>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Mon Apr 3 07:32:58 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Mon, 3 Apr 2017 09:32:58 +0200 Subject: =?UTF-8?Q?Re=3A_Random_=E2=80=9Chttp_first_read_error=3A_EOF=E2=80=9D_errors?= In-Reply-To: References: <01F55BD6-0E30-444D-9F7D-470DDA52F329@nucleus.be> Message-ID: Large requests/responses are dropped by the LB, maybe? -- Guillaume Quintard On Mon, Apr 3, 2017 at 9:12 AM, Andrei wrote: > So the Varnish backend requests go through a load balancer before reaching > Apache? What about those logs? What if you cut that LB out, and just use > directors to LB in Varnish directly? 
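Andrei's suggestion (balancing the Apache pool from Varnish itself instead of going through the intermediate LB) would look roughly like this in VCL 4.0. Only 10.35.78.11 appears in the posted logs; the second backend address below is an assumption:

```vcl
vcl 4.0;

import directors;

# 10.35.78.11 is taken from the BackendOpen records in the logs;
# web2's address is a placeholder for the rest of the pool.
backend web1 { .host = "10.35.78.11"; .port = "80"; }
backend web2 { .host = "10.35.78.12"; .port = "80"; }

sub vcl_init {
    # Round-robin over the pool, replacing the intermediate LB.
    new pool = directors.round_robin();
    pool.add_backend(web1);
    pool.add_backend(web2);
}

sub vcl_recv {
    set req.backend_hint = pool.backend();
}
```

This also makes the Varnish-to-Apache TCP conversation directly visible to tcpdump on either box, with no middle hop to hide resets.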
> > On Sun, Apr 2, 2017 at 12:39 PM, Hazar Güney wrote: > >> Btw, I need to also note that traffic is routed to Varnish from the load >> balancer: >> >> LB -> Varnish -> LB -> Backend pool >> >> Time does not matter. It occurs during both peak and regular hours. Even >> during peak hours we do not reach the "local ports" limit. Unfortunately >> there is no clue in the logs. There is no evidence that Apache >> restarts on the backend pool during the occurrence of the issue. >> >> On Sat, Apr 1, 2017 at 9:44 PM, Andrei wrote: >> >>> If it's during peak hours are you sure there aren't any rate limits >>> being reached? Perhaps net.ipv4.ip_local_port_range might need a bump? >>> Are Apache or syslog logging anything around those times? No silly periodic >>> (Apache) graceful restarts? Just a few thoughts :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From hazarguney at gmail.com Mon Apr 3 08:37:12 2017 From: hazarguney at gmail.com (=?UTF-8?B?SGF6YXIgR8O8bmV5?=) Date: Mon, 3 Apr 2017 11:37:12 +0300 Subject: =?UTF-8?Q?Re=3A_Random_=E2=80=9Chttp_first_read_error=3A_EOF=E2=80=9D_errors?= In-Reply-To: References: <01F55BD6-0E30-444D-9F7D-470DDA52F329@nucleus.be> Message-ID: I cannot cut the LB out due to network design. There is no log on the LB either. Problematic requests are not large (js/css files), so I suspect this is a miscommunication issue between Varnish and Apache, but I am not able to pin down the cause yet. On Mon, Apr 3, 2017 at 10:32 AM, Guillaume Quintard < guillaume at varnish-software.com> wrote: > Large requests/responses are dropped by the LB, maybe? 
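Mattias's test from earlier in the thread (forcing a plain-text backend response to rule out a mod_deflate Content-Length mismatch) can be done on the Varnish side alone; a minimal sketch for Varnish 4.x:

```vcl
sub vcl_backend_fetch {
    # Ask the backend for an uncompressed response so a mod_deflate
    # Content-Length mismatch can be ruled out. Varnish can still
    # gzip for clients itself when http_gzip_support is enabled.
    unset bereq.http.Accept-Encoding;
}
```

This is reversible and needs no change on the Apache side, which makes it a cheap experiment on a busy production box.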
> > -- > Guillaume Quintard -------------- next part -------------- An HTML attachment was scrubbed... 
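The `VCL_return retry` and `X-Varnish-Backend-5xx` lines in the posted varnishlog output correspond to a retry rule along these lines (the header name is taken from the logs; the limit of one retry is an assumption):

```vcl
sub vcl_backend_error {
    # Retry a failed fetch once before handing the client a 503;
    # a transient "http first read error: EOF" often succeeds on
    # the second attempt over a fresh backend connection.
    if (bereq.retries < 1) {
        set bereq.http.X-Varnish-Backend-5xx = "1";
        return (retry);
    }
}
```

With a rule like this in place, the client only ever sees the 503 when the retry fails too, which matches the "a few times a day" rate reported.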
URL: From miguel_3_gonzalez at yahoo.es Mon Apr 3 18:47:13 2017 From: miguel_3_gonzalez at yahoo.es (=?UTF-8?Q?Miguel_Gonz=c3=a1lez?=) Date: Mon, 3 Apr 2017 20:47:13 +0200 Subject: Issues with Varnish 3.0 + Multiple Wordpress sites In-Reply-To: References: Message-ID: <76ba8521-f88a-282a-2ac0-ba631373fe2f@yahoo.es> Not sure about Varnish 3.0 syntax but you need to return a pass in vcl_recv for wp-admin and other woocommerce sections (if you have woocommerce): # --- WordPress specific configuration # Don't cache the admin and login pages if (req.url ~ "nocache|cart|my-account|checkout|addons|tienda|mi-cuenta|carro|producto/*|login|wp-admin|wp-(comments-post|login|signup|activate|mail|cron)\.php|preview\=true|admin-ajax\.php|xmlrpc\.php|bb-admin|whm-server-status|server-status|control\.php|bb-login\.php|bb-reset-password\.php|register\.php") { return (pass); } Not sure I have understood: do those two extra sites run on the same Apache server? Why don't you use virtualhosts instead of port virtualhosts? I run Varnish in front of 40 sites run by Apache without needing to specify different backends (just one). Regards, Miguel On 04/03/17 3:51 AM, Fabio Fraga [DS] wrote: > Hey, folks. > > I have a setup on a CentOS 6.8 server with a single IP address, > including Varnish + Nginx + php-fpm. PHP is on version 7.0. > > My customer had a single website and the setup works fine so far. But he > asked to include two new websites. My headache starts here. > > When I set the backends pointing to hostname and port (in nginx), > varnish redirects to the first site. But, when I set the sub vcl_recv > correctly (using regexp), I get the correct websites. > My issue is on wp-admin. I can post text content, but I can't post images > (I get an HTTP error in WordPress). > But if I remove the configuration of the new backends, everything works fine. > > Where am I going wrong? > > Below my default.vcl. 
> > =============================== > > backend default { > > .host = "w.x.y.z"; > > .port = "8081"; > > .connect_timeout = 60s; > > .first_byte_timeout = 60s; > > .between_bytes_timeout = 60s; > > } > > backend bk1 { > > .host = "xyz.com.br"; > > .port = "8081"; > > .connect_timeout = 60s; > > .first_byte_timeout = 60s; > > .between_bytes_timeout = 60s; > > } > > backend bk2 { > > .host = "abc.com.br"; > > .port = "8084"; > > .connect_timeout = 60s; > > .first_byte_timeout = 60s; > > .between_bytes_timeout = 60s; > > } > > backend bk3 { > > .host = "def.com.br"; > > .port = "8083"; > > .connect_timeout = 60s; > > .first_byte_timeout = 60s; > > .between_bytes_timeout = 60s; > > } > > acl purge { > > "localhost"; > > "127.0.0.1"; > > "w.x.y.z"; > > } > > > sub vcl_recv { > > > if (req.http.host ~ "^(www\.)?xyz\.com\.br$") { > > set req.backend = bk1; > > return (lookup); > > } > > if (req.http.host ~ "^(www\.)?abc\.com\.br$") { > > set req.backend = bk2; > > return (lookup); > > } > > if (req.http.host ~ "^(www\.)?def\.com\.br$") { > > set req.backend = bk3; > > return (lookup); > > } > > > if (req.restarts == 0) { > > if (req.http.x-forwarded-for) { > > set req.http.X-Forwarded-For = > > req.http.X-Forwarded-For + ", " + client.ip; > > } else { > > set req.http.X-Forwarded-For = client.ip; > > } > > } > > > if (req.http.Accept-Encoding) { > > if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") { > > # No point in compressing these > > remove req.http.Accept-Encoding; > > } elsif (req.http.Accept-Encoding ~ "gzip") { > > set req.http.Accept-Encoding = "gzip"; > > } elsif (req.http.Accept-Encoding ~ "deflate") { > > set req.http.Accept-Encoding = "deflate"; > > } else { > > # unknown algorithm > > remove req.http.Accept-Encoding; > > } > > } > > > if (req.request == "PURGE") { > > if (client.ip !~ purge) { > > error 405 "Not allowed."; > > } > > return (lookup); > > } > > > if (req.request != "GET" && > > req.request != "HEAD" && > > req.request != 
"PUT" && > > req.request != "POST" && > > req.request != "TRACE" && > > req.request != "OPTIONS" && > > req.request != "DELETE") { > > return (pipe); > > } > > > if (req.request != "GET" && req.request != "HEAD") { > > return (pass); > > } > > > if ( req.http.cookie ~ "wordpress_logged_in" ) { > > return(pass); > > } > > > if ( > > !(req.url ~ "wp-(login|admin)") > > && !(req.url ~ "&preview=true" ) > > ){ > > unset req.http.cookie; > > } > > > if (req.http.Authorization || req.http.Cookie) { > > return (pass); > > } > > > if ( > > req.url ~ "preview" > > || req.url ~ "nocache" > > || req.url ~ "\.css$" > > || req.url ~ "\.js$" > > || req.url ~ "\.jpg$" > > || req.url ~ "\.jpeg$" > > || req.url ~ "\.gif$" > > || req.url ~ "\.png$" > > ) { > > return (pass); > > } > > > return (lookup); > > } > > > sub vcl_hit { > > > if (req.request == "PURGE") { > > purge; > > error 200 "Purged."; > > } > > return (deliver); > > } > > > sub vcl_miss { > > if (req.request == "PURGE") { > > purge; > > error 200 "Purged."; > > } > > return (fetch); > > } > > > sub vcl_fetch { > > set beresp.http.Vary = "Accept-Encoding"; > > > if (!(req.url ~ "wp-(login|admin)") && !req.http.cookie ~ > "wordpress_logged_in" ) { > > unset beresp.http.set-cookie; > > set beresp.ttl = 5m; > > } > > > if (beresp.ttl <= 0s || > > beresp.http.Set-Cookie || > > beresp.http.Vary == "*") { > > set beresp.ttl = 120 s; > > return (hit_for_pass); > > } > > > return (deliver); > > > } > > > sub vcl_hash { > > > if (req.http.host) { > > hash_data(req.http.host); > > } else { > > hash_data(server.ip); > > } > > } > > > sub vcl_deliver { > > if (obj.hits > 0) { > > set resp.http.X-Cache = "HIT"; > > } else { > > set resp.http.X-Cache = "MISS"; > > } > > } > > ======================= > > Thanks for help, > > > Fabio Fraga Machado > phone: (48) 4052-8300 > web: www.dataspace.com.br > email: fabio at dataspace.com.br > skype: boinkbr > > > > > _______________________________________________ > varnish-misc mailing list 
> varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From fabio at dataspace.com.br Tue Apr 4 14:23:19 2017 From: fabio at dataspace.com.br (Fabio Fraga [DS]) Date: Tue, 4 Apr 2017 11:23:19 -0300 Subject: Issues with Varnish 3.0 + Multiple Wordpress sites In-Reply-To: <76ba8521-f88a-282a-2ac0-ba631373fe2f@yahoo.es> References: <76ba8521-f88a-282a-2ac0-ba631373fe2f@yahoo.es> Message-ID: Hi, Miguel! Thanks for sharing the solution. I have not been working with Varnish for long, so I forgot this simple trick to solve the problem. The setup is varnish + nginx + fpm. Thanks! Fabio Fraga Machado phone: (48) 4052-8300 web: www.dataspace.com.br email: fabio at dataspace.com.br skype: boinkbr -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at hanfordonline.co.uk Wed Apr 5 15:17:07 2017 From: mark at hanfordonline.co.uk (Mark Hanford) Date: Wed, 5 Apr 2017 16:17:07 +0100 Subject: Determining the director used v3 to v5 Message-ID: Hi folks. In order to make diagnostics easier, in v3 I would set some headers so we could see in the browser how Varnish decided which Director and backend to use. 
So in vcl_recv, we would make all sorts of judgements based on host names and urls etc, then set the backend to the relevant director:

    if (req.host == "mydomain.com") {
        set req.backend = product1_randomdirector;
    }
    if (req.host == "myotherdomain.com") {
        set req.backend = product2_clientdirector;
    }
    ....
    set req.http.X-Director = req.backend;

And then later on in vcl_deliver, I would put that information into a header so we can see it in the browser (if it is a trusted IP):

    set resp.http.X-Host = req.http.host;
    set resp.http.X-Director = req.http.X-Director;

But in v5, I don't set the backend to a director, I set it to an actual backend:

    if (req.host == "mydomain.com") {
        set req.backend_hint = product1_randomdirector.backend();
    }
    if (req.host == "myotherdomain.com") {
        set req.backend_hint = product2_sharddirector.backend(KEY, product2_sharddirector.key(req.http.X-Real-Ip));
    }

How can I set a response header with the director that was chosen? I need to be able to have tests that look at the X-Director for a given request, to confirm the correct choices are being made. I'd rather not manually set a header every time I use `set backend_hint`.

thanks,
Mark
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vlad.rusu at lola.tech Wed Apr 5 16:25:18 2017
From: vlad.rusu at lola.tech (Vlad Rusu)
Date: Wed, 5 Apr 2017 19:25:18 +0300
Subject: ESI and 304
Message-ID: 

Hi guys,

https://www.varnish-cache.org/trac/wiki/Future_ESI - is the "ESI 304 handling" bit addressed? I am seeing this behaviour, so I'd assume it isn't. Are the bits mentioned there getting some traction anytime soon? :)

Thanks!

--
Vlad Rusu
skypeid: rusu.h.vlad | cell: +40758066019
Lola Tech | lola.tech
-------------- next part --------------
An HTML attachment was scrubbed...
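One way to answer the director question above without touching every `set req.backend_hint` line is to record the resolved backend on the fetch side. A sketch against VCL 4.x: `beresp.backend.name` gives the concrete backend the director resolved to for that fetch; the `trusted` ACL name is an assumption (define it to match your network), and note that on a cache hit the header reflects the backend that served the original miss:

```vcl
sub vcl_backend_response {
    # Record which concrete backend the director picked for this fetch;
    # the header is stored with the cached object.
    set beresp.http.X-Backend = beresp.backend.name;
}

sub vcl_deliver {
    # Only expose the diagnostic header to trusted clients.
    if (!(client.ip ~ trusted)) {
        unset resp.http.X-Backend;
    }
}
```

This records the backend rather than the director name, but since each director resolves to its own set of backends it is usually enough to confirm the routing decision in tests.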
URL: 

From rbizzell at measinc.com Wed Apr 5 18:46:12 2017
From: rbizzell at measinc.com (Rodney Bizzell)
Date: Wed, 5 Apr 2017 18:46:12 +0000
Subject: Backend Fetch failed
Message-ID: <8f512d0b205348e2858ee757eba312b6@mbx2serv.meas-inc.com>

Not sure what is causing this error; any help would be appreciated.

Error 503 Backend Fetch Failed

Here is a copy of default.vcl:

# This is an example VCL file for Varnish.
#
# It does not do anything by default, delegating control to the
# builtin VCL. The builtin VCL is called when there is no explicit
# return statement.
#
# See the VCL chapters in the Users Guide at https://www.varnish-cache.org/docs/
# and https://www.varnish-cache.org/trac/wiki/VCLExamples for more examples.

# Marker to tell the VCL compiler that this VCL has been adapted to the
# new 4.0 format.
vcl 4.0;

# Default backend definition. Set this to point to your content server.
import std;

backend drupal {
    .host = "drupal.miat.co";
    .port = "80";
    .connect_timeout = 6000s;
    .first_byte_timeout = 6000s;
    .between_bytes_timeout = 6000s;
    .probe = { .url = "drupal.miat.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; }
}

backend ncwrite {
    .host = "ncwrite.miat.co";
    .port = "80";
    .connect_timeout = 6000s;
    .first_byte_timeout = 6000s;
    .between_bytes_timeout = 6000s;
    .probe = { .url = "ncwrite.miat.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; }
}

backend measurementinc {
    .host = "www.measurementinc.com";
    .port = "80";
    .connect_timeout = 6000s;
    .first_byte_timeout = 6000s;
    .between_bytes_timeout = 6000s;
    .probe = { .url = "www.measurementinc.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; }
}

backend pegwriting {
    .host = "www.pegwriting.com";
    .port = "80";
    .connect_timeout = 6000s;
    .first_byte_timeout = 6000s;
    .between_bytes_timeout = 6000s;
    .probe = { .url = "www.pegwriting.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; }
}

backend pegwritingscholar {
    .host = "www.pegwritingscholar.com";
    .port = "80";
    .connect_timeout = 6000s;
    .first_byte_timeout = 6000s;
    .between_bytes_timeout = 6000s;
    .probe = { .url = "www.pegwriitingscholar.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; }
}

backend utahcompose {
    .host = "www.utahcompose.com";
    .port = "80";
    .connect_timeout = 6000s;
    .first_byte_timeout = 6000s;
    .between_bytes_timeout = 6000s;
    .probe = { .url = "www.utahcompose.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; }
}

backend wpponline {
    .host = "www.wpponline.com";
    .port = "80";
    .connect_timeout = 6000s;
    .first_byte_timeout = 6000s;
    .between_bytes_timeout = 6000s;
    .probe = { .url = "www.wpponline.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; }
}

backend support {
    .host = "support.wpponline.com";
    .port = "80";
    .connect_timeout = 6000s;
    .first_byte_timeout = 6000s;
    .between_bytes_timeout = 6000s;
    .probe = { .url = "support.wpponline.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; }
}

backend supportpw {
    .host = "support.pegwriting.com";
    .port = "80";
    .connect_timeout = 6000s;
    .first_byte_timeout = 6000s;
    .between_bytes_timeout = 6000s;
    .probe = { .url = "support.pegwriting.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; }
}

backend supportpws {
    .host = "support.pegwritingscholar.com";
    .port = "80";
    .connect_timeout = 6000s;
    .first_byte_timeout = 6000s;
    .between_bytes_timeout = 6000s;
    .probe = { .url = "support.pegwritingscholar.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; }
}

backend supportncw {
    .host = "support.ncwrite.com";
    .port = "80";
    .connect_timeout = 6000s;
    .first_byte_timeout = 6000s;
    .between_bytes_timeout = 6000s;
    .probe = { .url = "support.ncwrite.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; }
}

backend supportutc {
    .host = "support.utahcompose.com";
    .port = "80";
    .connect_timeout = 6000s;
    .first_byte_timeout = 6000s;
    .between_bytes_timeout = 6000s;
    .probe = { .url = "support.utahcompose"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; }
}

sub vcl_recv {
    if (req.http.host == "drupal.miat.co") {
        set req.backend_hint = drupal;
    } else if (req.http.host == "ncwrite.miat.co") {
        set req.backend_hint = ncwrite;
    } else if (req.http.host == "www.measurementinc.com") {
        set req.backend_hint = measurementinc;
    } else if (req.http.host == "www.pegwriting.com") {
        set req.backend_hint = pegwriting;
    } else if (req.http.host == "pegwritingscholar.com") {
        set req.backend_hint = pegwritingscholar;
    } else if (req.http.host == "www.utahcompose.com") {
        set req.backend_hint = utahcompose;
    } else if (req.http.host == "www.wpponline.com") {
        set req.backend_hint = wpponline;
    } else if (req.http.host == "<^(?=.*?\bsupport\b)(?=.*?\bwpponline\b)(?=.*?\bcom\b).*$>") {
        set req.backend_hint = support;
    } else if (req.http.host == "<^(?=.*?\bsupport\b)(?=.*?\bpegwriting\b)(?=.*?\bcom\b).*$>") {
        set req.backend_hint = supportpw;
    } else if (req.http.host == "<^(?=.*?\bsupport\b)(?=.*?\bpegwritingscholar\b)(?=.*?\bcom\b).*$>") {
        set req.backend_hint = supportpws;
    } else if (req.http.host == "<^(?=.*?\bsupport\b)(?=.*?\bncwrite\b)(?=.*?\bcom\b).*$>") {
        set req.backend_hint = supportncw;
    } else if (req.http.host == "<^(?=.*?\bsupport\b)(?=.*?\butahcompose\b)(?=.*?\bcom\b).*$>") {
        set req.backend_hint = supportutc;
        return (hash);
    }
}

#sub vcl_pass {

sub vcl_backend_response {
    set beresp.grace = 6h;
    set beresp.ttl = 5m;
}

sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send the
    # response to the client.
    #
    # You can do accounting or modifying the final object here.
}

This email (including any attachments) may contain confidential information intended solely for acknowledged recipients. If you think you have received this information in error, please reply to the sender and delete all copies from your system.
Please note that unauthorized use, disclosure, or further distribution of this information is prohibited by the sender. Note also that we may monitor email directed to or originating from our network. Thank you for your consideration and assistance. |
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rbizzell at measinc.com Wed Apr 5 19:37:43 2017
From: rbizzell at measinc.com (Rodney Bizzell)
Date: Wed, 5 Apr 2017 19:37:43 +0000
Subject: Error 503 BAckend Fetch
Message-ID: 

I fixed the typo under the .probe section for the URL, but I am still having the same error; if I comment out the .probe section, everything works. I have looked at the examples for the .probe section and am not sure what I am missing.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guillaume at varnish-software.com Thu Apr 6 07:11:55 2017
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Thu, 6 Apr 2017 09:11:55 +0200
Subject: Backend Fetch failed
In-Reply-To: <8f512d0b205348e2858ee757eba312b6@mbx2serv.meas-inc.com>
References: <8f512d0b205348e2858ee757eba312b6@mbx2serv.meas-inc.com>
Message-ID: 

Can be anything, really, care to share a varnishlog?

--
Guillaume Quintard

On Wed, Apr 5, 2017 at 8:46 PM, Rodney Bizzell wrote:
> Not sure what is causing this error any help would be appreciated
>
> Error 503 Backend Fetch Failed
>
> Here is a copy of default.vcl
>
> # This is an example VCL file for Varnish.
> > [Remainder of the quoted default.vcl snipped; it is identical to the
> > configuration in the original message above.]
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guillaume at varnish-software.com Thu Apr 6 07:13:26 2017
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Thu, 6 Apr 2017 09:13:26 +0200
Subject: Error 503 BAckend Fetch
In-Reply-To: 
References: 
Message-ID: 

Please reply to your emails instead of creating new threads every time.

--
Guillaume Quintard

On Wed, Apr 5, 2017 at 9:37 PM, Rodney Bizzell wrote:
> I fixed the typo under the .probe section for the url but I am still
> having the same error if I comment out the .probe section everything works.
> I have looked at the examples for the .probe section not sure what I am
> missung
>
> This email (including any attachments) may contain confidential
> information intended solely for acknowledged recipients. If you think you
> have received this information in error, please reply to the sender and
> delete all copies from your system.
Please note that unauthorized use, > disclosure, or further distribution of this information is prohibited by > the sender. Note also that we may monitor email directed to or originating > from our network. Thank you for your consideration and assistance. | > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Hans-Peter.Keck at haufe-lexware.com Thu Apr 6 10:42:02 2017 From: Hans-Peter.Keck at haufe-lexware.com (Keck, Hans-Peter) Date: Thu, 6 Apr 2017 10:42:02 +0000 Subject: Python 2.7 Message-ID: <410858cd0bc746fdbc420b3ea9fadfba@vw-x13-4.grp.haufemg.com> Hi all, I tried to compile Varnish 5.1.1 on CentOS 6, but the configure script reported to me that Python 2.7 is now required (CentOS 6 only has Python 2.6). This came as a surprise to me, because it wasn't mentioned in the changes (https://varnish-cache.org/docs/5.1/whats-new/changes-5.1.html) Why is 2.7 needed? If I do not use the VMODs, will I be affected by this? Thanks & Regards, Hans-Peter ---- Dr. Hans-Peter Keck Senior Development Engineer ----------------------------------------------------------------- Haufe-Lexware GmbH & Co. KG Ein Unternehmen der Haufe Gruppe Munzinger Str. 9, 79111 Freiburg Tel. +49 761 898-5401 E-Mail: hans-peter.keck at haufe-lexware.com Internet: http://www.haufe-lexware.com ----------------------------------------------------------------- Kommanditgesellschaft, Sitz und Registergericht Freiburg, HRA 4408 Komplement?re: Haufe-Lexware Verwaltungs GmbH, Sitz und Registergericht Freiburg, HRB 5557; Martin Laqua Beiratsvorsitzende: Andrea Haufe Gesch?ftsf?hrung: Isabel Blank, Sandra Dittert, Markus Dr?nert, J?rg Frey, Birte Hackenjos, Markus Reithwiesner, Joachim Rotzinger, Dr. 
Carsten Thies ----------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ciapnz at gmail.com Thu Apr 6 11:04:53 2017 From: ciapnz at gmail.com (Danila Vershinin) Date: Thu, 6 Apr 2017 14:04:53 +0300 Subject: Python 2.7 In-Reply-To: <410858cd0bc746fdbc420b3ea9fadfba@vw-x13-4.grp.haufemg.com> References: <410858cd0bc746fdbc420b3ea9fadfba@vw-x13-4.grp.haufemg.com> Message-ID: <7F94A1A7-9E04-4693-A3DD-1C9E52032042@gmail.com> Hi, There?s existing Varnish 5.1.1 + CentOS 6 build by @ingvar here: https://copr.fedorainfracloud.org/coprs/ingvar/varnish51/ If you need your own I guess you can expand on the .spec file (can be freely extracted from SRPM file of the package). > On 6 Apr 2017, at 13:42, Keck, Hans-Peter wrote: > > Hi all, > > I tried to compile Varnish 5.1.1 on CentOS 6, but the configure script reported to me that Python 2.7 is now required (CentOS 6 only has Python 2.6). > This came as a surprise to me, because it wasn?t mentioned in the changes (https://varnish-cache.org/docs/5.1/whats-new/changes-5.1.html ) > Why is 2.7 needed? If I do not use the VMODs, will I be affected by this? > > Thanks & Regards, > Hans-Peter > ---- > Dr. Hans-Peter Keck > Senior Development Engineer > ----------------------------------------------------------------- > Haufe-Lexware GmbH & Co. KG > Ein Unternehmen der Haufe Gruppe > Munzinger Str. 9, 79111 Freiburg > Tel. +49 761 898-5401 > E-Mail: hans-peter.keck at haufe-lexware.com > Internet: http://www.haufe-lexware.com > ----------------------------------------------------------------- > Kommanditgesellschaft, Sitz und Registergericht Freiburg, HRA 4408 > Komplement?re: Haufe-Lexware Verwaltungs GmbH, > Sitz und Registergericht Freiburg, HRB 5557; Martin Laqua > Beiratsvorsitzende: Andrea Haufe > Gesch?ftsf?hrung: Isabel Blank, Sandra Dittert, Markus Dr?nert, J?rg Frey, > Birte Hackenjos, Markus Reithwiesner, Joachim Rotzinger, Dr. 
Carsten Thies
> -----------------------------------------------------------------
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rbizzell at measinc.com Thu Apr 6 13:26:04 2017
From: rbizzell at measinc.com (Rodney Bizzell)
Date: Thu, 6 Apr 2017 13:26:04 +0000
Subject: Backend Fetch failed
In-Reply-To: 
References: <8f512d0b205348e2858ee757eba312b6@mbx2serv.meas-inc.com>
Message-ID: <5289471cd3754fd9a23a060b4eaa9942@mbx2serv.meas-inc.com>

When I issue the varnishlog and varnishncsa commands, nothing happens. I am not using Apache; I am caching IIS servers.

From: Guillaume Quintard [mailto:guillaume at varnish-software.com]
Sent: Thursday, April 06, 2017 3:12 AM
To: Rodney Bizzell
Cc: varnish-misc at varnish-cache.org
Subject: Re: Backend Fetch failed

Can be anything, really, care to share a varnishlog?

--
Guillaume Quintard

On Wed, Apr 5, 2017 at 8:46 PM, Rodney Bizzell wrote:

Not sure what is causing this error any help would be appreciated

Error 503 Backend Fetch Failed

Here is a copy of default.vcl

# This is an example VCL file for Varnish.
#
# It does not do anything by default, delegating control to the
# builtin VCL. The builtin VCL is called when there is no explicit
# return statement.
#
# See the VCL chapters in the Users Guide at https://www.varnish-cache.org/docs/
# and https://www.varnish-cache.org/trac/wiki/VCLExamples for more examples.

# Marker to tell the VCL compiler that this VCL has been adapted to the
# new 4.0 format.
vcl 4.0;

# Default backend definition. Set this to point to your content server.
[Remainder of the quoted default.vcl snipped; it is identical to the configuration in the original "Backend Fetch failed" message above.]

_______________________________________________
varnish-misc mailing list
varnish-misc at varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
-------------- next part --------------
An HTML attachment was scrubbed...
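For anyone landing on this thread later, two details of the probes in the configuration above are worth noting: a probe's `.url` is the request path Varnish asks the backend for (e.g. "/"), not a hostname, and a 60ms `.timeout` is very easy for a real backend to miss, which marks it sick and produces exactly this 503. A sketch of a more conventional probe for one of the backends — the timing values here are suggestions, not something from the thread, and `.request` is the documented alternative to `.url` when the backend (such as an IIS virtual host) needs a Host header:

```vcl
backend drupal {
    .host = "drupal.miat.co";
    .port = "80";
    .probe = {
        # .url would take a path such as "/"; .request lets us also
        # send a Host header for name-based virtual hosting.
        .request =
            "GET / HTTP/1.1"
            "Host: drupal.miat.co"
            "Connection: close";
        .timeout = 2s;      # 60ms is almost always too tight
        .interval = 5s;
        .window = 5;
        .threshold = 3;
    }
}
```

With the probe in this form, `varnishadm backend.list` should show the backend go healthy once the page at "/" starts answering within the timeout.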
URL: 

From rbizzell at measinc.com Thu Apr 6 13:28:56 2017
From: rbizzell at measinc.com (Rodney Bizzell)
Date: Thu, 6 Apr 2017 13:28:56 +0000
Subject: Backend Fetch failed
In-Reply-To: 
References: <8f512d0b205348e2858ee757eba312b6@mbx2serv.meas-inc.com>
Message-ID: <3c5f03b323fd451ba307f68a4215d44c@mbx2serv.meas-inc.com>

I found the logs; I will attach them.

From: Guillaume Quintard [mailto:guillaume at varnish-software.com]
Sent: Thursday, April 06, 2017 3:12 AM
To: Rodney Bizzell
Cc: varnish-misc at varnish-cache.org
Subject: Re: Backend Fetch failed

Can be anything, really, care to share a varnishlog?

--
Guillaume Quintard

On Wed, Apr 5, 2017 at 8:46 PM, Rodney Bizzell wrote:

Not sure what is causing this error any help would be appreciated

Error 503 Backend Fetch Failed

Here is a copy of default.vcl

# This is an example VCL file for Varnish.
#
# It does not do anything by default, delegating control to the
# builtin VCL. The builtin VCL is called when there is no explicit
# return statement.
#
# See the VCL chapters in the Users Guide at https://www.varnish-cache.org/docs/
# and https://www.varnish-cache.org/trac/wiki/VCLExamples for more examples.

# Marker to tell the VCL compiler that this VCL has been adapted to the
# new 4.0 format.
vcl 4.0;

# Default backend definition. Set this to point to your content server.
import std;

backend drupal { .host = "drupal.miat.co"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "drupal.miat.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend ncwrite { .host = "ncwrite.miat.co"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "ncwrite.miat.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend measurementinc { .host = "www.measurementinc.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "www.measurementinc.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend pegwriting { .host = "www.pegwriting.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "www.pegwriting.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend pegwritingscholar { .host = "www.pegwritingscholar.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "www.pegwriitingscholar.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend utahcompose { .host = "www.utahcompose.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "www.utahcompose.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend wpponline { .host = "www.wpponline.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "www.wpponline.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend support { .host = "support.wpponline.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "support.wpponline.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend supportpw { .host = "support.pegwriting.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "support.pegwriting.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend supportpws { .host = "support.pegwritingscholar.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "support.pegwritingscholar.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend supportncw { .host = "support.ncwrite.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "support.ncwrite.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend supportutc { .host = "support.utahcompose.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "support.utahcompose"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

sub vcl_recv {
    if (req.http.host == "drupal.miat.co") { set req.backend_hint = drupal; }
    else if (req.http.host == "ncwrite.miat.co") { set req.backend_hint = ncwrite; }
    else if (req.http.host == "www.measurementinc.com") { set req.backend_hint = measurementinc; }
    else if (req.http.host == "www.pegwriting.com") { set req.backend_hint = pegwriting; }
    else if (req.http.host == "pegwritingscholar.com") { set req.backend_hint = pegwritingscholar; }
    else if (req.http.host == "www.utahcompose.com") { set req.backend_hint = utahcompose; }
    else if (req.http.host == "www.wpponline.com") { set req.backend_hint = wpponline; }
    else if (req.http.host == "?^(?=.*?\bsupport\b)(?=.*?\bwpponline\b)(?=.*?\bcom\b).*$?") { set req.backend_hint = support; }
    else if (req.http.host == "?^(?=.*?\bsupport\b)(?=.*?\bpegwriting\b)(?=.*?\bcom\b).*$?") { set req.backend_hint = supportpw; }
    else if (req.http.host == "?^(?=.*?\bsupport\b)(?=.*?\bpegwritingscholar\b)(?=.*?\bcom\b).*$?") { set req.backend_hint = supportpws; }
    else if (req.http.host == "?^(?=.*?\bsupport\b)(?=.*?\bncwrite\b)(?=.*?\bcom\b).*$?") { set req.backend_hint = supportncw; }
    else if (req.http.host == "?^(?=.*?\bsupport\b)(?=.*?\butahcompose\b)(?=.*?\bcom\b).*$?") { set req.backend_hint = supportutc; return (hash); }
}

#sub vcl_pass {

sub vcl_backend_response {
    set beresp.grace = 6h;
    set beresp.ttl = 5m;
}

sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send the
    # response to the client.
    #
    # You can do accounting or modifying the final object here.
}

This email (including any attachments) may contain confidential information intended solely for acknowledged recipients. If you think you have received this information in error, please reply to the sender and delete all copies from your system. Please note that unauthorized use, disclosure, or further distribution of this information is prohibited by the sender. Note also that we may monitor email directed to or originating from our network. Thank you for your consideration and assistance.

_______________________________________________
varnish-misc mailing list
varnish-misc at varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
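Two details in the configuration above are worth noting. First, a probe's .url is a request path, not a hostname: .url = "drupal.miat.com" makes the probe send "GET drupal.miat.com HTTP/1.1", which most servers will answer with a 404, so the probe never sees a 200 and the backend stays sick. Second, == in VCL is literal string comparison, so req.http.host can never equal a string like "?^(?=.*?\bsupport\b)...$?"; regex matching needs the ~ operator. A sketch of one backend and two match branches with both adjusted (hostnames and probe timings copied from the config above; the anchored regex is an illustrative substitute for the original lookahead pattern, not a drop-in replacement):

```vcl
backend drupal {
    .host = "drupal.miat.co";
    .port = "80";
    .probe = {
        .url = "/";        # a path; the Host header is derived from .host/.host_header
        .timeout = 60ms;   # note: 60ms is a tight budget for a probe over a real network
        .interval = 1s;
        .window = 10;
        .threshold = 8;
    }
}

sub vcl_recv {
    if (req.http.host == "drupal.miat.co") {
        set req.backend_hint = drupal;
    } else if (req.http.host ~ "^support\.wpponline\.com$") {
        # '~' performs regex matching; '==' compares the literal string
        set req.backend_hint = support;
    }
}
```

With .url = "/", the probe only succeeds if "GET /" on the backend returns 200; if the root redirects, the probe's .expected_response attribute can accept another status instead.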
From rbizzell at measinc.com Thu Apr 6 13:38:45 2017
From: rbizzell at measinc.com (Rodney Bizzell)
Date: Thu, 6 Apr 2017 13:38:45 +0000
Subject: Backend Fetch failed
In-Reply-To:
References: <8f512d0b205348e2858ee757eba312b6@mbx2serv.meas-inc.com>,
Message-ID: <1491486001405.83313@measinc.com>

<< BeReq >> 65547
- Begin bereq 65546 pass
- Timestamp Start: 1491485655.912819 0.000000 0.000000
- BereqMethod GET
- BereqURL /
- BereqProtocol HTTP/1.1
- BereqHeader Host: ncwrite.miat.co
- BereqHeader User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0
- BereqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
- BereqHeader Accept-Language: en-US,en;q=0.5
- BereqHeader Accept-Encoding: gzip, deflate
- BereqHeader Cookie: has_js=1
- BereqHeader Upgrade-Insecure-Requests: 1
- BereqHeader X-Forwarded-For: 172.16.5.21
- BereqHeader X-Varnish: 65547
- VCL_call BACKEND_FETCH
- VCL_return fetch
- FetchError no backend connection
- Timestamp Beresp: 1491485655.912871 0.000051 0.000051
- Timestamp Error: 1491485655.912878 0.000059 0.000007
- BerespProtocol HTTP/1.1
- BerespStatus 503
- BerespReason Service Unavailable
- BerespReason Backend fetch failed
- BerespHeader Date: Thu, 06 Apr 2017 13:34:15 GMT
- BerespHeader Server: Varnish
- VCL_call BACKEND_ERROR
- BerespHeader Content-Type: text/html; charset=utf-8
- BerespHeader Retry-After: 5
- VCL_return deliver
- Storage malloc Transient
- ObjProtocol HTTP/1.1
- ObjStatus 503
- ObjReason Backend fetch failed
- ObjHeader Date: Thu, 06 Apr 2017 13:34:15 GMT
- ObjHeader Server: Varnish
- ObjHeader Content-Type: text/html; charset=utf-8
- ObjHeader Retry-After: 5
- Length 282
- BereqAcct 0 0 0 0 0 0
- End

* << Request >> 65546
- Begin req 65545 rxreq
- Timestamp Start: 1491485655.912700 0.000000 0.000000
- Timestamp Req: 1491485655.912700 0.000000 0.000000
- ReqStart 172.16.5.21 55234
- ReqMethod GET
- ReqURL /
- ReqProtocol HTTP/1.1
- ReqHeader Host: ncwrite.miat.co
- ReqHeader User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0
- ReqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
- ReqHeader Accept-Language: en-US,en;q=0.5
- ReqHeader Accept-Encoding: gzip, deflate
- ReqHeader Cookie: has_js=1
- ReqHeader Connection: keep-alive
- ReqHeader Upgrade-Insecure-Requests: 1
- ReqHeader X-Forwarded-For: 172.16.5.21
- VCL_call RECV
- VCL_return pass
- VCL_call HASH
- VCL_return lookup
- VCL_call PASS
- VCL_return fetch
- Link bereq 65547 pass
- Timestamp Fetch: 1491485655.913049 0.000349 0.000349
- RespProtocol HTTP/1.1
- RespStatus 503
- RespReason Backend fetch failed
- RespHeader Date: Thu, 06 Apr 2017 13:34:15 GMT
- RespHeader Server: Varnish
- RespHeader Content-Type: text/html; charset=utf-8
- RespHeader Retry-After: 5
- RespHeader X-Varnish: 65546
- RespHeader Age: 0
- RespHeader Via: 1.1 varnish (Varnish/5.1)
- VCL_call DELIVER
- VCL_return deliver
- Timestamp Process: 1491485655.913070 0.000370 0.000021
- RespHeader Content-Length: 282
- Debug "RES_MODE 2"
- RespHeader Connection: keep-alive
- Timestamp Resp: 1491485655.913134 0.000433 0.000064
- ReqAcct 337 0 337 250 282 532
- End

* << Request >> 5
- Begin req 4 rxreq
- Timestamp Start: 1491485659.606174 0.000000 0.000000
- Timestamp Req: 1491485659.606174 0.000000 0.000000
- ReqStart 172.16.5.21 55235
- ReqMethod GET
- ReqURL /
- ReqProtocol HTTP/1.1
- ReqHeader Host: drupal.miat.co
- ReqHeader User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0
- ReqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
- ReqHeader Accept-Language: en-US,en;q=0.5
- ReqHeader Accept-Encoding: gzip, deflate
- ReqHeader Cookie: has_js=1
- ReqHeader Connection: keep-alive
- ReqHeader Upgrade-Insecure-Requests: 1
- ReqHeader X-Forwarded-For: 172.16.5.21
- VCL_call RECV
- VCL_return pass
- VCL_call HASH
- VCL_return lookup
- VCL_call PASS
- VCL_return fetch
- Link bereq 6 pass
- Timestamp Fetch: 1491485659.606469 0.000295 0.000295
- RespProtocol HTTP/1.1
- RespStatus 503
- RespReason Backend fetch failed
- RespHeader Date: Thu, 06 Apr 2017 13:34:19 GMT
- RespHeader Server: Varnish
- RespHeader Content-Type: text/html; charset=utf-8
- RespHeader Retry-After: 5
- RespHeader X-Varnish: 5
- RespHeader Age: 0
- RespHeader Via: 1.1 varnish (Varnish/5.1)
- VCL_call DELIVER
- VCL_return deliver
- Timestamp Process: 1491485659.606486 0.000311 0.000017
- RespHeader Content-Length: 278
- Debug "RES_MODE 2"
- RespHeader Connection: keep-alive
- Timestamp Resp: 1491485659.606557 0.000382 0.000071
- ReqAcct 336 0 336 246 278 524
- End

* << BeReq >> 6
- Begin bereq 5 pass
- Timestamp Start: 1491485659.606284 0.000000 0.000000
- BereqMethod GET
- BereqURL /
- BereqProtocol HTTP/1.1
- BereqHeader Host: drupal.miat.co
- BereqHeader User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0
- BereqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
- BereqHeader Accept-Language: en-US,en;q=0.5
- BereqHeader Accept-Encoding: gzip, deflate
- BereqHeader Cookie: has_js=1
- BereqHeader Upgrade-Insecure-Requests: 1
- BereqHeader X-Forwarded-For: 172.16.5.21
- BereqHeader X-Varnish: 6
- VCL_call BACKEND_FETCH
- VCL_return fetch
- FetchError no backend connection
- Timestamp Beresp: 1491485659.606340 0.000056 0.000056
- Timestamp Error: 1491485659.606347 0.000062 0.000006
- BerespProtocol HTTP/1.1
- BerespStatus 503
- BerespReason Service Unavailable
- BerespReason Backend fetch failed
- BerespHeader Date: Thu, 06 Apr 2017 13:34:19 GMT
- BerespHeader Server: Varnish
- VCL_call BACKEND_ERROR

Here is the varnish log

- BerespHeader Content-Type: text/html; charset=utf-8
- BerespHeader Retry-After: 5
- VCL_return deliver
- Storage malloc Transient
- ObjProtocol HTTP/1.1
- ObjStatus 503
- ObjReason Backend fetch failed
- ObjHeader Date: Thu, 06 Apr 2017 13:34:19 GMT
- ObjHeader Server: Varnish
- ObjHeader Content-Type: text/html; charset=utf-8
- ObjHeader Retry-After: 5
- Length 278
- BereqAcct 0 0 0 0 0 0
- End

* << Session >> 65545
- Begin sess 0 HTTP/1
- SessOpen 172.16.5.21 55234 :80 172.16.2.139 80 1491485655.912552 21
- Link req 65546 rxreq
- SessClose RX_TIMEOUT 5.004
- End

* << Session >> 4
- Begin sess 0 HTTP/1
- SessOpen 172.16.5.21 55235 :80 172.16.2.139 80 1491485659.606121 22
- Link req 5 rxreq
- SessClose RX_TIMEOUT 5.005
- End

________________________________
From: Guillaume Quintard
Sent: Thursday, April 6, 2017 3:11 AM
To: Rodney Bizzell
Cc: varnish-misc at varnish-cache.org
Subject: Re: Backend Fetch failed

Can be anything, really, care to share a varnishlog?

--
Guillaume Quintard

On Wed, Apr 5, 2017 at 8:46 PM, Rodney Bizzell wrote:

Not sure what is causing this error any help would be appreciated

Error 503 Backend Fetch Failed

Here is a copy of default.vcl

# This is an example VCL file for Varnish.
#
# It does not do anything by default, delegating control to the
# builtin VCL. The builtin VCL is called when there is no explicit
# return statement.
#
# See the VCL chapters in the Users Guide at https://www.varnish-cache.org/docs/
# and https://www.varnish-cache.org/trac/wiki/VCLExamples for more examples.

# Marker to tell the VCL compiler that this VCL has been adapted to the
# new 4.0 format.
vcl 4.0;

# Default backend definition. Set this to point to your content server.
import std;

backend drupal { .host = "drupal.miat.co"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "drupal.miat.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend ncwrite { .host = "ncwrite.miat.co"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "ncwrite.miat.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend measurementinc { .host = "www.measurementinc.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "www.measurementinc.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend pegwriting { .host = "www.pegwriting.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "www.pegwriting.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend pegwritingscholar { .host = "www.pegwritingscholar.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "www.pegwriitingscholar.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend utahcompose { .host = "www.utahcompose.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "www.utahcompose.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend wpponline { .host = "www.wpponline.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "www.wpponline.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend support { .host = "support.wpponline.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "support.wpponline.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend supportpw { .host = "support.pegwriting.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "support.pegwriting.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend supportpws { .host = "support.pegwritingscholar.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "support.pegwritingscholar.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend supportncw { .host = "support.ncwrite.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "support.ncwrite.com"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

backend supportutc { .host = "support.utahcompose.com"; .port = "80"; .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; .probe = { .url = "support.utahcompose"; .timeout = 60ms; .interval = 1s; .window = 10; .threshold = 8; } }

sub vcl_recv {
    if (req.http.host == "drupal.miat.co") { set req.backend_hint = drupal; }
    else if (req.http.host == "ncwrite.miat.co") { set req.backend_hint = ncwrite; }
    else if (req.http.host == "www.measurementinc.com") { set req.backend_hint = measurementinc; }
    else if (req.http.host == "www.pegwriting.com") { set req.backend_hint = pegwriting; }
    else if (req.http.host == "pegwritingscholar.com") { set req.backend_hint = pegwritingscholar; }
    else if (req.http.host == "www.utahcompose.com") { set req.backend_hint = utahcompose; }
    else if (req.http.host == "www.wpponline.com") { set req.backend_hint = wpponline; }
    else if (req.http.host == "?^(?=.*?\bsupport\b)(?=.*?\bwpponline\b)(?=.*?\bcom\b).*$?") { set req.backend_hint = support; }
    else if (req.http.host == "?^(?=.*?\bsupport\b)(?=.*?\bpegwriting\b)(?=.*?\bcom\b).*$?") { set req.backend_hint = supportpw; }
    else if (req.http.host == "?^(?=.*?\bsupport\b)(?=.*?\bpegwritingscholar\b)(?=.*?\bcom\b).*$?") { set req.backend_hint = supportpws; }
    else if (req.http.host == "?^(?=.*?\bsupport\b)(?=.*?\bncwrite\b)(?=.*?\bcom\b).*$?") { set req.backend_hint = supportncw; }
    else if (req.http.host == "?^(?=.*?\bsupport\b)(?=.*?\butahcompose\b)(?=.*?\bcom\b).*$?") { set req.backend_hint = supportutc; return (hash); }
}

#sub vcl_pass {

sub vcl_backend_response {
    set beresp.grace = 6h;
    set beresp.ttl = 5m;
}

sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send the
    # response to the client.
    #
    # You can do accounting or modifying the final object here.
}

From guillaume at varnish-software.com Thu Apr 6 13:40:38 2017
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Thu, 6 Apr 2017 15:40:38 +0200
Subject: Backend Fetch failed
In-Reply-To: <1491486001405.83313@measinc.com>
References: <8f512d0b205348e2858ee757eba312b6@mbx2serv.meas-inc.com> <1491486001405.83313@measinc.com>
Message-ID:

- BerespReason Service Unavailable

run varnishadm backend.list -p

your probes are reporting sick backends.
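The backend.list output further down bears this out: every backend shows Probe Sick, so each fetch fails with "FetchError no backend connection" before a TCP connection is even attempted. Two quick ways to separate the health-check problem from a real connectivity problem, sketched here as things to test rather than a fix: a backend declared without a .probe block is always considered usable, and probe state can be overridden at runtime with varnishadm.

```vcl
vcl 4.0;

# Debugging sketch: with no .probe block, Varnish never marks the
# backend sick on its own, so fetches are attempted unconditionally.
# Hostname taken from the configuration above.
backend drupal {
    .host = "drupal.miat.co";
    .port = "80";
}
```

Alternatively, `varnishadm backend.set_health boot.drupal healthy` forces the backend healthy without a VCL reload, and `varnishadm backend.set_health boot.drupal auto` hands control back to the probe.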
--
Guillaume Quintard
From rbizzell at measinc.com Thu Apr 6 13:43:03 2017
From: rbizzell at measinc.com (Rodney Bizzell)
Date: Thu, 6 Apr 2017 13:43:03 +0000
Subject: Error 503 Backend Fetch
In-Reply-To:
References: ,
Message-ID: <1491486259798.34465@measinc.com>

varnishadm backend.list -p

Backend name Admin Probe Last updated
boot.drupal probe Sick 0/5
  Current states good: 0 threshold: 3 window: 5
  Average response time of good probes: 0.000000
  Oldest ================================================== Newest
  4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
  XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
  RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR Good Recv
  ---------------------------------------------------------------- Happy
  Wed, 05 Apr 2017 19:52:05 GMT

boot.ncwrite probe Sick 0/10
  Current states good: 0 threshold: 8 window: 10
  Average response time of good probes: 0.000000
  Oldest ================================================== Newest
  4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
  XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit
  ---------------------------------------------------rr--------rrr Error Recv
  RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR--RRRRRRRR--- Good Recv
  ---------------------------------------------------------------- Happy
  Wed, 05 Apr 2017 19:52:05 GMT

boot.measurementinc probe Sick 0/10
  Current states good: 0 threshold: 8 window: 10
  Average response time of good probes: 0.000000
  Oldest ================================================== Newest
  4444444444444444444444444444444444444444444444444444444444444444 Good IPv4
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr---rrrrrrrr Error Recv -----------------------------------------------------RRR-------- Good Recv ---------------------------------------------------------------- Happy Wed, 05 Apr 2017 19:52:05 GMT boot.pegwriting probe Sick 0/10 Current states good: 0 threshold: 8 window: 10 Average response time of good probes: 0.000000 Oldest ================================================== Newest 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr Error Recv ---------------------------------------------------------------- Happy Wed, 05 Apr 2017 19:52:05 GMT boot.pegwritingscholar probe Sick 0/10 Current states good: 0 threshold: 8 window: 10 Average response time of good probes: 0.000000 Oldest ================================================== Newest 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr Error Recv ---------------------------------------------------------------- Happy Wed, 05 Apr 2017 19:52:05 GMT boot.utahcompose probe Sick 0/10 Current states good: 0 threshold: 8 window: 10 Average response time of good probes: 0.000000 Oldest ================================================== Newest 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr Error Recv ---------------------------------------------------------------- Happy Wed, 05 Apr 2017 19:52:05 GMT boot.wpponline probe Sick 0/10 Current states good: 0 threshold: 8 window: 10 Average response time of good 
probes: 0.000000 Oldest ================================================== Newest 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr Error Recv ---------------------------------------------------------------- Happy Wed, 05 Apr 2017 19:52:05 GMT boot.support probe Sick 0/10 Current states good: 0 threshold: 8 window: 10 Average response time of good probes: 0.000000 Oldest ================================================== Newest 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr--r-r---rr-rrrr Error Recv -------------------------------------------------RR-R-RRR--R---- Good Recv ---------------------------------------------------------------- Happy Wed, 05 Apr 2017 19:52:05 GMT boot.supportpw probe Sick 0/10 Current states good: 0 threshold: 8 window: 10 Average response time of good probes: 0.000000 Oldest ================================================== Newest 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr--r-r-r----rrrrr Error Recv ------------------------------------------------RR-R-R-RRRR----- Good Recv ---------------------------------------------------------------- Happy Wed, 05 Apr 2017 19:52:05 GMT boot.supportpws probe Sick 0/10 Current states good: 0 threshold: 8 window: 10 Average response time of good probes: 0.000000 Oldest ================================================== Newest 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr--r-rrr-----rrr 
Error Recv -------------------------------------------------RR-R---RRRRR--- Good Recv ---------------------------------------------------------------- Happy Wed, 05 Apr 2017 19:52:05 GMT boot.supportncw probe Sick 0/10 Current states good: 0 threshold: 8 window: 10 Average response time of good probes: 0.000000 Oldest ================================================== Newest 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr--r---rrrrr-rr- Error Recv -------------------------------------------------RR-RRR-----R--R Good Recv ---------------------------------------------------------------- Happy Wed, 05 Apr 2017 19:52:05 GMT boot.supportutc probe Sick 0/10 Current states good: 0 threshold: 8 window: 10 Average response time of good probes: 0.000000 Oldest ================================================== Newest 4444444444444444444444444444444444444444444444444444444444444444 Good IPv4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Good Xmit rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr--rrrr-r-r-r-rrr Error Recv ------------------------------------------------RR----R-R-R-R--- Good Recv ---------------------------------------------------------------- Happy ________________________________ From: Guillaume Quintard Sent: Thursday, April 6, 2017 3:13 AM To: Rodney Bizzell Cc: varnish-misc at varnish-cache.org Subject: Re: Error 503 BAckend Fetch Please reply to your emails instead of creating new threads every time. -- Guillaume Quintard On Wed, Apr 5, 2017 at 9:37 PM, Rodney Bizzell > wrote: I fixed the typo under the .probe section for the url but I am still having the same error if I comment out the .probe section everything works. 
I have looked at the examples for the .probe section, not sure what I am missing.

From rbizzell at measinc.com Thu Apr 6 13:43:57 2017
From: rbizzell at measinc.com (Rodney Bizzell)
Date: Thu, 6 Apr 2017 13:43:57 +0000
Subject: Error 503 BAckend Fetch
In-Reply-To:
References:
Message-ID:

Here is a copy of ncsa.log

From: Guillaume Quintard [mailto:guillaume at varnish-software.com]
Sent: Thursday, April 06, 2017 3:13 AM
To: Rodney Bizzell
Cc: varnish-misc at varnish-cache.org
Subject: Re: Error 503 BAckend Fetch

Please reply to your emails instead of creating new threads every time.

--
Guillaume Quintard

On Wed, Apr 5, 2017 at 9:37 PM, Rodney Bizzell wrote:

I fixed the typo under the .probe section for the url, but I am still having the same error; if I comment out the .probe section, everything works. I have looked at the examples for the .probe section, not sure what I am missing.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: varnishncsa.log.1
Type: application/octet-stream
Size: 32821 bytes
Desc: varnishncsa.log.1
URL:

From geoff at uplex.de Thu Apr 6 17:38:22 2017
From: geoff at uplex.de (Geoff Simmons)
Date: Thu, 6 Apr 2017 19:38:22 +0200
Subject: Backend Fetch failed
In-Reply-To: <1491486001405.83313@measinc.com>
References: <8f512d0b205348e2858ee757eba312b6@mbx2serv.meas-inc.com> <1491486001405.83313@measinc.com>
Message-ID: <935ded9c-fb8e-9013-2f00-99257bf8ea84@uplex.de>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

For problems like this, *always look for the FetchError entry in the backend logs*.

> << BeReq >> 65547 [...]
> - FetchError no backend connection
> - Timestamp Beresp: 1491485655.912871 0.000051 0.000051
> - Timestamp Error: 1491485655.912878 0.000059 0.000007
> [...]
> - End

The client-side logs, on the other hand, frankly don't matter -- not for the purposes of diagnosing the problem with the backend fetch. So I'll just ignore them altogether.

> * << Request >> 65546 [...]
> - End
> * << Request >> 5 [...]
> - End
> * << BeReq >> 6 [...]
> - FetchError no backend connection
> - Timestamp Beresp: 1491485659.606340 0.000056 0.000056
> - Timestamp Error: 1491485659.606347 0.000062 0.000006
> [...]

FetchError "no backend connection" very likely means, in this case, that your backend is failing its health checks, so that Varnish determines that there is no healthy backend to which it can direct the requests.
There is one other possibility for "no backend connection", which is that Varnish attempted to initiate a network connection to the backend, but the connection could not be obtained before connect_timeout expired. In that case, the timestamps would have shown that almost exactly as much time as connect_timeout would have been taken, which for your config would be very obvious (more about that further down). But as you see here in the Timestamp entries, Varnish determined the error after about 50 microseconds, which is near-certain proof that the health checks failed (about enough time for Varnish to check its record that the backend is unhealthy). You can see the results of the health checks in the log, but for that you need raw grouping, since health checks are not transactional (they are not part of requests/responses that Varnish serves): $ varnishlog -g raw -i Backend_health Your health checks are probably failing because you've written the probes incorrectly: backend drupal { [...] .probe = { .url = "drupal.miat.com"; [...] } } This is very common misunderstanding: "url" in the conceptual world of Varnish only ever refers to the *path*; the domain should not appear there. So your probes should say something like: backend drupal { [...] .probe = { .url = "/"; # or whatever path should be used for probes [...] } } Even after you fix that, you're really taking chances with the short timeout for the probes: .probe = { [...] .timeout = 60ms; [...] } Are you sure that your backends will always respond to the health probes within 60 milliseconds? Set it to 1s and give them a chance. That, I think, is the cause of your 503 problem, but I have to say something about this as well, the timeouts you have set for all of your backends: .connect_timeout = 6000s; .first_byte_timeout = 6000s; .between_bytes_timeout = 6000s; Those timeouts are astonishingly, appallingly, gobsmackingly too long. Just looking at that is almost making my head explode. 
This is another common mistake: setting the Varnish timeouts to "forever and ever and ever". On the contrary, you're much better off if the timeouts are set to *fail fast*. Setting your timeouts to 100 minutes helps absolutely no one at all -- it means that a worker thread in Varnish will sit idle for 100 minutes, waiting for something to happen. Worker threads are a limited resource in Varnish; you want them to keep doing useful work, and give up relatively soon if a backend is not responding. If there is a serious problem in your system, so that many backends are not responding, then your worker threads will all go idle waiting for the timeouts to expire, and Varnish will have to start new threads. Eventually the maximum number of threads will be reached, and when that happens, Varnish will start to refuse new requests, which usually means that your site goes down altogether. It's a recipe for disaster. Rest assured that if your backend has not responded for 5999 seconds, then it's not going to respond in the 6000th second either. It's not responding at all. Consider just going with the default timeouts, or with something on the order of 6 seconds, rather than 6000 seconds. Or maybe 60 seconds, but that's already getting too long. If your backend developers can't get their apps to respond within a few seconds, then go yell at them until they do. As the Varnish admin, you *cannot* solve that problem for them, by setting your timeouts to "until the end of time". 
HTH, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Scheffelstra?e 32 22301 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQIcBAEBCAAGBQJY5n0OAAoJEOUwvh9pJNUReBEP/1tFl8TigTIpQgng09dNc9jT XYLLnFxZXFsjDsENX8kkemyk94AfW95AOpbFNoqtALkGiHLXTDWy0h++Lw3hT1ll GxS8m5/qQ1+IpXXHpjHC86et1PTq7aKWtNTud0riA4b9jirlNYcdk/zaZCB/zRyA 5FzHh7By3LzJZ6qHXycYZWBy3PUQZfG1awX3VWtOzj+UP/hfHIlb6CcY97uF/8L9 Z7uff42o14iYFCGyALsy0JP3la/3qtb1tuzTn1vgqvBM9pVTdRKQXmL9Q/8XsX+Z ySdHMaGG8/5WnUFznwXayEN84Y5fdYk6ZzGbAV3sZtQJkpHXquhj/LRQYDIjxESp ILDh/FobMqevvXFBL/IcjaEj22xYyviu/8fYK+/QPfQ2yv5B0FWX1yIQDyNZx+4e 37XVDd96EMxA/t1XfTVk2DGw9kEtFPmLdatQx487vJsd4OyT3HX6Tiug5T2pHyPT H/a2qKoRMOySD9i0SYMJG0v81Fi/jrJknZJZ/WHAIo4GAs2CRvFH+oI2/USMQzPj brT/JeyVGOUObXkVA1uEYtrucUU07qOtdeVP5RBs6zaULJyu/KbIIF0cQMd0YBam yXBwNVl89ec1RIcHl7TuTzNQ0euqgFyNZW1OAlQIbJKDProf6BHsyGwAXX7jexio PkdtqxaiGBWa5OBR4Gws =p7Ha -----END PGP SIGNATURE----- From rbizzell at measinc.com Thu Apr 6 18:46:58 2017 From: rbizzell at measinc.com (Rodney Bizzell) Date: Thu, 6 Apr 2017 18:46:58 +0000 Subject: Backend Fetch failed In-Reply-To: <935ded9c-fb8e-9013-2f00-99257bf8ea84@uplex.de> References: <8f512d0b205348e2858ee757eba312b6@mbx2serv.meas-inc.com> <1491486001405.83313@measinc.com> <935ded9c-fb8e-9013-2f00-99257bf8ea84@uplex.de> Message-ID: <7d77d8b4d64e4078a8afab483975b159@mbx2serv.meas-inc.com> I will definitely make those changes appreciate your help. I make that change under the .probe and the site started working "/"; -----Original Message----- From: Geoff Simmons [mailto:geoff at uplex.de] Sent: Thursday, April 06, 2017 1:38 PM To: Rodney Bizzell Cc: varnish-misc at varnish-cache.org Subject: Re: Backend Fetch failed -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 For problems like this, *always look for the FetchError entry in the backend logs*. > << BeReq >> 65547 [...] 
[...]
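Pulling the advice in this thread together: a minimal sketch of what a corrected backend definition could look like, with the probe .url as a path, a 1-second probe timeout, and fail-fast connection timeouts. The address, port, and probe path below are placeholders for illustration, not values from the original configuration.

```vcl
backend drupal {
    .host = "192.0.2.10";            # placeholder address
    .port = "80";
    # Fail fast instead of tying up worker threads for minutes:
    .connect_timeout = 3.5s;
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 60s;
    .probe = {
        .url = "/";                  # a path, never a domain
        .timeout = 1s;               # give the backend a chance to answer
        .interval = 5s;
        .window = 5;
        .threshold = 3;
    }
}
```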
From np.lists at sharphosting.uk Fri Apr 7 06:21:17 2017
From: np.lists at sharphosting.uk (Nigel Peck)
Date: Fri, 7 Apr 2017 01:21:17 -0500
Subject: systemd Piping varnishncsa Output
Message-ID: <50550ADE-3128-4589-ADFA-92F2D541C807@sharphosting.uk>

Hi,

I'm trying to pipe the output from varnishncsa when running through systemd on CentOS 7 and having a lot of trouble. I have read through every relevant post in the archives and searched for hours on Google but haven't solved it yet.

I'm trying to pipe to cronolog. The only thing I've got to work takes over the foreground, and nothing I've tried for running it in the background works. How can I set up varnishncsa.service so that output is piped to a program instead of written to a file? Should I still run varnishncsa with -D or not? If not, how does the PID get written so it can be stopped again by systemctl?

I got it to work while taking over the foreground with this script:

#!/bin/bash
/usr/bin/varnishncsa -F '%{X-Real-IP}i %l %u %t "%r" %s %b "%{Referer}i" "%{User-agent}i"' -q "ReqHeader:Host ~ '(^|\.)example\.com$'" -C | /usr/sbin/cronolog "/example/varnish_access_log.%Y-%m-%d"

Any assistance greatly appreciated.

Thanks
Nigel

From cbj at touristonline.dk Fri Apr 7 15:21:56 2017
From: cbj at touristonline.dk (=?UTF-8?Q?Christian_Bj=C3=B8rnbak?=)
Date: Fri, 7 Apr 2017 17:21:56 +0200
Subject: Getting connection refused on all requests after installation of varnish 5.1.2
Message-ID:

I had written on a bug report about http/2 in 5.1.1 and was happy to see that 5.1.2 was released. But after installing it from https://packagecloud.io/varnishcache/varnish5/ubuntu/ nothing works at all. Varnish restarts without complaints, but all requests get connection refused. Nothing from varnishlog.

Med venlig hilsen / Kind regards,

Christian Bjørnbak
Chefudvikler / Lead Developer

TouristOnline A/S
Islands Brygge 43
2300 København S
Denmark
TLF: +45 32888230
Dir.
TLF: +45 32888235

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cbj at touristonline.dk Fri Apr 7 16:14:37 2017
From: cbj at touristonline.dk (=?UTF-8?Q?Christian_Bj=C3=B8rnbak?=)
Date: Fri, 7 Apr 2017 18:14:37 +0200
Subject: Getting connection refused on all requests after installation of varnish 5.1.2
In-Reply-To:
References:
Message-ID:

Sorry, I did not realize that the upgrade wrote over my custom /lib/systemd/system/varnish.service. After rewriting my changes, varnish 5.1.2 works.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From leonfauster at googlemail.com Fri Apr 7 21:28:34 2017
From: leonfauster at googlemail.com (Leon Fauster)
Date: Fri, 7 Apr 2017 23:28:34 +0200
Subject: Python 2.7
In-Reply-To: <410858cd0bc746fdbc420b3ea9fadfba@vw-x13-4.grp.haufemg.com>
References: <410858cd0bc746fdbc420b3ea9fadfba@vw-x13-4.grp.haufemg.com>
Message-ID: <5C0ECDE3-5D18-4768-AE3D-40342A53D01B@googlemail.com>

> On 06.04.2017 at 12:42, Keck, Hans-Peter wrote:
>
> Hi all,
>
> I tried to compile Varnish 5.1.1 on CentOS 6, but the configure script reported to me that Python 2.7 is now required (CentOS 6 only has Python 2.6).
> This came as a surprise to me, because it wasn't mentioned in the changes (https://varnish-cache.org/docs/5.1/whats-new/changes-5.1.html)
> Why is 2.7 needed? If I do not use the VMODs, will I be affected by this?

Python 2.7 can be installed via SCL in CentOS6 ...
https://wiki.centos.org/AdditionalResources/Repositories/SCL

--
LF

From np.lists at sharphosting.uk Fri Apr 7 21:51:48 2017
From: np.lists at sharphosting.uk (Nigel Peck)
Date: Fri, 7 Apr 2017 16:51:48 -0500
Subject: systemd Piping varnishncsa Output
In-Reply-To: <50550ADE-3128-4589-ADFA-92F2D541C807@sharphosting.uk>
References: <50550ADE-3128-4589-ADFA-92F2D541C807@sharphosting.uk>
Message-ID: <2b6fb579-c451-5553-96ac-f8af74c87e99@sharphosting.uk>

Just to update this thread, I got it resolved following some advice on the StackExchange Unix & Linux site:

http://unix.stackexchange.com/questions/356648/pipe-output-to-program-with-systemd

On 07/04/2017 01:21, Nigel Peck wrote:
> [...]
> > Thanks > Nigel > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From jprins at betterbe.com Sat Apr 8 15:14:55 2017 From: jprins at betterbe.com (Jan Hugo Prins | BetterBe) Date: Sat, 8 Apr 2017 17:14:55 +0200 Subject: Varnish Proxy protocol and CloudFlare. Message-ID: Hi, I have the following test setup running at the moment: Cloudflare -> HaProxy --> Varnish -> Haproxy -> Backend application. |-------------------------------------------------| |---------------------------------| CDN API Between the first HaProxy, Varnish and the second HaProxy I use the proxy protocol to make sure that the requests that enter my environment using the CDN are restricted using the same IP whitelist rules as they would when accessing directly using the API. To get the external IP into the request information and the proxy protocol I have the following configuration in my first haproxy config: acl FROM_CLOUDFLARE req.hdr(CF-Connecting-IP) -m found http-request set-src hdr(CF-Connecting-IP) if FROM_CLOUDFLARE Normal users connect to the API from the outside world and we use IP whitelists to allow certain people access to this API. I'm trying to setup an CDN in front of my application and to build this I use CloudFlare, HaProxy (SSL Termination and some minimal rewrites) and Varnish (to offload requests from my backend application) This all works fine so far, but today I noticed that access using the CDN is not restricted enough and I found out that it looks like the connection between Varnish and my API is not using the proxy protocol, or at least the information that should be available from the CF-Connecting-IP is not visible in the HaProxy on the API backend. The result is that all requests that enter the environment using the CDN seem to be coming from the Varnish hosts instead of the external world. 
My backend configuration in Varnish config looks like this: import directors; # load the directors backend blsproxy01 { .host = "95.130.232.181"; .port = "81"; .proxy_header = 2; .probe = { .request = "GET /haproxy_test HTTP/1.1" "Host: leaseservices.eu" "Connection: close"; } } backend blsproxy02 { .host = "95.130.232.182"; .port = "81"; .proxy_header = 2; .probe = { .request = "GET /haproxy_test HTTP/1.1" "Host: leaseservices.eu" "Connection: close"; } } backend blsproxy03 { .host = "95.130.232.183"; .port = "81"; .proxy_header = 2; .probe = { .request = "GET /haproxy_test HTTP/1.1" "Host: leaseservices.eu" "Connection: close"; } } sub vcl_init { # new blsproxy = directors.round_robin(); new blsproxy = directors.random(); blsproxy.add_backend(blsproxy01,10); blsproxy.add_backend(blsproxy02,10); blsproxy.add_backend(blsproxy03,10); } I upgraded to Varnish 5.1 a little while back and I think the problem might have started at that time, but I'm not sure at the moment. It's all a test setup, so this was only noticed because I was doing some tests from my home where my home should not be able to request any CDN content at the moment. -- Kind regards Jan Hugo Prins /DevOps Engineer/ Auke Vleerstraat 140 E 7547 AN Enschede CC no. 08097527 *T* +31 (0) 53 48 00 694 *E* jprins at betterbe.com *M* +31 (0)6 263 58 951 www.betterbe.com BetterBe accepts no liability for the content of this email, or for the consequences of any actions taken on the basis of the information provided, unless that information is subsequently confirmed in writing. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: daldffnbnbhodlci.png Type: image/png Size: 13988 bytes Desc: not available URL: From jprins at betterbe.com Sat Apr 8 16:00:31 2017 From: jprins at betterbe.com (Jan Hugo Prins | BetterBe) Date: Sat, 8 Apr 2017 18:00:31 +0200 Subject: Varnish Proxy protocol and CloudFlare. In-Reply-To: References: Message-ID: <6d91511b-bcb7-c5f3-ea4e-e4ba5c194e96@betterbe.com> Ok. When using strictly IPv4 my setup works just fine. Looks like this is an IPv6 only problem. Jan Hugo On 04/08/2017 05:14 PM, Jan Hugo Prins | BetterBe wrote: > Hi, > > I have the following test setup running at the moment: > > Cloudflare -> HaProxy --> Varnish -> Haproxy -> Backend > application. > |-------------------------------------------------| > |---------------------------------| > > CDN API > > Between the first HaProxy, Varnish and the second HaProxy I use the > proxy protocol to make sure that the requests that enter my > environment using the CDN are restricted using the same IP whitelist > rules as they would when accessing directly using the API. To get the > external IP into the request information and the proxy protocol I have > the following configuration in my first haproxy config: > > acl FROM_CLOUDFLARE req.hdr(CF-Connecting-IP) -m found > http-request set-src hdr(CF-Connecting-IP) if FROM_CLOUDFLARE > > Normal users connect to the API from the outside world and we use IP > whitelists to allow certain people access to this API. > I'm trying to setup an CDN in front of my application and to build > this I use CloudFlare, HaProxy (SSL Termination and some minimal > rewrites) and Varnish (to offload requests from my backend application) > > This all works fine so far, but today I noticed that access using the > CDN is not restricted enough and I found out that it looks like the > connection between Varnish and my API is not using the proxy protocol, > or at least the information that should be available from the > CF-Connecting-IP is not visible in the HaProxy on the API backend. 
> [...]
If you are not the intended > recipient you are notified that disclosing, copying, distributing or > taking any action in reliance on the contents of this > information is strictly prohibited. > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -- Kind regards Jan Hugo Prins /DevOps Engineer/ Auke Vleerstraat 140 E 7547 AN Enschede CC no. 08097527 *T* +31 (0) 53 48 00 694 *E* jprins at betterbe.com *M* +31 (0)6 263 58 951 www.betterbe.com BetterBe accepts no liability for the content of this email, or for the consequences of any actions taken on the basis of the information provided, unless that information is subsequently confirmed in writing. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. From np.lists at sharphosting.uk Sat Apr 8 18:49:55 2017 From: np.lists at sharphosting.uk (Nigel Peck) Date: Sat, 8 Apr 2017 13:49:55 -0500 Subject: Compression Policy Message-ID: I am looking at how best to set up compression on my setup, that is a Varnish server handing out cached content from a separate back-end server. In his notes on the subject, Poul-Henning says that there is no need to store both a gzipped and an un-gzipped copy of requests in the cache, since Varnish can gunzip on the fly.
https://varnish-cache.org/docs/4.1/phk/gzip.html My question is, wouldn't it be quicker to have both a gzipped and ungzipped copy stored in memory, so that this does not need to be changed on the fly? Or is the time taken to ungzip so negligible as to make this unnecessary? Thanks Nigel From guillaume at varnish-software.com Sun Apr 9 20:36:10 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Sun, 9 Apr 2017 22:36:10 +0200 Subject: Compression Policy In-Reply-To: References: Message-ID: You can test, but I don't think it's worth the trouble. Virtually all clients support gzip, so you'll only really use one version of your object. -- Guillaume Quintard On Sat, Apr 8, 2017 at 8:49 PM, Nigel Peck wrote: > > I am looking at how best to set up compression on my setup, that is a > Varnish server handing out cached content from a separate back-end server. > In his notes on the subject, Poul-Henning says that there is no need to > store both a gzipped and an un-gzipped copy of requests in the cache, since > Varnish can gunzip on the fly. > > https://varnish-cache.org/docs/4.1/phk/gzip.html > > My question is, wouldn't it be quicker to have both a gzipped and > ungzipped copy stored in memory, so that this does not need to be changed > on the fly? Or is the time taken to ungzip so negligible as to make this > unnecessary? > > Thanks > Nigel > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From np.lists at sharphosting.uk Mon Apr 10 05:40:07 2017 From: np.lists at sharphosting.uk (Nigel Peck) Date: Mon, 10 Apr 2017 00:40:07 -0500 Subject: Compression Policy In-Reply-To: References: Message-ID: <0DC4FFB3-D3AC-4D64-908C-C214F1BBD927@sharphosting.uk> Thanks Guillaume, that's good to know. 
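For a rough sense of whether on-the-fly gunzip really is negligible, here is a small stand-alone sketch. It uses plain Python's gzip module rather than Varnish's internal gzip path, and the payload is made up for illustration, so treat the numbers as indicative only:

```python
import gzip
import time

# Hypothetical payload standing in for a cached text object (~56 KB).
payload = b"<html><body>" + b"hello varnish " * 4000 + b"</body></html>"

# Store only the compressed copy, as Varnish does by default.
packed = gzip.compress(payload)

# Cost of serving one client that did not send Accept-Encoding: gzip.
t0 = time.perf_counter()
unpacked = gzip.decompress(packed)
elapsed_ms = (time.perf_counter() - t0) * 1000.0

print("original bytes:", len(payload))
print("gzipped bytes:", len(packed))
print("round-trip ok:", unpacked == payload)
print("gunzip took %.3f ms" % elapsed_ms)
```

On commodity hardware, decompressing an object of this size typically lands well under a millisecond, which is the sense in which storing a second, uncompressed copy rarely pays for the memory it would consume.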
I'll give it some thought and perhaps implement it and keep an eye on the TTFB. Nigel > On 9 Apr 2017, at 15:36, Guillaume Quintard wrote: > > You can test, but I don't think it's worth the trouble. Virtually all clients support gzip, so you'll only really use one version of your object. > > -- > Guillaume Quintard > >> On Sat, Apr 8, 2017 at 8:49 PM, Nigel Peck wrote: >> >> I am looking at how best to set up compression on my setup, that is a Varnish server handing out cached content from a separate back-end server. In his notes on the subject, Poul-Henning says that there is no need to store both a gzipped and an un-gzipped copy of requests in the cache, since Varnish can gunzip on the fly. >> >> https://varnish-cache.org/docs/4.1/phk/gzip.html >> >> My question is, wouldn't it be quicker to have both a gzipped and ungzipped copy stored in memory, so that this does not need to be changed on the fly? Or is the time taken to ungzip so negligible as to make this unnecessary? >> >> Thanks >> Nigel >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From ciapnz at gmail.com Mon Apr 10 13:18:45 2017 From: ciapnz at gmail.com (Danila Vershinin) Date: Mon, 10 Apr 2017 16:18:45 +0300 Subject: Varnishlog. Get entries by XID Message-ID: Hi, Say I have stumbled upon a "Backend Fetch Failed" page and that gives me an XID. How do I easily query varnishlog for corresponding client and backend requests? varnishlog -q 'XID = 12345' doesn't work From geoff at uplex.de Mon Apr 10 13:36:49 2017 From: geoff at uplex.de (Geoff Simmons) Date: Mon, 10 Apr 2017 15:36:49 +0200 Subject: Varnishlog.
Get entries by XID In-Reply-To: References: Message-ID: <6e5c5063-576d-089d-35f9-82df39e8cab1@uplex.de> On 04/10/2017 03:18 PM, Danila Vershinin wrote: > > Say I have stumbled upon a "Backend Fetch Failed" page and that > gives me an XID. > > How do I easily query varnishlog for corresponding client and > backend requests? > > varnishlog -q 'XID = 12345' doesn't work It can be done if you're running Varnish 5.1: https://www.varnish-cache.org/docs/5.1/whats-new/changes-5.1.html#vxid-in-vsl-queries The left-hand side is named 'vxid', and remember to use '==' as the comparison operator ('=' is an error): $ varnishlog -q 'vxid == 12345' In versions prior to 5.1 you can use the X-Varnish header: # client side varnishlog -d -q 'RespHeader:X-Varnish[1] == 12345' # backend side varnishlog -d -q 'BereqHeader:X-Varnish == 12345' HTH, Geoff -- ** * * UPLEX - Nils Goroll Systemoptimierung Scheffelstraße 32 22301 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de From kokoniimasu at gmail.com Mon Apr 10 13:40:10 2017 From: kokoniimasu at gmail.com (kokoniimasu) Date: Mon, 10 Apr 2017 22:40:10 +0900 Subject: Varnishlog. Get entries by XID In-Reply-To: References: Message-ID: Hi, Danila I think this will be helpful for you. https://www.varnish-cache.org/docs/trunk/whats-new/changes-5.1.html#vxid-in-vsl-queries Regards, -- Shohei Tanaka(@xcir) http://blog.xcir.net/ 2017-04-10 22:18 GMT+09:00 Danila Vershinin : > Hi, > > Say I have stumbled upon a "Backend Fetch Failed" page and that gives me an XID. > > How do I easily query varnishlog for corresponding client and backend requests? > > varnishlog -q 'XID = 12345'
doesn't work > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From ciapnz at gmail.com Tue Apr 11 00:50:20 2017 From: ciapnz at gmail.com (Danila Vershinin) Date: Tue, 11 Apr 2017 03:50:20 +0300 Subject: Varnishlog. Get entries by XID In-Reply-To: <6e5c5063-576d-089d-35f9-82df39e8cab1@uplex.de> References: <6e5c5063-576d-089d-35f9-82df39e8cab1@uplex.de> Message-ID: <37BEB7D3-2497-4182-A165-604B4B543015@gmail.com> Thanks much, Geoff. Best Regards, Danila > On 10 Apr 2017, at 16:36, Geoff Simmons wrote: > > On 04/10/2017 03:18 PM, Danila Vershinin wrote: >> >> Say I have stumbled upon a "Backend Fetch Failed" page and that >> gives me an XID. >> >> How do I easily query varnishlog for corresponding client and >> backend requests? >> >> varnishlog -q 'XID = 12345' doesn't work > > It can be done if you're running Varnish 5.1: > > https://www.varnish-cache.org/docs/5.1/whats-new/changes-5.1.html#vxid-in-vsl-queries > > The left-hand side is named 'vxid', and remember to use '==' as the > comparison operator ('=' is an error): > > $ varnishlog -q 'vxid == 12345' > > In versions prior to 5.1 you can use the X-Varnish header: > > # client side > varnishlog -d -q 'RespHeader:X-Varnish[1] == 12345' > > # backend side > varnishlog -d -q 'BereqHeader:X-Varnish == 12345' > > > HTH, > Geoff > -- > ** * * UPLEX - Nils Goroll Systemoptimierung > > Scheffelstraße 32 > 22301 Hamburg > > Tel +49 40 2880 5731 > Mob +49 176 636 90917 > Fax +49 40 42949753 > > http://uplex.de From pinakee at waltzz.com Wed Apr 12 07:32:42 2017 From: pinakee at waltzz.com (Pinakee BIswas) Date: Wed, 12 Apr 2017 13:02:42 +0530 Subject: A/B testing Message-ID: Hi, We have been using Varnish as a web accelerator for an ecommerce site.
I would like to know if it is possible to do A/B testing using Varnish. If so, would appreciate it if you could share the steps or related documents. Thanks, Pinakee From guillaume at varnish-software.com Wed Apr 12 09:00:50 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Wed, 12 Apr 2017 11:00:50 +0200 Subject: "http first read error: EOF" errors from WordPress backend In-Reply-To: References: <9FC546D0-BB08-49F5-93B8-7A3C9F5A49E7@nucleus.be> Message-ID: Can you try and report? It may be a bug on either side; deactivating gzip temporarily could confirm that it's indeed a gzip problem. -- Guillaume Quintard On Fri, Mar 31, 2017 at 2:31 PM, Hazar Güney wrote: > But the "Content-Length" header is not available at all. I'm afraid we cannot > keep gzip disabled even if it solves the issue, Varnish has to be able to > handle gzipped inputs from the backend. > > On Fri, Mar 31, 2017 at 11:11 AM, Mattias Geniar > wrote: >> > - FetchError http first read error: EOF >> > - BackendClose 21 reload_2017-03-21T100643.default >> > - Timestamp Beresp: 1490916389.664967 60.000557 60.000114 >> > - Timestamp Error: 1490916389.664978 60.000567 0.000011 >> >> At the risk of repeating myself: try to disable gzip & any wordpress >> plugins that might be trying to gzip on their own (aka: output buffering in >> PHP). >> >> To me, this seems like Varnish is waiting for the backend to send more >> data, because it replied with a certain Content-Length header but sent a >> few bytes less than it advertised, and Varnish is waiting for those missing >> bytes. >> >> Mattias >> >> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
From np.lists at sharphosting.uk Wed Apr 12 22:49:08 2017 From: np.lists at sharphosting.uk (Nigel Peck) Date: Wed, 12 Apr 2017 17:49:08 -0500 Subject: Safety of setting "beresp.do_gzip" in vcl_backend_response Message-ID: <81deafe6-7a08-0a2b-529d-f16dcca88777@sharphosting.uk> Hi, I am implementing compression in vcl_backend_response, by setting "beresp.do_gzip" on appropriate Content-Types. I am wondering if it is safe to do this even on responses that may subsequently get set as uncacheable by later code? It seems safe to me, but just wanting to check in case I'm missing something. I assume that if a response is subsequently set uncacheable then "beresp.do_gzip" will simply have no effect? Thanks Nigel From np.lists at sharphosting.uk Thu Apr 13 06:21:29 2017 From: np.lists at sharphosting.uk (Nigel Peck) Date: Thu, 13 Apr 2017 01:21:29 -0500 Subject: HIT after PURGE & Restart Message-ID: <15d0f3e5-ae33-5c18-2ae5-d0c3c5e6af8c@sharphosting.uk> I have a strange problem, where it seems that purged objects are not being purged immediately, so the restarted request hits the object that was supposed to be purged, and then the next request for that object misses. Here are the relevant fields from a set of requests for the same object showing the sequence. That second restarted request should not be getting a hit, since it is a restart from a purge of that same object. This is CentOS 7/Varnish 4.0.4. No varying except on Accept-Encoding. Very little VCL. All cookies are removed in VCL_recv. TTL is set to 7 days in vcl_backend_response on anything that comes back with a 200 or 304. Any input greatly appreciated. It only seems to happen on images.
-- My VCL_purge is: sub vcl_purge { set req.method = "GET"; return (restart); } In VCL_recv I have: if (req.method == "PURGE") { if (!client.ip ~ purgers) { return (synth(405, "Purging not allowed for " + client.ip)); } return (purge); } -- * << Request >> 100892 - Begin req 100891 rxreq - ReqMethod PURGE - VCL_call RECV - VCL_return purge - VCL_call HASH - VCL_return lookup - VCL_call PURGE - ReqMethod GET - VCL_return restart - Timestamp Restart: 1492038073.166852 0.000092 0.000092 - Link req 100893 restart - End * << Request >> 100893 - Begin req 100892 restart - ReqMethod GET - VCL_call RECV - VCL_return hash - VCL_call HASH - VCL_return lookup - Hit 100603 - VCL_call HIT - VCL_return deliver - VCL_call DELIVER - VCL_return deliver - Timestamp Process: 1492038073.166894 0.000133 0.000041 - Debug "RES_MODE 2" - Timestamp Resp: 1492038073.166977 0.000216 0.000083 - Debug "XXX REF 2" - ReqAcct 275 0 275 347 4481 4828 - End * << Request >> 364403 - Begin req 364402 rxreq - ReqMethod GET - VCL_call RECV - VCL_return hash - VCL_call HASH - VCL_return lookup - Debug "XXXX MISS" - VCL_call MISS - VCL_return fetch - VCL_call DELIVER - RespHeader X-Cache: MISS - VCL_return deliver - Timestamp Process: 1492046821.365675 0.003335 0.000031 - Debug "RES_MODE 2" - Timestamp Resp: 1492046821.365740 0.003400 0.000065 - Debug "XXX REF 2" - ReqAcct 545 0 545 323 4481 4804 - End * << Request >> 364767 - Begin req 364766 rxreq - ReqMethod GET - VCL_call RECV - VCL_return hash - VCL_call HASH - VCL_return lookup - Hit 364404 - VCL_call HIT - VCL_return deliver - VCL_call DELIVER - RespHeader X-Cache: HIT - VCL_return deliver - Timestamp Process: 1492056635.579030 0.000051 0.000051 - Debug "RES_MODE 2" - Timestamp Resp: 1492056635.579060 0.000081 0.000031 - Debug "XXX REF 2" - ReqAcct 487 0 487 349 4481 4830 - End From guillaume at varnish-software.com Thu Apr 13 06:44:20 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Thu, 13 Apr 2017 08:44:20 +0200 Subject: 
Safety of setting "beresp.do_gzip" in vcl_backend_response In-Reply-To: <81deafe6-7a08-0a2b-529d-f16dcca88777@sharphosting.uk> References: <81deafe6-7a08-0a2b-529d-f16dcca88777@sharphosting.uk> Message-ID: You are right, subsequent requests will just be passed to the backend, so no gzip manipulation/processing will occur. -- Guillaume Quintard On Thu, Apr 13, 2017 at 12:49 AM, Nigel Peck wrote: > > Hi, > > I am implementing compression in vcl_backend_response, by setting > "beresp.do_gzip" on appropriate Content-Type's. I am wondering if it is > safe to do this even on responses that may subsequently get set as > uncacheable by later code? > > It seems safe to me, but just wanting to check in case I'm missing > something. I assume that if a response is subsequently set uncacheable then > "beresp.do_gzip" will simply have no affect? > > Thanks > Nigel > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Thu Apr 13 06:48:12 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Thu, 13 Apr 2017 08:48:12 +0200 Subject: HIT after PURGE & Restart In-Reply-To: <15d0f3e5-ae33-5c18-2ae5-d0c3c5e6af8c@sharphosting.uk> References: <15d0f3e5-ae33-5c18-2ae5-d0c3c5e6af8c@sharphosting.uk> Message-ID: Is there any chance that: - someone requested the object between the purge and the subsequent hit? - you re-processed the request again, changing the cache (non-idempotent URL rewrite, maybe?) -- Guillaume Quintard On Thu, Apr 13, 2017 at 8:21 AM, Nigel Peck wrote: > > I have a strange problem, where it seems that purged objects are not being > purged immediately, so the restarted request hits the object that was > supposed to be purged, and then the next request for that object misses. 
> Here are the relevant fields from a set of requests for the same object > showing the sequence. That second restarted request should not be getting a > hit, since it is a restart from a purge of that same object. > > This is CentOS 7/Varnish 4.0.4. No varying except on Accept-Encoding. Very > little VCL. All cookies are removed in VCL_recv. TTL is set to 7 days in > vcl_backend_response on anything that comes back with a 200 or 304. > > Any input greatly appreciated. It only seems to happen on images. > > -- > > My VCL_purge is: > > sub vcl_purge { > set req.method = "GET"; > return (restart); > } > > In VCL_recv I have: > > if (req.method == "PURGE") { > if (!client.ip ~ purgers) { > return (synth(405, "Purging not allowed for " + client.ip)); > } > return (purge); > } > > -- > > * << Request >> 100892 > - Begin req 100891 rxreq > - ReqMethod PURGE > - VCL_call RECV > - VCL_return purge > - VCL_call HASH > - VCL_return lookup > - VCL_call PURGE > - ReqMethod GET > - VCL_return restart > - Timestamp Restart: 1492038073.166852 0.000092 0.000092 > - Link req 100893 restart > - End > > * << Request >> 100893 > - Begin req 100892 restart > - ReqMethod GET > - VCL_call RECV > - VCL_return hash > - VCL_call HASH > - VCL_return lookup > - Hit 100603 > - VCL_call HIT > - VCL_return deliver > - VCL_call DELIVER > - VCL_return deliver > - Timestamp Process: 1492038073.166894 0.000133 0.000041 > - Debug "RES_MODE 2" > - Timestamp Resp: 1492038073.166977 0.000216 0.000083 > - Debug "XXX REF 2" > - ReqAcct 275 0 275 347 4481 4828 > - End > > * << Request >> 364403 > - Begin req 364402 rxreq > - ReqMethod GET > - VCL_call RECV > - VCL_return hash > - VCL_call HASH > - VCL_return lookup > - Debug "XXXX MISS" > - VCL_call MISS > - VCL_return fetch > - VCL_call DELIVER > - RespHeader X-Cache: MISS > - VCL_return deliver > - Timestamp Process: 1492046821.365675 0.003335 0.000031 > - Debug "RES_MODE 2" > - Timestamp Resp: 1492046821.365740 0.003400 0.000065 > - Debug "XXX REF 2" > 
- ReqAcct 545 0 545 323 4481 4804 > - End > > * << Request >> 364767 > - Begin req 364766 rxreq > - ReqMethod GET > - VCL_call RECV > - VCL_return hash > - VCL_call HASH > - VCL_return lookup > - Hit 364404 > - VCL_call HIT > - VCL_return deliver > - VCL_call DELIVER > - RespHeader X-Cache: HIT > - VCL_return deliver > - Timestamp Process: 1492056635.579030 0.000051 0.000051 > - Debug "RES_MODE 2" > - Timestamp Resp: 1492056635.579060 0.000081 0.000031 > - Debug "XXX REF 2" > - ReqAcct 487 0 487 349 4481 4830 > - End > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Thu Apr 13 09:27:49 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Thu, 13 Apr 2017 11:27:49 +0200 Subject: Safety of setting "beresp.do_gzip" in vcl_backend_response In-Reply-To: References: <81deafe6-7a08-0a2b-529d-f16dcca88777@sharphosting.uk> Message-ID: On Thu, Apr 13, 2017 at 8:44 AM, Guillaume Quintard wrote: > You are right, subsequent requests will just be passed to the backend, so no > gzip manipulation/processing will occur. I had no idea [1] so I wrote a test case [2] to clear up my doubts: varnishtest "uncacheable gzip" server s1 { rxreq txresp -bodylen 100 } -start varnish v1 -vcl+backend { sub vcl_backend_response { set beresp.do_gzip = true; set beresp.uncacheable = true; return (deliver); } } -start client c1 { txreq rxresp } -run varnish v1 -expect n_gzip == 1 varnish v1 -expect n_gunzip == 1 Despite the fact that the response is not cached, it is actually gzipped, because in all cases backend responses are buffered through storage (in this case Transient). 
It means that for clients that don't advertise gzip support like in this example, on passed transactions you will effectively waste cycles on doing both on-the-fly gzip and gunzip for a single client transaction. That being said, it might be worth it if you have a high rate of non-cacheable contents, but suitable for compression: less transient storage consumption. I'd say it's a trade off between CPU and memory, depending on what you wish to preserve you can decide how to go about that. You can even do on-the-fly gzip on passed transactions only if the client supports it and the backend doesn't, so that you save storage and bandwidth, at the expense of CPU time you'd have consumed on the client side if you wanted to save bandwidth anyway. The only caveat I see is the handling of the built-in VCL: > I am wondering if it is safe to do this even on responses that may > subsequently get set as uncacheable by later code? If you let your VCL flow through the built-in rules, then you have no way to cancel the do_gzip if the response is marked as uncacheable. Dridi [1] well I had an idea that turned out to be correct, but wasn't sure [2] tested only with 5.0, but I'm convinced it is stable behavior for 4.0+ From guillaume at varnish-software.com Thu Apr 13 09:30:54 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Thu, 13 Apr 2017 11:30:54 +0200 Subject: Safety of setting "beresp.do_gzip" in vcl_backend_response In-Reply-To: References: <81deafe6-7a08-0a2b-529d-f16dcca88777@sharphosting.uk> Message-ID: Oooooooh, thanks Dridi for checking, I was wrong. -- Guillaume Quintard On Thu, Apr 13, 2017 at 11:27 AM, Dridi Boukelmoune wrote: > On Thu, Apr 13, 2017 at 8:44 AM, Guillaume Quintard > wrote: > > You are right, subsequent requests will just be passed to the backend, > so no > > gzip manipulation/processing will occur. 
> > I had no idea [1] so I wrote a test case [2] to clear up my doubts: > > varnishtest "uncacheable gzip" > > server s1 { > rxreq > txresp -bodylen 100 > } -start > > varnish v1 -vcl+backend { > sub vcl_backend_response { > set beresp.do_gzip = true; > set beresp.uncacheable = true; > return (deliver); > } > } -start > > client c1 { > txreq > rxresp > } -run > > varnish v1 -expect n_gzip == 1 > varnish v1 -expect n_gunzip == 1 > > Despite the fact that the response is not cached, it is actually > gzipped, because in all cases backend responses are buffered through > storage (in this case Transient). It means that for clients that don't > advertise gzip support like in this example, on passed transactions > you will effectively waste cycles on doing both on-the-fly gzip and > gunzip for a single client transaction. > > That being said, it might be worth it if you have a high rate of > non-cacheable contents, but suitable for compression: less transient > storage consumption. I'd say it's a trade off between CPU and memory, > depending on what you wish to preserve you can decide how to go about > that. > > You can even do on-the-fly gzip on passed transactions only if the > client supports it and the backend doesn't, so that you save storage > and bandwidth, at the expense of CPU time you'd have consumed on the > client side if you wanted to save bandwidth anyway. > > The only caveat I see is the handling of the built-in VCL: > > > I am wondering if it is safe to do this even on responses that may > > subsequently get set as uncacheable by later code? > > If you let your VCL flow through the built-in rules, then you have no > way to cancel the do_gzip if the response is marked as uncacheable. > > Dridi > > [1] well I had an idea that turned out to be correct, but wasn't sure > [2] tested only with 5.0, but I'm convinced it is stable behavior for 4.0+ > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dridi at varni.sh Thu Apr 13 09:36:44 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Thu, 13 Apr 2017 11:36:44 +0200 Subject: Safety of setting "beresp.do_gzip" in vcl_backend_response In-Reply-To: References: <81deafe6-7a08-0a2b-529d-f16dcca88777@sharphosting.uk> Message-ID: On Thu, Apr 13, 2017 at 11:30 AM, Guillaume Quintard wrote: > Oooooooh, thanks Dridi for checking, I was wrong. You were right up to "backend" :) From dridi at varni.sh Thu Apr 13 12:45:15 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Thu, 13 Apr 2017 14:45:15 +0200 Subject: HIT after PURGE & Restart In-Reply-To: References: <15d0f3e5-ae33-5c18-2ae5-d0c3c5e6af8c@sharphosting.uk> Message-ID: On Thu, Apr 13, 2017 at 8:48 AM, Guillaume Quintard wrote: > Is there any chance that: > - someone requested the object between the purge and the subsequent hit? > - you re-processed the request again, changing the cache (non-idempotent URL > rewrite, maybe?) At first glance things look OK on 4.0, 4.1, 5.0 and 5.1: varnishtest "purge+restart should miss" server s1 { rxreq txresp -body first rxreq txresp -body second } -start varnish v1 -vcl+backend { sub vcl_recv { if (req.method == "PURGE") { return (purge); } } sub vcl_purge { set req.method = "GET"; return (restart); } } -start client c1 { txreq rxresp expect resp.body == first txreq -req PURGE rxresp expect resp.body == second } -run I will try a more involved test later on. Dridi From np.lists at sharphosting.uk Thu Apr 13 20:45:24 2017 From: np.lists at sharphosting.uk (Nigel Peck) Date: Thu, 13 Apr 2017 15:45:24 -0500 Subject: Safety of setting "beresp.do_gzip" in vcl_backend_response In-Reply-To: References: <81deafe6-7a08-0a2b-529d-f16dcca88777@sharphosting.uk> Message-ID: Thanks for this, great to have the detailed info. 
So it looks like the most efficient solution is going to be to only "do_gzip" uncacheable responses if the client supports it, which means also implementing (and modifying) the builtin code in my VCL, and not flowing through to it. Got it, thanks. Nigel On 13/04/2017 04:27, Dridi Boukelmoune wrote: > On Thu, Apr 13, 2017 at 8:44 AM, Guillaume Quintard > wrote: >> You are right, subsequent requests will just be passed to the backend, so no >> gzip manipulation/processing will occur. > > I had no idea [1] so I wrote a test case [2] to clear up my doubts: > > varnishtest "uncacheable gzip" > > server s1 { > rxreq > txresp -bodylen 100 > } -start > > varnish v1 -vcl+backend { > sub vcl_backend_response { > set beresp.do_gzip = true; > set beresp.uncacheable = true; > return (deliver); > } > } -start > > client c1 { > txreq > rxresp > } -run > > varnish v1 -expect n_gzip == 1 > varnish v1 -expect n_gunzip == 1 > > Despite the fact that the response is not cached, it is actually > gzipped, because in all cases backend responses are buffered through > storage (in this case Transient). It means that for clients that don't > advertise gzip support like in this example, on passed transactions > you will effectively waste cycles on doing both on-the-fly gzip and > gunzip for a single client transaction. > > That being said, it might be worth it if you have a high rate of > non-cacheable contents, but suitable for compression: less transient > storage consumption. I'd say it's a trade off between CPU and memory, > depending on what you wish to preserve you can decide how to go about > that. > > You can even do on-the-fly gzip on passed transactions only if the > client supports it and the backend doesn't, so that you save storage > and bandwidth, at the expense of CPU time you'd have consumed on the > client side if you wanted to save bandwidth anyway. 
> > The only caveat I see is the handling of the built-in VCL: > >> I am wondering if it is safe to do this even on responses that may >> subsequently get set as uncacheable by later code? > > If you let your VCL flow through the built-in rules, then you have no > way to cancel the do_gzip if the response is marked as uncacheable. > > Dridi > > [1] well I had an idea that turned out to be correct, but wasn't sure > [2] tested only with 5.0, but I'm convinced it is stable behavior for 4.0+ > From np.lists at sharphosting.uk Thu Apr 13 21:26:59 2017 From: np.lists at sharphosting.uk (Nigel Peck) Date: Thu, 13 Apr 2017 16:26:59 -0500 Subject: HIT after PURGE & Restart In-Reply-To: References: <15d0f3e5-ae33-5c18-2ae5-d0c3c5e6af8c@sharphosting.uk> Message-ID: Thanks for this, really helpful to have the assistance. There's no chance that the object was requested between the purge and hit, because those log entries are from a simple varnishlog query for the specific URI. varnishlog -d -q "ReqURL eq '/media/images/logo-glyph-2x.png'" So it would be shown in the log. I don't think there's anything in my VCL that is causing this. But I'm including it all below in case and to help in diagnosing this. Regarding the test, Dridi, I wouldn't expect that to fail. This is not happening in the majority of cases. But what is happening is that I'm monitoring the log for any misses, and noticed that there are unexplainable misses sometimes. I have a script that issues a PURGE request for every URI on the site, with TTL being set to 7 days, so nothing should be getting missed, yet sometimes things are. After a few days looking at these misses from various angles, to try and work out what's going on, the description in my original message *seems* to be what is happening. There is no Vary issue since nothing is varied on, and all cookies are removed in VCL_recv. They are definitely being cached with the 7d TTL. So there should be no need for a miss. 
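One way to keep watching for this pattern is to scan a saved `varnishlog` text dump for restarts that still HIT right after a PURGE. A rough sketch follows; the line formats (`VCL_call`, `Link ... restart`) are taken from the excerpts quoted in this thread, but the parsing is ad hoc and the sample transcript is abridged:

```python
# Flag any request created by a purge restart that still gets a cache HIT,
# as in the log excerpts quoted in this thread (sample is abridged).
sample = """\
* << Request >> 100892
- ReqMethod PURGE
- VCL_call PURGE
- VCL_return restart
- Link req 100893 restart
* << Request >> 100893
- Begin req 100892 restart
- VCL_call HIT
"""

def purge_restart_hits(log_text):
    restarted = set()   # XIDs spawned by a restart (e.g. after a purge)
    current = None      # XID of the transaction currently being read
    suspicious = []
    for line in log_text.splitlines():
        parts = line.split()
        if line.startswith("*"):
            current = parts[-1]
        elif parts[:2] == ["-", "Link"] and "restart" in parts:
            restarted.add(parts[3])
        elif parts[:3] == ["-", "VCL_call", "HIT"] and current in restarted:
            suspicious.append(current)
    return suspicious

print(purge_restart_hits(sample))  # → ['100893']
```

Run against a full dump (e.g. `varnishlog -d > dump.txt`), any XID it prints is a restarted request that hit the cache, which after a purge of the same URL should not happen.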
The real issue for me is the miss on something that has been purged and
re-cached a short time before, but the purge and subsequent HIT seems to
be related to the problem.

I just checked the log again and there are no fresh misses since
yesterday, which confirms the theory that this is related to the PURGE
and otherwise all ok.

I also included below a second example of the same thing from yesterday.
It's not a high traffic site, so I may need to spend some more time
gathering info, but I think I have enough to present to you, hence the
query. Thanks again.

=======
VCL
(very simple as mentioned)
=======

vcl 4.0;
import std;

# Default backend definition. Set this to point to your content server.
backend default {
    .host = "x.x.x.x";
    .port = "80";
}

# Access list for purging, local only
acl purgers {
    "127.0.0.1";
    "x.x.x.x";
}

# Process any "PURGE" requests converting
# them to GET and restarting
sub vcl_purge {
    set req.method = "GET";
    return (restart);
}

sub vcl_synth {
    # Handle 301 redirects, taking reason as the URL
    # and then replacing it with the standard reason
    # Recommended at:
    # https://varnish-cache.org/tips/vcl/redirect.html
    if (resp.status == 301) {
        set resp.http.location = resp.reason;
        set resp.reason = "Moved Permanently";
        return (deliver);
    }
}

sub vcl_recv {
    # Server_Name was here
    if (req.restarts == 0) {
        set req.http.X-Processed-By = "Server_Name";
    }

    # allow PURGE from localhost and x.x.x.x
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "Purging not allowed for " + client.ip));
        }
        return (purge);
    }

    # Forward client's IP to the backend
    if (req.restarts == 0) {
        if (req.http.X-Real-IP) {
            # Do nothing, we already have all we need recorded
        } elsif (req.http.X-Forwarded-For) {
            set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }

    # Redirect non-HTTPS to HTTPS
    # Identified by the fact it does not have the X-Forwarded-Port header
    if (req.http.X-Forwarded-Port != "443") {
        return (synth(301, "https://www.example.com" + req.url));
    }

    # Unset all cookies
    unset req.http.Cookie;
}

sub vcl_backend_response {
    # Server_Name was here
    set beresp.http.X-Processed-By = "Server_Name";

    # Don't cache 404 responses
    if ( beresp.status == 404 ) {
        set beresp.ttl = 120s;
        set beresp.uncacheable = true;
        return (deliver);
    }

    # Compress appropriate responses
    if (beresp.http.content-type ~ "\b((text/(html|plain|css|javascript|xml|xsl))|(application/(javascript|xml|xhtml\+xml)))\b") {
        set beresp.do_gzip = true;
    }

    # Set long TTL and grace time for 200 and 304 responses
    if ( beresp.status == 200 || beresp.status == 304 ) {
        # Allow stale content, in case the backend goes down
        set beresp.grace = 6h;

        # This is how long Varnish will keep cached content
        set beresp.ttl = 7d;
    }
}

sub vcl_deliver {
    # Send special headers that indicate the cache status of each response
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
        set resp.http.X-Cache-Hits = obj.hits;
    } else {
        set resp.http.X-Cache = "MISS";
    }
    return (deliver);
}

=======
Second Example
=======

*   << Request  >> 230660
-   Begin          req 230659 rxreq
-   Timestamp      Start: 1492038071.998363 0.000000 0.000000
-   Timestamp      Req: 1492038071.998363 0.000000 0.000000
-   ReqMethod      PURGE
-   VCL_call       RECV
-   VCL_return     purge
-   VCL_call       HASH
-   VCL_return     lookup
-   VCL_call       PURGE
-   ReqMethod      GET
-   VCL_return     restart
-   Timestamp      Restart: 1492038071.998436 0.000074 0.000074
-   Link           req 230661 restart
-   End

*   << Request  >> 230661
-   Begin          req 230660 restart
-   Timestamp      Start: 1492038071.998436 0.000074 0.000000
-   ReqMethod      GET
-   VCL_call       RECV
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   Hit            167648
-   VCL_call       HIT
-   VCL_return     deliver
-   VCL_call       DELIVER
-   RespHeader     X-Cache: HIT
-   RespHeader     X-Cache-Hits: 1
-   VCL_return     deliver
-   Timestamp      Process: 1492038071.998474 0.000112 0.000038
-   Debug          "RES_MODE 2"
-   Timestamp      Resp: 1492038071.998528 0.000166 0.000054
-   Debug          "XXX REF 2"
-   ReqAcct        269 0 269 346 1538 1884
-   End

*   << Request  >> 364940
-   Begin          req 364939 rxreq
-   Timestamp      Start: 1492060849.803654 0.000000 0.000000
-   Timestamp      Req: 1492060849.803654 0.000000 0.000000
-   ReqMethod      GET
-   VCL_call       RECV
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   Debug          "XXXX MISS"
-   VCL_call       MISS
-   VCL_return     fetch
-   Link           bereq 364941 fetch
-   Timestamp      Fetch: 1492060849.805443 0.001789 0.001789
-   VCL_call       DELIVER
-   RespHeader     X-Cache: MISS
-   VCL_return     deliver
-   Timestamp      Process: 1492060849.805531 0.001876 0.000087
-   Debug          "RES_MODE 2"
-   RespHeader     Connection: close
-   RespHeader     Accept-Ranges: bytes
-   Timestamp      Resp: 1492060849.805594 0.001939 0.000063
-   Debug          "XXX REF 2"
-   ReqAcct        347 0 347 322 1538 1860
-   End

*   << Request  >> 364994
-   Begin          req 364993 rxreq
-   Timestamp      Start: 1492061706.708306 0.000000 0.000000
-   Timestamp      Req: 1492061706.708306 0.000000 0.000000
-   ReqMethod      GET
-   VCL_call       RECV
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   Hit            364941
-   VCL_call       HIT
-   VCL_return     deliver
-   VCL_call       DELIVER
-   RespHeader     X-Cache: HIT
-   RespHeader     X-Cache-Hits: 1
-   VCL_return     deliver
-   Timestamp      Process: 1492061706.708382 0.000076 0.000076
-   Debug          "RES_MODE 2"
-   RespHeader     Connection: close
-   RespHeader     Accept-Ranges: bytes
-   Timestamp      Resp: 1492061706.708417 0.000111 0.000035
-   Debug          "XXX REF 2"
-   ReqAcct        447 0 447 347 1538 1885
-   End

On 13/04/2017 07:45, Dridi Boukelmoune wrote:
> On Thu, Apr 13, 2017 at 8:48 AM, Guillaume Quintard wrote:
>> Is there any chance that:
>> - someone requested the object between the purge and the subsequent hit?
>> - you re-processed the request again, changing the cache (non-idempotent
>>   URL rewrite, maybe?)
>
> At first glance things look OK on 4.0, 4.1, 5.0 and 5.1:
>
> varnishtest "purge+restart should miss"
>
> server s1 {
>         rxreq
>         txresp -body first
>
>         rxreq
>         txresp -body second
> } -start
>
> varnish v1 -vcl+backend {
>         sub vcl_recv {
>                 if (req.method == "PURGE") {
>                         return (purge);
>                 }
>         }
>
>         sub vcl_purge {
>                 set req.method = "GET";
>                 return (restart);
>         }
> } -start
>
> client c1 {
>         txreq
>         rxresp
>         expect resp.body == first
>
>         txreq -req PURGE
>         rxresp
>         expect resp.body == second
> } -run
>
> I will try a more involved test later on.
>
> Dridi

From dridi at varni.sh Fri Apr 14 13:11:29 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Fri, 14 Apr 2017 15:11:29 +0200
Subject: HIT after PURGE & Restart
In-Reply-To: References: <15d0f3e5-ae33-5c18-2ae5-d0c3c5e6af8c@sharphosting.uk>
Message-ID: 

On Thu, Apr 13, 2017 at 11:26 PM, Nigel Peck wrote:
>
> Thanks for this, really helpful to have the assistance. There's no chance
> that the object was requested between the purge and hit, because those log
> entries are from a simple varnishlog query for the specific URI.
>
> varnishlog -d -q "ReqURL eq '/media/images/logo-glyph-2x.png'"
>
> So it would be shown in the log.

Focus on the URL might be a hint of something going on with the host
header. Since your VCL relies solely on the built-in for hashing,
please bear in mind that the host is part of the cache key. You can
enable Hash records in the log (man varnishd) or simply look at the
Host+URL combo.

> I don't think there's anything in my VCL that is causing this. But I'm
> including it all below in case and to help in diagnosing this.
>
> Regarding the test, Dridi, I wouldn't expect that to fail.

I went ahead and tried to tickle the purge further by purging a
resource while it's referenced by another transaction and couldn't get
a spurious hit. I needed some synchronization and since 4.0 is EOL and
it doesn't have barriers in varnishtest, I didn't feel like adapting
the test to the good old semas.
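For reference, the hashing Dridi refers to is the built-in vcl_hash, which applies whenever a loaded VCL declares no vcl_hash of its own; in Varnish 4.x it keys the cache on both the URL and the Host header:

```vcl
# Built-in vcl_hash in Varnish 4.x: the cache key is the URL plus the
# Host header (or the server IP when the request carries no Host).
sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (lookup);
}
```

So the same URL requested with two different Host headers yields two distinct cache objects, which can look like an unexplained miss when the log is queried by ReqURL alone.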
> This is not happening in the majority of cases. But what is happening
> is that I'm monitoring the log for any misses, and noticed that there
> are unexplainable misses sometimes. I have a script that issues a PURGE
> request for every URI on the site, with TTL being set to 7 days, so
> nothing should be getting missed, yet sometimes things are. After a few
> days looking at these misses from various angles, to try and work out
> what's going on, the description in my original message *seems* to be
> what is happening. There is no Vary issue since nothing is varied on,
> and all cookies are removed in vcl_recv. They are definitely being
> cached with the 7d TTL. So there should be no need for a miss.

Regarding unexpected misses, objects could be evicted forcefully to
make space during insertions. See n_lru_nuked (man varnish-counters).

> The real issue for me is the miss on something that has been purged and
> re-cached a short time before, but the purge and subsequent HIT seems to
> be related to the problem.

Yes, but with truncated logs, it's hard to tell further.

> I just checked the log again and there are no fresh misses since
> yesterday, which confirms the theory that this is related to the PURGE
> and otherwise all ok.
>
> I also included below a second example of the same thing from yesterday.
> It's not a high traffic site, so I may need to spend some more time
> gathering info, but I think I have enough to present to you, hence the
> query. Thanks again.

Please don't send more, we're no longer supporting 4.0 and I would
recommend an upgrade to 4.1, especially with VCL looking simple enough.

> =======
> VCL
> (very simple as mentioned)
> =======
>
> vcl 4.0;
> import std;
>
> # Default backend definition. Set this to point to your content server.
> backend default {
>     .host = "x.x.x.x";
>     .port = "80";
> }
>
> # Access list for purging, local only
> acl purgers {
>     "127.0.0.1";
>     "x.x.x.x";
> }
>
> # Process any "PURGE" requests converting
> # them to GET and restarting
> sub vcl_purge {
>     set req.method = "GET";
>     return (restart);
> }
>
> sub vcl_synth {
>     # Handle 301 redirects, taking reason as the URL
>     # and then replacing it with the standard reason
>     # Recommended at:
>     # https://varnish-cache.org/tips/vcl/redirect.html
>     if (resp.status == 301) {
>         set resp.http.location = resp.reason;
>         set resp.reason = "Moved Permanently";
>         return (deliver);
>     }
> }
>
> sub vcl_recv {
>     # Server_Name was here
>     if (req.restarts == 0) {
>         set req.http.X-Processed-By = "Server_Name";
>     }

You can use the server.identity variable instead of hard-coding it in
VCL (see man vcl).

>     # allow PURGE from localhost and x.x.x.x
>     if (req.method == "PURGE") {
>         if (!client.ip ~ purgers) {
>             return (synth(405, "Purging not allowed for " + client.ip));
>         }
>         return (purge);
>     }
>
>     # Forward client's IP to the backend
>     if (req.restarts == 0) {
>         if (req.http.X-Real-IP) {
>             # Do nothing, we already have all we need recorded
>         } elsif (req.http.X-Forwarded-For) {
>             set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
>         } else {
>             set req.http.X-Forwarded-For = client.ip;
>         }
>     }

This is Varnish 4.0, you don't need to update the XFF header in VCL.

>     # Redirect non-HTTPS to HTTPS
>     # Identified by the fact it does not have the X-Forwarded-Port header
>     if (req.http.X-Forwarded-Port != "443") {
>         return (synth(301, "https://www.example.com" + req.url));
>     }
>
>     # Unset all cookies
>     unset req.http.Cookie;
> }
>
> sub vcl_backend_response {
>
>     # Server_Name was here
>     set beresp.http.X-Processed-By = "Server_Name";

See comment above about the server.identity variable.

>     # Don't cache 404 responses
>     if ( beresp.status == 404 ) {
>         set beresp.ttl = 120s;
>         set beresp.uncacheable = true;
>         return (deliver);
>     }
>
>     # Compress appropriate responses
>     if (beresp.http.content-type ~ "\b((text/(html|plain|css|javascript|xml|xsl))|(application/(javascript|xml|xhtml\+xml)))\b") {
>         set beresp.do_gzip = true;
>     }
>
>     # Set long TTL and grace time for 200 and 304 responses
>     if ( beresp.status == 200 || beresp.status == 304 ) {
>
>         # Allow stale content, in case the backend goes down
>         set beresp.grace = 6h;
>
>         # This is how long Varnish will keep cached content
>         set beresp.ttl = 7d;
>     }
> }
>
> sub vcl_deliver {
>     # Send special headers that indicate the cache status of each response
>     if (obj.hits > 0) {
>         set resp.http.X-Cache = "HIT";
>         set resp.http.X-Cache-Hits = obj.hits;
>     } else {
>         set resp.http.X-Cache = "MISS";
>     }

You don't need a shiny HIT or MISS in the response. The X-Varnish
header will tell you that already: one id is a miss, two of them a hit.

Dridi

From dridi at varni.sh Fri Apr 14 14:18:14 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Fri, 14 Apr 2017 16:18:14 +0200
Subject: Safety of setting "beresp.do_gzip" in vcl_backend_response
In-Reply-To: References: <81deafe6-7a08-0a2b-529d-f16dcca88777@sharphosting.uk>
Message-ID: 

On Thu, Apr 13, 2017 at 10:45 PM, Nigel Peck wrote:
>
> Thanks for this, great to have the detailed info. So it looks like the
> most efficient solution is going to be to only "do_gzip" uncacheable
> responses if the client supports it, which means also implementing (and
> modifying) the builtin code in my VCL, and not flowing through to it.
> Got it, thanks.

Maybe I should start a section on gzip tuning in the docs.
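A sketch of the approach Nigel describes, assuming the client's original Accept-Encoding is stashed in a custom header in vcl_recv (X-Client-Gzip is an illustrative name, not a Varnish header, and the content-type test is simplified):

```vcl
sub vcl_recv {
    # Varnish normalizes Accept-Encoding before the backend fetch,
    # so remember whether this client can actually take gzip.
    if (req.http.Accept-Encoding ~ "gzip") {
        set req.http.X-Client-Gzip = "yes";
    }
}

sub vcl_backend_response {
    if (beresp.http.content-type ~ "^text/") {
        if (beresp.ttl > 0s) {
            # Cacheable: always store gzipped; Varnish gunzips on
            # delivery for clients without gzip support.
            set beresp.do_gzip = true;
        } elsif (bereq.http.X-Client-Gzip) {
            # Uncacheable (pass) response: only compress when this
            # particular client benefits from it.
            set beresp.do_gzip = true;
        }
    }
}
```

The request header is copied onto the backend request, which is why vcl_backend_response can read it as bereq.http.X-Client-Gzip.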
From np.lists at sharphosting.uk Sat Apr 15 01:50:38 2017
From: np.lists at sharphosting.uk (Nigel Peck)
Date: Fri, 14 Apr 2017 20:50:38 -0500
Subject: HIT after PURGE & Restart
In-Reply-To: References: <15d0f3e5-ae33-5c18-2ae5-d0c3c5e6af8c@sharphosting.uk>
Message-ID: <3FB2F885-04C4-4A18-97CF-E8CEF5077F4A@sharphosting.uk>

This is great, thanks Dridi for all the awesome tips!

Nigel

From leonfauster at googlemail.com Sat Apr 15 16:04:56 2017
From: leonfauster at googlemail.com (Leon Fauster)
Date: Sat, 15 Apr 2017 18:04:56 +0200
Subject: 5.1.2: debug code enabled by default?
Message-ID: 

I'm seeing some debug output while restarting varnish-5.1.2. Sure, such
things are enabled while compiling but are there some debug code enabled
by default that could reduce performance, especially for 5.1.2?

# cd / ; env -i /sbin/service varnish restart
Stopping Varnish Cache:                                    [  OK  ]
Starting Varnish Cache:
Debug: Platform: Linux,2.6.32-696.1.1.el6.x86_64,x86_64,-junix,-smalloc,-smalloc,-hcritbit
Debug: Child (18827) Started                               [  OK  ]

Just curious. Can't remember seeing this with varnish-4.

--
Thanks LF

From dridi at varni.sh Tue Apr 18 08:36:40 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Tue, 18 Apr 2017 10:36:40 +0200
Subject: 5.1.2: debug code enabled by default?
In-Reply-To: References: Message-ID: 

On Sat, Apr 15, 2017 at 6:04 PM, Leon Fauster wrote:
> I'm seeing some debug output while restarting varnish-5.1.2. Sure, such
> things are enabled while compiling but are there some debug code enabled
> by default that could reduce performance, especially for 5.1.2?
>
> # cd / ; env -i /sbin/service varnish restart

You don't need to cd and clear your environment according to the
service manual:

> service runs a System V init script in as predictable environment as
> possible, removing most environment variables and with current working
> directory set to /.

> Stopping Varnish Cache: [ OK ]
> Starting Varnish Cache:
> Debug: Platform: Linux,2.6.32-696.1.1.el6.x86_64,x86_64,-junix,-smalloc,-smalloc,-hcritbit
> Debug: Child (18827) Started [ OK ]
>
> Just curious. Can't remember seeing this with varnish-4.

It could be a consequence of initialization work done on varnishd.
There were changes made after discussions about github issues 2141 and
2217, where we now have 3 distinct modes of execution for varnishd:

1) -C to compile VCLs
2) -x to emit docs
3) normal execution (including daemon)

It could be the case that those statements usually seen in debug and
foreground execution are now visible too in daemon execution.

Dridi

From guillaume at varnish-software.com Tue Apr 18 09:56:48 2017
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Tue, 18 Apr 2017 11:56:48 +0200
Subject: A/B testing
In-Reply-To: References: Message-ID: 

Hi there,

Here's an article about this using VCS:
https://info.varnish-software.com/blog/live-ab-testing-varnish-and-vcs

If you don't use VCS, the logic is the same, but you'll need to
aggregate data from the whole cluster by yourself.

The idea is that each user gets a group, either A or B, and gets a
cookie to track this.
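In VCL, that assignment step might look like this minimal sketch, assuming vmod_std is available; the cookie name "abtest", the 50/50 split and the log tags are illustrative, not taken from the article:

```vcl
import std;

sub vcl_recv {
    if (req.http.Cookie !~ "abtest=") {
        # No group yet: assign one pseudo-randomly and remember it
        # for the rest of this request.
        if (std.random(0, 100) < 50) {
            set req.http.X-AB-Group = "A";
        } else {
            set req.http.X-AB-Group = "B";
        }
    } elsif (req.http.Cookie ~ "abtest=A") {
        set req.http.X-AB-Group = "A";
    } else {
        set req.http.X-AB-Group = "B";
    }
    # Count the access for this group; shows up as VCL_Log in varnishlog.
    std.log("Access" + req.http.X-AB-Group);
}

sub vcl_deliver {
    # Hand the group back to new clients so they stay in it.
    if (req.http.Cookie !~ "abtest=") {
        set resp.http.Set-Cookie = "abtest=" + req.http.X-AB-Group + "; Path=/";
    }
}
```

The VCL_Log records can then be counted per group with varnishlog to compute the ratios described below.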
Then, you change the request of the element you are testing according
to that group (for example, you change /button.gif to /buttonA.gif),
and you log this access (either "AccessA" or "AccessB").

Then, when a user triggers a conversion (the button is clicked, for
example), you log it (either "ConversionA" or "ConversionB").

You can then count the occurrences of each, and get the conversion rate
by dividing (number of ConversionX) by (number of AccessX). The group
with the highest ratio wins.

Logging can be done using std.log() in VCL and checking varnishlog, or
by setting a header and then using varnishncsa.

--
Guillaume Quintard

On Wed, Apr 12, 2017 at 9:32 AM, Pinakee BIswas wrote:
> Hi,
>
> We have been using Varnish as web accelerator for ecommerce site. I
> would like to know if it is possible to do A/B testing using Varnish.
> If so, would appreciate if you could share the steps or related
> documents.
>
> Thanks,
> Pinakee
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From leonfauster at googlemail.com Tue Apr 18 17:57:52 2017
From: leonfauster at googlemail.com (Leon Fauster)
Date: Tue, 18 Apr 2017 19:57:52 +0200
Subject: 5.1.2: debug code enabled by default?
In-Reply-To: References: Message-ID: <5C987496-A057-4994-BEBD-378B670C4305@googlemail.com>

On 18.04.2017 at 10:36, Dridi Boukelmoune wrote:
>
> On Sat, Apr 15, 2017 at 6:04 PM, Leon Fauster wrote:
>> I'm seeing some debug output while restarting varnish-5.1.2. Sure, such
>> things are enabled while compiling but are there some debug code enabled
>> by default that could reduce performance, especially for 5.1.2?
>>
>> # cd / ; env -i /sbin/service varnish restart
>
> You don't need to cd and clear your environment according to the service manual:

yep, old habit of executing init.d scripts directly persists stubbornly :-)

>> Stopping Varnish Cache: [ OK ]
>> Starting Varnish Cache:
>> Debug: Platform: Linux,2.6.32-696.1.1.el6.x86_64,x86_64,-junix,-smalloc,-smalloc,-hcritbit
>> Debug: Child (18827) Started [ OK ]
>>
>> Just curious. Can't remember seeing this with varnish-4.
>
> It could be a consequence of initialization work done on varnishd.
> There were changes made after discussions about github issues 2141 and
> 2217, where we now have 3 distinct modes of execution for varnishd:
>
> 1) -C to compile VCLs
> 2) -x to emit docs
> 3) normal execution (including daemon)
>
> It could be the case that those statements usually seen in debug and
> foreground execution are now visible too in daemon execution.

thanks for the insights. I conclude that I don't have to worry about
that, right?

--
LF

From dridi at varni.sh Tue Apr 18 18:47:22 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Tue, 18 Apr 2017 20:47:22 +0200
Subject: 5.1.2: debug code enabled by default?
In-Reply-To: <5C987496-A057-4994-BEBD-378B670C4305@googlemail.com>
References: <5C987496-A057-4994-BEBD-378B670C4305@googlemail.com>
Message-ID: 

> thanks for the insights. I conclude that I don't have to worry about
> that, right?

Yes, no need to worry. I should have a look at that, but no hurry since
it's not harmful.

Dridi

From jonathan.huot at thomsonreuters.com Thu Apr 20 07:57:01 2017
From: jonathan.huot at thomsonreuters.com (jonathan.huot at thomsonreuters.com)
Date: Thu, 20 Apr 2017 07:57:01 +0000
Subject: Websocket's better support
Message-ID: <8E656B642592B942AE317E2AFAE0ABA18BFDF587@C111MFBLMBX07.ERF.thomson.com>

Hi Varnish dev & users,

Websocket is (still) not dying, and for us, it seems we have to handle
more and more of this kind of traffic.
That's why I would like to open a discussion and see how we can enhance
the websocket support in Varnish.

Currently, the implementation is done through pipe, like this:

cli (Upgrade) -> recv -> pipe <--> backend

So basically, we're putting the ball into the backend's hands and
nothing else. It has several disadvantages; one of them is that we
cannot interact with the handshake's response, which is still HTTP/1.1.
E.g. adding a Set-Cookie for stickiness is not possible.
E.g. testing if the status code is 101 is not possible.

So, first problem, first question: do you think it is possible to allow
opening the pipe after vcl_backend_response? The flow would be:

cli (Upgrade) -> recv -> pass -> b_fetch -> b_response -> pipe <---> backend

Only this extra step would be a huge improvement of what we can do on
websocket connections and would be very beneficial.

Then, a bonus question, because it probably requires much more time:
I was wondering if, later, the websocket protocol could be integrated
into the core (e.g. similarly to HTTP/2?), to get the benefits of
metrics, params and logs. Because, no, websocket messages are not just
"bytes thru a pipe" :-)

Thanks for your answers,

--
Jonathan Huot
Thomson Reuters

________________________________

This e-mail is for the sole use of the intended recipient and contains
information that may be privileged and/or confidential. If you are not
an intended recipient, please notify the sender by return e-mail and
delete this e-mail and any attachments. Certain required legal entity
disclosures can be accessed on our website.
From dridi at varni.sh Thu Apr 20 15:17:34 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Thu, 20 Apr 2017 17:17:34 +0200
Subject: Websocket's better support
In-Reply-To: <8E656B642592B942AE317E2AFAE0ABA18BFDF587@C111MFBLMBX07.ERF.thomson.com>
References: <8E656B642592B942AE317E2AFAE0ABA18BFDF587@C111MFBLMBX07.ERF.thomson.com>
Message-ID: 

On Thu, Apr 20, 2017 at 9:57 AM, wrote:
> Hi Varnish dev & users,
>
> Websocket is (still) not dying, and for us, it seems we have to handle
> more and more this kind of traffic.
> It's why I would like to open a discussion and see how we can enhance
> the websocket support in Varnish.
>
> Currently, the implementation is done thru pipe like this :
>
> cli (Upgrade) -> recv -> pipe <--> backend
>
> So basically, we're putting the ball into backend's hands and nothing else.
> It has several disadvantages; one of them is that we cannot interact
> with the handshake's response which is still in HTTP/1.1.
> E.g. adding a set-cookie for stickiness is not possible.
> E.g. testing if status code is 101 is not possible
>
> So, first problem, first question: do you think it is possible to allow
> opening the pipe after vcl_backend_response ?
> The flow will be:
>
> cli (Upgrade) -> recv -> pass -> b_fetch -> b_response -> pipe <---> backend

It's not, but it's not the first time we discussed it:

https://github.com/varnishcache/varnish-cache/wiki/VIP8%3A-No-pipe-in-builtin.vcl-in-V5#discussion

See around 13:41. You can emulate an "Expect: 100-continue" dance with
the backend to at least make sure an upgrade would be allowed before
actually piping:

varnishtest "websocket example"

# Varnish won't let you use 100-continue so we need an ad-hoc solution
# to ask the backend whether websocket is acceptable beforehand.
server s1 {
        rxreq
        expect req.http.Connection == upgrade
        expect req.http.Upgrade == websocket
        expect req.http.Expect == 200-ok
        txresp

        rxreq
        expect req.http.Connection == upgrade
        expect req.http.Upgrade == websocket
        expect req.http.Expect == <undef>
        txresp -status 101 \
            -hdr "Connection: upgrade" \
            -hdr "Upgrade: websocket"
        send "pretend we do websocket here"
} -start

varnish v1 -vcl+backend {
        sub vcl_recv {
                if (req.restarts == 0) {
                        unset req.http.Pipe;
                        unset req.http.X-Upgrade;
                }
                if (req.http.Pipe) {
                        set req.http.Upgrade = req.http.X-Upgrade;
                        return (pipe);
                } elsif (req.http.Connection == "upgrade") {
                        set req.http.X-Upgrade = req.http.Upgrade;
                        return (pass);
                }
        }

        sub vcl_backend_fetch {
                if (bereq.http.X-Upgrade) {
                        set bereq.http.Connection = "upgrade";
                        set bereq.http.Upgrade = bereq.http.X-Upgrade;
                        set bereq.http.Expect = "200-ok";
                        unset bereq.http.X-Upgrade;
                }
        }

        sub vcl_backend_response {
                if (bereq.http.Upgrade && beresp.status == 200) {
                        set beresp.http.Pipe = bereq.http.Upgrade;
                        return (deliver);
                }
        }

        sub vcl_deliver {
                if (resp.http.Pipe) {
                        set req.http.Pipe = resp.http.Pipe;
                        return (restart);
                }
        }
} -start

client c1 {
        txreq -hdr "Connection: upgrade" -hdr "Upgrade: websocket"
        rxresp
        expect resp.status == 101

        # receive the fake websocket traffic
        recv 28

        # this will fail because there isn't anything left to read
        # recv 1
} -run

This example is over-simplified, it only shows that you can act upon
websocket requests. That doesn't solve the fact that you don't have
access to the beresp once you return pipe.

> Only this extra step would be a huge improvement of what we can do on
> websocket connections and will be very beneficial.
>
> Then, a bonus question, cuz it requires probably much more time:
> I was wondering if later, websocket protocol can be integrated to the
> core (e.g. similarly to HTTP2?), to have benefits of metrics, params
> and logs.
> Because, no, websocket messages are not just "bytes thru a pipe" :-)

Well, websocket is not HTTP, unlike... HTTP/2?
I don't think it would fit nicely in VCL, applications do what they
want on top of the session. Unlike HTTP, which specifies what happens
on the session, we couldn't make more assumptions than bytes passing
through.

Dridi

From jonathan.huot at thomsonreuters.com Tue Apr 25 14:18:06 2017
From: jonathan.huot at thomsonreuters.com (jonathan.huot at thomsonreuters.com)
Date: Tue, 25 Apr 2017 14:18:06 +0000
Subject: Websocket's better support
In-Reply-To: References: <8E656B642592B942AE317E2AFAE0ABA18BFDF587@C111MFBLMBX07.ERF.thomson.com>
Message-ID: <8E656B642592B942AE317E2AFAE0ABA18BFE9430@C111MFBLMBX07.ERF.thomson.com>

On Thu, Apr 20, 2017 at 17:18 AM, wrote:
> On Thu, Apr 20, 2017 at 9:57 AM, wrote:
> > Websocket is (still) not dying, and for us, it seems we have to
> > handle more and more this kind of traffic.
> > It's why I would like to open a discussion and see how we can enhance
> > the websocket support in Varnish.
> >
> > Currently, the implementation is done thru pipe like this :
> >
> > cli (Upgrade) -> recv -> pipe <--> backend
> >
> > So basically, we're putting the ball into backend's hands and nothing else.
> > It has several disadvantages; one of them is that we cannot interact
> > with the handshake's response which is still in HTTP/1.1.
> > E.g. adding a set-cookie for stickiness is not possible.
> > E.g. testing if status code is 101 is not possible
> >
> > So, first problem, first question: do you think it is possible to
> > allow opening the pipe after vcl_backend_response ?
> > The flow will be:
> >
> > cli (Upgrade) -> recv -> pass -> b_fetch -> b_response -> pipe <---> backend
>
> It's not, but it's not the first time we discussed it:
> https://github.com/varnishcache/varnish-cache/wiki/VIP8%3A-No-pipe-in-builtin.vcl-in-V5#discussion

Thanks for sharing this Varnish GitHub "wiki", I was not aware of it.

So, if I'm collecting information from VIP6 and VIP8, I can summarize with:

VIP8:
13:41 < dridi_> that's because we handle pipe backwards, we should only
pipe when there's a 101 switching protocols, after a beresp, not a req
13:42 < phk> dridi_, relevant point there.

VIP6:
(..) it's the backend that announces the protocol switch, so getting a
transition from vcl_recv to vcl_pipe makes little sense (..) the pipe
transition should belong to the vcl_deliver (..)

It looks like we have a consensus to progress in this direction.

I'm working on an implementation for myself, but I'm trying to see how
it can benefit the most people.

First, I'm not going to change any behavior of pipe in vcl_recv.
Then, for websockets, I think the best place to call pipe is in
vcl_deliver.
Then, I will relax Varnish to accept 1xx codes (those responses are
assumed to have no body).
Then, either we pipe in cnt_transmit after sending back resp to the
client, or we create a new VCL function, vcl_deliver_pipe, which is the
equivalent of vcl_pipe; however, I don't see the benefit since we
already have access to req/resp in vcl_deliver.

Then, technical questions arise:
- do we update the director API with a new hook http1deliverpipe, or do
  we update the current http1pipe to be used by both?
- about stats/logs, do we mix them or do we keep them separated?

Do you think this is a doable plan, useful for the overall community?
Or do I continue on a fork without trying to make it merge-able?
Thanks in advance,
Regards
--
Jonathan Huot
Thomson Reuters

From dridi at varni.sh Tue Apr 25 14:52:42 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Tue, 25 Apr 2017 16:52:42 +0200
Subject: Websocket's better support
In-Reply-To: <8E656B642592B942AE317E2AFAE0ABA18BFE9430@C111MFBLMBX07.ERF.thomson.com>
References: <8E656B642592B942AE317E2AFAE0ABA18BFDF587@C111MFBLMBX07.ERF.thomson.com> <8E656B642592B942AE317E2AFAE0ABA18BFE9430@C111MFBLMBX07.ERF.thomson.com>
Message-ID:

> It looks like we have a consensus to progress in this direction.

I wouldn't go as far as saying there's a consensus. If you are interested in discussing this as a possibility you should attend the next bugwash on IRC (next Monday at 13h if you are in France, as I suppose). I put it on the agenda because of this thread:

https://github.com/varnishcache/varnish-cache/issues/2318

> I'm working on an implementation for myself, but I'm trying to see how it can benefit the most people.
>
> First, I'm not going to change any behavior of pipe in vcl_recv.
> Then, for websockets, I think the best place to call pipe is in vcl_deliver.
> Then, I will relax Varnish to accept 1xx codes (those responses are assumed to have no body).
> Then, either we pipe in cnt_transmit after sending the resp back to the client, or we create a new VCL function vcl_deliver_pipe as the equivalent of vcl_pipe; however, I don't see the benefit since we already have access to req/resp in vcl_deliver.
>
> Then, technical questions arise:
> - do we update the director API with a new hook http1deliverpipe, or do we update the current http1pipe to be used by both?
> - about stats/logs, do we mix them or keep them separated?

You should share your intentions during the next bugwash if we manage to cover that point. I expect this bugwash to be a long one (protip: lunch at noon).

> Do you think this is a doable plan, useful for the overall community? Or do I continue on a fork without trying to make it merge-able?
I don't reckon doing this on your own would be easy to merge, especially for such a breaking change. However, it would probably work better if you took part in the plan from day one. I really have no idea, so join the bugwash instead.

Cheers

From jla at fcoo.dk Wed Apr 26 13:31:18 2017
From: jla at fcoo.dk (Jesper Larsen)
Date: Wed, 26 Apr 2017 13:31:18 +0000
Subject: req.url modified in restart?
Message-ID: <15E71B0ACFC92F4DA016DFBAB5122957707B4A7E@mail01.fcoo.dk>

Hi Varnish people

I use something like this snippet:

sub vcl_recv {
    if (req.url ~ "^\/foo\/") {
        set req.url = regsub(req.url, "^\/foo\/", "/");
        set req.backend_hint = special_backend;
    }
}

for rewriting req.url before sending the request to the backend. As you can see, I simply remove the first portion of the path before sending the request to the backend. For URLs not matching the regex I do not change req.url. I have also enabled restarts, and I am using the original URL in vcl_hash.

But I have noticed that when the "special_backend" is down, my default backend gets the request. I suppose the reason is that I have modified req.url, and when the request is restarted it uses this modified URL, which does not start with "/foo/".

My question is: what would you recommend that I do to avoid this? Should I modify the URL in another subroutine?

Best regards,
Jesper

From dridi at varni.sh Wed Apr 26 13:46:27 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Wed, 26 Apr 2017 15:46:27 +0200
Subject: req.url modified in restart?
In-Reply-To: <15E71B0ACFC92F4DA016DFBAB5122957707B4A7E@mail01.fcoo.dk>
References: <15E71B0ACFC92F4DA016DFBAB5122957707B4A7E@mail01.fcoo.dk>
Message-ID:

On Wed, Apr 26, 2017 at 3:31 PM, Jesper Larsen wrote:
> Hi Varnish people
>
> I use something like this snippet:
>
> sub vcl_recv {
>     if (req.url ~ "^\/foo\/") {
>         set req.url = regsub(req.url, "^\/foo\/", "/");
>         set req.backend_hint = special_backend;
>     }
> }
>
> for rewriting req.url before sending the request to the backend.
> As you can see, I simply remove the first portion of the path before sending the request to the backend. For URLs not matching the regex I do not change req.url. I have also enabled restarts, and I am using the original URL in vcl_hash.
>
> But I have noticed that when the "special_backend" is down, my default backend gets the request. I suppose the reason is that I have modified req.url, and when the request is restarted it uses this modified URL, which does not start with "/foo/".
>
> My question is: what would you recommend that I do to avoid this? Should I modify the URL in another subroutine?

Hello Jesper,

Could you please describe in more detail how to reproduce? Ideally with just enough VCL, and possibly with the offending transaction's logs. Please also mention Varnish's version.

Thanks,
Dridi

From rbizzell at measinc.com Wed Apr 26 14:02:51 2017
From: rbizzell at measinc.com (Rodney Bizzell)
Date: Wed, 26 Apr 2017 14:02:51 +0000
Subject: Monitoring
Message-ID: <54fc923b3da54d13bf731dcfcfc86da0@mbx1serv.meas-inc.com>

Hello,

What are some suggestions for selecting what should be monitored with third-party software such as SolarWinds to ensure optimal performance of the Varnish servers? My network operations team just asked me what specifically I need to have monitored within Varnish.

sess_conn - Cumulative number of accepted client connections by Varnish Cache
client_req - Cumulative number of received client requests. Increments after a request is received, but before Varnish responds
sess_dropped - Number of connections dropped due to a full queue
cache_hit - Cumulative number of times a file was served from Varnish's cache
cache_miss - Cumulative number of times a file was requested but was not in the cache, and was therefore requested from the backend
cache_hitpass - Cumulative number of hits for a "pass" file
n_expired - Cumulative number of expired objects, for example due to TTL
n_lru_nuked - Least Recently Used Nuked Objects: cumulative number of cached objects that Varnish has evicted from the cache because of a lack of space
threads - Number of threads in all pools
threads_created - Number of times a thread has been created
threads_failed - Number of times that Varnish unsuccessfully tried to create a thread
threads_limited - Number of times a thread needed to be created but couldn't because varnishd maxed out its configured capacity for new threads
thread_queue_len - Current queue length: number of requests waiting on a worker thread to become available
sess_queued - Number of times Varnish has been out of threads and had to queue up a request
backend_conn - Cumulative number of successful TCP connections to the backend
backend_recycle - Cumulative number of current backend connections which were put back into a pool of keep-alive connections and have not yet been used
backend_reuse - Cumulative number of connections that were reused from the keep-alive pool
backend_toolate - Cumulative number of backend connections that have been closed because they were idle for too long
backend_fail - Cumulative number of failed connections to the backend
backend_unhealthy - Cumulative number of backend connections which were not attempted because the backend has been marked as unhealthy
backend_busy - Cumulative number of times the maximum number of connections to the backend has been reached
backend_req - Number of requests to the backend

This email (including any attachments) may contain confidential information intended
solely for acknowledged recipients. If you think you have received this information in error, please reply to the sender and delete all copies from your system. Please note that unauthorized use, disclosure, or further distribution of this information is prohibited by the sender. Note also that we may monitor email directed to or originating from our network. Thank you for your consideration and assistance.

From dridi at varni.sh Wed Apr 26 14:07:03 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Wed, 26 Apr 2017 16:07:03 +0200
Subject: vmod-vtc for varnishtest users
Message-ID:

Greetings testers,

I would like to draw your attention to a pull request that turned into a discussion (instead of being merged in) regarding a future VMOD for out-of-tree usage of varnishtest:

https://github.com/varnishcache/varnish-cache/pull/2276#issuecomment-297406551

If you are a VMOD developer, or someone who writes varnish-related tools and uses varnishtest to run your test suite, now is the time to let us know what would make testing more convenient with a vanilla install of Varnish.

If you are not extending Varnish, you may also find varnishtest more and more useful for testing your VCL as releases go by. If you use varnishtest in this context and have run into shortcomings, let us know.

You may now think that using such a VMOD to test your production VCL wouldn't be appropriate. I will intentionally be vague and answer: not necessarily.

If you aren't familiar with varnishtest yet, I will as usual recommend getting acquainted with it. It is even more accessible now that Guillaume has put up a manual for the VTC syntax too. You can always learn from existing test cases. We will already ship a bunch of functions that we use in the Varnish test suite and that are considered useful outside of Varnish too.
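[For readers who have not seen VTC before, a minimal test case looks like this — a hypothetical example in the style of the Varnish test suite, not one of its shipped tests:]

```vtc
varnishtest "Minimal out-of-tree test case"

server s1 {
        rxreq
        txresp -body "hello"
} -start

varnish v1 -vcl+backend {
        sub vcl_deliver {
                set resp.http.X-Test = "1";
        }
} -start

client c1 {
        txreq -url "/"
        rxresp
        expect resp.status == 200
        expect resp.http.X-Test == "1"
} -run
```

varnishtest spins up a mock backend (s1), a real varnishd (v1) with the given VCL, and a scripted client (c1), so VCL behavior can be asserted without any external infrastructure.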
If there are use cases you ran into, do not hesitate to suggest more fixtures. If time permits, and if it is put on the agenda, we will discuss this next Monday. However, this should be part of the September release, and it shouldn't be a problem to suggest new goodies later too.

Best,
Dridi

From np.lists at sharphosting.uk Wed Apr 26 19:17:32 2017
From: np.lists at sharphosting.uk (Nigel Peck)
Date: Wed, 26 Apr 2017 14:17:32 -0500
Subject: Monitoring
In-Reply-To: <54fc923b3da54d13bf731dcfcfc86da0@mbx1serv.meas-inc.com>
References: <54fc923b3da54d13bf731dcfcfc86da0@mbx1serv.meas-inc.com>
Message-ID: <9b36e760-d7a4-8f94-3232-4e08c9e2de06@sharphosting.uk>

Hi Rodney,

There are far better qualified folks than me on here to answer this question, but to get you started, the following are worth monitoring:
--
cache_miss
If this is high then you can probably improve your configuration to get more requests served from the cache.
--
n_lru_nuked
If this has data then your cache memory is not large enough to hold everything you could be caching.
--
And it's not in your list, but I would also monitor:
uptime
If this keeps getting reset then there is a problem with your setup and the server is having to restart itself.
--
As I said, these are just some to get you started; there are certainly others you could monitor. You may want to consider reading the Varnish book and making your own decisions:
http://book.varnish-software.com/4.0/
or
https://info.varnish-software.com/the-varnish-book

Best,
Nigel

On 26/04/2017 09:02, Rodney Bizzell wrote:
> Hello,
>
> What are some suggestions for selecting what should be monitored with
> third-party software such as SolarWinds to ensure optimal performance
> of the Varnish servers?
> My network operations team just asked me what specifically I need to
> have monitored within Varnish.
>
> sess_conn - Cumulative number of accepted client connections by Varnish Cache
> client_req - Cumulative number of received client requests. Increments after a request is received, but before Varnish responds
> sess_dropped - Number of connections dropped due to a full queue
> cache_hit - Cumulative number of times a file was served from Varnish's cache
> cache_miss - Cumulative number of times a file was requested but was not in the cache, and was therefore requested from the backend
> cache_hitpass - Cumulative number of hits for a "pass" file
> n_expired - Cumulative number of expired objects, for example due to TTL
> n_lru_nuked - Least Recently Used Nuked Objects: cumulative number of cached objects that Varnish has evicted from the cache because of a lack of space
> threads - Number of threads in all pools
> threads_created - Number of times a thread has been created
> threads_failed - Number of times that Varnish unsuccessfully tried to create a thread
> threads_limited - Number of times a thread needed to be created but couldn't because varnishd maxed out its configured capacity for new threads
> thread_queue_len - Current queue length: number of requests waiting on a worker thread to become available
> sess_queued - Number of times Varnish has been out of threads and had to queue up a request
> backend_conn - Cumulative number of successful TCP connections to the backend
> backend_recycle - Cumulative number of current backend connections which were put back into a pool of keep-alive connections and have not yet been used
> backend_reuse - Cumulative number of connections that were reused from the keep-alive pool
> backend_toolate - Cumulative number of backend connections that have been closed because they were idle for too long
> backend_fail - Cumulative number of failed connections to the backend
> backend_unhealthy - Cumulative number of backend connections which were not attempted because the backend has been marked as unhealthy
> backend_busy - Cumulative number of times the maximum number of connections to the backend has been reached
> backend_req - Number of requests to the backend
>
> [...]
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From jla at fcoo.dk Thu Apr 27 11:09:54 2017
From: jla at fcoo.dk (Jesper Larsen)
Date: Thu, 27 Apr 2017 11:09:54 +0000
Subject: SV: req.url modified in restart?
In-Reply-To: <15E71B0ACFC92F4DA016DFBAB5122957707B4A7E@mail01.fcoo.dk>
References: <15E71B0ACFC92F4DA016DFBAB5122957707B4A7E@mail01.fcoo.dk>
Message-ID: <15E71B0ACFC92F4DA016DFBAB5122957707B5B83@mail01.fcoo.dk>

Hi Dridi

On Wed, Apr 26, 2017 at 3:46 PM, Dridi Boukelmoune [dridi at varni.sh] wrote:
> Hello Jesper,
>
> Could you please describe in more detail how to reproduce? Ideally
> with just enough VCL. Possibly with the offending transaction's logs.
vcl 4.0;

import std;
import directors;

backend default {
    .host = "yourbackendip";
    .port = "8000";
}

backend special_backend {
    .host = "yourbackendip";
    .port = "8080";
}

sub vcl_recv {
    if (req.url ~ "^\/foo\/") {
        set req.url = regsub(req.url, "^\/foo\/", "/");
        set req.backend_hint = special_backend;
    } else {
        set req.backend_hint = default;
    }
}

For this test I ran:

$ python -m SimpleHTTPServer 8000

on the backend as a default backend, and no server on port 8080.

When I do a request for:

http://varnish_server/foo

I get a 404 from the default backend, even though I presume that the request should time out instead.

Output from the Python server:

Serving HTTP on 0.0.0.0 port 8000 ...
172.17.0.2 - - [27/Apr/2017 12:58:44] code 404, message File not found
172.17.0.2 - - [27/Apr/2017 12:58:44] "GET /foo HTTP/1.1" 404 -

Varnish output:

* << BeReq >> 3
- Begin bereq 2 fetch
- Timestamp Start: 1490877149.450124 0.000000 0.000000
- Timestamp Start: 1493290724.892610 0.000000 0.000000
- BereqMethod GET
- BereqURL /foo
- BereqProtocol HTTP/1.1
- BereqHeader Host: localhost:9090
- BereqHeader User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:53.0) Gecko/20100101 Firefox/53.0
- BereqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
- BereqHeader Accept-Language: en-US,en;q=0.5
- BereqHeader Upgrade-Insecure-Requests: 1
- BereqHeader X-Forwarded-For: 172.17.0.1
- BereqHeader Accept-Encoding: gzip
- BereqHeader X-Varnish: 3
- VCL_call BACKEND_FETCH
- VCL_return fetch
- BackendOpen 24 boot.default 192.168.1.45 8000 172.17.0.2 40436
- BackendStart 192.168.1.45 8000
- Timestamp Bereq: 1493290724.892916 0.000306 0.000306
- Timestamp Beresp: 1493290724.893714 0.001105 0.000799
- BerespProtocol HTTP/1.0
- BerespStatus 404
- BerespReason File not found
- BerespHeader Server: SimpleHTTP/0.6 Python/2.7.12
- BerespHeader Date: Thu, 27 Apr 2017 10:58:44 GMT
- BerespHeader Connection: close
- BerespHeader Content-Type: text/html
- TTL RFC 120 10 -1 1493290725 1493290725 1493290724 0 0
- VCL_call BACKEND_RESPONSE
- VCL_return deliver
- Storage malloc s0
- ObjProtocol HTTP/1.0
- ObjStatus 404
- ObjReason File not found
- ObjHeader Server: SimpleHTTP/0.6 Python/2.7.12
- ObjHeader Date: Thu, 27 Apr 2017 10:58:44 GMT
- ObjHeader Content-Type: text/html
- Fetch_Body 4 eof stream
- BackendClose 24 boot.default
- Timestamp BerespBody: 1493290724.893893 0.001284 0.000179
- Length 195
- BereqAcct 335 0 335 150 195 345
- End

* << Request >> 2
- Begin req 1 rxreq
- Timestamp Start: 1493290724.892466 0.000000 0.000000
- Timestamp Req: 1493290724.892466 0.000000 0.000000
- ReqStart 172.17.0.1 46312
- ReqMethod GET
- ReqURL /foo
- ReqProtocol HTTP/1.1
- ReqHeader Host: localhost:9090
- ReqHeader User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:53.0) Gecko/20100101 Firefox/53.0
- ReqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
- ReqHeader Accept-Language: en-US,en;q=0.5
- ReqHeader Accept-Encoding: gzip, deflate
- ReqHeader Connection: keep-alive
- ReqHeader Upgrade-Insecure-Requests: 1
- ReqHeader Cache-Control: max-age=0
- ReqHeader X-Forwarded-For: 172.17.0.1
- VCL_call RECV
- VCL_return hash
- ReqUnset Accept-Encoding: gzip, deflate
- ReqHeader Accept-Encoding: gzip
- VCL_call HASH
- VCL_return lookup
- VCL_call MISS
- VCL_return fetch
- Link bereq 3 fetch
- Timestamp Fetch: 1493290724.893937 0.001471 0.001471
- RespProtocol HTTP/1.0
- RespStatus 404
- RespReason File not found
- RespHeader Server: SimpleHTTP/0.6 Python/2.7.12
- RespHeader Date: Thu, 27 Apr 2017 10:58:44 GMT
- RespHeader Content-Type: text/html
- RespProtocol HTTP/1.1
- RespHeader X-Varnish: 2
- RespHeader Age: 0
- RespHeader Via: 1.1 varnish-v4
- VCL_call DELIVER
- VCL_return deliver
- Timestamp Process: 1493290724.894007 0.001541 0.000070
- RespHeader Content-Length: 195
- Debug "RES_MODE 2"
- RespHeader Connection: keep-alive
- Timestamp Resp: 1493290724.894064 0.001598 0.000057
- ReqAcct 351 0 351 219 195 414
- End

* << Session >> 1
- Begin sess 0 HTTP/1
- SessOpen 172.17.0.1 46312 :80 172.17.0.2 80 1493290724.892237 16
- Link req 2 rxreq
- SessClose RX_TIMEOUT 5.007
- End

> Please also mention Varnish's version.

varnishd (varnish-4.1.3 revision 5e3b6d2)

> Thanks,
> Dridi

Best regards,
Jesper

From dridi at varni.sh Thu Apr 27 11:15:51 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Thu, 27 Apr 2017 13:15:51 +0200
Subject: req.url modified in restart?
In-Reply-To: <15E71B0ACFC92F4DA016DFBAB5122957707B5B83@mail01.fcoo.dk>
References: <15E71B0ACFC92F4DA016DFBAB5122957707B4A7E@mail01.fcoo.dk> <15E71B0ACFC92F4DA016DFBAB5122957707B5B83@mail01.fcoo.dk>
Message-ID:

> sub vcl_recv {
>     if (req.url ~ "^\/foo\/") {

Here you are looking for /foo/ at the beginning of the URL.

>         set req.url = regsub(req.url, "^\/foo\/", "/");
>         set req.backend_hint = special_backend;
>     } else {
>         set req.backend_hint = default;
>     }
> }
>
> For this test I ran:
>
> $ python -m SimpleHTTPServer 8000
>
> on the backend as a default backend and no server on port 8080.
>
> When I do a request for:
>
> http://varnish_server/foo

And here you send /foo, which won't match.

> varnishd (varnish-4.1.3 revision 5e3b6d2)

You should also upgrade to 4.1.6, released today.

Dridi

From jla at fcoo.dk Thu Apr 27 11:59:10 2017
From: jla at fcoo.dk (Jesper Larsen)
Date: Thu, 27 Apr 2017 11:59:10 +0000
Subject: SV: req.url modified in restart?
In-Reply-To:
References: <15E71B0ACFC92F4DA016DFBAB5122957707B4A7E@mail01.fcoo.dk> <15E71B0ACFC92F4DA016DFBAB5122957707B5B83@mail01.fcoo.dk>
Message-ID: <15E71B0ACFC92F4DA016DFBAB5122957707B5BC3@mail01.fcoo.dk>

Hi again Dridi

>> sub vcl_recv {
>>     if (req.url ~ "^\/foo\/") {
>
> Here you are looking for /foo/ at the beginning of the URL.
> And here you send /foo, which won't match.

Yes, sorry about that. When I request /foo/ it works with the config file you got.
But it does not work with this one:

vcl 4.0;

import std;
import directors;

backend default {
    .host = "whatever";
    .port = "8000";
}

backend special_backend {
    .host = "whatever";
    .port = "8080";
}

sub vcl_recv {
    if (req.url ~ "^\/foo\/") {
        set req.url = regsub(req.url, "^\/foo\/", "/");
        set req.backend_hint = special_backend;
    } else {
        set req.backend_hint = default;
    }
}

sub vcl_deliver {
    # Restart if the backend has returned an error message
    if (resp.status >= 500 && req.restarts < 4) {
        return (restart);
    }
}

The difference is the vcl_deliver subroutine, where I restart the request on 5xx errors. And these restarted requests seem to use the modified req.url.

* << BeReq >> 3
- Begin bereq 2 fetch
- Timestamp Start: 1493294118.610413 0.000000 0.000000
- BereqMethod GET
- BereqURL /
- BereqProtocol HTTP/1.1
- BereqHeader Host: localhost:9090
- BereqHeader User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:53.0) Gecko/20100101 Firefox/53.0
- BereqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
- BereqHeader Accept-Language: en-US,en;q=0.5
- BereqHeader Upgrade-Insecure-Requests: 1
- BereqHeader X-Forwarded-For: 172.17.0.1
- BereqHeader Accept-Encoding: gzip
- BereqHeader X-Varnish: 3
- VCL_call BACKEND_FETCH
- VCL_return fetch
- FetchError no backend connection
- Timestamp Beresp: 1493294118.610523 0.000109 0.000109
- Timestamp Error: 1493294118.610528 0.000114 0.000005
- BerespProtocol HTTP/1.1
- BerespStatus 503
- BerespReason Service Unavailable
- BerespReason Backend fetch failed
- BerespHeader Date: Thu, 27 Apr 2017 11:55:18 GMT
- BerespHeader Server: Varnish
- VCL_call BACKEND_ERROR
- BerespHeader Content-Type: text/html; charset=utf-8
- BerespHeader Retry-After: 5
- VCL_return deliver
- Storage malloc Transient
- ObjProtocol HTTP/1.1
- ObjStatus 503
- ObjReason Backend fetch failed
- ObjHeader Date: Thu, 27 Apr 2017 11:55:18 GMT
- ObjHeader Server: Varnish
- ObjHeader Content-Type: text/html; charset=utf-8
- ObjHeader Retry-After: 5
- Length 278
- BereqAcct 0 0 0 0 0 0
- End

* << Request >> 2
- Begin req 1 rxreq
- Timestamp Start: 1493294118.610336 0.000000 0.000000
- Timestamp Req: 1493294118.610336 0.000000 0.000000
- ReqStart 172.17.0.1 46896
- ReqMethod GET
- ReqURL /foo/
- ReqProtocol HTTP/1.1
- ReqHeader Host: localhost:9090
- ReqHeader User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:53.0) Gecko/20100101 Firefox/53.0
- ReqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
- ReqHeader Accept-Language: en-US,en;q=0.5
- ReqHeader Accept-Encoding: gzip, deflate
- ReqHeader Connection: keep-alive
- ReqHeader Upgrade-Insecure-Requests: 1
- ReqHeader Cache-Control: max-age=0
- ReqHeader X-Forwarded-For: 172.17.0.1
- VCL_call RECV
- ReqURL /
- VCL_return hash
- ReqUnset Accept-Encoding: gzip, deflate
- ReqHeader Accept-Encoding: gzip
- VCL_call HASH
- VCL_return lookup
- VCL_call MISS
- VCL_return fetch
- Link bereq 3 fetch
- Timestamp Fetch: 1493294118.610608 0.000273 0.000273
- RespProtocol HTTP/1.1
- RespStatus 503
- RespReason Backend fetch failed
- RespHeader Date: Thu, 27 Apr 2017 11:55:18 GMT
- RespHeader Server: Varnish
- RespHeader Content-Type: text/html; charset=utf-8
- RespHeader Retry-After: 5
- RespHeader X-Varnish: 2
- RespHeader Age: 0
- RespHeader Via: 1.1 varnish-v4
- VCL_call DELIVER
- VCL_return restart
- Timestamp Process: 1493294118.610649 0.000313 0.000041
- Timestamp Restart: 1493294118.610655 0.000319 0.000006
- Link req 4 restart
- End

* << BeReq >> 5
- Begin bereq 4 fetch
- Timestamp Start: 1493294118.610691 0.000000 0.000000
- BereqMethod GET
- BereqURL /
- BereqProtocol HTTP/1.1
- BereqHeader Host: localhost:9090
- BereqHeader User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:53.0) Gecko/20100101 Firefox/53.0
- BereqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
- BereqHeader Accept-Language: en-US,en;q=0.5
- BereqHeader Upgrade-Insecure-Requests: 1
- BereqHeader X-Forwarded-For: 172.17.0.1
- BereqHeader Accept-Encoding: gzip
- BereqHeader X-Varnish: 5
- VCL_call BACKEND_FETCH
- VCL_return fetch
- BackendOpen 24 boot.default 192.168.1.45 8000 172.17.0.2 41022
- BackendStart 192.168.1.45 8000
- Timestamp Bereq: 1493294118.610834 0.000143 0.000143
- Timestamp Beresp: 1493294118.611334 0.000644 0.000501
- BerespProtocol HTTP/1.0
- BerespStatus 200
- BerespReason OK
- BerespHeader Server: SimpleHTTP/0.6 Python/2.7.12
- BerespHeader Date: Thu, 27 Apr 2017 11:55:18 GMT
- BerespHeader Content-type: text/html; charset=UTF-8
- BerespHeader Content-Length: 348
- TTL RFC 120 10 -1 1493294119 1493294119 1493294118 0 0
- VCL_call BACKEND_RESPONSE
- VCL_return deliver
- Storage malloc s0
- ObjProtocol HTTP/1.0
- ObjStatus 200
- ObjReason OK
- ObjHeader Server: SimpleHTTP/0.6 Python/2.7.12
- ObjHeader Date: Thu, 27 Apr 2017 11:55:18 GMT
- ObjHeader Content-type: text/html; charset=UTF-8
- ObjHeader Content-Length: 348
- Fetch_Body 3 length stream
- BackendClose 24 boot.default
- Timestamp BerespBody: 1493294118.611433 0.000742 0.000099
- Length 348
- BereqAcct 332 0 332 155 348 503
- End

* << Request >> 4
- Begin req 2 restart
- Timestamp Start: 1493294118.610655 0.000319 0.000000
- ReqStart 172.17.0.1 46896
- ReqMethod GET
- ReqURL /
- ReqProtocol HTTP/1.1
- ReqHeader Host: localhost:9090
- ReqHeader User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:53.0) Gecko/20100101 Firefox/53.0
- ReqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
- ReqHeader Accept-Language: en-US,en;q=0.5
- ReqHeader Connection: keep-alive
- ReqHeader Upgrade-Insecure-Requests: 1
- ReqHeader Cache-Control: max-age=0
- ReqHeader X-Forwarded-For: 172.17.0.1
- ReqHeader Accept-Encoding: gzip
- VCL_call RECV
- VCL_return hash
- VCL_call HASH
- VCL_return lookup
- VCL_call MISS
- VCL_return fetch
- Link bereq 5 fetch
- Timestamp Fetch: 1493294118.611446 0.001110 0.000791
- RespProtocol HTTP/1.0
- RespStatus 200
- RespReason OK
- RespHeader Server: SimpleHTTP/0.6 Python/2.7.12
- RespHeader Date: Thu, 27 Apr 2017 11:55:18 GMT
- RespHeader Content-type: text/html; charset=UTF-8
- RespHeader Content-Length: 348
- RespProtocol HTTP/1.1
- RespHeader X-Varnish: 4
- RespHeader Age: 0
- RespHeader Via: 1.1 varnish-v4
- VCL_call DELIVER
- VCL_return deliver
- Timestamp Process: 1493294118.611479 0.001143 0.000033
- RespHeader Accept-Ranges: bytes
- Debug "RES_MODE 2"
- RespHeader Connection: keep-alive
- Timestamp Resp: 1493294118.611523 0.001187 0.000044
- ReqAcct 352 0 352 244 348 592
- End

I guess a solution for this issue is to store the original req.url and set it back to the original value in vcl_deliver in case of a restart?

> Dridi

Jesper

From dridi at varni.sh Thu Apr 27 12:34:31 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Thu, 27 Apr 2017 14:34:31 +0200
Subject: req.url modified in restart?
In-Reply-To: <15E71B0ACFC92F4DA016DFBAB5122957707B5BC3@mail01.fcoo.dk>
References: <15E71B0ACFC92F4DA016DFBAB5122957707B4A7E@mail01.fcoo.dk> <15E71B0ACFC92F4DA016DFBAB5122957707B5B83@mail01.fcoo.dk> <15E71B0ACFC92F4DA016DFBAB5122957707B5BC3@mail01.fcoo.dk>
Message-ID:

> Yes, sorry about that. When I request /foo/ it works with the config file you got. But it does not work with this one:

OK, time to take a step back, I misread your first email:

> And when the request is restarted it uses this modified url [...]

Yes, the request remains as-is after a restart; that is intentional. I thought I had read that in some cases you didn't get the modifications after a restart; still not getting the hang of this "reading" thing.

If you want to reset req, use the std.rollback function before returning restart.

Dridi

From jla at fcoo.dk Thu Apr 27 12:48:11 2017
From: jla at fcoo.dk (Jesper Larsen)
Date: Thu, 27 Apr 2017 12:48:11 +0000
Subject: SV: req.url modified in restart?
In-Reply-To:
References: <15E71B0ACFC92F4DA016DFBAB5122957707B4A7E@mail01.fcoo.dk> <15E71B0ACFC92F4DA016DFBAB5122957707B5B83@mail01.fcoo.dk> <15E71B0ACFC92F4DA016DFBAB5122957707B5BC3@mail01.fcoo.dk>
Message-ID: <15E71B0ACFC92F4DA016DFBAB5122957707B5BDE@mail01.fcoo.dk>

> Yes, the request remains as-is after a restart; that is intentional. I
> thought I had read that in some cases you didn't get the modifications
> after a restart; still not getting the hang of this "reading" thing.
>
> If you want to reset req, use the std.rollback function before
> returning restart.

Great, std.rollback solves the issue. Thanks for the quick and competent help to a Varnish rookie :-)

> Dridi

Jesper
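[For readers landing on this thread: the fix Dridi describes looks like this when applied to Jesper's vcl_deliver — a sketch, reusing the snippet from the thread; std.rollback(req) restores the request to the state it had when it was received:]

```vcl
vcl 4.0;

import std;

sub vcl_deliver {
    # Restart on backend errors, but first roll the request back so that
    # vcl_recv sees the original URL (e.g. /foo/...) on the restart,
    # instead of the rewritten one.
    if (resp.status >= 500 && req.restarts < 4) {
        std.rollback(req);
        return (restart);
    }
}
```

On the restart, vcl_recv then runs against the original req.url, so the "^\/foo\/" match and the backend_hint selection are evaluated again from scratch.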