From r at roze.lv Tue Jul 4 02:09:30 2017
From: r at roze.lv (Reinis Rozitis)
Date: Tue, 4 Jul 2017 05:09:30 +0300
Subject: Limit object count
Message-ID: <80CE5B9E603A46ED8F0D196AA2A1D69D@Neiroze>

Hello,
is there a way to limit the maximum cached object count in Varnish, or to better handle an OOM situation?

I use only file backend storage (on SSDs) and the nodes have 32 GB of RAM. While the maximum object count was previously limited by the backend storage size, after upgrading the SSD capacity Varnish now triggers the OOM killer (there is no swap on the instance) and restarts after reaching 35M objects, which makes sense considering the roughly 1 KB of per-object overhead.

So what would be the best way to handle it (besides adding more RAM)? Based on the average object size, just limit the backend storage size so no more than ~30M objects fit / tweak the TTL so older objects get evicted sooner / add swap?

rr

From guillaume at varnish-software.com Tue Jul 4 07:44:58 2017
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Tue, 4 Jul 2017 09:44:58 +0200
Subject: Limit object count
In-Reply-To: <80CE5B9E603A46ED8F0D196AA2A1D69D@Neiroze>
References: <80CE5B9E603A46ED8F0D196AA2A1D69D@Neiroze>
Message-ID:

Hello,

Your best bet is indeed to reduce the TTL so fewer objects are stored, or simply to reduce the storage size.

--
Guillaume Quintard

On Tue, Jul 4, 2017 at 4:09 AM, Reinis Rozitis wrote:

> Hello,
> is there a way to limit the maximum cached object count in Varnish or
> better handle an OOM situation?
>
> I use only file backend storage (on SSDs) and the nodes have 32 GB of RAM.
> While the maximum object count was previously limited by the backend
> storage, now after upgrading the SSD capacity Varnish triggers the OOM
> killer (no swap on the instance) and restarts after reaching 35M objects,
> which kind of makes sense considering the 1 KB per-object overhead.
>
> So what would be the best way to handle it (besides adding more RAM)?
> Based on average object size just limit the backend storage size so no
> more than ~30M objects fit / tweak the TTL so older objects get evicted
> sooner / add swap?
>
> rr
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From quintinpar at gmail.com Wed Jul 5 08:04:49 2017
From: quintinpar at gmail.com (Quintin Par)
Date: Wed, 5 Jul 2017 01:04:49 -0700
Subject: Invalidate URL cache with http header
Message-ID:

Nginx has a nifty command for invalidating a specific cache

    proxy_cache_bypass $http_cachepurge;

    curl -I myapp.example.com/api/ping -H "cachepurge: true"

Is there something equivalent in varnish?

- Quintin

From dridi at varni.sh Wed Jul 5 08:21:34 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Wed, 5 Jul 2017 10:21:34 +0200
Subject: Invalidate URL cache with http header
In-Reply-To: References: Message-ID:

On Wed, Jul 5, 2017 at 10:04 AM, Quintin Par wrote:
> [...]
> Is there something equivalent in varnish?

Hi,

I'm not familiar with this nginx feature, but there is a hash_always_miss feature in Varnish that allows you to bypass a cache hit.

You can probably do something like this in vcl_recv{}, I haven't tried:

    set req.hash_always_miss = req.http.cachepurge == "true";

However, this opens a DoS vector so you probably want to restrict this using an ACL or other means.
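Sketching that suggestion in VCL (untested; the ACL name and address range are placeholders to adapt):

```vcl
# Hypothetical ACL of clients allowed to force a refresh
acl invalidators {
    "localhost";
    "192.168.0.0"/24;
}

sub vcl_recv {
    # Only trusted clients may bypass cache hits via the header
    if (req.http.cachepurge == "true" && client.ip ~ invalidators) {
        set req.hash_always_miss = true;
    }
}
```

A request such as `curl -I myapp.example.com/api/ping -H "cachepurge: true"` from a listed address would then force a fresh fetch, while everyone else keeps getting cache hits.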
Dridi

From quintinpar at gmail.com Wed Jul 5 08:38:21 2017
From: quintinpar at gmail.com (Quintin Par)
Date: Wed, 5 Jul 2017 01:38:21 -0700
Subject: Invalidate URL cache with http header
In-Reply-To: References: Message-ID:

Does hash_always_miss invalidate the cache?

- Quintin

On Wed, Jul 5, 2017 at 1:21 AM, Dridi Boukelmoune wrote:
> [...]
> You can probably do something like this in vcl_recv{}, I haven't tried:
>
>     set req.hash_always_miss = req.http.cachepurge == "true";
>
> However, this opens a DoS vector so you probably want to restrict this
> using an ACL or other means.
>
> Dridi

From dridi at varni.sh Wed Jul 5 08:46:31 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Wed, 5 Jul 2017 10:46:31 +0200
Subject: Invalidate URL cache with http header
In-Reply-To: References: Message-ID:

On Wed, Jul 5, 2017 at 10:38 AM, Quintin Par wrote:
>
> Does hash_always_miss invalidate the cache?

Not as such; it will fetch a new copy regardless, and once cached it will shadow the previous one (which will eventually go away).

There are other means of invalidation in VCL: ban and purge. I picked hash_always_miss because that's how I interpreted nginx's proxy_cache_bypass. But I didn't check the nginx documentation, pure speculation.
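The purge mechanism mentioned above is usually wired up along these lines (a rough sketch of the standard Varnish 4 recipe; the ACL contents are placeholders):

```vcl
acl purgers {
    "localhost";   # hypothetical list of trusted hosts
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "Not allowed"));
        }
        # Removes all variants of the object from the cache
        return (purge);
    }
}
```

With this in place, `curl -X PURGE http://myapp.example.com/api/ping` from a trusted host invalidates that URL outright instead of shadowing it.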
Dridi

From guillaume at varnish-software.com Wed Jul 5 10:00:16 2017
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Wed, 5 Jul 2017 12:00:16 +0200
Subject: Invalidate URL cache with http header
In-Reply-To: References: Message-ID:

From what I understand, http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_bypass is saying that your code is equivalent to:

    sub vcl_recv {
        if (req.http.cachepurge) {
            return (pass);
        }
    }

I don't see anything about purging, only bypassing the cache.

--
Guillaume Quintard

On Wed, Jul 5, 2017 at 10:46 AM, Dridi Boukelmoune wrote:
> [...]

From quintinpar at gmail.com Wed Jul 5 16:34:43 2017
From: quintinpar at gmail.com (Quintin Par)
Date: Wed, 5 Jul 2017 09:34:43 -0700
Subject: Invalidate URL cache with http header
In-Reply-To: References: Message-ID:

Guillaume,

The documentation isn't the most up to date, but the code does invalidate (refresh) the cache.

https://stackoverflow.com/a/23774285/113247

This can shed some light. Please guide me if this isn't exactly the behavior of

    set req.hash_always_miss = req.http.secretheader == "true";

> Not as such, it will fetch a new copy regardless and once cached it
> will shadow the previous one (that will eventually go away).
> > Dridi,

How can two copies of the same cache key exist (assuming the URL is the key here)? Won't that create conflicts?

From dridi at varni.sh Wed Jul 5 16:45:45 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Wed, 5 Jul 2017 18:45:45 +0200
Subject: Invalidate URL cache with http header
In-Reply-To: References: Message-ID:

> How can two copies of the same cache key exist (assuming the URL is the
> key here)? Won't that create conflicts?

The new copy will prevail, while the older one may still be in use (large objects or slow clients...).

Dridi

From guillaume at varnish-software.com Wed Jul 5 16:50:35 2017
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Wed, 5 Jul 2017 18:50:35 +0200
Subject: Invalidate URL cache with http header
In-Reply-To: References: Message-ID:

OK, then Dridi's answer is correct.

It's not a problem to have more than one object under the same hash; it's notably possible using the Vary header (different variants of the same object).

--
Guillaume Quintard

On Wed, Jul 5, 2017 at 6:34 PM, Quintin Par wrote:
> Guillaume,
>
> The documentation isn't the most up to date, but the code does invalidate
> (refresh) the cache.
>
> [...]
>
> Dridi,
>
> How can two copies of the same cache key exist (assuming the URL is the
> key here)? Won't that create conflicts?
From John.Salmon at DEShawResearch.com Wed Jul 5 20:09:15 2017
From: John.Salmon at DEShawResearch.com (John Salmon)
Date: Wed, 5 Jul 2017 16:09:15 -0400
Subject: Varnish and TCP Incast Throughput Collapse
Message-ID:

I've been using Varnish in an "intranet" application. The picture is roughly:

origin <-> Varnish <-- 10G channel ---> switch <-- 1G channel --> client

The machine running Varnish is a high-performance server. It can easily saturate a 10Gbit channel. The machine running the client is a more modest desktop workstation, but it's fully capable of saturating a 1Gbit channel.

The client makes HTTP requests for objects of size 128kB.

When the client makes those requests serially, "useful" data is transferred at about 80% of the channel bandwidth of the Gigabit link, which seems perfectly reasonable.

But when the client makes the requests in parallel (typically 4-at-a-time, but it can vary), *total* throughput drops to about 25% of the channel bandwidth, i.e., about 30Mbyte/sec.

After looking at traces and doing a fair amount of experimentation, we have reached the tentative conclusion that we're seeing "TCP Incast Throughput Collapse" (see references below).

The literature on "TCP Incast Throughput Collapse" typically describes scenarios where a large number of servers overwhelm a single inbound port. I haven't found any discussion of incast collapse with only one server, but it seems like a natural consequence of a 10Gigabit-capable server feeding a 1-Gigabit downlink.
Has anybody else seen anything similar? With Varnish or other single servers on 10Gbit to 1Gbit links?

The literature offers a variety of mitigation strategies, but there are non-trivial tradeoffs and none appears to be a silver bullet.

If anyone has seen TCP Incast Collapse with Varnish, were you able to work around it, and if so, how?

Thanks,
John Salmon

References:

http://www.pdl.cmu.edu/Incast/

Annotated Bibliography in:
https://lists.freebsd.org/pipermail/freebsd-net/2015-November/043926.html

--
*.*

From lagged at gmail.com Thu Jul 6 04:45:35 2017
From: lagged at gmail.com (Andrei)
Date: Wed, 5 Jul 2017 23:45:35 -0500
Subject: Varnish and TCP Incast Throughput Collapse
In-Reply-To: References: Message-ID:

Out of curiosity, what does ethtool show for the related NICs on both servers? I also have Varnish on a 10G server, and can reach around 7.7Gbit/s serving anywhere between 6-28k requests/second; however, it did take some sysctl tuning and the westwood TCP congestion control algo.

On Wed, Jul 5, 2017 at 3:09 PM, John Salmon wrote:
> [...]

From guillaume at varnish-software.com Thu Jul 6 07:08:20 2017
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Thu, 6 Jul 2017 09:08:20 +0200
Subject: Varnish and TCP Incast Throughput Collapse
In-Reply-To: References: Message-ID:

Two things: do you get the same results when the client is directly on the Varnish server (i.e. not going through the switch)? And is each new request opening a new connection?

--
Guillaume Quintard

On Thu, Jul 6, 2017 at 6:45 AM, Andrei wrote:
> [...]

From dridi at varni.sh Thu Jul 6 07:42:45 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Thu, 6 Jul 2017 09:42:45 +0200
Subject: Varnish and TCP Incast Throughput Collapse
In-Reply-To: References: Message-ID:

> If anyone has seen TCP Incast Collapse with Varnish, were you able to work
> around it, and if so, how?

I don't know, but maybe this could help:
https://github.com/varnish/varnish-modules/blob/master/docs/vmod_tcp.rst#vmod_tcp

Dridi

From John.Salmon at DEShawResearch.com Thu Jul 6 17:15:17 2017
From: John.Salmon at DEShawResearch.com (John Salmon)
Date: Thu, 6 Jul 2017 13:15:17 -0400
Subject: Varnish and TCP Incast Throughput Collapse
In-Reply-To: References: Message-ID: <03e04d30-a4bf-82f3-7cbd-cfefd24b0f87@DEShawResearch.com>

Thanks for your suggestions.

One more detail I didn't mention: roughly speaking, the client is doing "read ahead", but it only reads ahead by a limited amount (about 4 blocks, each of 128KiB). The surprising behavior is that when four readahead threads are allowed to run concurrently, their aggregate throughput is much lower than when all the readaheads are serialized through a single thread.
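As an aside on Dridi's vmod_tcp pointer: the module exposes per-connection TCP knobs from VCL, so socket pacing could in principle cap the burst rate toward the 1G downlink. A rough, untested sketch (the pacing value is an illustrative guess; check the linked vmod_tcp docs for exact units and semantics, and note the vmod must be installed from varnish-modules):

```vcl
import tcp;

sub vcl_recv {
    # Hypothetical mitigation: pace each client socket to stay under
    # the 1 Gbit/s downlink (~125000 KB/s) to soften incast bursts.
    tcp.set_socket_pace(100000);
}
```

Whether pacing actually helps in a given setup would need to be measured; it only shapes the sender side of the burst.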
Traces (with strace and/or tcpdump) show frequent stalls of roughly 200ms where nothing seems to move across the channel and all client-side system calls are waiting. 200ms is suspiciously close to the Linux 'rto_min' parameter, which was the first thing that led me to suspect TCP incast collapse. We get some improvement by reducing rto_min on the server, and we also get some improvement by reducing SO_RCVBUF in the client. But as I said, both have tradeoffs, so I'm interested in whether anyone else has encountered or overcome this particular problem.

I do not see the dropoff from single-thread to multi-thread when the client and server are on the same host. I.e., I get around 500MB/s with one client and roughly the same total bandwidth with multiple clients. I'm sure that with some tuning, the 500MB/s could be improved, but that's not the issue here.

Here are the ethtool reports:

On the client:

drdws0134$ ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: on (auto)
        Cannot get wake-on-lan settings: Operation not permitted
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

On the server:

$ ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 10000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        MDI-X: Unknown
        Cannot get wake-on-lan settings: Operation not permitted
        Cannot get link status: Operation not permitted

On 07/06/2017 03:08 AM, Guillaume Quintard wrote:
> [...]

--
*.*

From guillaume at varnish-software.com Fri Jul 7 07:10:19 2017
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Fri, 7 Jul 2017 09:10:19 +0200
Subject: Varnish and TCP Incast Throughput Collapse
In-Reply-To: <03e04d30-a4bf-82f3-7cbd-cfefd24b0f87@DEShawResearch.com>
References: <03e04d30-a4bf-82f3-7cbd-cfefd24b0f87@DEShawResearch.com>
Message-ID:

I'm having trouble understanding the concept of readahead in an HTTP context. You are using the malloc cache storage, right?

--
Guillaume Quintard

On Thu, Jul 6, 2017 at 7:15 PM, John Salmon wrote:
> [...]

From cservin-varnish at cromagnon.com Fri Jul 7 15:48:38 2017
From: cservin-varnish at cromagnon.com (Craig Servin)
Date: Fri, 07 Jul 2017 10:48:38 -0500
Subject: Varnish and TCP Incast Throughput Collapse
In-Reply-To: <03e04d30-a4bf-82f3-7cbd-cfefd24b0f87@DEShawResearch.com>
References: <03e04d30-a4bf-82f3-7cbd-cfefd24b0f87@DEShawResearch.com>
Message-ID:

Could you add another switch and use bonded interfaces? If you are thinking the switch can't handle the load, that may help.

From charles at beachcamera.com Mon Jul 10 11:23:13 2017
From: charles at beachcamera.com (Bender, Charles)
Date: Mon, 10 Jul 2017 11:23:13 +0000
Subject: high memory usage with malloc and file backends configured
Message-ID:

Hi,

There is a large discrepancy between Varnish resident memory reported by top vs. reported by varnishstat. Varnish is configured with both malloc and file storage: 20G malloc and 75G file storage.

These are the startup parameters:

VARNISH_STORAGE="memcache=malloc,20G -s filecache=file,/mnt/xvdf1/varnish/varnish_storage.bin,75G"

After running for a few days, top is reporting more than twice the amount of memory used for the varnishd process than varnishstat reports.
From top:

  PID USER     PR NI    VIRT    RES    SHR S %CPU %MEM   TIME+ COMMAND
 1083 varnish  20  0 82.235g 0.014t 7.765g S 75.7 51.4 2159:07 varnishd

From varnishstat:

SMA.memcache.g_bytes      5.20G    39.65K  .    5.20G   5.20G   5.20G
SMA.memcache.g_space     14.80G   -39.65K  .   14.80G  14.80G  14.80G
SMF.filecache.g_bytes     7.78G    27.97K  .    7.78G   7.78G   7.78G
SMF.filecache.g_space    67.22G   -27.97K  .   67.22G  67.22G  67.22G

This is the relevant part of the VCL regarding storage backend selection:

sub vcl_backend_response {
    # define separate cache storage groups
    if (bereq.http.host ~ "^(encore|thereal|static)\.(beachcamera|buydig)\.com") {
        set beresp.storage_hint = "filecache";
        set beresp.http.X-Cache-Storage = "disk";
    } elsif (bereq.url ~ "(?i)\.(jpg|jpeg|gif|ico|pdf|swf|png|zip)") {
        set beresp.storage_hint = "filecache";
        set beresp.http.X-Cache-Storage = "disk";
    } elsif (bereq.url ~ "(?i)product\-image\.aspx") {
        set beresp.storage_hint = "filecache";
        set beresp.http.X-Cache-Storage = "disk";
    } else {
        set beresp.storage_hint = "memcache";
        set beresp.http.X-Cache-Storage = "memory";
    }
}

Can post the entire VCL if needed.

I would think that since varnishstat reports 5.20G of RAM used, the resident memory should be around 6-7G; 14G seems excessively high. File storage should use minimal resident memory, correct?

Varnish was installed from the Varnish Cache 4.1 repo. No VMODs loaded except std and directors. Using latest 4.1.7.

Anything else you need, please let me know.

From guillaume at varnish-software.com Mon Jul 10 11:55:20 2017
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Mon, 10 Jul 2017 13:55:20 +0200
Subject: high memory usage with malloc and file backends configured
In-Reply-To: References: Message-ID:

Hi Charles,

So, if I'm reading this right, there's no discrepancy. Varnish will malloc the full storage (malloc), and will mmap a file the size of the full storage (file).
So even though the storage is not used, it's allocated.

Does it make sense, or did I miss something?

--
Guillaume Quintard

On Mon, Jul 10, 2017 at 1:23 PM, Bender, Charles wrote:
> Hi,
>
> There is a large discrepancy between the Varnish resident memory
> reported by top and the memory reported by varnishstat. Varnish is
> configured with both malloc and file storage; 20G malloc and 75G file
> storage.
> [...]

From charles at beachcamera.com Mon Jul 10 12:42:59 2017
From: charles at beachcamera.com (Bender, Charles)
Date: Mon, 10 Jul 2017 12:42:59 +0000
Subject: high memory usage with malloc and file backends configured
In-Reply-To:
References:
Message-ID: <79CE4EEC-8EBF-413E-B035-133857BC09BA@beachcamera.com>

Hi Guillaume,

Thank you for replying so quickly. I think I'm misunderstanding what file
storage does. My goal is to have some objects stored in memory (malloc)
and others stored on disk (file).

From what you're saying, the file method still uses resident memory for
each object, so with my configuration (20G malloc, 75G file) I would need
95G of RAM if all storage is used? (without swap being used)

If this is the case, I'm curious what the use cases are for file vs only
malloc.

On Jul 10, 2017, at 1:55 PM, Guillaume Quintard wrote:
> Hi Charles,
>
> So, if I'm reading this right, there's no discrepancy. Varnish will
> malloc the full storage (malloc), and will mmap a file the size of the
> full storage (file). So even though the storage is not used, it's
> allocated.
> [...]
From guillaume at varnish-software.com Mon Jul 10 12:53:52 2017
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Mon, 10 Jul 2017 14:53:52 +0200
Subject: high memory usage with malloc and file backends configured
In-Reply-To: <79CE4EEC-8EBF-413E-B035-133857BC09BA@beachcamera.com>
References: <79CE4EEC-8EBF-413E-B035-133857BC09BA@beachcamera.com>
Message-ID:

You misunderstood me, but that's probably my fault :-)

The file used by the file storage will be fully allocated, i.e. you'll
have a 75G file on your disk. Then varnish will mmap it to memory, which
means the kernel will give Varnish a memory space corresponding to the
file content. The trick is that the whole file doesn't need to be in
memory, only the "active" parts are. What happens is that we let the
kernel manage that space, and it will leverage the unused memory to do so.

So, true, if you had 200G of RAM, the file storage would effectively take
75G, because that much would be available. That's not your case, so the
kernel will only use whatever amount is unused.

That being said, outside of storage, Varnish uses roughly 1K per object
stored; that's probably not impacting you right now, but it's good to
keep in mind.

--
Guillaume Quintard

On Mon, Jul 10, 2017 at 2:42 PM, Bender, Charles wrote:
> Hi Guillaume,
>
> Thank you for replying so quickly. I think I'm misunderstanding what
> file storage does. My goal is to have some objects stored in memory
> (malloc) and others stored on disk (file).
> [...]

From dridi at varni.sh Mon Jul 10 13:12:08 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Mon, 10 Jul 2017 15:12:08 +0200
Subject: high memory usage with malloc and file backends configured
In-Reply-To:
References:
Message-ID:

On Mon, Jul 10, 2017 at 1:55 PM, Guillaume Quintard wrote:
> Hi Charles,
>
> So, if I'm reading this right, there's no discrepancy. Varnish will
> malloc the full storage (malloc), and will mmap a file the size of the
> full storage (file). So even though the storage is not used, it's
> allocated.
>
> Does it make sense, or did I miss something?

Technically malloc storage makes individual allocations, so only file
storage will pre-allocate the full storage. Allocations also come with an
overhead (house-keeping, alignment requirements...) not reported by
Varnish, since we can't technically tell what's happening under our feet
(for instance, jemalloc).

Also, storage size is only for... storage. So while we have a rule of
thumb of 1kB of overhead per object, that's not the sole non-storage
memory footprint.

Dridi

From dridi at varni.sh Mon Jul 10 14:35:44 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Mon, 10 Jul 2017 16:35:44 +0200
Subject: high memory usage with malloc and file backends configured
In-Reply-To: <79CE4EEC-8EBF-413E-B035-133857BC09BA@beachcamera.com>
References: <79CE4EEC-8EBF-413E-B035-133857BC09BA@beachcamera.com>
Message-ID:

> From what you're saying, the file method still uses resident memory for
> each object, so with my configuration (20G malloc, 75G file) I would
> need 95G of RAM if all storage is used? (without swap being used)

Not RAM, virtual memory. But yes, this is the idea.

> If this is the case, I'm curious what the use cases are for file vs
> only malloc.

Basically, malloc storage relies on the underlying allocator (preferably
jemalloc) to acquire memory, whereas file storage has its own allocator
looking for space inside the designated "area" (the 75GB file that was
mapped in memory).

If you over-commit memory, it may be subject to swap for malloc, and to
the OS page cache/disk buffer for file. Depending on your workloads or
the shape and distribution of your responses, one might perform better
than the other. You can also have more than one (like you did) for
various reasons.
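The mmap-backed behavior described above — a file fully allocated on disk,
mapped into virtual memory, with only the touched pages resident — can be
sketched like this (an illustrative stand-in, not Varnish code; the size
here is a tiny 1 MiB instead of 75G):

```python
import mmap
import os
import tempfile

def file_storage_demo(size=1 << 20):
    """Sketch of mmap-backed storage: file fully allocated, pages resident on demand."""
    fd, path = tempfile.mkstemp()
    try:
        # The backing file is fully allocated on disk up front,
        # like Varnish's varnish_storage.bin.
        os.ftruncate(fd, size)

        # Mapping it adds `size` bytes to the process's *virtual* memory
        # (what top shows as VIRT); resident memory (RES) only grows as
        # pages are actually touched, and the kernel pages the "active"
        # parts in and out on demand.
        storage = mmap.mmap(fd, size)
        storage[:6] = b"object"  # touching one page makes just that page resident
        data = bytes(storage[:6])
        storage.close()
        return data
    finally:
        os.close(fd)
        os.unlink(path)

print(file_storage_demo())
```

This is why top's VIRT can cover the whole 75G mapping while RES stays
close to the working set that varnishstat accounts for.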
And finally, you may get more memory from the Transient storage (you
didn't paste the stats), which is used for uncacheable or short-lived
objects.

Dridi

From charles at beachcamera.com Mon Jul 10 15:22:10 2017
From: charles at beachcamera.com (Bender, Charles)
Date: Mon, 10 Jul 2017 15:22:10 +0000
Subject: high memory usage with malloc and file backends configured
In-Reply-To:
References: <79CE4EEC-8EBF-413E-B035-133857BC09BA@beachcamera.com>
Message-ID: <9DAA1E85-FA75-4601-86C9-59355CD9C5F1@beachcamera.com>

Thanks so much Guillaume for clearly explaining how the file storage
works. I now have a good understanding of how both storage engines work,
so I will plan accordingly.

Thanks again for your help and for responding so quickly.

Charles

On Jul 10, 2017, at 2:54 PM, Guillaume Quintard wrote:
> You misunderstood me, but that's probably my fault :-)
>
> The file used by the file storage will be fully allocated, i.e. you'll
> have a 75G file on your disk. Then varnish will mmap it to memory,
> which means the kernel will give Varnish a memory space corresponding
> to the file content.
> [...]

From kevin.lemonnier at cognix-systems.com Tue Jul 11 12:57:59 2017
From: kevin.lemonnier at cognix-systems.com (Kevin Lemonnier)
Date: Tue, 11 Jul 2017 13:57:59 +0100
Subject: Weird 5 seconds delay
Message-ID: <5964CB57.4070400@cognix-systems.com>

Hi,

I've posted on serverfault and on IRC, but since this is a bit (or very)
urgent, I'll try it here too.

I have a strange problem with varnish: it's in front of an API and it's
caching the whole responses. It mostly works fine, but from time to time
a request will take 5 seconds (or rarely 10 seconds, or 15 seconds...
always an increment of 5) more than usual to return.
I've tried bypassing the HAProxy in front, same, and I checked: it does
that whether the URL is already cached or not (I've checked the Age
header). So it can't be the backend, since the page is in cache, and it's
not what's in front of varnish; that leaves only varnish itself as the
cause of the problem.

Any idea as to what could cause that 5 second delay? I've checked
varnishlog, and during that delay varnish isn't doing anything. I've also
tried manually making another request during that delay, and varnish
answered fine, so it's not frozen or anything; it works fine. And at the
end of those 5 seconds, it outputs the log for the request as usual,
nothing weird in it. Example:

*   << Request  >> 132712
*   Begin          req 132711 rxreq
*   Timestamp      Start: 1499701302.309413 0.000000 0.000000
*   Timestamp      Req: 1499701302.309413 0.000000 0.000000
*   ReqStart       127.0.0.1 43955
*   ReqMethod      GET
*   ReqURL         /url
*   ReqProtocol    HTTP/1.1
*   ReqHeader      User-Agent: curl/7.38.0
*   ReqHeader      Host: host
*   ReqHeader      Accept: */*
*   ReqHeader      X-Forwarded-Proto: https
*   ReqHeader      X-Forwarded-For: ip
*   ReqHeader      Connection: close
*   ReqUnset       X-Forwarded-For: ip
*   ReqHeader      X-Forwarded-For: ip, 127.0.0.1
*   VCL_call       RECV
*   ReqUnset       X-Forwarded-For: ip, 127.0.0.1
*   ReqHeader      X-Forwarded-For: ip, 127.0.0.1, 127.0.0.1
*   VCL_return     hash
*   VCL_call       HASH
*   VCL_return     lookup
*   Hit            2147582482
*   VCL_call       HIT
*   VCL_return     deliver
*   RespProtocol   HTTP/1.1
*   RespStatus     200
*   RespReason     OK
*   RespHeader     Date: Mon, 10 Jul 2017 15:10:00 GMT
*   RespHeader     Server: gunicorn/19.7.1
*   RespHeader     content-type: application/json; charset=UTF-8
*   RespHeader     X-Varnish: 132712 98834
*   RespHeader     Age: 1902
*   RespHeader     Via: 1.1 varnish-v4
*   VCL_call       DELIVER
*   RespHeader     X-Cacheable: YES
*   RespUnset      Server: gunicorn/19.7.1
*   RespUnset      Via: 1.1 varnish-v4
*   RespUnset      X-Varnish: 132712 98834
*   VCL_return     deliver
*   Timestamp      Process: 1499701302.309480 0.000067 0.000067
*   RespHeader     Content-Length: 251799
*   Debug          "RES_MODE 2"
*   RespHeader     Connection: close
*   RespHeader     Accept-Ranges: bytes
*   Timestamp      Resp: 1499701302.309571 0.000159 0.000092
*   Debug          "XXX REF 2"
*   ReqAcct        198 0 198 197 251799 251996
*   End

I realize varnish believes the request was handled quickly, but on curl's
side it took 5 seconds. Curl is used directly on the varnish server, so
it's not network latency.

It's a bit hard to reproduce; I'm using a script that does queries in a
loop and shows the curl time_total until I finally get it to happen.

Could it be something on the Linux side? Maybe some kind of limit, or a
socket cleanup job, or something that would pause the request. It happens
maybe once every 400 or 500 requests, sometimes more, sometimes less.

Attached is the varnishstat -1 output asked for on the mailing list page.

--
Kevin

-------------- next part --------------
MAIN.uptime 14218 1.00 Child process uptime MAIN.sess_conn 8750 0.62 Sessions accepted MAIN.sess_drop 0 0.00 Sessions dropped MAIN.sess_fail 0 0.00 Session accept failures MAIN.sess_pipe_overflow 0 0.00 Session pipe overflow MAIN.client_req_400 0 0.00 Client requests received, subject to 400 errors MAIN.client_req_411 0 0.00 Client requests received, subject to 411 errors MAIN.client_req_413 0 0.00 Client requests received, subject to 413 errors MAIN.client_req_417 0 0.00 Client requests received, subject to 417 errors MAIN.client_req 9629 0.68 Good Client requests received MAIN.cache_hit 5218 0.37 Cache hits MAIN.cache_hitpass 0 0.00 Cache hits for pass MAIN.cache_miss 4142 0.29 Cache misses MAIN.backend_conn 34 0.00 Backend conn. success MAIN.backend_unhealthy 0 0.00 Backend conn. not attempted MAIN.backend_busy 0 0.00 Backend conn. too many MAIN.backend_fail 0 0.00 Backend conn. failures MAIN.backend_reuse 4132 0.29 Backend conn. reuses MAIN.backend_toolate 23 0.00 Backend conn. was closed MAIN.backend_recycle 4163 0.29 Backend conn. recycles MAIN.backend_retry 0 0.00 Backend conn.
retry MAIN.fetch_head 0 0.00 Fetch no body (HEAD) MAIN.fetch_length 2543 0.18 Fetch with Length MAIN.fetch_chunked 1622 0.11 Fetch chunked MAIN.fetch_eof 0 0.00 Fetch EOF MAIN.fetch_bad 0 0.00 Fetch bad T-E MAIN.fetch_close 0 0.00 Fetch wanted close MAIN.fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed MAIN.fetch_zero 0 0.00 Fetch zero len body MAIN.fetch_1xx 0 0.00 Fetch no body (1xx) MAIN.fetch_204 0 0.00 Fetch no body (204) MAIN.fetch_304 0 0.00 Fetch no body (304) MAIN.fetch_failed 0 0.00 Fetch body failed MAIN.pools 2 . Number of thread pools MAIN.threads 200 . Total number of threads MAIN.threads_limited 0 0.00 Threads hit max MAIN.threads_created 200 0.01 Threads created MAIN.threads_destroyed 0 0.00 Threads destroyed MAIN.threads_failed 0 0.00 Thread creation failed MAIN.thread_queue_len 0 . Length of session queue MAIN.busy_sleep 4 0.00 Number of requests sent to sleep on busy objhdr MAIN.busy_wakeup 4 0.00 Number of requests woken after sleep on busy objhdr MAIN.sess_queued 0 0.00 Sessions queued for thread MAIN.sess_dropped 0 0.00 Sessions dropped for thread MAIN.n_object 3477 . N struct object MAIN.n_vampireobject 0 . N unresurrected objects MAIN.n_objectcore 3485 . N struct objectcore MAIN.n_objecthead 3486 . N struct objecthead MAIN.n_waitinglist 41 . N struct waitinglist MAIN.n_backend 1 . N backends MAIN.n_expired 201 . N expired objects MAIN.n_lru_nuked 0 . N LRU nuked objects MAIN.n_lru_moved 4779 . 
N LRU moved objects MAIN.losthdr 0 0.00 HTTP header overflows MAIN.s_sess 8750 0.62 Total Sessions MAIN.s_req 9629 0.68 Total Requests MAIN.s_pipe 0 0.00 Total pipe MAIN.s_pass 23 0.00 Total pass MAIN.s_fetch 4165 0.29 Total fetch MAIN.s_synth 246 0.02 Total synth MAIN.s_req_hdrbytes 4837934 340.27 Request header bytes MAIN.s_req_bodybytes 0 0.00 Request body bytes MAIN.s_resp_hdrbytes 2206256 155.17 Response header bytes MAIN.s_resp_bodybytes 303132867 21320.36 Reponse body bytes MAIN.s_pipe_hdrbytes 0 0.00 Pipe request header bytes MAIN.s_pipe_in 0 0.00 Piped bytes from client MAIN.s_pipe_out 0 0.00 Piped bytes to client MAIN.sess_closed 26 0.00 Session Closed MAIN.sess_pipeline 0 0.00 Session Pipeline MAIN.sess_readahead 0 0.00 Session Read Ahead MAIN.sess_herd 1864 0.13 Session herd MAIN.shm_records 761102 53.53 SHM records MAIN.shm_writes 77346 5.44 SHM writes MAIN.shm_flushes 0 0.00 SHM flushes due to overflow MAIN.shm_cont 107 0.01 SHM MTX contention MAIN.shm_cycles 0 0.00 SHM cycles through buffer MAIN.sms_nreq 0 0.00 SMS allocator requests MAIN.sms_nobj 0 . SMS outstanding allocations MAIN.sms_nbytes 0 . SMS outstanding bytes MAIN.sms_balloc 0 . SMS bytes allocated MAIN.sms_bfree 0 . SMS bytes freed MAIN.backend_req 4166 0.29 Backend requests made MAIN.n_vcl 1 0.00 N vcl total MAIN.n_vcl_avail 1 0.00 N vcl available MAIN.n_vcl_discard 0 0.00 N vcl discarded MAIN.bans 247 . Count of bans MAIN.bans_completed 242 . Number of bans marked 'completed' MAIN.bans_obj 0 . Number of bans using obj.* MAIN.bans_req 246 . 
Number of bans using req.* MAIN.bans_added 247 0.02 Bans added MAIN.bans_deleted 0 0.00 Bans deleted MAIN.bans_tested 3602 0.25 Bans tested against objects (lookup) MAIN.bans_obj_killed 464 0.03 Objects killed by bans (lookup) MAIN.bans_lurker_tested 0 0.00 Bans tested against objects (lurker) MAIN.bans_tests_tested 5878 0.41 Ban tests tested against objects (lookup) MAIN.bans_lurker_tests_tested 0 0.00 Ban tests tested against objects (lurker) MAIN.bans_lurker_obj_killed 0 0.00 Objects killed by bans (lurker) MAIN.bans_dups 241 0.02 Bans superseded by other bans MAIN.bans_lurker_contention 0 0.00 Lurker gave way for lookup MAIN.bans_persisted_bytes 40202 . Bytes used by the persisted ban lists MAIN.bans_persisted_fragmentation 36141 . Extra bytes in persisted ban lists due to fragmentation MAIN.n_purges 0 . Number of purge operations MAIN.n_obj_purged 0 . number of purged objects MAIN.exp_mailed 4606 0.32 Number of objects mailed to expiry thread MAIN.exp_received 4606 0.32 Number of objects received by expiry thread MAIN.hcb_nolock 9360 0.66 HCB Lookups without lock MAIN.hcb_lock 3679 0.26 HCB Lookups with lock MAIN.hcb_insert 3679 0.26 HCB Inserts MAIN.esi_errors 0 0.00 ESI parse errors (unlock) MAIN.esi_warnings 0 0.00 ESI parse warnings (unlock) MAIN.vmods 0 . Loaded VMODs MAIN.n_gzip 1306 0.09 Gzip operations MAIN.n_gunzip 4569 0.32 Gunzip operations MAIN.vsm_free 972528 . Free VSM space MAIN.vsm_used 83962080 . Used VSM space MAIN.vsm_cooling 0 . Cooling VSM space MAIN.vsm_overflow 0 . Overflow VSM space MAIN.vsm_overflowed 0 0.00 Overflowed VSM space MGT.uptime 14219 1.00 Management process uptime MGT.child_start 1 0.00 Child process started MGT.child_exit 0 0.00 Child process normal exit MGT.child_stop 0 0.00 Child process unexpected exit MGT.child_died 0 0.00 Child process died (signal) MGT.child_dump 0 0.00 Child process core dumped MGT.child_panic 0 0.00 Child process panic MEMPOOL.vbc.live 9 . In use MEMPOOL.vbc.pool 10 . 
In Pool MEMPOOL.vbc.sz_wanted 88 . Size requested MEMPOOL.vbc.sz_needed 120 . Size allocated MEMPOOL.vbc.allocs 34 0.00 Allocations MEMPOOL.vbc.frees 25 0.00 Frees MEMPOOL.vbc.recycle 34 0.00 Recycled from pool MEMPOOL.vbc.timeout 24 0.00 Timed out from pool MEMPOOL.vbc.toosmall 0 0.00 Too small to recycle MEMPOOL.vbc.surplus 0 0.00 Too many for pool MEMPOOL.vbc.randry 0 0.00 Pool ran dry MEMPOOL.busyobj.live 1 . In use MEMPOOL.busyobj.pool 10 . In Pool MEMPOOL.busyobj.sz_wanted 65536 . Size requested MEMPOOL.busyobj.sz_needed 65568 . Size allocated MEMPOOL.busyobj.allocs 4166 0.29 Allocations MEMPOOL.busyobj.frees 4165 0.29 Frees MEMPOOL.busyobj.recycle 4166 0.29 Recycled from pool MEMPOOL.busyobj.timeout 361 0.03 Timed out from pool MEMPOOL.busyobj.toosmall 0 0.00 Too small to recycle MEMPOOL.busyobj.surplus 0 0.00 Too many for pool MEMPOOL.busyobj.randry 0 0.00 Pool ran dry MEMPOOL.req0.live 0 . In use MEMPOOL.req0.pool 10 . In Pool MEMPOOL.req0.sz_wanted 65536 . Size requested MEMPOOL.req0.sz_needed 65568 . Size allocated MEMPOOL.req0.allocs 4800 0.34 Allocations MEMPOOL.req0.frees 4800 0.34 Frees MEMPOOL.req0.recycle 4800 0.34 Recycled from pool MEMPOOL.req0.timeout 380 0.03 Timed out from pool MEMPOOL.req0.toosmall 0 0.00 Too small to recycle MEMPOOL.req0.surplus 0 0.00 Too many for pool MEMPOOL.req0.randry 0 0.00 Pool ran dry MEMPOOL.sess0.live 0 . In use MEMPOOL.sess0.pool 10 . In Pool MEMPOOL.sess0.sz_wanted 384 . Size requested MEMPOOL.sess0.sz_needed 416 . Size allocated MEMPOOL.sess0.allocs 4354 0.31 Allocations MEMPOOL.sess0.frees 4354 0.31 Frees MEMPOOL.sess0.recycle 4354 0.31 Recycled from pool MEMPOOL.sess0.timeout 703 0.05 Timed out from pool MEMPOOL.sess0.toosmall 0 0.00 Too small to recycle MEMPOOL.sess0.surplus 0 0.00 Too many for pool MEMPOOL.sess0.randry 0 0.00 Pool ran dry MEMPOOL.req1.live 1 . In use MEMPOOL.req1.pool 10 . In Pool MEMPOOL.req1.sz_wanted 65536 . Size requested MEMPOOL.req1.sz_needed 65568 . 
Size allocated
MEMPOOL.req1.allocs  4832  0.34  Allocations
MEMPOOL.req1.frees  4831  0.34  Frees
MEMPOOL.req1.recycle  4832  0.34  Recycled from pool
MEMPOOL.req1.timeout  399  0.03  Timed out from pool
MEMPOOL.req1.toosmall  0  0.00  Too small to recycle
MEMPOOL.req1.surplus  0  0.00  Too many for pool
MEMPOOL.req1.randry  0  0.00  Pool ran dry
MEMPOOL.sess1.live  3  .  In use
MEMPOOL.sess1.pool  10  .  In Pool
MEMPOOL.sess1.sz_wanted  384  .  Size requested
MEMPOOL.sess1.sz_needed  416  .  Size allocated
MEMPOOL.sess1.allocs  4397  0.31  Allocations
MEMPOOL.sess1.frees  4394  0.31  Frees
MEMPOOL.sess1.recycle  4397  0.31  Recycled from pool
MEMPOOL.sess1.timeout  735  0.05  Timed out from pool
MEMPOOL.sess1.toosmall  0  0.00  Too small to recycle
MEMPOOL.sess1.surplus  0  0.00  Too many for pool
MEMPOOL.sess1.randry  0  0.00  Pool ran dry
SMA.s0.c_req  8291  0.58  Allocator requests
SMA.s0.c_fail  0  0.00  Allocator failures
SMA.s0.c_bytes  343450365  24156.03  Bytes allocated
SMA.s0.c_freed  22911069  1611.41  Bytes freed
SMA.s0.g_alloc  6960  .  Allocations outstanding
SMA.s0.g_bytes  320539296  .  Bytes outstanding
SMA.s0.g_space  1252324704  .  Bytes available
SMA.Transient.c_req  47  0.00  Allocator requests
SMA.Transient.c_fail  0  0.00  Allocator failures
SMA.Transient.c_bytes  1503638  105.76  Bytes allocated
SMA.Transient.c_freed  1503638  105.76  Bytes freed
SMA.Transient.g_alloc  0  .  Allocations outstanding
SMA.Transient.g_bytes  0  .  Bytes outstanding
SMA.Transient.g_space  0  .  Bytes available
VBE.vs1_weather_8080(127.0.0.1,::1,8080).vcls  1  .  VCL references
VBE.vs1_weather_8080(127.0.0.1,::1,8080).happy  0  .  Happy health probes
VBE.vs1_weather_8080(127.0.0.1,::1,8080).bereq_hdrbytes  2070935  145.66  Request header bytes
VBE.vs1_weather_8080(127.0.0.1,::1,8080).bereq_bodybytes  0  0.00  Request body bytes
VBE.vs1_weather_8080(127.0.0.1,::1,8080).beresp_hdrbytes  766225  53.89  Response header bytes
VBE.vs1_weather_8080(127.0.0.1,::1,8080).beresp_bodybytes  132439724  9314.93  Response body bytes
VBE.vs1_weather_8080(127.0.0.1,::1,8080).pipe_hdrbytes  0  0.00  Pipe request header bytes
VBE.vs1_weather_8080(127.0.0.1,::1,8080).pipe_out  0  0.00  Piped bytes to backend
VBE.vs1_weather_8080(127.0.0.1,::1,8080).pipe_in  0  0.00  Piped bytes from backend
LCK.sms.creat  0  0.00  Created locks
LCK.sms.destroy  0  0.00  Destroyed locks
LCK.sms.locks  0  0.00  Lock Operations
LCK.smp.creat  0  0.00  Created locks
LCK.smp.destroy  0  0.00  Destroyed locks
LCK.smp.locks  0  0.00  Lock Operations
LCK.sma.creat  2  0.00  Created locks
LCK.sma.destroy  0  0.00  Destroyed locks
LCK.sma.locks  9716  0.68  Lock Operations
LCK.smf.creat  0  0.00  Created locks
LCK.smf.destroy  0  0.00  Destroyed locks
LCK.smf.locks  0  0.00  Lock Operations
LCK.hsl.creat  0  0.00  Created locks
LCK.hsl.destroy  0  0.00  Destroyed locks
LCK.hsl.locks  0  0.00  Lock Operations
LCK.hcb.creat  1  0.00  Created locks
LCK.hcb.destroy  0  0.00  Destroyed locks
LCK.hcb.locks  3959  0.28  Lock Operations
LCK.hcl.creat  0  0.00  Created locks
LCK.hcl.destroy  0  0.00  Destroyed locks
LCK.hcl.locks  0  0.00  Lock Operations
LCK.vcl.creat  1  0.00  Created locks
LCK.vcl.destroy  0  0.00  Destroyed locks
LCK.vcl.locks  9128  0.64  Lock Operations
LCK.sessmem.creat  0  0.00  Created locks
LCK.sessmem.destroy  0  0.00  Destroyed locks
LCK.sessmem.locks  0  0.00  Lock Operations
LCK.sess.creat  8751  0.62  Created locks
LCK.sess.destroy  8748  0.62  Destroyed locks
LCK.sess.locks  0  0.00  Lock Operations
LCK.wstat.creat  1  0.00  Created locks
LCK.wstat.destroy  0  0.00  Destroyed locks
LCK.wstat.locks  18670  1.31  Lock Operations
LCK.herder.creat  0  0.00  Created locks
LCK.herder.destroy  0  0.00  Destroyed locks
LCK.herder.locks  0  0.00  Lock Operations
LCK.wq.creat  3  0.00  Created locks
LCK.wq.destroy  0  0.00  Destroyed locks
LCK.wq.locks  47476  3.34  Lock Operations
LCK.objhdr.creat  3688  0.26  Created locks
LCK.objhdr.destroy  201  0.01  Destroyed locks
LCK.objhdr.locks  76467  5.38  Lock Operations
LCK.exp.creat  1  0.00  Created locks
LCK.exp.destroy  0  0.00  Destroyed locks
LCK.exp.locks  18794  1.32  Lock Operations
LCK.lru.creat  2  0.00  Created locks
LCK.lru.destroy  0  0.00  Destroyed locks
LCK.lru.locks  14328  1.01  Lock Operations
LCK.cli.creat  1  0.00  Created locks
LCK.cli.destroy  0  0.00  Destroyed locks
LCK.cli.locks  4752  0.33  Lock Operations
LCK.ban.creat  1  0.00  Created locks
LCK.ban.destroy  0  0.00  Destroyed locks
LCK.ban.locks  58726  4.13  Lock Operations
LCK.vbp.creat  1  0.00  Created locks
LCK.vbp.destroy  0  0.00  Destroyed locks
LCK.vbp.locks  0  0.00  Lock Operations
LCK.backend.creat  1  0.00  Created locks
LCK.backend.destroy  0  0.00  Destroyed locks
LCK.backend.locks  8411  0.59  Lock Operations
LCK.vcapace.creat  1  0.00  Created locks
LCK.vcapace.destroy  0  0.00  Destroyed locks
LCK.vcapace.locks  0  0.00  Lock Operations
LCK.nbusyobj.creat  0  0.00  Created locks
LCK.nbusyobj.destroy  0  0.00  Destroyed locks
LCK.nbusyobj.locks  0  0.00  Lock Operations
LCK.busyobj.creat  4166  0.29  Created locks
LCK.busyobj.destroy  4165  0.29  Destroyed locks
LCK.busyobj.locks  36357  2.56  Lock Operations
LCK.mempool.creat  6  0.00  Created locks
LCK.mempool.destroy  0  0.00  Destroyed locks
LCK.mempool.locks  123234  8.67  Lock Operations
LCK.vxid.creat  1  0.00  Created locks
LCK.vxid.destroy  0  0.00  Destroyed locks
LCK.vxid.locks  41  0.00  Lock Operations
LCK.pipestat.creat  1  0.00  Created locks
LCK.pipestat.destroy  0  0.00  Destroyed locks
LCK.pipestat.locks  0  0.00  Lock Operations
From kevin.lemonnier at cognix-systems.com Tue Jul 11 13:26:30 2017 From: kevin.lemonnier at cognix-systems.com (Kevin Lemonnier) Date: Tue, 11 Jul 2017 14:26:30 +0100 Subject: Weird 5 seconds delay In-Reply-To: <5964CB57.4070400@cognix-systems.com> References:
<5964CB57.4070400@cognix-systems.com> Message-ID: <5964D206.8060304@cognix-systems.com> And of course, I just figured it out after posting. Sorry for the noise ! The problem was DNS resolution, curl does a new DNS query each time and it looks like the resolver for this server takes 5 seconds to answer, sometimes. Nothing to do with varnish, my bad On 07/11/2017 01:57 PM, Kevin Lemonnier wrote: > Hi, > > I've posted on serverfault and on IRC, but since this is a bit (or very) > urgent, I'll try it here too. > > I have a strange problem with varnish, it's in front of an API and it's > caching the whole responses. It mostly works fine, but from time to time > a request will take 5 seconds (or rarely 10 seconds, or 15 seconds .. > always an increment of 5) more than usual to return. > > I've tried bypassing the HAProxy in front, same, and I checked, it does > that whether the URL is already cached or not (I've checked the Age > header). So it can't be the backend since the page is in cache, it's not > what's in front of varnish, that leaves only varnish itself as the cause > of that problem. > > Any idea as to what could cause that 5 seconds delay ? I've checked > varnishlog, during that delay varnish isn't doing anything. I've also > tried manually making another request during that delay, and varnish > answered fine so it's not frozen or anything, it works fine. And at the > end of that 5 seconds, it's outputting the log for the request as usual, > nothing weird in it. 
Example : > > * << Request >> 132712 > * Begin req 132711 rxreq > * Timestamp Start: 1499701302.309413 0.000000 0.000000 > * Timestamp Req: 1499701302.309413 0.000000 0.000000 > * ReqStart 127.0.0.1 43955 > * ReqMethod GET > * ReqURL /url > * ReqProtocol HTTP/1.1 > * ReqHeader User-Agent: curl/7.38.0 > * ReqHeader Host: host > * ReqHeader Accept: /// > * ReqHeader X-Forwarded-Proto: https > * ReqHeader X-Forwarded-For: ip > * ReqHeader Connection: close > * ReqUnset X-Forwarded-For: ip > * ReqHeader X-Forwarded-For: ip, 127.0.0.1 > * VCL_call RECV > * ReqUnset X-Forwarded-For: ip, 127.0.0.1 > * ReqHeader X-Forwarded-For: ip, 127.0.0.1, 127.0.0.1 > * VCL_return hash > * VCL_call HASH > * VCL_return lookup > * Hit 2147582482 > * VCL_call HIT > * VCL_return deliver > * RespProtocol HTTP/1.1 > * RespStatus 200 > * RespReason OK > * RespHeader Date: Mon, 10 Jul 2017 15:10:00 GMT > * RespHeader Server: gunicorn/19.7.1 > * RespHeader content-type: application/json; charset=UTF-8 > * RespHeader X-Varnish: 132712 98834 > * RespHeader Age: 1902 > * RespHeader Via: 1.1 varnish-v4 > * VCL_call DELIVER > * RespHeader X-Cacheable: YES > * RespUnset Server: gunicorn/19.7.1 > * RespUnset Via: 1.1 varnish-v4 > * RespUnset X-Varnish: 132712 98834 > * VCL_return deliver > * Timestamp Process: 1499701302.309480 0.000067 0.000067 > * RespHeader Content-Length: 251799 > * Debug "RES_MODE 2" > * RespHeader Connection: close > * RespHeader Accept-Ranges: bytes > * Timestamp Resp: 1499701302.309571 0.000159 0.000092 > * Debug "XXX REF 2" > * ReqAcct 198 0 198 197 251799 251996 > * End > > I realize varnish believes that was treated quickly, but on curl's side > it took 5 seconds. Curl is used directly on the varnish server, so it's > not network latency. It's a bit hard to reproduce, I'm using a script > that does queries in a loop and shows the curl time_total to finally get > it to happen. > > Could it be something Linux side ? 
Maybe some kind of limit, or a socket > cleanup job or something that would pause the request. It happens maybe > once every 400 or 500 requests, sometimes more, sometimes less. > > Attached is the varnishstat -1 asked on the mailing list page. > > -- > Kevin > > > > > > > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -- Cordialement, Kevin LEMONNIER Administrateur Systèmes, Cognix Systems *Rennes* | Brest | Saint-Malo | Paris kevin.lemonnier at cognix-systems.com Tél. : 02 99 27 75 92 Facebook Cognix Systems Twitter Cognix Systems Logo Cognix Systems -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: hchidihi.jpe Type: image/jpeg Size: 4935 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ihjiiedh.png Type: image/png Size: 1444 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cfeegaaf.png Type: image/png Size: 1623 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: agdahgjc.png Type: image/png Size: 1474 bytes Desc: not available URL: From guillaume at varnish-software.com Tue Jul 11 13:38:48 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Tue, 11 Jul 2017 15:38:48 +0200 Subject: Weird 5 seconds delay In-Reply-To: <5964CB57.4070400@cognix-systems.com> References: <5964CB57.4070400@cognix-systems.com> Message-ID: Looking at the last Timestamp line, Varnish pushed that to the kernel very quickly. What kind of equipment do you have between varnish and curl? To me it sounds like you get a miss once in a while and that's what's causing the delay.
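As Kevin found earlier in the thread, delays in 5-second increments are the classic DNS resolver retry interval. One quick way to separate lookup time from transfer time on the curl side is curl's `-w` timing variables; the snippet below is a small, hypothetical helper (the timings are sample data, not figures from this thread) that counts slow lookups:

```shell
# Collect per-request timings with curl, for example:
#   curl -s -o /dev/null \
#        -w '%{time_namelookup} %{time_total}\n' http://example.com/
# Then count samples whose name lookup took 5 seconds or more --
# the classic resolver retry interval. Sample data stands in for
# real curl output here:
printf '0.012 0.034\n5.008 5.031\n0.011 0.030\n' |
awk '$1 >= 5.0 { n++ } END { printf "slow lookups: %d\n", n }'
# -> slow lookups: 1
```

If the name-lookup column accounts for the whole delay, the problem is the resolver, not Varnish.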
To debug, I remove the "unset resp.http.x-varnish" you surely have in vcl_deliver. Then test again and find in the logs the exact request with the same x-varnish header. If varnish is still outputting the data super fast, try wireshark, and maybe look at open sockets. -- Guillaume Quintard On Tue, Jul 11, 2017 at 2:57 PM, Kevin Lemonnier < kevin.lemonnier at cognix-systems.com> wrote: > Hi, > > I've posted on serverfault and on IRC, but since this is a bit (or very) > urgent, I'll try it here too. > > I have a strange problem with varnish, it's in front of an API and it's > caching the whole responses. It mostly works fine, but from time to time > a request will take 5 seconds (or rarely 10 seconds, or 15 seconds .. > always an increment of 5) more than usual to return. > > I've tried bypassing the HAProxy in front, same, and I checked, it does > that whether the URL is already cached or not (I've checked the Age > header). So it can't be the backend since the page is in cache, it's not > what's in front of varnish, that leaves only varnish itself as the cause > of that problem. > > Any idea as to what could cause that 5 seconds delay ? I've checked > varnishlog, during that delay varnish isn't doing anything. I've also > tried manually making another request during that delay, and varnish > answered fine so it's not frozen or anything, it works fine. And at the > end of that 5 seconds, it's outputting the log for the request as usual, > nothing weird in it. 
Example : > > * << Request >> 132712 > * Begin req 132711 rxreq > * Timestamp Start: 1499701302.309413 0.000000 0.000000 > * Timestamp Req: 1499701302.309413 0.000000 0.000000 > * ReqStart 127.0.0.1 43955 > * ReqMethod GET > * ReqURL /url > * ReqProtocol HTTP/1.1 > * ReqHeader User-Agent: curl/7.38.0 > * ReqHeader Host: host > * ReqHeader Accept: /// > * ReqHeader X-Forwarded-Proto: https > * ReqHeader X-Forwarded-For: ip > * ReqHeader Connection: close > * ReqUnset X-Forwarded-For: ip > * ReqHeader X-Forwarded-For: ip, 127.0.0.1 > * VCL_call RECV > * ReqUnset X-Forwarded-For: ip, 127.0.0.1 > * ReqHeader X-Forwarded-For: ip, 127.0.0.1, 127.0.0.1 > * VCL_return hash > * VCL_call HASH > * VCL_return lookup > * Hit 2147582482 > * VCL_call HIT > * VCL_return deliver > * RespProtocol HTTP/1.1 > * RespStatus 200 > * RespReason OK > * RespHeader Date: Mon, 10 Jul 2017 15:10:00 GMT > * RespHeader Server: gunicorn/19.7.1 > * RespHeader content-type: application/json; charset=UTF-8 > * RespHeader X-Varnish: 132712 98834 > * RespHeader Age: 1902 > * RespHeader Via: 1.1 varnish-v4 > * VCL_call DELIVER > * RespHeader X-Cacheable: YES > * RespUnset Server: gunicorn/19.7.1 > * RespUnset Via: 1.1 varnish-v4 > * RespUnset X-Varnish: 132712 98834 > * VCL_return deliver > * Timestamp Process: 1499701302.309480 0.000067 0.000067 > * RespHeader Content-Length: 251799 > * Debug "RES_MODE 2" > * RespHeader Connection: close > * RespHeader Accept-Ranges: bytes > * Timestamp Resp: 1499701302.309571 0.000159 0.000092 > * Debug "XXX REF 2" > * ReqAcct 198 0 198 197 251799 251996 > * End > > I realize varnish believes that was treated quickly, but on curl's side > it took 5 seconds. Curl is used directly on the varnish server, so it's > not network latency. It's a bit hard to reproduce, I'm using a script > that does queries in a loop and shows the curl time_total to finally get > it to happen. > > Could it be something Linux side ? 
Maybe some kind of limit, or a socket > cleanup job or something that would pause the request. It happens maybe > once every 400 or 500 requests, sometimes more, sometimes less. > > Attached is the varnishstat -1 asked on the mailing list page. > > -- > Kevin > > > > > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at section.io Wed Jul 12 07:14:13 2017 From: matt at section.io (Matthew Johnson) Date: Wed, 12 Jul 2017 17:14:13 +1000 Subject: Value of builtin.vcl - vcl_recv on modern websites Message-ID: Hi There, This may be a biblical / philosophical question, but when teaching people Varnish I've been struggling to justify leveraging the builtin.vcl - mainly vcl_recv in builtin.vcl. Since Varnish 1.0 the builtin (or default) vcl_recv has had this statement: if (req.http.Authorization || req.http.Cookie) { /* Not cacheable by default */ return (pass); } My issue with this is the req.http.Cookie check: in any modern website, cookies are always present. This means that out of the box Varnish doesn't cache any content, which may be an OK, safe starting point, but without the cookie check the default implementation of most webservers/applications would cache static content and not cache dynamic content (HTML) - via cache-control / set-cookie response headers etc. - a good outcome. Aside from the above poor caching starting point (which could be OK from a low-risk kickoff perspective), the biggest issue with the cookie check is that it forces the user into a cookie management strategy. The most common scenario we hit is that users try to "do the right thing" and remove individual cookies so that they can fall through to the underlying vcl_recv cookie check.
This ends in disaster when marketing departments, other parts of the IT department, or anyone involved in the website (such as SEO agencies) adds an additional cookie to the site. A classic example of this is someone adding javascript via Google Tag Manager which then sets a cookie. The outcome of the above scenario is that suddenly the cache stops doing anything, because there is a new cookie that is not "managed" and hence all requests, both static and dynamic, "pass" via the underlying builtin.vcl logic. Do you still recommend configurations fall through to the underlying vcl_recv logic? Options I can think of: 1) Build lots of cookie whitelist/blacklist functionality above builtin.vcl so the underlying logic doesn't break things 2) Remove cookies entirely for some request types (such as static objects) so the underlying logic always works for some content types - My experience is that this generates issues on some customers' sites, as they have static handlers that are looking for a cookie in the origin and then do redirects/change response content if no cookies are present. 3) Explicitly handle all scenarios and return(pass) or return(hash) to avoid vcl_recv in builtin.vcl (and lift up the good bits of vcl_recv into the main config) Interested in your views. As I work on internet-facing websites, I would have thought this was the most common scenario, but maybe there are more users doing other things with Varnish or I'm missing something simple in terms of handling cookies? Matt -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dridi at varni.sh Wed Jul 12 09:01:03 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Wed, 12 Jul 2017 11:01:03 +0200 Subject: Value of builtin.vcl - vcl_recv on modern websites In-Reply-To: References: Message-ID: On Wed, Jul 12, 2017 at 9:14 AM, Matthew Johnson wrote: > Hi There, > > This may be a biblical / philosophical question but when teaching people > Varnish ive been struggling to justify leveraging the builtin.vcl - Mainly > vcl_recv in builtin.vcl > > Since Varnish 1.0 the builtin (or default) vcl_recv has had this statement: > if (req.http.Authorization || req.http.Cookie) { > > /* Not cacheable by default */ > > return (pass); > > } I can see a couple reasons: - HTTP (in general) is poorly designed - Cookie don't integrate well with other HTTP mechanisms - Web applications are often poorly implemented Not trying to be offensive, just factual. I could pinpoint the problems in HTTP, but I already have a blog post [1] and an unfinished draft for that. Not in the mood to duplicate that effort here, I don't really enjoy digging in that area. > My issue with this is the req.http.Cookie check, In any modern website > cookies are always present. > > This means that out of the box Varnish doesnt cache any content, which may > be an ok safe starting point but without the cookie check the default > implementation of most webservers/applications would cache static content > and not cache on dynamic content (HTML) - via cache-control / set-cookie > response headers etc - A good outcome. The problem is that we've often seen inconsistent cache directives, untrustworthy backends. You could very well cache dynamic contents, there's no point in using Varnish if you only cache static resources. The problem is when a backend sends user-specific contents and doesn't say so (Vary header) then you risk an information leak. Varnish doesn't cache by default for this reason. 
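(The blog-post trick Dridi refers to is not spelled out in this thread; a minimal sketch of the general idea follows. The header name X-Stashed-Cookie is invented for illustration, and the approach is only safe when the backend's Cache-Control/Vary headers can be trusted -- otherwise it reopens exactly the information leak described above.)

```vcl
sub vcl_recv {
    # Stash the Cookie header so the built-in vcl_recv's
    # Cookie check no longer forces a pass, then fall
    # through to the built-in logic as usual.
    if (req.http.Cookie) {
        set req.http.X-Stashed-Cookie = req.http.Cookie;
        unset req.http.Cookie;
    }
}

sub vcl_backend_fetch {
    # Restore the cookie just before the request is sent
    # to the backend.
    if (bereq.http.X-Stashed-Cookie) {
        set bereq.http.Cookie = bereq.http.X-Stashed-Cookie;
        unset bereq.http.X-Stashed-Cookie;
    }
}
```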
> Aside from the above poor caching start point (which could be ok from a low > risk kickoff perspective), biggest issue with the cookie check is that it > forces the user into a cookie management strategy. If your backend speaks HTTP fluently and provides correct support for cookies and caching, the aforementioned blog post [1] gives you a simple solution to deal with that in pure VCL. > Do you still recommend configurations fall through to the underlying > vcl_recv logic? > > Options i can think of: > 1) Build lots of cookie whitelist/blacklist functionality above builtin.vcl > so the underlying logic doesnt break things This can be avoided with the blog post [1] trick if you are confident your backend won't make Varnish leak information. > 2) Remove cookies entirely for some request types (such as static objects) > so the underlying logic always works for some content types - My experience > is that this generates issues on some customers sites as they have static > handlers that are looking for a cookie in the origin and then do > redirects/change response content if no cookies are present. So now you need to make assumptions about resources not managed by Varnish. A common pattern is to remove cookies when the path terminates with a "static resource" file extension. And then you run into applications that generate images on the fly and need cookies. > 3) Explicitly handle all scenarios and return(pass) or return(hash) to avoid > vcl_recv in builtin.vcl (and lift up the good bits of vcl_recv into the > main config) While I lean towards composing on top of the built-in, I don't mind this approach. > Interested in your views, As i work on internet facing websites - I would > have thought this was the most common scenario but maybe there are more > users doing other things with Varnish or i'm missing something simple in > terms of handling cookies? It can be simple only if your backend is reliable when it comes to cookie handling and caching. 
Otherwise you're in for a VCL soup of cookie filtering... Dridi [1] https://info.varnish-software.com/blog/yet-another-post-on-caching-vs-cookies From geoff at uplex.de Wed Jul 12 09:20:52 2017 From: geoff at uplex.de (Geoff Simmons) Date: Wed, 12 Jul 2017 11:20:52 +0200 Subject: Value of builtin.vcl - vcl_recv on modern websites In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 07/12/2017 09:14 AM, Matthew Johnson wrote: > > Since Varnish 1.0 the builtin (or default) vcl_recv has had this > statement: if (req.http.Authorization || req.http.Cookie) { > > /* Not cacheable by default */ > > return (pass); > > } > > My issue with this is the req.http.Cookie check, In any modern > website cookies are always present. While I'm sympathetic to all the things you said and don't mean to disregard it but cutting things short, the answer here is simple: there is no other choice for the default configuration of a caching proxy, because those two headers mean that the response may be personalized. Many uses of cookies don't have that effect, of course, but Varnish has no way of knowing that. As bad as the effect of the default config may seem on sites that use cookies on every request -- Varnish doesn't cache anything -- it would be much worse if someone sets up Varnish, doesn't think of the consequences of not changing the default configuration, and you end up seeing someone else's personal data in cached responses. This problem is not specific to Varnish, but to any server that tries to do what Varnish tries to do. I know from experience that it's generally futile to say so, but this situation really ought to lead to some widespread re-thinking throughout the industry. Forgive me for shouting, soapbox-style, but this gives me an opportunity to sound off on a pet peeve: ==> Maybe modern web site SHOULD NOT use cookies on every request! Because of the way cookies interfere with downstream caching. 
I have come to the conviction that many uses of cookies are a result of lazy thinking in app development. Many PHP devs, for example, are in the habit of saying session_start(); at the beginning of every script, without thinking twice about whether they really need it. I have seen uses of cookies where "just toss that thing into a cookie" was evidently the easy decision to make. I have seen cookies with values that are 3KB long. (Sometime over beer I'll tell you about that little database that someone wanted to transport over a cookie, a base64-encoded CSV file whose data was *also* base64-encoded, leading to a doubly base64-encoded cookie value, in every request.) This is an instance of an issue that you encounter a lot with the use of Varnish in practice: app development that doesn't think outside of its own box in terms of functionality and performance. Rather than thinking about the benefits of handing off some of your work, by letting someone else serve your cached responses for you. HTTP was conceived from the beginning to enable caching as a means of solving performance problems in slow networks. A well-configured deployment of Varnish shows how beneficial that can be. But the universal and unreflected use of cookies is one of the forces presently at work that actively undermine that part of the equation. > A classic example of this is someone adding javascript via Google > Tag Manager which then sets a cookie. One might have hoped that the Googlers, of all people, would have more awareness of the trouble that they could cause by doing that. > Do you still recommend configurations fall through to the > underlying vcl_recv logic? > > Options i can think of: In a project where I am able to work with the app devs, I have had good experience with working out a policy with them: if you MUST have cookies in every request (although I WISH YOU WOULD RECONSIDER THAT), then the caching proxy cannot make caching decisions on your behalf. 
Only you can know if your response is cacheable, despite the presence of cookie foo or bar, but is not cacheable if the cookie is baz. So if you want your response to be cached, you MUST say so in a Cache-Control header. The proxy will not cache any other responses. Then we write VCL to bypass builtin's vcl_recv, and start Varnish with -t 0 (default TTL is 0s). Responses are then cached only if they announce that they are cacheable. Of course, this has the effect that you're lamenting -- Varnish doesn't cache anything by default -- but in my experience, the result is that devs have become very good at thinking about the cacheability of their responses. That boils down to answering your question by saying no, you can't use builtin vcl_recv in a situation like that. When the cookies, like the Evil, are always and everywhere (to paraphrase a saying in Germany), and some cookies lead to cacheable responses while others don't, then there's no other option for a caching proxy. Best, Geoff -- ** * * UPLEX - Nils Goroll Systemoptimierung Scheffelstraße 32 22301 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQIcBAEBCAAGBQJZZen0AAoJEOUwvh9pJNURnWEQAK9ucYSXEcrEwbmOrroBWoGK iR9a8OFhst1rVKFQ2vNTUpw+OYM8vf8SJYToyq2VxG5/f/uGsT6nSVRWGKThgeV1 geyyUQfbbDte1at4aFy6HX6LeCt62Si0L9KUZMZMkI5C6m6FKgA/5HecUchhuXdP Li17DXKvzrQyTBJpvbk2vBZGPVlErnVAUz75IeJMrD6t/WGO0PsZvkC/l8LZhqcD 2S2R9SHYMIyBrWSZm+YsI1DxMwvH6Gt84NRPpKcHQQ7TKEfvtOq0NwoqcNOB26EL KIfOuVJdbiMvD5D+BZud/7a7UzSpJz5klLMdTcJMN60MrHJjGcok/5KiG7TNNowj hMy5YUYpuIybsWzcvB5Ie/nteb0WyXt5+LYkxjP9dbN7AN3k+aU1PboSOyqYXO3u KK1al00LMKHfzMHs1vF3QHRt2Q1Udud6dCdHuw6TyJ7eWCc9YGgU8NyboMLkXBhO fBVNUQQjfNjaDWhKFvUIMEsGZhgvzzuMvjlZNhkc/lcDLmU8wVXyiMFSoBcuR1sX 7EM1wBUKcKix0wE4QPl9608ql/5LF3Ms+wqDpmS0ECgFIf1yKMWFZt9iHhMUbch2 7fhr77vVjVD1K6nKHqDGOuLp4Cq+lfBJkd7PX2huQUV/hc00C8+NEieD77wuAwk7 OPiGNt5YqmDNjtZmFUVH =BlAn -----END PGP SIGNATURE----- From matt at
section.io Wed Jul 12 10:56:39 2017 From: matt at section.io (Matthew Johnson) Date: Wed, 12 Jul 2017 20:56:39 +1000 Subject: Value of builtin.vcl - vcl_recv on modern websites In-Reply-To: References: Message-ID: On Wed, Jul 12, 2017 at 7:01 PM, Dridi Boukelmoune wrote: > On Wed, Jul 12, 2017 at 9:14 AM, Matthew Johnson wrote: >> Hi There, >> >> This may be a biblical / philosophical question but when teaching people >> Varnish ive been struggling to justify leveraging the builtin.vcl - Mainly >> vcl_recv in builtin.vcl >> >> Since Varnish 1.0 the builtin (or default) vcl_recv has had this statement: >> if (req.http.Authorization || req.http.Cookie) { >> >> /* Not cacheable by default */ >> >> return (pass); >> >> } > > I can see a couple reasons: > > - HTTP (in general) is poorly designed > - Cookie don't integrate well with other HTTP mechanisms > - Web applications are often poorly implemented > > Not trying to be offensive, just factual. I could pinpoint the problems > in HTTP, but I already have a blog post [1] and an unfinished draft > for that. Not in the mood to duplicate that effort here, I don't really > enjoy digging in that area. > Great blog post (that i haven't seen before), thanks for sharing its right on topic, looking forward to part 2 >> My issue with this is the req.http.Cookie check, In any modern website >> cookies are always present. >> >> This means that out of the box Varnish doesnt cache any content, which may >> be an ok safe starting point but without the cookie check the default >> implementation of most webservers/applications would cache static content >> and not cache on dynamic content (HTML) - via cache-control / set-cookie >> response headers etc - A good outcome. > > The problem is that we've often seen inconsistent cache directives, > untrustworthy backends. You could very well cache dynamic contents, > there's no point in using Varnish if you only cache static resources. 
> The problem is when a backend sends user-specific contents and > doesn't say so (Vary header) then you risk an information leak. > > Varnish doesn't cache by default for this reason. > >> Aside from the above poor caching start point (which could be ok from a low >> risk kickoff perspective), biggest issue with the cookie check is that it >> forces the user into a cookie management strategy. > > If your backend speaks HTTP fluently and provides correct support for > cookies and caching, the aforementioned blog post [1] gives you a > simple solution to deal with that in pure VCL. It feels like one way or another a solution is going to be needed before vcl_recv in builtin.vcl to make the logic work on almost any web application. I'm wondering if it's more logical for new users to override (return) based on explicit conditions that they define, rather than move the cookie in and out of scope for different scenarios to achieve the same outcome (and have them forced to understand how we are being sneaky with the cookie to avoid the underlying cookie check). That said, the "cookie shuffle" does keep the rest of the good logic in builtin vcl_recv in scope and saves lifting it up. > > > >> Do you still recommend configurations fall through to the underlying >> vcl_recv logic? >> >> Options i can think of: >> 1) Build lots of cookie whitelist/blacklist functionality above builtin.vcl >> so the underlying logic doesnt break things > > This can be avoided with the blog post [1] trick if you are confident > your backend won't make Varnish leak information. > I like the trick personally; it avoids a lot of muckiness in cookie management.
I still come back to whether it's easier to teach than option 3, where there are explicit rules and the good code from builtin.vcl is now visible to the user in their default.vcl >> 2) Remove cookies entirely for some request types (such as static objects) >> so the underlying logic always works for some content types - My experience >> is that this generates issues on some customers sites as they have static >> handlers that are looking for a cookie in the origin and then do >> redirects/change response content if no cookies are present. > > So now you need to make assumptions about resources not managed by > Varnish. A common pattern is to remove cookies when the path > terminates with a "static resource" file extension. And then you run > into applications that generate images on the fly and need cookies. > >> 3) Explicitly handle all scenarios and return(pass) or return(hash) to avoid >> vcl_recv in builtin.vcl (and lift up the good bits of vcl_recv into the >> main config) > > While I lean towards composing on top of the built-in, I don't mind > this approach. Depending on the customer's skills and application complexity, I have been recommending this approach. It seems to feel more logical to new users of Varnish. > >> Interested in your views, As i work on internet facing websites - I would >> have thought this was the most common scenario but maybe there are more >> users doing other things with Varnish or i'm missing something simple in >> terms of handling cookies? > > It can be simple only if your backend is reliable when it comes to > cookie handling and caching. Otherwise you're in for a VCL soup of > cookie filtering... Agreed, though nothing is simple! A tricky topic. Thanks for your views and letting me know this is an area you have crossed recently!
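(For reference, the "common pattern" Dridi mentions for option 2 can be sketched in a few lines of VCL. The extension list is an assumption to adapt per site, and as discussed it will break origins that branch on cookies even for "static" paths.)

```vcl
sub vcl_recv {
    # Strip cookies only for URLs that look like static assets,
    # so the built-in vcl_recv's Cookie check lets them be cached.
    if (req.url ~ "\.(css|js|png|jpe?g|gif|ico|svg|woff2?)(\?.*)?$") {
        unset req.http.Cookie;
    }
}
```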
> > Dridi > > [1] https://info.varnish-software.com/blog/yet-another-post-on-caching-vs-cookies From matt at section.io Wed Jul 12 11:12:39 2017 From: matt at section.io (Matthew Johnson) Date: Wed, 12 Jul 2017 21:12:39 +1000 Subject: varnish-misc Digest, Vol 136, Issue 12 In-Reply-To: References: Message-ID: > Date: Wed, 12 Jul 2017 11:20:52 +0200 > From: Geoff Simmons > To: varnish-misc at varnish-cache.org > Subject: Re: Value of builtin.vcl - vcl_recv on modern websites > Message-ID: > Content-Type: text/plain; charset=windows-1252 > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > On 07/12/2017 09:14 AM, Matthew Johnson wrote: >> >> Since Varnish 1.0 the builtin (or default) vcl_recv has had this >> statement: if (req.http.Authorization || req.http.Cookie) { >> >> /* Not cacheable by default */ >> >> return (pass); >> >> } >> >> My issue with this is the req.http.Cookie check, In any modern >> website cookies are always present. > > While I'm sympathetic to all the things you said and don't mean to > disregard it but cutting things short, the answer here is simple: > there is no other choice for the default configuration of a caching > proxy, because those two headers mean that the response may be > personalized. Agreed. I think the area im looking to explore is how you move forwards from the default configuration as there are a few paths that can be taken. > > Many uses of cookies don't have that effect, of course, but Varnish > has no way of knowing that. As bad as the effect of the default config > may seem on sites that use cookies on every request -- Varnish doesn't > cache anything -- it would be much worse if someone sets up Varnish, > doesn't think of the consequences of not changing the default > configuration, and you end up seeing someone else's personal data in > cached responses. > > This problem is not specific to Varnish, but to any server that tries > to do what Varnish tries to do. 
I know from experience that it's > generally futile to say so, but this situation really ought to lead to > some widespread re-thinking throughout the industry. Forgive me for > shouting, soapbox-style, but this gives me an opportunity to sound off > on a pet peeve: > > ==> Maybe modern web site SHOULD NOT use cookies on every request! > Because of the way cookies interfere with downstream caching. > > I have come to the conviction that many uses of cookies are a result > of lazy thinking in app development. Many PHP devs, for example, are > in the habit of saying session_start(); at the beginning of every > script, without thinking twice about whether they really need it. I > have seen uses of cookies where "just toss that thing into a cookie" > was evidently the easy decision to make. I have seen cookies with > values that are 3KB long. Most .NET sites are the same; almost any application I come across takes a "session first" approach. > > (Sometime over beer I'll tell you about that little database that > someone wanted to transport over a cookie, a base64-encoded CSV file > whose data was *also* base64-encoded, leading to a doubly > base64-encoded cookie value, in every request.) Beer sounds good. There are definitely many war stories on my side of madness in the use of cookies. Database-in-cookie is a novel approach!
> But the
> universal and unreflected use of cookies is one of the forces
> presently at work that actively undermine that part of the equation.
>
>> A classic example of this is someone adding javascript via Google
>> Tag Manager which then sets a cookie.
>
> One might have hoped that the Googlers, of all people, would have more
> awareness of the trouble that they could cause by doing that.

Whilst Google do contribute to the issue with their own scripts,
Google Tag Manager allows non-technical people to deploy additional
third-party javascript on a website. In the web performance space this
always leads to slow-loading websites in the browser, but the cookie
problem then plays into caching rules as well.

>> Do you still recommend configurations fall through to the
>> underlying vcl_recv logic?
>>
>> Options I can think of:
>
> In a project where I am able to work with the app devs, I have had
> good experience with working out a policy with them: if you MUST have
> cookies in every request (although I WISH YOU WOULD RECONSIDER THAT),
> then the caching proxy cannot make caching decisions on your behalf.
> Only you can know if your response is cacheable, despite the presence
> of cookie foo or bar, but is not cacheable if the cookie is baz.
>
> So if you want your response to be cached, you MUST say so in a
> Cache-Control header. The proxy will not cache any other responses.

I consider this ideal if it's possible to take that approach. I am
often caught with customers using off-the-shelf platforms and less
than ideal control over their application.

> Then we write VCL to bypass builtin's vcl_recv, and start Varnish with
> -t 0 (default TTL is 0s). Responses are then cached only if they
> announce that they are cacheable.
>
> Of course, this has the effect that you're lamenting -- Varnish
> doesn't cache anything by default -- but in my experience, the result
> is that devs have become very good at thinking about the cacheability
> of their responses.
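A minimal sketch of the approach Geoff describes -- bypass the builtin
vcl_recv and run varnishd with -t 0 so that only responses announcing
their own freshness get cached. This is an illustration in Varnish 4
syntax, not his actual configuration; the backend address is a
placeholder:

```vcl
vcl 4.0;

backend default { .host = "127.0.0.1"; .port = "8080"; }

sub vcl_recv {
    # Authorization still means "personalized": never cache those.
    if (req.http.Authorization) {
        return (pass);
    }
    # Look up despite cookies. Returning here is what keeps the
    # builtin vcl_recv, and its Cookie check, from running.
    if (req.method == "GET" || req.method == "HEAD") {
        return (hash);
    }
    return (pass);
}

sub vcl_backend_response {
    # With varnishd started with -t 0, beresp.ttl is only positive
    # when the backend sent max-age / s-maxage / Expires. Anything
    # else is marked uncacheable.
    if (beresp.ttl <= 0s) {
        set beresp.uncacheable = true;
    }
    return (deliver);
}
```

The effect is the policy from the thread: the application opts into
caching via Cache-Control, and Varnish stores nothing by default.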
> That boils down to answering your question by saying no, you can't use
> builtin vcl_recv in a situation like that. When the cookies, like the
> Evil, are always and everywhere (to paraphrase a saying in Germany),
> and some cookies lead to cacheable responses while others don't, then
> there's no other option for a caching proxy.

A great summary; I think that summarises Dridi's views earlier around
the way HTTP has been implemented in applications. No perfect wins!

> Best,
> Geoff
> --
> ** * * UPLEX - Nils Goroll Systemoptimierung
>
> Scheffelstraße 32
> 22301 Hamburg
>
> Tel +49 40 2880 5731
> Mob +49 176 636 90917
> Fax +49 40 42949753
>
> http://uplex.de

From matt at section.io Wed Jul 12 11:20:57 2017
From: matt at section.io (Matthew Johnson)
Date: Wed, 12 Jul 2017 21:20:57 +1000
Subject: varnish-misc Digest, Vol 136, Issue 12

Apologies, added this to a new thread instead of the existing one (as I
have digest mode set).

From dridi at varni.sh Wed Jul 12 12:16:05 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Wed, 12 Jul 2017 14:16:05 +0200
Subject: Value of builtin.vcl - vcl_recv on modern websites

> ==> Maybe modern web sites SHOULD NOT use cookies on every request!
> Because of the way cookies interfere with downstream caching.

Does it really matter? Cookies or not, they should rather *always*
include at least a Cache-Control header in responses.

> HTTP was conceived from the beginning to enable caching as a means of
> solving performance problems in slow networks. A well-configured

I must politely disagree here.
HTTP/0.9 had no such thing, 1.0 introduced limited caching support,
and 1.1 got its act together but ended up with broken semantics.
Spoiler alert: covering this in the part 2 draft.

>> A classic example of this is someone adding javascript via Google
>> Tag Manager which then sets a cookie.
>
> One might have hoped that the Googlers, of all people, would have more
> awareness of the trouble that they could cause by doing that.

I don't see this as a bad thing from a technical point of view. If you
need to carry state in requests, you need cookies.

> In a project where I am able to work with the app devs, I have had
> good experience with working out a policy with them: if you MUST have
> cookies in every request (although I WISH YOU WOULD RECONSIDER THAT),
> then the caching proxy cannot make caching decisions on your behalf.
> Only you can know if your response is cacheable, despite the presence
> of cookie foo or bar, but is not cacheable if the cookie is baz.

Here I disagree...

> So if you want your response to be cached, you MUST say so in a
> Cache-Control header. The proxy will not cache any other responses.

...but here I agree. If you need cookies, use cookies, but if you serve
responses in the first place, make sure to let downstreams know what to
do with said response. For example, it could be cacheable by the client
but not by proxies in between.

> Then we write VCL to bypass builtin's vcl_recv, and start Varnish with
> -t 0 (default TTL is 0s). Responses are then cached only if they
> announce that they are cacheable.

+1

Ideally Varnish shouldn't make decisions in the absence of information,
so default TTL and grace periods should ideally be zero, relying
instead on Cache-Control entries (max-age, s-maxage,
stale-while-revalidate...).

> That boils down to answering your question by saying no, you can't use
> builtin vcl_recv in a situation like that.
> When the cookies, like the
> Evil, are always and everywhere (to paraphrase a saying in Germany),
> and some cookies lead to cacheable responses while others don't, then
> there's no other option for a caching proxy.

Well, the proxy could always believe that the web application did its
homework. That's what nginx does (or so I'm led to believe), but
experience shows that homework is often skipped for non-business
topics like HTTP that are handed over to the underlying framework or
CMS.

I wish more webdevs would realize that HTTP is an application
protocol, not transport (for 5 minutes when HTTP APIs became a thing I
thought it was going to happen).

Dridi

From dridi at varni.sh Wed Jul 12 12:28:35 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Wed, 12 Jul 2017 14:28:35 +0200
Subject: Value of builtin.vcl - vcl_recv on modern websites

> Great blog post (that I haven't seen before), thanks for sharing; it's
> right on topic, looking forward to part 2

Thank you very much, really appreciated. The current draft for part 2
is complete but not really a good read. I'm having a hard time going
over arcane HTTP contradictions and I don't have time these days to
sit and write.

> It feels like one way or another a solution is going to be needed
> before vcl_recv in builtin.vcl to make the logic work on almost any
> web application.
>
> I'm wondering if it's more logical for new users to override (return)
> based on explicit conditions that they define rather than move
> the cookie in and out of scope for different scenarios to achieve the
> same outcome (and have them forced to understand how we are being
> sneaky with the cookie to avoid the underlying cookie check).

It really depends. By default Varnish will pipe requests with unknown
methods because part of the poor design of HTTP is the lack of
semantics of the methods: if you don't already know a method, you
can't anticipate how to handle it.
For example, with a HEAD request you may get a positive Content-Length
but nevertheless no body. What if an unknown method has some similar
behavior? (This is also true for response statuses like 204.)

So let's imagine that you'd use Varnish in front of a webdav
application: you'd get a lot of legit methods that wouldn't go through
the state machine (straight to vcl_pipe) if you run through the
built-in vcl_recv{}.

> That said the "cookie shuffle" does keep the rest of the good logic in
> builtin vcl_recv in scope and saves lifting it up.
> I like the trick personally; it avoids a lot of muckiness in cookie
> management.

Yes, simplest solution I found in the "composing with the built-in"
case.

> I still come back to whether it's easier to teach than option 3 where
> there are explicit rules and the good code from builtin.vcl is now
> visible to the user in their default.vcl
> Depending on the customer's skills and application complexity I have
> been recommending this approach. It seems to feel more logical to new
> users of Varnish.

Correct, the built-in vcl_recv{} would still show up with the
`vcl.show -v` command but it would indeed be defeated.

> Agreed, though nothing is simple! A tricky topic. Thanks for your
> views and letting me know this is an area you have crossed recently!

Too frequently I'm afraid ;-)

From hugo.cisneiros at gmail.com Fri Jul 14 22:29:48 2017
From: hugo.cisneiros at gmail.com (Hugo Cisneiros (Eitch))
Date: Fri, 14 Jul 2017 19:29:48 -0300
Subject: Good way to get varnish statistics?

Hi folks,

What, in your opinion, is a good way to collect statistics from a
varnish instance? For example, when I want to get frequency about
RespStatus, I use varnishtop:

    varnishtop -i RespStatus

But it seems that it doesn't give me the "count" value (like
client_req from varnishstat), just the average (?). So I find it
difficult to "plot" this info into my monitoring system (zabbix).
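One way to get a count per period from an ever-growing counter is to
let the poller diff successive varnishstat samples. A sketch, not from
the thread -- the MAIN.client_req counter name assumes Varnish 4, and
the state-file path is a placeholder:

```shell
#!/bin/sh
# Report requests handled since the last poll, for a monitoring system
# that wants a per-interval count rather than an absolute counter.

STATE=${STATE:-/tmp/varnish_client_req.prev}

# delta PREV CUR: counters only grow, so a smaller current value means
# varnishd restarted; in that case the new absolute value is the best
# answer for "requests since last poll".
delta() {
    if [ "$2" -ge "$1" ]; then
        echo $(($2 - $1))
    else
        echo "$2"
    fi
}

# Example polling logic (requires varnishstat on the host):
#   cur=$(varnishstat -1 -f MAIN.client_req | awk '{ print $2 }')
#   prev=$(cat "$STATE" 2>/dev/null || echo 0)
#   delta "$prev" "$cur"
#   echo "$cur" > "$STATE"

delta 1000 1500  # prints 500
```

The same delta function works for any monotonically increasing counter
(cache_hit, cache_miss, and so on).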
It would be great if it could return the count in a custom period of
time (like 1m) :)

Another way would be parsing the access.log (generated by varnishncsa),
counting lines and sending the value to the monitoring. But I think
this way is bad and lacks performance.

I also came up with this post:
https://jiboumans.wordpress.com/2013/02/27/realtime-stats-from-varnish/

This uses a vmod but it seems to send statistics on every request. The
post says that there's not much overhead, but the complexity can grow :)

What do you use or prefer? I doubt I'm the only one that needs this :P

--
[]'s Hugo
www.devin.com.br

From A.Hongens at netmatch.nl Sat Jul 15 12:12:06 2017
From: A.Hongens at netmatch.nl (Angelo Höngens)
Date: Sat, 15 Jul 2017 12:12:06 +0000
Subject: Good way to get varnish statistics?

Hugo,

You can use varnishstat. See some example scripts below. These scripts
were written when we used mixed varnish versions, hence the version
checking ;)

$ cat /etc/snmp/scripts/NM_VARNISH_TOTALREQS.sh
#!/bin/sh
varnishver=`varnishd -V 2>&1 | head -n 1 | cut -f 2 -d " " | cut -f 2 -d "-" | cut -f 1 -d "."`
if [ $varnishver == '4' ]
then
  /usr/bin/varnishstat -1 -f MAIN.client_req | awk '{ print $2 }'
else
  /usr/bin/varnishstat -1 -f client_req | awk '{ print $2 }'
fi

$ cat /etc/snmp/scripts/NM_VARNISH_STORAGEUSED.sh
#!/bin/sh
varnishver=`varnishd -V 2>&1 | head -n 1 | cut -f 2 -d " " | cut -f 2 -d "-" | cut -f 1 -d "."`
if [ $varnishver == '4' ]
then
  usedmalloc=`/usr/bin/varnishstat -1 -f SMA.s0.g_bytes | awk '{ print $2 }'`
  usedfile=`/usr/bin/varnishstat -1 -f SMF.s0.g_bytes | awk '{ print $2 }'`
  if [ -z "$usedmalloc" ]; then
    usedmalloc=0
  fi
  if [ -z "$usedfile" ]; then
    usedfile=0
  fi
  used=$(($usedmalloc+$usedfile))
  echo $used
else
  echo "NaN"
fi

From Yanick.Girouard at stm.info Thu Jul 20 17:44:45 2017
From: Yanick.Girouard at stm.info (Girouard, Yanick)
Date: Thu, 20 Jul 2017 17:44:45 +0000
Subject: Varnish and max-age=0

Hi,

We use Varnish to cache for multiple backends and need Varnish to
always control what is cached despite what backends could respond. In
other words, even if a backend sets Cache-Control headers to never
cache its pages, we still want Varnish to cache them based on defined
rules (i.e. certain URL patterns or hosts have different TTLs).
We have recently realized that one of our backends always sets the
following header: Cache-Control: max-age=0, private, must-revalidate

Our VCL unsets the Cache-Control header in vcl_backend_response and
sets its own before delivering. By unsetting the Cache-Control header
in vcl_backend_response I would expect Varnish to ignore the max-age=0
value and still cache the page as per our other rules, but it seems
that the second it sees max-age=0 in the response header, it marks the
object as not cacheable.

Other than by changing the backend's response to never set max-age=0,
is there a way to force Varnish to cache pages even if it returned
max-age=0?

Is this even by design or is it a bug?

Thanks,
Yanick Girouard

From reza at varnish-software.com Thu Jul 20 17:57:59 2017
From: reza at varnish-software.com (Reza Naghibi)
Date: Thu, 20 Jul 2017 13:57:59 -0400
Subject: Varnish and max-age=0

The TTL is calculated before entering vcl_backend_response. So even
though you unset the Cache-Control header, the value of TTL will be
calculated based on it. Are you setting a new value for beresp.ttl?
You need to do that:

    sub vcl_backend_response {
        unset beresp.http.Cache-Control;
        set beresp.ttl = 120s;
    }

--
Reza Naghibi
Varnish Software

From Yanick.Girouard at stm.info Thu Jul 20 18:03:15 2017
From: Yanick.Girouard at stm.info (Girouard, Yanick)
Date: Thu, 20 Jul 2017 18:03:15 +0000
Subject: Varnish and max-age=0
Message-ID: <1f1eb5251b8d4ce48310fda6d3fc291e@e2k13mbx01.corpo.stm.info>

Hi Reza,

Yes we are. Here's the default we apply. Those two subs are called in
order in vcl_backend_response:

    /* REMOVE CACHE-CONTROL AND SURROGATE-CONTROL FROM BACKEND *
     ***********************************************************/
    sub stm_backend_resp_unset_cache_control_headers {
        unset beresp.http.Surrogate-Control;
        unset beresp.http.Cache-Control;
        unset beresp.http.Expires;
    }

    /* DEFAULT ALL TO: TTL 30MIN + GRACE 15MIN *
     *******************************************/
    sub stm_backend_resp_expiration_default {
        set beresp.ttl = 30m;
        set beresp.grace = 15m;
    }

That doesn't seem to have any impact when the backend responds with a
Cache-Control: max-age=0 header.
Any idea?

From reza at varnish-software.com Thu Jul 20 18:06:06 2017
From: reza at varnish-software.com (Reza Naghibi)
Date: Thu, 20 Jul 2017 14:06:06 -0400
Subject: Varnish and max-age=0

Can you provide the varnishlog for a request which isn't getting
cached?

--
Reza Naghibi
Varnish Software

From Yanick.Girouard at stm.info Thu Jul 20 18:16:53 2017
From: Yanick.Girouard at stm.info (Girouard, Yanick)
Date: Thu, 20 Jul 2017 18:16:53 +0000
Subject: Varnish and max-age=0

Interesting...
Following your response, I've tested setting beresp.ttl in a
simplified version of our VCL and it's caching the request even with
max-age=0. So that means something else is causing this in my VCL. I
will try to debug it further and see if I can find it.

Thanks!
For example- sub vcl_backend_response { if (bereq.url ~ "somepage\.php" && beresp.status == 200) { unset beresp.http.cache-control; unset beresp.http.set-cookie; set beresp.ttl = 3600s; return(deliver); } } The above example removes the cache-control and set-cookie headers (by default Varnish will not cache responses that set cookies), sets the TTL to 3600 seconds, and delivers the response. On 07/20/2017 02:14 PM, Girouard, Yanick wrote: Hi, We use Varnish to cache for multiple backends and need Varnish to always control what is cached despite what backends could respond. In other words, even if a backend sets Cache-Control headers to never cache its pages, we still want Varnish to cache them based on defined rules (i.e. certain URL patterns or hosts have different TTLs). We have recently realized that one of our backends always sets the following header: Cache-Control: max-age=0, private, must-revalidate Our VCL unsets the Cache-Control header in vcl_backend_response and sets its own before delivering. By unsetting the Cache-Control header in vcl_backend_response I would expect Varnish to ignore the max-age=0 value and still cache the page as per our other rules, but it seems that the second it sees max-age=0 in the response header, it marks the object as not cacheable. Other than by changing the backend's response to never set max-age=0, is there a way to force Varnish to cache pages even if it returned max-age=0? Is this even by design or is it a bug? Thanks, Yanick Girouard -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Yanick.Girouard at stm.info Thu Jul 20 18:55:25 2017 From: Yanick.Girouard at stm.info (Girouard, Yanick) Date: Thu, 20 Jul 2017 18:55:25 +0000 Subject: Varnish and max-age=0 In-Reply-To: <7300EDCB79BBC8489A35D7A51478B8F778C70A91@bcvmexmbox01.BEACHCAMERA.LOCAL> References: <7300EDCB79BBC8489A35D7A51478B8F778C70A91@bcvmexmbox01.BEACHCAMERA.LOCAL> Message-ID: <10a448226ebc4280800f88c7baaba30c@e2k13mbx01.corpo.stm.info> I found my issue... It had nothing to do with the max-age=0 after all, nor with the way I was setting beresp.ttl. I forgot to strip Set-Cookie from the backend response and that's why it wasn't caching... /facepalm. Thanks for your help! De : Bender, Charles [mailto:charles at beachcamera.com] Envoyé : jeudi 20 juillet 2017 14:25 À : Girouard, Yanick ; varnish-misc at varnish-cache.org Objet : Re: Varnish and max-age=0 Yanick, You may just need to set beresp.ttl to a value and Varnish should obey it. And possibly unset cookies. For example- sub vcl_backend_response { if (bereq.url ~ "somepage\.php" && beresp.status == 200) { unset beresp.http.cache-control; unset beresp.http.set-cookie; set beresp.ttl = 3600s; return(deliver); } } The above example removes the cache-control and set-cookie headers (by default Varnish will not cache responses that set cookies), sets the TTL to 3600 seconds, and delivers the response. On 07/20/2017 02:14 PM, Girouard, Yanick wrote: Hi, We use Varnish to cache for multiple backends and need Varnish to always control what is cached despite what backends could respond. In other words, even if a backend sets Cache-Control headers to never cache its pages, we still want Varnish to cache them based on defined rules (i.e. certain URL patterns or hosts have different TTLs). We have recently realized that one of our backends always sets the following header: Cache-Control: max-age=0, private, must-revalidate Our VCL unsets the Cache-Control header in vcl_backend_response and sets its own before delivering.
By unsetting the Cache-Control header in vcl_backend_response I would expect Varnish to ignore the max-age=0 value and still cache the page as per our other rules, but it seems that the second it sees max-age=0 in the response header, that it makrs the object as not cacheable. Other than by changing the backend's response to never set max-age=0, is there a way to force Varnish to cach pages even if it returned max-age=0? Is this even by design or is it a bug? Thanks, Yanick Girouard -------------- next part -------------- An HTML attachment was scrubbed... URL: From lagged at gmail.com Thu Jul 20 19:22:57 2017 From: lagged at gmail.com (Andrei) Date: Thu, 20 Jul 2017 22:22:57 +0300 Subject: Varnish and max-age=0 In-Reply-To: <1f1eb5251b8d4ce48310fda6d3fc291e@e2k13mbx01.corpo.stm.info> References: <1f1eb5251b8d4ce48310fda6d3fc291e@e2k13mbx01.corpo.stm.info> Message-ID: Just a thought, if you're going to force an otherwise uncacheable request to be cached, you should probably: set beresp.uncacheable = false; On Thu, Jul 20, 2017 at 9:03 PM, Girouard, Yanick wrote: > Hi Reza, > > > > Yes we are. Here's the default we apply. Those two subs are called in > order in vcl_backend_response: > > > > /* REMOVE CACHE-CONTROL AND SURROGATE-CONTROL FROM BACKEND * > > ***********************************************************/ > > sub stm_backend_resp_unset_cache_control_headers { > > unset beresp.http.Surrogate-Control; > > unset beresp.http.Cache-Control; > > unset beresp.http.Expires; > > } > > > > /* DEFAULT ALL TO: TTL 30MIN + GRACE 15MIN * > > *******************************************/ > > sub stm_backend_resp_expiration_default { > > set beresp.ttl = 30m; > > set beresp.grace = 15m; > > } > > > > That doesn't seem to have any impact when the backend responds with a > Cache-Control: max-age=0 header. > > > > Any idea? > > > > > > *De :* Reza Naghibi [mailto:reza at varnish-software.com] > *Envoy? :* jeudi 20 juillet 2017 13:58 > *? 
:* Girouard, Yanick > *Cc :* varnish-misc at varnish-cache.org > *Objet :* Re: Varnish and max-age=0 > > > > The TTL is calculated before entering vcl_backend_response. So eventhough > you unset the Cache-Control header, the value of TTL will be calculated > based on it. Are you setting a new value for beresp.ttl? You need to do > that: > > > > sub vcl_backend_response > > { > > unset beresp.http.Cache-Control; > > set beresp.ttl = 120s; > > } > > > -- > Reza Naghibi > Varnish Software > > > > On Thu, Jul 20, 2017 at 1:44 PM, Girouard, Yanick < > Yanick.Girouard at stm.info> wrote: > > Hi, > > > > We use Varnish to cache for multiple backends and need Varnish to *always* > control what is cached despite what backends could respond. In other words, > even if a backend sets Cache-Control headers to never cache its pages, we > still want Varnish to cache them based on defined rules (i.e. certain URL > patterns or hosts have different TTLs). > > > > We have recently realized that one of our backend always set the following > header: *Cache-Control: max-age=0, private, must-revalidate* > > > > Our VCL unsets the Cache-Control header in vcl_backend_response and sets > its own before delivering. By unsetting the Cache-Control header in > vcl_backend_response I would expect Varnish to ignore the max-age=0 value > and still cache the page as per our other rules, but it seems that the > second it sees max-age=*0 *in the response header, that it makrs the > object as not cacheable. > > > > Other than by changing the backend's response to never set max-age=0, is > there a way to force Varnish to cach pages even if it returned max-age=0? > > > > Is this even by design or is it a bug? 
> > > > Thanks, > > *Yanick Girouard* > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Yanick.Girouard at stm.info Thu Jul 20 22:09:12 2017 From: Yanick.Girouard at stm.info (Girouard, Yanick) Date: Thu, 20 Jul 2017 22:09:12 +0000 Subject: Varnish and max-age=0 In-Reply-To: References: <1f1eb5251b8d4ce48310fda6d3fc291e@e2k13mbx01.corpo.stm.info>, Message-ID: <1500588552361.40187@stm.info> That's a good thought, but what would really be the impact of this setting if I've already set the ttl to a positive value after stripping all headers that would make Varnish consider the object as being uncacheable to begin with? Is there a case where it would be required? ________________________________ De : Andrei Envoy? : 20 juillet 2017 15:22 ? : Girouard, Yanick Cc : Reza Naghibi; varnish-misc at varnish-cache.org Objet : Re: Varnish and max-age=0 Just a thought, if you're going to force an otherwise uncacheable request to be cached, you should probably: set beresp.uncacheable = false; On Thu, Jul 20, 2017 at 9:03 PM, Girouard, Yanick > wrote: Hi Reza, Yes we are. Here's the default we apply. 
Those two subs are called in order in vcl_backend_response: /* REMOVE CACHE-CONTROL AND SURROGATE-CONTROL FROM BACKEND * ***********************************************************/ sub stm_backend_resp_unset_cache_control_headers { unset beresp.http.Surrogate-Control; unset beresp.http.Cache-Control; unset beresp.http.Expires; } /* DEFAULT ALL TO: TTL 30MIN + GRACE 15MIN * *******************************************/ sub stm_backend_resp_expiration_default { set beresp.ttl = 30m; set beresp.grace = 15m; } That doesn't seem to have any impact when the backend responds with a Cache-Control: max-age=0 header. Any idea? De : Reza Naghibi [mailto:reza at varnish-software.com] Envoy? : jeudi 20 juillet 2017 13:58 ? : Girouard, Yanick > Cc : varnish-misc at varnish-cache.org Objet : Re: Varnish and max-age=0 The TTL is calculated before entering vcl_backend_response. So eventhough you unset the Cache-Control header, the value of TTL will be calculated based on it. Are you setting a new value for beresp.ttl? You need to do that: sub vcl_backend_response { unset beresp.http.Cache-Control; set beresp.ttl = 120s; } -- Reza Naghibi Varnish Software On Thu, Jul 20, 2017 at 1:44 PM, Girouard, Yanick > wrote: Hi, We use Varnish to cache for multiple backends and need Varnish to always control what is cached despite what backends could respond. In other words, even if a backend sets Cache-Control headers to never cache its pages, we still want Varnish to cache them based on defined rules (i.e. certain URL patterns or hosts have different TTLs). We have recently realized that one of our backend always set the following header: Cache-Control: max-age=0, private, must-revalidate Our VCL unsets the Cache-Control header in vcl_backend_response and sets its own before delivering. 
By unsetting the Cache-Control header in vcl_backend_response I would expect Varnish to ignore the max-age=0 value and still cache the page as per our other rules, but it seems that the second it sees max-age=0 in the response header, it marks the object as not cacheable. Other than by changing the backend's response to never set max-age=0, is there a way to force Varnish to cache pages even if it returned max-age=0? Is this even by design or is it a bug? Thanks, Yanick Girouard _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From info+varnish at shee.org Thu Jul 20 22:27:59 2017 From: info+varnish at shee.org (Leon) Date: Fri, 21 Jul 2017 00:27:59 +0200 Subject: Handling HTML5 Video for iOS and Safari clients Message-ID: <19037858-010A-4C08-B3F5-168AB1A4C430@shee.org> Branch: varnish-5.1.2 Config: mostly default Dear List, there is a lot of material in common search engine results about streaming mp4 files via Varnish, but my mental picture is still unsharp. Hence the following questions: it seems that the Safari browser still has a problem with such assets. I have tried two approaches: 1st: Just "pass" it to the backend: vcl_recv : if ( req.url ~ "\.(mp4|webm)$" ) { return(pipe); } or 2nd: Deliver it directly vcl_recv : if ( req.url ~ "\.(mp4|webm)$" ) { unset req.http.cookie; } vcl_backend_response : if ( bereq.url ~ "\.(mp4|webm)$" ) { set beresp.do_stream = true; } Neither helps to get the Safari browser to display the video content the way the Firefox browser does, for example. So, the questions: Which should be the preferred approach, in general, to deliver mp4 files? Does anyone have the same issues with the Safari browser? What could help here?
In the meantime, any other suggestions would be greatly appreciated. Thanks, Leon From guillaume at varnish-software.com Fri Jul 21 07:49:23 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Fri, 21 Jul 2017 09:49:23 +0200 Subject: Varnish and max-age=0 In-Reply-To: <1500588552361.40187@stm.info> References: <1f1eb5251b8d4ce48310fda6d3fc291e@e2k13mbx01.corpo.stm.info> <1500588552361.40187@stm.info> Message-ID: Common mistake, beresp.uncacheable isn't the opposite of beresp.ttl>0. "uncacheable" tells Varnish that if it gets a HIT for that object, it should convert it to a PASS/MISS (depending on the version) and avoid request coalescing. In that scenario too, the ttl is the time the object will live in cache, i.e. how long Varnish retains the memory that it's not cacheable. -- Guillaume Quintard On Fri, Jul 21, 2017 at 12:09 AM, Girouard, Yanick wrote: > That's a good thought, but what would really be the impact of this setting > if I've already set the ttl to a positive value after stripping all headers > that would make Varnish consider the object as being uncacheable to begin > with? Is there a case where it would be required? > > > ________________________________ > De : Andrei > Envoyé : 20 juillet 2017 15:22 > À : Girouard, Yanick > Cc : Reza Naghibi; varnish-misc at varnish-cache.org > Objet : Re: Varnish and max-age=0 > > Just a thought, if you're going to force an otherwise uncacheable request > to be cached, you should probably: set beresp.uncacheable = false; > > > > On Thu, Jul 20, 2017 at 9:03 PM, Girouard, Yanick < > Yanick.Girouard at stm.info> wrote: > Hi Reza, > > Yes we are. Here's the default we apply.
Those two subs are called in > order in vcl_backend_response: > > /* REMOVE CACHE-CONTROL AND SURROGATE-CONTROL FROM BACKEND * > ***********************************************************/ > sub stm_backend_resp_unset_cache_control_headers { > unset beresp.http.Surrogate-Control; > unset beresp.http.Cache-Control; > unset beresp.http.Expires; > } > > /* DEFAULT ALL TO: TTL 30MIN + GRACE 15MIN * > *******************************************/ > sub stm_backend_resp_expiration_default { > set beresp.ttl = 30m; > set beresp.grace = 15m; > } > > That doesn't seem to have any impact when the backend responds with a > Cache-Control: max-age=0 header. > > Any idea? > > > De : Reza Naghibi [mailto:reza at varnish-software.com software.com>] > Envoy? : jeudi 20 juillet 2017 13:58 > ? : Girouard, Yanick info>> > Cc : varnish-misc at varnish-cache.org > Objet : Re: Varnish and max-age=0 > > The TTL is calculated before entering vcl_backend_response. So eventhough > you unset the Cache-Control header, the value of TTL will be calculated > based on it. Are you setting a new value for beresp.ttl? You need to do > that: > > sub vcl_backend_response > { > unset beresp.http.Cache-Control; > set beresp.ttl = 120s; > } > > -- > Reza Naghibi > Varnish Software > > On Thu, Jul 20, 2017 at 1:44 PM, Girouard, Yanick < > Yanick.Girouard at stm.info> wrote: > Hi, > > We use Varnish to cache for multiple backends and need Varnish to always > control what is cached despite what backends could respond. In other words, > even if a backend sets Cache-Control headers to never cache its pages, we > still want Varnish to cache them based on defined rules (i.e. certain URL > patterns or hosts have different TTLs). > > We have recently realized that one of our backend always set the following > header: Cache-Control: max-age=0, private, must-revalidate > > Our VCL unsets the Cache-Control header in vcl_backend_response and sets > its own before delivering. 
By unsetting the Cache-Control header in > vcl_backend_response I would expect Varnish to ignore the max-age=0 value > and still cache the page as per our other rules, but it seems that the > second it sees max-age=0 in the response header, it marks the object > as not cacheable. > > Other than by changing the backend's response to never set max-age=0, is > there a way to force Varnish to cache pages even if it returned max-age=0? > > Is this even by design or is it a bug? > > Thanks, > Yanick Girouard > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Fri Jul 21 07:53:02 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Fri, 21 Jul 2017 09:53:02 +0200 Subject: Handling HTML5 Video for iOS and Safari clients In-Reply-To: <19037858-010A-4C08-B3F5-168AB1A4C430@shee.org> References: <19037858-010A-4C08-B3F5-168AB1A4C430@shee.org> Message-ID: Don't pipe, ever, unless you are using WebSockets. do_stream is set by default, you can remove it. Are we talking about VoD or live? If the latter, remember to set grace to 0, otherwise you'll have outdated manifest problems. What isn't working on Safari? Does the problem go away if you connect straight to the origin (i.e. no Varnish)? What do the developer tools tell you? Have you looked at varnishlog?
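The advice above can be sketched as VCL. This is a hypothetical, untested sketch: the `.m3u8` manifest match and the short TTL are illustrative assumptions for a live-streaming setup, not details from this thread.

```vcl
sub vcl_recv {
    # Cache video assets instead of piping them; strip cookies so
    # the objects are cacheable
    if (req.url ~ "\.(mp4|webm)$") {
        unset req.http.Cookie;
    }
}

sub vcl_backend_response {
    # Assumed live-manifest pattern: zero grace so clients never
    # receive an outdated playlist, plus a very short TTL
    if (bereq.url ~ "\.m3u8$") {
        set beresp.grace = 0s;
        set beresp.ttl = 1s;
    }
}
```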
-- Guillaume Quintard On Fri, Jul 21, 2017 at 12:27 AM, Leon wrote: > Branch: varnish-5.1.2 > Config: mostly default > > Dear List, > > there are a lot of stuff in the results of common search engines concerning > streaming mp4 files via varnish. But my mental picture is still unsharp. > Therefore > following questions; It seems that Safari-Browser has still a problem with > such > assets. I had tried two approaches: > > 1st: Just "pass" it to the backend: > > vcl_recv : if ( req.url ~ "\.(mp4|webm)$" ) { return(pipe); } > > or > > 2nd: Deliver it directly > > vcl_recv : if ( req.url ~ "\.(mp4|webm)$" ) { unset > req.http.cookie; } > vcl_backend_response : if ( bereq.url ~ "\.(mp4|webm)$" ) { set > beresp.do_stream = true; } > > > Both don't help to get the Safari browser to display the video content as > done by the Firefox browser for example. > > > So, the questions: > > Which one should be the preferential approach - in general to deliver mp4 > files? > > Does someone has the same issues with the Safari-Browser? What could help > here? > > In the mean time, any other suggestions would be greatly appreciated. > > Thanks, > Leon > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Yanick.Girouard at stm.info Fri Jul 21 12:17:02 2017 From: Yanick.Girouard at stm.info (Girouard, Yanick) Date: Fri, 21 Jul 2017 12:17:02 +0000 Subject: Varnish and max-age=0 In-Reply-To: References: <1f1eb5251b8d4ce48310fda6d3fc291e@e2k13mbx01.corpo.stm.info> <1500588552361.40187@stm.info> Message-ID: <96036de5b0ef48029abcbad9b88b0f05@e2k13mbx01.corpo.stm.info> So in which case would you want to force it to false? I read about it and it's mainly used to force a hit for pass, but I haven't read about a scenario where the opposite would be useful. 
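For contrast, the "force it to true" direction mentioned above is the classic hit-for-pass pattern; a minimal Varnish 4-style sketch (untested, with Set-Cookie chosen as an example trigger):

```vcl
sub vcl_backend_response {
    if (beresp.http.Set-Cookie) {
        # Cache the "do not cache" decision itself for 2 minutes, so
        # subsequent requests for this object skip request coalescing
        # and go straight to the backend
        set beresp.uncacheable = true;
        set beresp.ttl = 120s;
        return (deliver);
    }
}
```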
De : Guillaume Quintard [mailto:guillaume at varnish-software.com] Envoy? : vendredi 21 juillet 2017 03:49 ? : Girouard, Yanick Cc : Andrei ; varnish-misc at varnish-cache.org Objet : Re: Varnish and max-age=0 Common mistake, beresp.uncacheable isn't the opposite of beresp.ttl>0. "uncacheable" tells Varnish that if it gets a HIT for that object, it should convert it to a PASS/MISS (depending on the versions) and avoir request coalescing. In that scenario too, the ttl is the time the object will live in cache. ie. how long do you retain the memory that it's not cacheable. -- Guillaume Quintard On Fri, Jul 21, 2017 at 12:09 AM, Girouard, Yanick > wrote: That's a good thought, but what would really be the impact of this setting if I've already set the ttl to a positive value after stripping all headers that would make Varnish consider the object as being uncacheable to begin with? Is there a case where it would be required? ________________________________ De : Andrei > Envoy? : 20 juillet 2017 15:22 ? : Girouard, Yanick Cc : Reza Naghibi; varnish-misc at varnish-cache.org Objet : Re: Varnish and max-age=0 Just a thought, if you're going to force an otherwise uncacheable request to be cached, you should probably: set beresp.uncacheable = false; On Thu, Jul 20, 2017 at 9:03 PM, Girouard, Yanick >> wrote: Hi Reza, Yes we are. Here's the default we apply. Those two subs are called in order in vcl_backend_response: /* REMOVE CACHE-CONTROL AND SURROGATE-CONTROL FROM BACKEND * ***********************************************************/ sub stm_backend_resp_unset_cache_control_headers { unset beresp.http.Surrogate-Control; unset beresp.http.Cache-Control; unset beresp.http.Expires; } /* DEFAULT ALL TO: TTL 30MIN + GRACE 15MIN * *******************************************/ sub stm_backend_resp_expiration_default { set beresp.ttl = 30m; set beresp.grace = 15m; } That doesn't seem to have any impact when the backend responds with a Cache-Control: max-age=0 header. Any idea? 
De : Reza Naghibi [mailto:reza at varnish-software.com>] Envoy? : jeudi 20 juillet 2017 13:58 ? : Girouard, Yanick >> Cc : varnish-misc at varnish-cache.org> Objet : Re: Varnish and max-age=0 The TTL is calculated before entering vcl_backend_response. So eventhough you unset the Cache-Control header, the value of TTL will be calculated based on it. Are you setting a new value for beresp.ttl? You need to do that: sub vcl_backend_response { unset beresp.http.Cache-Control; set beresp.ttl = 120s; } -- Reza Naghibi Varnish Software On Thu, Jul 20, 2017 at 1:44 PM, Girouard, Yanick >> wrote: Hi, We use Varnish to cache for multiple backends and need Varnish to always control what is cached despite what backends could respond. In other words, even if a backend sets Cache-Control headers to never cache its pages, we still want Varnish to cache them based on defined rules (i.e. certain URL patterns or hosts have different TTLs). We have recently realized that one of our backend always set the following header: Cache-Control: max-age=0, private, must-revalidate Our VCL unsets the Cache-Control header in vcl_backend_response and sets its own before delivering. By unsetting the Cache-Control header in vcl_backend_response I would expect Varnish to ignore the max-age=0 value and still cache the page as per our other rules, but it seems that the second it sees max-age=0 in the response header, that it makrs the object as not cacheable. Other than by changing the backend's response to never set max-age=0, is there a way to force Varnish to cach pages even if it returned max-age=0? Is this even by design or is it a bug? 
Thanks, Yanick Girouard _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Fri Jul 21 12:38:04 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Fri, 21 Jul 2017 14:38:04 +0200 Subject: Varnish and max-age=0 In-Reply-To: <96036de5b0ef48029abcbad9b88b0f05@e2k13mbx01.corpo.stm.info> References: <1f1eb5251b8d4ce48310fda6d3fc291e@e2k13mbx01.corpo.stm.info> <1500588552361.40187@stm.info> <96036de5b0ef48029abcbad9b88b0f05@e2k13mbx01.corpo.stm.info> Message-ID: beresp.uncacheable == false is the default, ie. "cache the object and serve it next time someone ask for it" -- Guillaume Quintard On Fri, Jul 21, 2017 at 2:17 PM, Girouard, Yanick wrote: > So in which case would you want to force it to false? I read about it and > it's mainly used to force a hit for pass, but I haven't read about a > scenario where the opposite would be useful. > > > > *De :* Guillaume Quintard [mailto:guillaume at varnish-software.com] > *Envoy? :* vendredi 21 juillet 2017 03:49 > *? :* Girouard, Yanick > *Cc :* Andrei ; varnish-misc at varnish-cache.org > > *Objet :* Re: Varnish and max-age=0 > > > > Common mistake, beresp.uncacheable isn't the opposite of beresp.ttl>0. > "uncacheable" tells Varnish that if it gets a HIT for that object, it > should convert it to a PASS/MISS (depending on the versions) and avoir > request coalescing. 
In that scenario too, the ttl is the time the object > will live in cache. ie. how long do you retain the memory that it's not > cacheable. > > > -- > > Guillaume Quintard > > > > On Fri, Jul 21, 2017 at 12:09 AM, Girouard, Yanick < > Yanick.Girouard at stm.info> wrote: > > That's a good thought, but what would really be the impact of this setting > if I've already set the ttl to a positive value after stripping all headers > that would make Varnish consider the object as being uncacheable to begin > with? Is there a case where it would be required? > > > ________________________________ > De : Andrei > Envoy? : 20 juillet 2017 15:22 > ? : Girouard, Yanick > Cc : Reza Naghibi; varnish-misc at varnish-cache.org > Objet : Re: Varnish and max-age=0 > > Just a thought, if you're going to force an otherwise uncacheable request > to be cached, you should probably: set beresp.uncacheable = false; > > > > On Thu, Jul 20, 2017 at 9:03 PM, Girouard, Yanick < > Yanick.Girouard at stm.info> wrote: > Hi Reza, > > Yes we are. Here's the default we apply. Those two subs are called in > order in vcl_backend_response: > > /* REMOVE CACHE-CONTROL AND SURROGATE-CONTROL FROM BACKEND * > ***********************************************************/ > sub stm_backend_resp_unset_cache_control_headers { > unset beresp.http.Surrogate-Control; > unset beresp.http.Cache-Control; > unset beresp.http.Expires; > } > > /* DEFAULT ALL TO: TTL 30MIN + GRACE 15MIN * > *******************************************/ > sub stm_backend_resp_expiration_default { > set beresp.ttl = 30m; > set beresp.grace = 15m; > } > > That doesn't seem to have any impact when the backend responds with a > Cache-Control: max-age=0 header. > > Any idea? > > > De : Reza Naghibi [mailto:reza at varnish-software.com software.com>] > Envoy? : jeudi 20 juillet 2017 13:58 > ? 
: Girouard, Yanick info>> > Cc : varnish-misc at varnish-cache.org > Objet : Re: Varnish and max-age=0 > > The TTL is calculated before entering vcl_backend_response. So eventhough > you unset the Cache-Control header, the value of TTL will be calculated > based on it. Are you setting a new value for beresp.ttl? You need to do > that: > > sub vcl_backend_response > { > unset beresp.http.Cache-Control; > set beresp.ttl = 120s; > } > > -- > Reza Naghibi > Varnish Software > > On Thu, Jul 20, 2017 at 1:44 PM, Girouard, Yanick < > Yanick.Girouard at stm.info> wrote: > Hi, > > We use Varnish to cache for multiple backends and need Varnish to always > control what is cached despite what backends could respond. In other words, > even if a backend sets Cache-Control headers to never cache its pages, we > still want Varnish to cache them based on defined rules (i.e. certain URL > patterns or hosts have different TTLs). > > We have recently realized that one of our backend always set the following > header: Cache-Control: max-age=0, private, must-revalidate > > Our VCL unsets the Cache-Control header in vcl_backend_response and sets > its own before delivering. By unsetting the Cache-Control header in > vcl_backend_response I would expect Varnish to ignore the max-age=0 value > and still cache the page as per our other rules, but it seems that the > second it sees max-age=0 in the response header, that it makrs the object > as not cacheable. > > Other than by changing the backend's response to never set max-age=0, is > there a way to force Varnish to cach pages even if it returned max-age=0? > > Is this even by design or is it a bug? 
> > Thanks, > Yanick Girouard > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Yanick.Girouard at stm.info Fri Jul 21 12:42:11 2017 From: Yanick.Girouard at stm.info (Girouard, Yanick) Date: Fri, 21 Jul 2017 12:42:11 +0000 Subject: Varnish and max-age=0 In-Reply-To: References: <1f1eb5251b8d4ce48310fda6d3fc291e@e2k13mbx01.corpo.stm.info> <1500588552361.40187@stm.info> <96036de5b0ef48029abcbad9b88b0f05@e2k13mbx01.corpo.stm.info> Message-ID: Thanks, but that doesn't really answer my question. Being the default, you'd only want to set it to false explicitly if it was set to true. My question was when would you ever want or need to do this? I can see cases where you'd want to force it to true, but not the opposite. De : Guillaume Quintard [mailto:guillaume at varnish-software.com] Envoyé : vendredi 21 juillet 2017 08:38 À : Girouard, Yanick Cc : Andrei ; varnish-misc at varnish-cache.org Objet : Re: Varnish and max-age=0 beresp.uncacheable == false is the default, i.e. "cache the object and serve it next time someone asks for it" -- Guillaume Quintard On Fri, Jul 21, 2017 at 2:17 PM, Girouard, Yanick > wrote: So in which case would you want to force it to false? I read about it and it's mainly used to force a hit for pass, but I haven't read about a scenario where the opposite would be useful. De : Guillaume Quintard [mailto:guillaume at varnish-software.com] Envoyé
: vendredi 21 juillet 2017 03:49 ? : Girouard, Yanick > Cc : Andrei >; varnish-misc at varnish-cache.org Objet : Re: Varnish and max-age=0 Common mistake, beresp.uncacheable isn't the opposite of beresp.ttl>0. "uncacheable" tells Varnish that if it gets a HIT for that object, it should convert it to a PASS/MISS (depending on the versions) and avoir request coalescing. In that scenario too, the ttl is the time the object will live in cache. ie. how long do you retain the memory that it's not cacheable. -- Guillaume Quintard On Fri, Jul 21, 2017 at 12:09 AM, Girouard, Yanick > wrote: That's a good thought, but what would really be the impact of this setting if I've already set the ttl to a positive value after stripping all headers that would make Varnish consider the object as being uncacheable to begin with? Is there a case where it would be required? ________________________________ De : Andrei > Envoy? : 20 juillet 2017 15:22 ? : Girouard, Yanick Cc : Reza Naghibi; varnish-misc at varnish-cache.org Objet : Re: Varnish and max-age=0 Just a thought, if you're going to force an otherwise uncacheable request to be cached, you should probably: set beresp.uncacheable = false; On Thu, Jul 20, 2017 at 9:03 PM, Girouard, Yanick >> wrote: Hi Reza, Yes we are. Here's the default we apply. Those two subs are called in order in vcl_backend_response: /* REMOVE CACHE-CONTROL AND SURROGATE-CONTROL FROM BACKEND * ***********************************************************/ sub stm_backend_resp_unset_cache_control_headers { unset beresp.http.Surrogate-Control; unset beresp.http.Cache-Control; unset beresp.http.Expires; } /* DEFAULT ALL TO: TTL 30MIN + GRACE 15MIN * *******************************************/ sub stm_backend_resp_expiration_default { set beresp.ttl = 30m; set beresp.grace = 15m; } That doesn't seem to have any impact when the backend responds with a Cache-Control: max-age=0 header. Any idea? De : Reza Naghibi [mailto:reza at varnish-software.com>] Envoy? 
: jeudi 20 juillet 2017 13:58 ? : Girouard, Yanick >> Cc : varnish-misc at varnish-cache.org> Objet : Re: Varnish and max-age=0 The TTL is calculated before entering vcl_backend_response. So eventhough you unset the Cache-Control header, the value of TTL will be calculated based on it. Are you setting a new value for beresp.ttl? You need to do that: sub vcl_backend_response { unset beresp.http.Cache-Control; set beresp.ttl = 120s; } -- Reza Naghibi Varnish Software On Thu, Jul 20, 2017 at 1:44 PM, Girouard, Yanick >> wrote: Hi, We use Varnish to cache for multiple backends and need Varnish to always control what is cached despite what backends could respond. In other words, even if a backend sets Cache-Control headers to never cache its pages, we still want Varnish to cache them based on defined rules (i.e. certain URL patterns or hosts have different TTLs). We have recently realized that one of our backend always set the following header: Cache-Control: max-age=0, private, must-revalidate Our VCL unsets the Cache-Control header in vcl_backend_response and sets its own before delivering. By unsetting the Cache-Control header in vcl_backend_response I would expect Varnish to ignore the max-age=0 value and still cache the page as per our other rules, but it seems that the second it sees max-age=0 in the response header, that it makrs the object as not cacheable. Other than by changing the backend's response to never set max-age=0, is there a way to force Varnish to cach pages even if it returned max-age=0? Is this even by design or is it a bug? 
Thanks, Yanick Girouard _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Fri Jul 21 12:58:51 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Fri, 21 Jul 2017 14:58:51 +0200 Subject: Varnish and max-age=0 In-Reply-To: References: <1f1eb5251b8d4ce48310fda6d3fc291e@e2k13mbx01.corpo.stm.info> <1500588552361.40187@stm.info> <96036de5b0ef48029abcbad9b88b0f05@e2k13mbx01.corpo.stm.info> Message-ID: Ah, right, indeed. Unless you set it to true for some set of URLs, and to false again for a subset of it. That could make your VCL clearer, maybe -- Guillaume Quintard On Fri, Jul 21, 2017 at 2:42 PM, Girouard, Yanick wrote: > Thanks but that doesn't really answer my question. Being the default, > you'd only want to set it to false explicitly if it was set to true. My > question was when would you ever want or need to do this? I can see cases > where you'd want to force it to true, but not the opposite. > > > > > > *De :* Guillaume Quintard [mailto:guillaume at varnish-software.com] > *Envoy? :* vendredi 21 juillet 2017 08:38 > > *? :* Girouard, Yanick > *Cc :* Andrei ; varnish-misc at varnish-cache.org > *Objet :* Re: Varnish and max-age=0 > > > > beresp.uncacheable == false is the default, ie. 
"cache the object and > serve it next time someone ask for it" > > > -- > > Guillaume Quintard > > > > On Fri, Jul 21, 2017 at 2:17 PM, Girouard, Yanick < > Yanick.Girouard at stm.info> wrote: > > So in which case would you want to force it to false? I read about it and > it's mainly used to force a hit for pass, but I haven't read about a > scenario where the opposite would be useful. > > > > *De :* Guillaume Quintard [mailto:guillaume at varnish-software.com] > *Envoy? :* vendredi 21 juillet 2017 03:49 > *? :* Girouard, Yanick > *Cc :* Andrei ; varnish-misc at varnish-cache.org > > > *Objet :* Re: Varnish and max-age=0 > > > > Common mistake, beresp.uncacheable isn't the opposite of beresp.ttl>0. > "uncacheable" tells Varnish that if it gets a HIT for that object, it > should convert it to a PASS/MISS (depending on the versions) and avoir > request coalescing. In that scenario too, the ttl is the time the object > will live in cache. ie. how long do you retain the memory that it's not > cacheable. > > > -- > > Guillaume Quintard > > > > On Fri, Jul 21, 2017 at 12:09 AM, Girouard, Yanick < > Yanick.Girouard at stm.info> wrote: > > That's a good thought, but what would really be the impact of this setting > if I've already set the ttl to a positive value after stripping all headers > that would make Varnish consider the object as being uncacheable to begin > with? Is there a case where it would be required? > > > ________________________________ > De : Andrei > Envoy? : 20 juillet 2017 15:22 > ? : Girouard, Yanick > Cc : Reza Naghibi; varnish-misc at varnish-cache.org > Objet : Re: Varnish and max-age=0 > > Just a thought, if you're going to force an otherwise uncacheable request > to be cached, you should probably: set beresp.uncacheable = false; > > > > On Thu, Jul 20, 2017 at 9:03 PM, Girouard, Yanick < > Yanick.Girouard at stm.info> wrote: > Hi Reza, > > Yes we are. Here's the default we apply. 
Those two subs are called in > order in vcl_backend_response: > > /* REMOVE CACHE-CONTROL AND SURROGATE-CONTROL FROM BACKEND * > ***********************************************************/ > sub stm_backend_resp_unset_cache_control_headers { > unset beresp.http.Surrogate-Control; > unset beresp.http.Cache-Control; > unset beresp.http.Expires; > } > > /* DEFAULT ALL TO: TTL 30MIN + GRACE 15MIN * > *******************************************/ > sub stm_backend_resp_expiration_default { > set beresp.ttl = 30m; > set beresp.grace = 15m; > } > > That doesn't seem to have any impact when the backend responds with a > Cache-Control: max-age=0 header. > > Any idea? > > > De : Reza Naghibi [mailto:reza at varnish-software.com software.com>] > Envoy? : jeudi 20 juillet 2017 13:58 > ? : Girouard, Yanick info>> > Cc : varnish-misc at varnish-cache.org > Objet : Re: Varnish and max-age=0 > > The TTL is calculated before entering vcl_backend_response. So eventhough > you unset the Cache-Control header, the value of TTL will be calculated > based on it. Are you setting a new value for beresp.ttl? You need to do > that: > > sub vcl_backend_response > { > unset beresp.http.Cache-Control; > set beresp.ttl = 120s; > } > > -- > Reza Naghibi > Varnish Software > > On Thu, Jul 20, 2017 at 1:44 PM, Girouard, Yanick < > Yanick.Girouard at stm.info> wrote: > Hi, > > We use Varnish to cache for multiple backends and need Varnish to always > control what is cached despite what backends could respond. In other words, > even if a backend sets Cache-Control headers to never cache its pages, we > still want Varnish to cache them based on defined rules (i.e. certain URL > patterns or hosts have different TTLs). > > We have recently realized that one of our backend always set the following > header: Cache-Control: max-age=0, private, must-revalidate > > Our VCL unsets the Cache-Control header in vcl_backend_response and sets > its own before delivering. 
By unsetting the Cache-Control header in > vcl_backend_response I would expect Varnish to ignore the max-age=0 value > and still cache the page as per our other rules, but it seems that the > second it sees max-age=0 in the response header, that it makrs the object > as not cacheable. > > Other than by changing the backend's response to never set max-age=0, is > there a way to force Varnish to cach pages even if it returned max-age=0? > > Is this even by design or is it a bug? > > Thanks, > Yanick Girouard > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Fri Jul 21 20:08:59 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Fri, 21 Jul 2017 22:08:59 +0200 Subject: Varnish and max-age=0 In-Reply-To: References: <1f1eb5251b8d4ce48310fda6d3fc291e@e2k13mbx01.corpo.stm.info> <1500588552361.40187@stm.info> <96036de5b0ef48029abcbad9b88b0f05@e2k13mbx01.corpo.stm.info> Message-ID: Setting to false is a no-op iirc. Once true, you can no longer bend the truth. On Jul 21, 2017 17:29, "Guillaume Quintard" wrote: Ah, right, indeed. Unless you set it to true for some set of URLs, and to false again for a subset of it. That could make your VCL clearer, maybe -- Guillaume Quintard On Fri, Jul 21, 2017 at 2:42 PM, Girouard, Yanick wrote: > Thanks but that doesn't really answer my question. 
Being the default, > you'd only want to set it to false explicitly if it was set to true. My > question was when would you ever want or need to do this? I can see cases > where you'd want to force it to true, but not the opposite. > > > > > > *De :* Guillaume Quintard [mailto:guillaume at varnish-software.com] > *Envoy? :* vendredi 21 juillet 2017 08:38 > > *? :* Girouard, Yanick > *Cc :* Andrei ; varnish-misc at varnish-cache.org > *Objet :* Re: Varnish and max-age=0 > > > > beresp.uncacheable == false is the default, ie. "cache the object and > serve it next time someone ask for it" > > > -- > > Guillaume Quintard > > > > On Fri, Jul 21, 2017 at 2:17 PM, Girouard, Yanick < > Yanick.Girouard at stm.info> wrote: > > So in which case would you want to force it to false? I read about it and > it's mainly used to force a hit for pass, but I haven't read about a > scenario where the opposite would be useful. > > > > *De :* Guillaume Quintard [mailto:guillaume at varnish-software.com] > *Envoy? :* vendredi 21 juillet 2017 03:49 > *? :* Girouard, Yanick > *Cc :* Andrei ; varnish-misc at varnish-cache.org > > > *Objet :* Re: Varnish and max-age=0 > > > > Common mistake, beresp.uncacheable isn't the opposite of beresp.ttl>0. > "uncacheable" tells Varnish that if it gets a HIT for that object, it > should convert it to a PASS/MISS (depending on the versions) and avoir > request coalescing. In that scenario too, the ttl is the time the object > will live in cache. ie. how long do you retain the memory that it's not > cacheable. > > > -- > > Guillaume Quintard > > > > On Fri, Jul 21, 2017 at 12:09 AM, Girouard, Yanick < > Yanick.Girouard at stm.info> wrote: > > That's a good thought, but what would really be the impact of this setting > if I've already set the ttl to a positive value after stripping all headers > that would make Varnish consider the object as being uncacheable to begin > with? Is there a case where it would be required? 
> > > ________________________________ > De : Andrei > Envoy? : 20 juillet 2017 15:22 > ? : Girouard, Yanick > Cc : Reza Naghibi; varnish-misc at varnish-cache.org > Objet : Re: Varnish and max-age=0 > > Just a thought, if you're going to force an otherwise uncacheable request > to be cached, you should probably: set beresp.uncacheable = false; > > > > On Thu, Jul 20, 2017 at 9:03 PM, Girouard, Yanick < > Yanick.Girouard at stm.info> wrote: > Hi Reza, > > Yes we are. Here's the default we apply. Those two subs are called in > order in vcl_backend_response: > > /* REMOVE CACHE-CONTROL AND SURROGATE-CONTROL FROM BACKEND * > ***********************************************************/ > sub stm_backend_resp_unset_cache_control_headers { > unset beresp.http.Surrogate-Control; > unset beresp.http.Cache-Control; > unset beresp.http.Expires; > } > > /* DEFAULT ALL TO: TTL 30MIN + GRACE 15MIN * > *******************************************/ > sub stm_backend_resp_expiration_default { > set beresp.ttl = 30m; > set beresp.grace = 15m; > } > > That doesn't seem to have any impact when the backend responds with a > Cache-Control: max-age=0 header. > > Any idea? > > > De : Reza Naghibi [mailto:reza at varnish-software.com reza at varnish-software.com>] > Envoy? : jeudi 20 juillet 2017 13:58 > ? : Girouard, Yanick Yanick.Girouard at stm.info>> > Cc : varnish-misc at varnish-cache.org > Objet : Re: Varnish and max-age=0 > > The TTL is calculated before entering vcl_backend_response. So eventhough > you unset the Cache-Control header, the value of TTL will be calculated > based on it. Are you setting a new value for beresp.ttl? 
You need to do > that: > > sub vcl_backend_response > { > unset beresp.http.Cache-Control; > set beresp.ttl = 120s; > } > > -- > Reza Naghibi > Varnish Software > > On Thu, Jul 20, 2017 at 1:44 PM, Girouard, Yanick < > Yanick.Girouard at stm.info> wrote: > Hi, > > We use Varnish to cache for multiple backends and need Varnish to always > control what is cached despite what backends could respond. In other words, > even if a backend sets Cache-Control headers to never cache its pages, we > still want Varnish to cache them based on defined rules (i.e. certain URL > patterns or hosts have different TTLs). > > We have recently realized that one of our backend always set the following > header: Cache-Control: max-age=0, private, must-revalidate > > Our VCL unsets the Cache-Control header in vcl_backend_response and sets > its own before delivering. By unsetting the Cache-Control header in > vcl_backend_response I would expect Varnish to ignore the max-age=0 value > and still cache the page as per our other rules, but it seems that the > second it sees max-age=0 in the response header, that it makrs the object > as not cacheable. > > Other than by changing the backend's response to never set max-age=0, is > there a way to force Varnish to cach pages even if it returned max-age=0? > > Is this even by design or is it a bug? 
> > Thanks, > Yanick Girouard > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Fri Jul 28 08:49:32 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Fri, 28 Jul 2017 10:49:32 +0200 Subject: Child process recurrently being restarted In-Reply-To: References: Message-ID: Anything else using RAM on that machine? Best idea I have now is monitor Varnish memory usage in parallel of the *.g_bytes counters. -- Guillaume Quintard On Thu, Jun 29, 2017 at 7:09 PM, Stefano Baldo wrote: > Hi Guillaume and Reza. > > This time varnish restarted but it left some more info on syslog. > It seems like the system is running out of memory. 
> > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.297487] pool_herder invoked > oom-killer: gfp_mask=0x2000d0, order=2, oom_score_adj=0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.300992] pool_herder cpuset=/ > mems_allowed=0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.303157] CPU: 1 PID: 16214 > Comm: pool_herder Tainted: G C O 3.16.0-4-amd64 #1 Debian > 3.16.36-1+deb8u2 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] Hardware name: Xen > HVM domU, BIOS 4.2.amazon 02/16/2017 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] 0000000000000000 > ffffffff815123b5 ffff8800eb3652f0 0000000000000000 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] ffffffff8150ff8d > 0000000000000000 ffffffff810d6e3f 0000000000000000 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] ffffffff81516d2e > 0000000000000200 ffffffff810689d3 ffffffff810c43e4 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] Call Trace: > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? dump_stack+0x5d/0x78 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? dump_header+0x76/0x1e8 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? smp_call_function_single+0x5f/0xa0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? mutex_lock+0xe/0x2a > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? put_online_cpus+0x23/0x80 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? rcu_oom_notify+0xc4/0xe0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? do_try_to_free_pages+0x4ac/0x520 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? oom_kill_process+0x21d/0x370 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? find_lock_task_mm+0x3d/0x90 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? out_of_memory+0x473/0x4b0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? __alloc_pages_nodemask+0x9ef/0xb50 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? 
copy_process.part.25+0x116/0x1c50 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? __do_page_fault+0x1d1/0x4f0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? do_fork+0xe0/0x3d0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? stub_clone+0x69/0x90 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.304984] [] > ? system_call_fast_compare_end+0x10/0x15 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.367638] Mem-Info: > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.368962] Node 0 DMA per-cpu: > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.370768] CPU 0: hi: 0, > btch: 1 usd: 0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.373249] CPU 1: hi: 0, > btch: 1 usd: 0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.375652] Node 0 DMA32 per-cpu: > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.377508] CPU 0: hi: 186, > btch: 31 usd: 29 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.379898] CPU 1: hi: 186, > btch: 31 usd: 0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.382318] active_anon:846474 > inactive_anon:1913 isolated_anon:0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.382318] active_file:408 > inactive_file:415 isolated_file:32 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.382318] unevictable:20736 > dirty:27 writeback:0 unstable:0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.382318] free:16797 > slab_reclaimable:15276 slab_unreclaimable:10521 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.382318] mapped:22002 > shmem:22935 pagetables:30362 bounce:0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.382318] free_cma:0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.397242] Node 0 DMA > free:15192kB min:184kB low:228kB high:276kB active_anon:416kB > inactive_anon:60kB active_file:0kB inactive_file:0kB unevictable:20kB > isolated(anon):0kB isolated(file):0kB present:15988kB managed:15904kB > mlocked:20kB dirty:0kB writeback:0kB mapped:20kB shmem:80kB > slab_reclaimable:32kB slab_unreclaimable:0kB kernel_stack:112kB > pagetables:20kB 
unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB > pages_scanned:0 all_unreclaimable? yes > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.416338] lowmem_reserve[]: 0 > 3757 3757 3757 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.419030] Node 0 DMA32 > free:50120kB min:44868kB low:56084kB high:67300kB active_anon:3386780kB > inactive_anon:7592kB active_file:1732kB inactive_file:2060kB > unevictable:82924kB isolated(anon):0kB isolated(file):128kB > present:3915776kB managed:3849676kB mlocked:82924kB dirty:108kB > writeback:0kB mapped:88432kB shmem:91660kB slab_reclaimable:61072kB > slab_unreclaimable:42184kB kernel_stack:27248kB pagetables:121428kB > unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 > all_unreclaimable? no > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.440095] lowmem_reserve[]: 0 0 > 0 0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.442202] Node 0 DMA: 22*4kB > (UEM) 6*8kB (EM) 1*16kB (E) 2*32kB (UM) 2*64kB (UE) 2*128kB (EM) 3*256kB > (UEM) 1*512kB (E) 3*1024kB (UEM) 3*2048kB (EMR) 1*4096kB (M) = 15192kB > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.451936] Node 0 DMA32: > 4031*4kB (EM) 2729*8kB (EM) 324*16kB (EM) 1*32kB (R) 1*64kB (R) 0*128kB > 0*256kB 1*512kB (R) 1*1024kB (R) 1*2048kB (R) 0*4096kB = 46820kB > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.460240] Node 0 > hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.464122] 24240 total pagecache > pages > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.466048] 0 pages in swap cache > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.467672] Swap cache stats: add > 0, delete 0, find 0/0 > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.470159] Free swap = 0kB > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.471513] Total swap = 0kB > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.472980] 982941 pages RAM > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.474380] 0 pages > HighMem/MovableOnly > Jun 29 13:11:01 
ip-172-25-2-8 kernel: [93823.476190] 16525 pages reserved > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.477772] 0 pages hwpoisoned > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.479189] [ pid ] uid tgid > total_vm rss nr_ptes swapents oom_score_adj name > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.482698] [ 163] 0 163 > 10419 1295 21 0 0 systemd-journal > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.486646] [ 165] 0 165 > 10202 136 21 0 -1000 systemd-udevd > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.490598] [ 294] 0 294 > 6351 1729 14 0 0 dhclient > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.494457] [ 319] 0 319 > 6869 62 18 0 0 cron > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.498260] [ 321] 0 321 > 4964 67 14 0 0 systemd-logind > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.502346] [ 326] 105 326 > 10558 101 25 0 -900 dbus-daemon > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.506315] [ 342] 0 342 > 65721 228 31 0 0 rsyslogd > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.510222] [ 343] 0 343 > 88199 2108 61 0 -500 dockerd > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.514022] [ 350] 106 350 > 18280 181 36 0 0 zabbix_agentd > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.518040] [ 351] 106 351 > 18280 475 36 0 0 zabbix_agentd > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.522041] [ 352] 106 352 > 18280 187 36 0 0 zabbix_agentd > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.526025] [ 353] 106 353 > 18280 187 36 0 0 zabbix_agentd > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.530067] [ 354] 106 354 > 18280 187 36 0 0 zabbix_agentd > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.534033] [ 355] 106 355 > 18280 190 36 0 0 zabbix_agentd > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.538001] [ 358] 0 358 > 66390 1826 32 0 0 fail2ban-server > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.541972] [ 400] 0 400 > 35984 444 24 0 -500 docker-containe > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.545879] [ 568] 0 568 > 13796 168 30 0 -1000 sshd > Jun 29 13:11:01 
ip-172-25-2-8 kernel: [93823.549733] [ 576] 0 576 > 3604 41 12 0 0 agetty > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.553569] [ 577] 0 577 > 3559 38 12 0 0 agetty > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.557322] [16201] 0 16201 > 29695 20707 60 0 0 varnishd > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.561103] [16209] 108 16209 > 118909802 822425 29398 0 0 cache-main > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.565002] [27352] 0 27352 > 20131 214 42 0 0 sshd > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.568682] [27354] 1000 27354 > 20165 211 41 0 0 sshd > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.572307] [27355] 1000 27355 > 5487 146 17 0 0 bash > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.575920] [27360] 0 27360 > 11211 107 26 0 0 sudo > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.579593] [27361] 0 27361 > 11584 97 27 0 0 su > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.583155] [27362] 0 27362 > 5481 142 15 0 0 bash > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.586782] [27749] 0 27749 > 20131 214 41 0 0 sshd > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.590428] [27751] 1000 27751 > 20164 211 39 0 0 sshd > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.593979] [27752] 1000 27752 > 5487 147 15 0 0 bash > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.597488] [28762] 0 28762 > 26528 132 17 0 0 varnishstat > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.601239] [28764] 0 28764 > 11211 106 26 0 0 sudo > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.604737] [28765] 0 28765 > 11584 97 26 0 0 su > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.608602] [28766] 0 28766 > 5481 141 15 0 0 bash > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.612288] [28768] 0 28768 > 26528 220 18 0 0 varnishstat > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.616189] Out of memory: Kill > process 16209 (cache-main) score 880 or sacrifice child > Jun 29 13:11:01 ip-172-25-2-8 kernel: [93823.620106] Killed process 16209 > (cache-main) total-vm:475639208kB, anon-rss:3289700kB, 
file-rss:0kB
> Jun 29 13:11:01 ip-172-25-2-8 varnishd[16201]: Child (16209) died signal=9
> Jun 29 13:11:01 ip-172-25-2-8 varnishd[16201]: Child cleanup complete
> Jun 29 13:11:01 ip-172-25-2-8 varnishd[16201]: Child (30313) Started
> Jun 29 13:11:01 ip-172-25-2-8 varnishd[16201]: Child (30313) said Child starts
> Jun 29 13:11:01 ip-172-25-2-8 varnishd[16201]: Child (30313) said SMF.s0 mmap'ed 483183820800 bytes of 483183820800
>
> Best,
> Stefano
>
> On Wed, Jun 28, 2017 at 11:33 AM, Reza Naghibi wrote:
>
>> Assuming the problem is running out of memory, you will need to do some
>> memory tuning, especially given the number of threads you are using and
>> your access patterns. Your options:
>>
>> - Add more memory to the system
>> - Reduce thread_pool_max
>> - Reduce jemalloc's thread cache (MALLOC_CONF="lg_tcache_max:10")
>> - Use some of the tuning params in here:
>>   https://info.varnish-software.com/blog/understanding-varnish-cache-memory-usage
>>
>> --
>> Reza Naghibi
>> Varnish Software
>>
>> On Wed, Jun 28, 2017 at 9:26 AM, Guillaume Quintard <guillaume at varnish-software.com> wrote:
>>
>>> Hi,
>>>
>>> can you look at "varnishstat -1 | grep g_bytes" and see if it matches
>>> the memory you are seeing?
>>>
>>> --
>>> Guillaume Quintard
>>>
>>> On Wed, Jun 28, 2017 at 3:20 PM, Stefano Baldo wrote:
>>>
>>>> Hi Guillaume.
>>>>
>>>> I increased the cli_timeout yesterday to 900sec (15min) and it
>>>> restarted anyway, which seems to indicate that the thread is really
>>>> stalled.
>>>>
>>>> This was 1 minute after the last restart:
>>>>
>>>> MAIN.n_object     3908216  .  object structs made
>>>> SMF.s0.g_alloc    7794510  .  Allocations outstanding
>>>>
>>>> I've just changed the I/O scheduler to noop to see what happens.
>>>>
>>>> One interesting thing I've found is about the memory usage.
>>>>
>>>> In the 1st minute of use:
>>>> MemTotal:       3865572 kB
>>>> MemFree:         120768 kB
>>>> MemAvailable:   2300268 kB
>>>>
>>>> 1 minute before a restart:
>>>> MemTotal:       3865572 kB
>>>> MemFree:          82480 kB
>>>> MemAvailable:     68316 kB
>>>>
>>>> It seems like the system is possibly running out of memory.
>>>>
>>>> When calling varnishd, I'm specifying only "-s file,..." as storage. I
>>>> see in some examples that it is common to use "-s file" AND "-s malloc"
>>>> together. Should I be passing "-s malloc" as well to somehow try to
>>>> limit the memory usage by varnishd?
>>>>
>>>> Best,
>>>> Stefano
>>>>
>>>> On Wed, Jun 28, 2017 at 4:12 AM, Guillaume Quintard <guillaume at varnish-software.com> wrote:
>>>>
>>>>> Sadly, nothing suspicious here. You can still try:
>>>>> - bumping the cli_timeout
>>>>> - changing your disk scheduler
>>>>> - changing the advice option of the file storage
>>>>>
>>>>> I'm still convinced this is due to Varnish getting stuck waiting for
>>>>> the disk because of file storage fragmentation.
>>>>>
>>>>> Maybe you could look at SMF.*.g_alloc and compare it to the number of
>>>>> objects. Ideally, we would have a 1:1 relation between objects and
>>>>> allocations. If that number drops prior to a restart, that would be a
>>>>> good clue.
>>>>>
>>>>> --
>>>>> Guillaume Quintard
>>>>>
>>>>> On Tue, Jun 27, 2017 at 11:07 PM, Stefano Baldo <stefanobaldo at gmail.com> wrote:
>>>>>
>>>>>> Hi Guillaume.
>>>>>>
>>>>>> It keeps restarting.
>>>>>> Would you mind taking a quick look at the following VCL file to check
>>>>>> if you find anything suspicious?
>>>>>>
>>>>>> Thank you very much.
>>>>>>
>>>>>> Best,
>>>>>> Stefano
>>>>>>
>>>>>> vcl 4.0;
>>>>>>
>>>>>> import std;
>>>>>>
>>>>>> backend default {
>>>>>>     .host = "sites-web-server-lb";
>>>>>>     .port = "80";
>>>>>> }
>>>>>>
>>>>>> include "/etc/varnish/bad_bot_detection.vcl";
>>>>>>
>>>>>> sub vcl_recv {
>>>>>>     call bad_bot_detection;
>>>>>>
>>>>>>     if (req.url == "/nocache" || req.url == "/version") {
>>>>>>         return(pass);
>>>>>>     }
>>>>>>
>>>>>>     unset req.http.Cookie;
>>>>>>     if (req.method == "PURGE") {
>>>>>>         ban("obj.http.x-host == " + req.http.host + " &&
>>>>>>             obj.http.x-user-agent !~ Googlebot");
>>>>>>         return(synth(750));
>>>>>>     }
>>>>>>
>>>>>>     set req.url = regsuball(req.url, "(?
>>>>>> }
>>>>>>
>>>>>> sub vcl_synth {
>>>>>>     if (resp.status == 750) {
>>>>>>         set resp.status = 200;
>>>>>>         synthetic("PURGED => " + req.url);
>>>>>>         return(deliver);
>>>>>>     } elsif (resp.status == 501) {
>>>>>>         set resp.status = 200;
>>>>>>         set resp.http.Content-Type = "text/html; charset=utf-8";
>>>>>>         synthetic(std.fileread("/etc/varnish/pages/invalid_domain.html"));
>>>>>>         return(deliver);
>>>>>>     }
>>>>>> }
>>>>>>
>>>>>> sub vcl_backend_response {
>>>>>>     unset beresp.http.Set-Cookie;
>>>>>>     set beresp.http.x-host = bereq.http.host;
>>>>>>     set beresp.http.x-user-agent = bereq.http.user-agent;
>>>>>>
>>>>>>     if (bereq.url == "/themes/basic/assets/theme.min.css"
>>>>>>         || bereq.url == "/api/events/PAGEVIEW"
>>>>>>         || bereq.url ~ "^\/assets\/img\/") {
>>>>>>         set beresp.http.Cache-Control = "max-age=0";
>>>>>>     } else {
>>>>>>         unset beresp.http.Cache-Control;
>>>>>>     }
>>>>>>
>>>>>>     if (beresp.status == 200 ||
>>>>>>         beresp.status == 301 ||
>>>>>>         beresp.status == 302 ||
>>>>>>         beresp.status == 404) {
>>>>>>         if (bereq.url ~ "\&ordenar=aleatorio$") {
>>>>>>             set beresp.http.X-TTL = "1d";
>>>>>>             set beresp.ttl = 1d;
>>>>>>         } else {
>>>>>>             set beresp.http.X-TTL = "1w";
>>>>>>             set beresp.ttl = 1w;
>>>>>>         }
>>>>>>     }
>>>>>>
>>>>>>     if (bereq.url !~ "\.(jpeg|jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$") {
set beresp.do_gzip = true; >>>>>> } >>>>>> } >>>>>> >>>>>> sub vcl_pipe { >>>>>> set bereq.http.connection = "close"; >>>>>> return (pipe); >>>>>> } >>>>>> >>>>>> sub vcl_deliver { >>>>>> unset resp.http.x-host; >>>>>> unset resp.http.x-user-agent; >>>>>> } >>>>>> >>>>>> sub vcl_backend_error { >>>>>> if (beresp.status == 502 || beresp.status == 503 || beresp.status >>>>>> == 504) { >>>>>> set beresp.status = 200; >>>>>> set beresp.http.Content-Type = "text/html; charset=utf-8"; >>>>>> synthetic(std.fileread("/etc/varnish/pages/maintenance.html")); >>>>>> return (deliver); >>>>>> } >>>>>> } >>>>>> >>>>>> sub vcl_hash { >>>>>> if (req.http.User-Agent ~ "Google Page Speed") { >>>>>> hash_data("Google Page Speed"); >>>>>> } elsif (req.http.User-Agent ~ "Googlebot") { >>>>>> hash_data("Googlebot"); >>>>>> } >>>>>> } >>>>>> >>>>>> sub vcl_deliver { >>>>>> if (resp.status == 501) { >>>>>> return (synth(resp.status)); >>>>>> } >>>>>> if (obj.hits > 0) { >>>>>> set resp.http.X-Cache = "hit"; >>>>>> } else { >>>>>> set resp.http.X-Cache = "miss"; >>>>>> } >>>>>> } >>>>>> >>>>>> >>>>>> On Mon, Jun 26, 2017 at 3:47 PM, Guillaume Quintard < >>>>>> guillaume at varnish-software.com> wrote: >>>>>> >>>>>>> Nice! It may have been the cause, time will tell.can you report back >>>>>>> in a few days to let us know? >>>>>>> -- >>>>>>> Guillaume Quintard >>>>>>> >>>>>>> On Jun 26, 2017 20:21, "Stefano Baldo" >>>>>>> wrote: >>>>>>> >>>>>>>> Hi Guillaume. >>>>>>>> >>>>>>>> I think things will start to going better now after changing the >>>>>>>> bans. >>>>>>>> This is how my last varnishstat looked like moments before a crash >>>>>>>> regarding the bans: >>>>>>>> >>>>>>>> MAIN.bans 41336 . Count of bans >>>>>>>> MAIN.bans_completed 37967 . Number of bans >>>>>>>> marked 'completed' >>>>>>>> MAIN.bans_obj 0 . Number of bans >>>>>>>> using obj.* >>>>>>>> MAIN.bans_req 41335 . 
Number of bans >>>>>>>> using req.* >>>>>>>> MAIN.bans_added 41336 0.68 Bans added >>>>>>>> MAIN.bans_deleted 0 0.00 Bans deleted >>>>>>>> >>>>>>>> And this is how it looks now: >>>>>>>> >>>>>>>> MAIN.bans 2 . Count of bans >>>>>>>> MAIN.bans_completed 1 . Number of bans >>>>>>>> marked 'completed' >>>>>>>> MAIN.bans_obj 2 . Number of bans >>>>>>>> using obj.* >>>>>>>> MAIN.bans_req 0 . Number of bans >>>>>>>> using req.* >>>>>>>> MAIN.bans_added 2016 0.69 Bans added >>>>>>>> MAIN.bans_deleted 2014 0.69 Bans deleted >>>>>>>> >>>>>>>> Before the changes, bans were never deleted! >>>>>>>> Now the bans are added and quickly deleted after a minute or even a >>>>>>>> couple of seconds. >>>>>>>> >>>>>>>> Maybe this was the cause of the problem? It seems like varnish was >>>>>>>> having a large number of bans to manage and test against. >>>>>>>> I will let it ride now. Let's see if the problem persists or it's >>>>>>>> gone! :-) >>>>>>>> >>>>>>>> Best, >>>>>>>> Stefano >>>>>>>> >>>>>>>> >>>>>>>> On Mon, Jun 26, 2017 at 3:10 PM, Guillaume Quintard < >>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>> >>>>>>>>> Looking good! >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Guillaume Quintard >>>>>>>>> >>>>>>>>> On Mon, Jun 26, 2017 at 7:06 PM, Stefano Baldo < >>>>>>>>> stefanobaldo at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi Guillaume, >>>>>>>>>> >>>>>>>>>> Can the following be considered "ban lurker friendly"?
>>>>>>>>>> >>>>>>>>>> sub vcl_backend_response { >>>>>>>>>> set beresp.http.x-url = bereq.http.host + bereq.url; >>>>>>>>>> set beresp.http.x-user-agent = bereq.http.user-agent; >>>>>>>>>> } >>>>>>>>>> >>>>>>>>>> sub vcl_recv { >>>>>>>>>> if (req.method == "PURGE") { >>>>>>>>>> ban("obj.http.x-url == " + req.http.host + req.url + " && >>>>>>>>>> obj.http.x-user-agent !~ Googlebot"); >>>>>>>>>> return(synth(750)); >>>>>>>>>> } >>>>>>>>>> } >>>>>>>>>> >>>>>>>>>> sub vcl_deliver { >>>>>>>>>> unset resp.http.x-url; >>>>>>>>>> unset resp.http.x-user-agent; >>>>>>>>>> } >>>>>>>>>> >>>>>>>>>> Best, >>>>>>>>>> Stefano >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Mon, Jun 26, 2017 at 12:43 PM, Guillaume Quintard < >>>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>>> >>>>>>>>>>> Not lurker friendly at all indeed. You'll need to avoid req.* >>>>>>>>>>> expressions. Easiest way is to stash the host, user-agent and url in >>>>>>>>>>> beresp.http.* and ban against those (unset them in vcl_deliver). >>>>>>>>>>> >>>>>>>>>>> I don't think you need to expand the VSL at all. >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> Guillaume Quintard >>>>>>>>>>> >>>>>>>>>>> On Jun 26, 2017 16:51, "Stefano Baldo" >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>> Hi Guillaume. >>>>>>>>>>> >>>>>>>>>>> Thanks for answering. >>>>>>>>>>> >>>>>>>>>>> I'm using an SSD disk. I've changed from ext4 to ext2 to increase >>>>>>>>>>> performance but it still restarts. >>>>>>>>>>> Also, I checked the I/O performance for the disk and there is no >>>>>>>>>>> sign of overload. >>>>>>>>>>> >>>>>>>>>>> I've changed the /var/lib/varnish to a tmpfs and increased its >>>>>>>>>>> 80m default size passing "-l 200m,20m" to varnishd and using >>>>>>>>>>> "nodev,nosuid,noatime,size=256M 0 0" for the tmpfs mount. There >>>>>>>>>>> was a problem here.
After a couple of hours varnish died and I received a >>>>>>>>>>> "no space left on device" message - deleting the /var/lib/varnish solved >>>>>>>>>>> the problem and varnish was up again, but it's weird because there was free >>>>>>>>>>> memory on the host to be used with the tmpfs directory, so I don't know >>>>>>>>>>> what could have happened. I will try to stop increasing the >>>>>>>>>>> /var/lib/varnish size. >>>>>>>>>>> >>>>>>>>>>> Anyway, I am worried about the bans. You asked me if the bans >>>>>>>>>>> are lurker friendly. Well, I don't think so. My bans are created this way: >>>>>>>>>>> >>>>>>>>>>> ban("req.http.host == " + req.http.host + " && req.url ~ " + >>>>>>>>>>> req.url + " && req.http.User-Agent !~ Googlebot"); >>>>>>>>>>> >>>>>>>>>>> Are they lurker friendly? I was taking a quick look at the >>>>>>>>>>> documentation and it looks like they're not. >>>>>>>>>>> >>>>>>>>>>> Best, >>>>>>>>>>> Stefano >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Fri, Jun 23, 2017 at 11:30 AM, Guillaume Quintard < >>>>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi Stefano, >>>>>>>>>>>> >>>>>>>>>>>> Let's cover the usual suspects: I/Os. I think here Varnish gets >>>>>>>>>>>> stuck trying to push/pull data and can't make time to reply to the CLI. I'd >>>>>>>>>>>> recommend monitoring the disk activity (bandwidth and iops) to confirm. >>>>>>>>>>>> >>>>>>>>>>>> After some time, the file storage is terrible on a hard drive >>>>>>>>>>>> (SSDs take a bit more time to degrade) because of fragmentation. One >>>>>>>>>>>> solution to help the disks cope is to overprovision them if they're SSDs, >>>>>>>>>>>> and you can try different advice in the file storage definition in the >>>>>>>>>>>> command line (last parameter, after granularity). >>>>>>>>>>>> >>>>>>>>>>>> Is your /var/lib/varnish mount on tmpfs? That could help too. >>>>>>>>>>>> >>>>>>>>>>>> 40K bans is a lot, are they ban-lurker friendly?
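For context, a ban is "lurker friendly" when its expression references only obj.* fields: the background ban-lurker thread has no client request at hand, so it cannot evaluate req.*-based bans, and those linger until a matching request happens to hit the object. A minimal sketch of the obj.*-only pattern (untested here; the x-url header name is illustrative):

```vcl
sub vcl_backend_response {
    # Copy request properties onto the cached object so that
    # bans can reference obj.* only.
    set beresp.http.x-url = bereq.http.host + bereq.url;
}

sub vcl_recv {
    if (req.method == "PURGE") {
        # No req.* in the expression, so the ban lurker can test
        # and retire this ban in the background.
        ban("obj.http.x-url == " + req.http.host + req.url);
        return (synth(200, "Ban added"));
    }
}

sub vcl_deliver {
    # Keep the bookkeeping header out of client responses.
    unset resp.http.x-url;
}
```

Because req.*-based bans can only be tested against incoming requests, they accumulate instead of being retired, which is consistent with the ever-growing MAIN.bans counter reported earlier in this thread.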
>>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> Guillaume Quintard >>>>>>>>>>>> >>>>>>>>>>>> On Fri, Jun 23, 2017 at 4:01 PM, Stefano Baldo < >>>>>>>>>>>> stefanobaldo at gmail.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hello. >>>>>>>>>>>>> >>>>>>>>>>>>> I am having a critical problem with Varnish Cache in >>>>>>>>>>>>> production for over a month and any help will be appreciated. >>>>>>>>>>>>> The problem is that Varnish child process is recurrently being >>>>>>>>>>>>> restarted after 10~20h of use, with the following message: >>>>>>>>>>>>> >>>>>>>>>>>>> Jun 23 09:15:13 b858e4a8bd72 varnishd[11816]: Child (11824) >>>>>>>>>>>>> not responding to CLI, killed it. >>>>>>>>>>>>> Jun 23 09:15:13 b858e4a8bd72 varnishd[11816]: Unexpected reply >>>>>>>>>>>>> from ping: 400 CLI communication error >>>>>>>>>>>>> Jun 23 09:15:13 b858e4a8bd72 varnishd[11816]: Child (11824) >>>>>>>>>>>>> died signal=9 >>>>>>>>>>>>> Jun 23 09:15:14 b858e4a8bd72 varnishd[11816]: Child cleanup >>>>>>>>>>>>> complete >>>>>>>>>>>>> Jun 23 09:15:14 b858e4a8bd72 varnishd[11816]: Child (24038) >>>>>>>>>>>>> Started >>>>>>>>>>>>> Jun 23 09:15:14 b858e4a8bd72 varnishd[11816]: Child (24038) >>>>>>>>>>>>> said Child starts >>>>>>>>>>>>> Jun 23 09:15:14 b858e4a8bd72 varnishd[11816]: Child (24038) >>>>>>>>>>>>> said SMF.s0 mmap'ed 483183820800 bytes of 483183820800 >>>>>>>>>>>>> >>>>>>>>>>>>> The following link is the varnishstat output just 1 minute >>>>>>>>>>>>> before a restart: >>>>>>>>>>>>> >>>>>>>>>>>>> https://pastebin.com/g0g5RVTs >>>>>>>>>>>>> >>>>>>>>>>>>> Environment: >>>>>>>>>>>>> >>>>>>>>>>>>> varnish-5.1.2 revision 6ece695 >>>>>>>>>>>>> Debian 8.7 - Debian GNU/Linux 8 (3.16.0) >>>>>>>>>>>>> Installed using pre-built package from official repo at >>>>>>>>>>>>> packagecloud.io >>>>>>>>>>>>> CPU 2x2.9 GHz >>>>>>>>>>>>> Mem 3.69 GiB >>>>>>>>>>>>> Running inside a Docker container >>>>>>>>>>>>> NFILES=131072 >>>>>>>>>>>>> MEMLOCK=82000 >>>>>>>>>>>>> >>>>>>>>>>>>> Additional info: >>>>>>>>>>>>> >>>>>>>>>>>>> - 
I need to cache a large number of objects and the cache >>>>>>>>>>>>> should last for almost a week, so I have set up a 450G storage space, I >>>>>>>>>>>>> don't know if this is a problem; >>>>>>>>>>>>> - I use ban a lot. There were about 40k bans in the system just >>>>>>>>>>>>> before the last crash. I really don't know if this is too much or may have >>>>>>>>>>>>> anything to do with it; >>>>>>>>>>>>> - No registered CPU spikes (almost always around 30%); >>>>>>>>>>>>> - No panic is reported, the only info I can retrieve is from >>>>>>>>>>>>> syslog; >>>>>>>>>>>>> - During all the time, even moments before the crashes, >>>>>>>>>>>>> everything is okay and requests are being served very fast. >>>>>>>>>>>>> >>>>>>>>>>>>> Best, >>>>>>>>>>>>> Stefano Baldo >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>> varnish-misc mailing list >>>>>>>>>>>>> varnish-misc at varnish-cache.org >>>>>>>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish >>>>>>>>>>>>> -misc >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>> >>>>> >>>> >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martynas at atomgraph.com Mon Jul 31 15:42:16 2017 From: martynas at atomgraph.com (=?UTF-8?Q?Martynas_Jusevi=C4=8Dius?=) Date: Mon, 31 Jul 2017 17:42:16 +0200 Subject: [Varnish 4] Respecting client's Cache-Control: max-age= as TTL Message-ID: Hi, I have been reading quite a bit about Varnish and VCL but found almost no examples with Cache-Control coming from the client request [1]. What I want to achieve: if the client sends Cache-Control: max-age=60, TTL becomes 60 s.
If the cache hit is fresher than 60 s, deliver it, otherwise fetch a new response from backend (I hope I'm not misusing the VCL terms here) *and* cache it. I had hacked this together in the vcl_fetch section in Varnish 3.x by setting the req.http.Cache-Control max-age value as beresp.ttl, but vcl_fetch is gone in Varnish 4.x. I have received a suggestion to use vcl_hit and/or grace [2], but again -- no examples... Could anyone provide some VCL pseudo-code that uses req.http.Cache-Control value to override TTL? max-age number parsing not necessary, I have figured that out. Thanks, Martynas [1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control#Cache_request_directives [2] https://github.com/varnishcache/varnish-cache/issues/2014#issuecomment-319096566 -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Mon Jul 31 16:37:59 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Mon, 31 Jul 2017 18:37:59 +0200 Subject: [Varnish 4] Respecting client's Cache-Control: max-age= as TTL In-Reply-To: References: Message-ID: On github I pointed to the doc explaining how you can return(fetch) to ignore a cached object, possibly based on ttl, so you already have half the answer. The other part of the equation is just converting req.http.cache-control to a duration and comparing that to obj.ttl. It will be similar to what you have done on v3. -- Guillaume Quintard On Jul 31, 2017 18:25, "Martynas Jusevičius" wrote: > Hi, > > I have been reading quite a bit about Varnish and VCL but found almost no > examples with Cache-Control coming from the client request [1]. > > What I want to achieve: if the client sends Cache-Control: max-age=60, TTL > becomes 60 s. If the cache hit is fresher than 60 s, deliver it, otherwise > fetch a new response from backend (I hope I'm not misusing the VCL terms > here) *and* cache it.
> > I had hacked this together in the vcl_fetch section in Varnish 3.x by > setting the req.http.Cache-Control max-age value as beresp.ttl, but > vcl_fetch is gone in Varnish 4.x. > > I have received a suggestion to use vcl_hit and/or grace [2], but again -- > no examples... > > Could anyone provide some VCL pseudo-code that uses req.http.Cache-Control > value to override TTL? max-age number parsing not necessary, I have figure > that out. > > Thanks, > > Martynas > > [1] https://developer.mozilla.org/en-US/docs/Web/HTTP/ > Headers/Cache-Control#Cache_request_directives > [2] https://github.com/varnishcache/varnish-cache/ > issues/2014#issuecomment-319096566 > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martynas at atomgraph.com Mon Jul 31 17:56:26 2017 From: martynas at atomgraph.com (=?UTF-8?Q?Martynas_Jusevi=C4=8Dius?=) Date: Mon, 31 Jul 2017 17:56:26 +0000 Subject: [Varnish 4] Respecting client's Cache-Control: max-age= as TTL In-Reply-To: References: Message-ID: Thanks. What was mostly unclear to me is passing the req header value all the way to where it's used to set TTL. Why doesn't bereq contain the req headers? At least Cache-Control is gone. But I guess that can be done using obj.ttl, which I didn't know about. Any documentation on that? On Mon, 31 Jul 2017 at 18.38, Guillaume Quintard < guillaume at varnish-software.com> wrote: > On github I pointed to the doc explaining how you can return(fetch) to > ignore a cached object, possibly based on ttl, so you already have half the > answer. > > The other part of the equation is just converting req.http.cache-control > to a duration and comparing that to obj.ttl. It will be similar to what you > have done on v3. 
> > -- > Guillaume Quintard > > On Jul 31, 2017 18:25, "Martynas Jusevičius" > wrote: > >> Hi, >> >> I have been reading quite a bit about Varnish and VCL but found almost no >> examples with Cache-Control coming from the client request [1]. >> >> What I want to achieve: if the client sends Cache-Control: max-age=60, >> TTL becomes 60 s. If the cache hit is fresher than 60 s, deliver it, >> otherwise fetch a new response from backend (I hope I'm not misusing the >> VCL terms here) *and* cache it. >> >> I had hacked this together in the vcl_fetch section in Varnish 3.x by >> setting the req.http.Cache-Control max-age value as beresp.ttl, but >> vcl_fetch is gone in Varnish 4.x. >> >> I have received a suggestion to use vcl_hit and/or grace [2], but again >> -- no examples... >> >> Could anyone provide some VCL pseudo-code that >> uses req.http.Cache-Control value to override TTL? max-age number parsing >> not necessary, I have figured that out. >> >> Thanks, >> >> Martynas >> >> [1] >> https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control#Cache_request_directives >> [2] >> https://github.com/varnishcache/varnish-cache/issues/2014#issuecomment-319096566 >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Mon Jul 31 19:11:40 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Mon, 31 Jul 2017 21:11:40 +0200 Subject: [Varnish 4] Respecting client's Cache-Control: max-age= as TTL In-Reply-To: References: Message-ID: man vcl bereq is filtered to avoid side effects of the client forcing the ttl to the backend. Anyway, by the time you have access to bereq, it's too late for you since the decision to go to the backend has already been made.
-- Guillaume Quintard On Jul 31, 2017 19:56, "Martynas Jusevi?ius" wrote: Thanks. What was mostly unclear to me is passing the req header value all the way to where it's used to set TTL. Why doesn't bereq contain the req headers? At least Cache-Control is gone. But I guess that can be done using obj.ttl, which I didn't know about. Any documentation on that? On Mon, 31 Jul 2017 at 18.38, Guillaume Quintard < guillaume at varnish-software.com> wrote: > On github I pointed to the doc explaining how you can return(fetch) to > ignore a cached object, possibly based on ttl, so you already have half the > answer. > > The other part of the equation is just converting req.http.cache-control > to a duration and comparing that to obj.ttl. It will be similar to what you > have done on v3. > > -- > Guillaume Quintard > > On Jul 31, 2017 18:25, "Martynas Jusevi?ius" > wrote: > >> Hi, >> >> I have been reading quite a bit about Varnish and VCL but found almost no >> examples with Cache-Control coming from the client request [1]. >> >> What I want to achieve: if the client sends Cache-Control: max-age=60, >> TTL becomes 60 s. If the cache hit is fresher than 60 s, deliver it, >> otherwise fetch a new response from backend (I hope I'm not misusing the >> VCL terms here) *and* cache it. >> >> I had hacked this together in the vcl_fetch section in Varnish 3.x by >> setting the req.http.Cache-Control max-age value as beresp.ttl, but >> vcl_fetch is gone in Varnish 4.x. >> >> I have received a suggestion to use vcl_hit and/or grace [2], but again >> -- no examples... >> >> Could anyone provide some VCL pseudo-code that >> uses req.http.Cache-Control value to override TTL? max-age number parsing >> not necessary, I have figure that out. 
>> >> Thanks, >> >> Martynas >> >> [1] https://developer.mozilla.org/en-US/docs/Web/HTTP/ >> Headers/Cache-Control#Cache_request_directives >> [2] https://github.com/varnishcache/varnish-cache/ >> issues/2014#issuecomment-319096566 >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martynas at atomgraph.com Mon Jul 31 22:01:12 2017 From: martynas at atomgraph.com (=?UTF-8?Q?Martynas_Jusevi=C4=8Dius?=) Date: Tue, 1 Aug 2017 00:01:12 +0200 Subject: [Varnish 4] Respecting client's Cache-Control: max-age= as TTL In-Reply-To: References: Message-ID: Thanks Guillaume. First I tried sub vcl_hit { if (req.http.Cache-Control ~ "max-age=[0-9]*") { set req.http.Max-Age = regsub(req.http.Cache-Control, "max-age=([0-9]*)", "\1"); if (obj.age > std.duration(req.http.Max-Age + "s", 1000000s)) { std.log("obj.age: " + obj.age + " req.http.Max-Age: " + req.http.Max-Age); return(fetch); } } ... but I got an error: - VCL_call HIT - ReqHeader Max-Age: 69 - VCL_Log obj.age: 102.306 req.http.Max-Age: 69 - VCL_return fetch - VCL_Error change return(fetch) to return(miss) in vcl_hit{} - VCL_Error vcl_hit{} returns miss without busy object. Doing pass. - VCL_call PASS - VCL_return fetch I did as told and I tried On Mon, Jul 31, 2017 at 9:11 PM, Guillaume Quintard < guillaume at varnish-software.com> wrote: > man vcl > > bereq is filtered to avoid side effects of the client forcing the ttl to > the backed. > > Anyway, by the time you have access to bereq, it's too late for you since > the decision to go to the backend has already been been made. > > -- > Guillaume Quintard > > > On Jul 31, 2017 19:56, "Martynas Jusevi?ius" > wrote: > > Thanks. What was mostly unclear to me is passing the req header value all > the way to where it's used to set TTL. 
> > Why doesn't bereq contain the req headers? At least Cache-Control is gone. > > But I guess that can be done using obj.ttl, which I didn't know about. Any > documentation on that? > > On Mon, 31 Jul 2017 at 18.38, Guillaume Quintard < > guillaume at varnish-software.com> wrote: > >> On github I pointed to the doc explaining how you can return(fetch) to >> ignore a cached object, possibly based on ttl, so you already have half the >> answer. >> >> The other part of the equation is just converting req.http.cache-control >> to a duration and comparing that to obj.ttl. It will be similar to what you >> have done on v3. >> >> -- >> Guillaume Quintard >> >> On Jul 31, 2017 18:25, "Martynas Jusevi?ius" >> wrote: >> >>> Hi, >>> >>> I have been reading quite a bit about Varnish and VCL but found almost >>> no examples with Cache-Control coming from the client request [1]. >>> >>> What I want to achieve: if the client sends Cache-Control: max-age=60, >>> TTL becomes 60 s. If the cache hit is fresher than 60 s, deliver it, >>> otherwise fetch a new response from backend (I hope I'm not misusing the >>> VCL terms here) *and* cache it. >>> >>> I had hacked this together in the vcl_fetch section in Varnish 3.x by >>> setting the req.http.Cache-Control max-age value as beresp.ttl, but >>> vcl_fetch is gone in Varnish 4.x. >>> >>> I have received a suggestion to use vcl_hit and/or grace [2], but again >>> -- no examples... >>> >>> Could anyone provide some VCL pseudo-code that >>> uses req.http.Cache-Control value to override TTL? max-age number parsing >>> not necessary, I have figure that out. 
>>> >>> Thanks, >>> >>> Martynas >>> >>> [1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Header >>> s/Cache-Control#Cache_request_directives >>> [2] https://github.com/varnishcache/varnish-cache/issues/ >>> 2014#issuecomment-319096566 >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martynas at atomgraph.com Mon Jul 31 22:10:49 2017 From: martynas at atomgraph.com (=?UTF-8?Q?Martynas_Jusevi=C4=8Dius?=) Date: Tue, 1 Aug 2017 00:10:49 +0200 Subject: [Varnish 4] Respecting client's Cache-Control: max-age= as TTL In-Reply-To: References: Message-ID: Sorry, sent too soon. Here it goes: Thanks Guillaume. First I tried return(fetch) as you suggested sub vcl_hit { if (req.http.Cache-Control ~ "max-age=[0-9]*") { set req.http.Max-Age = regsub(req.http.Cache-Control, "max-age=([0-9]*)", "\1"); if (obj.age > std.duration(req.http.Max-Age + "s", 1000000s)) { std.log("obj.age: " + obj.age + " req.http.Max-Age: " + req.http.Max-Age); return(fetch); } } ... but I got an error: - VCL_call HIT - ReqHeader Max-Age: 69 - VCL_Log obj.age: 102.306 req.http.Max-Age: 69 - VCL_return fetch - VCL_Error change return(fetch) to return(miss) in vcl_hit{} - VCL_Error vcl_hit{} returns miss without busy object. Doing pass. - VCL_call PASS - VCL_return fetch I did as told and I tried return(miss) sub vcl_hit { if (req.http.Cache-Control ~ "max-age=[0-9]*") { set req.http.Max-Age = regsub(req.http.Cache-Control, "max-age=([0-9]*)", "\1"); if (obj.age > std.duration(req.http.Max-Age + "s", 1000000s)) { std.log("obj.age: " + obj.age + " req.http.Max-Age: " + req.http.Max-Age); return(miss); } } ... 
but then I got another error: - VCL_call HIT - ReqHeader Max-Age: 69 - VCL_Log obj.age: 195.391 req.http.Max-Age: 69 - VCL_return miss - VCL_Error vcl_hit{} returns miss without busy object. Doing pass. - VCL_call PASS - VCL_return fetch So it looks like the max-age logic is triggered correctly, but what is wrong with the return values? On Tue, Aug 1, 2017 at 12:01 AM, Martynas Jusevi?ius wrote: > Thanks Guillaume. > > First I tried > > sub vcl_hit { > if (req.http.Cache-Control ~ "max-age=[0-9]*") { > set req.http.Max-Age = regsub(req.http.Cache-Control, > "max-age=([0-9]*)", "\1"); > if (obj.age > std.duration(req.http.Max-Age + "s", 1000000s)) { > std.log("obj.age: " + obj.age + " req.http.Max-Age: " + > req.http.Max-Age); > return(fetch); > } > } > ... > > but I got an error: > > - VCL_call HIT > - ReqHeader Max-Age: 69 > - VCL_Log obj.age: 102.306 req.http.Max-Age: 69 > - VCL_return fetch > - VCL_Error change return(fetch) to return(miss) in vcl_hit{} > - VCL_Error vcl_hit{} returns miss without busy object. Doing pass. > - VCL_call PASS > - VCL_return fetch > > I did as told and I tried > > > On Mon, Jul 31, 2017 at 9:11 PM, Guillaume Quintard < > guillaume at varnish-software.com> wrote: > >> man vcl >> >> bereq is filtered to avoid side effects of the client forcing the ttl to >> the backed. >> >> Anyway, by the time you have access to bereq, it's too late for you since >> the decision to go to the backend has already been been made. >> >> -- >> Guillaume Quintard >> >> >> On Jul 31, 2017 19:56, "Martynas Jusevi?ius" >> wrote: >> >> Thanks. What was mostly unclear to me is passing the req header value all >> the way to where it's used to set TTL. >> >> Why doesn't bereq contain the req headers? At least Cache-Control is gone. >> >> But I guess that can be done using obj.ttl, which I didn't know about. >> Any documentation on that? 
>> >> On Mon, 31 Jul 2017 at 18.38, Guillaume Quintard < >> guillaume at varnish-software.com> wrote: >> >>> On github I pointed to the doc explaining how you can return(fetch) to >>> ignore a cached object, possibly based on ttl, so you already have half the >>> answer. >>> >>> The other part of the equation is just converting req.http.cache-control >>> to a duration and comparing that to obj.ttl. It will be similar to what you >>> have done on v3. >>> >>> -- >>> Guillaume Quintard >>> >>> On Jul 31, 2017 18:25, "Martynas Jusevi?ius" >>> wrote: >>> >>>> Hi, >>>> >>>> I have been reading quite a bit about Varnish and VCL but found almost >>>> no examples with Cache-Control coming from the client request [1]. >>>> >>>> What I want to achieve: if the client sends Cache-Control: max-age=60, >>>> TTL becomes 60 s. If the cache hit is fresher than 60 s, deliver it, >>>> otherwise fetch a new response from backend (I hope I'm not misusing the >>>> VCL terms here) *and* cache it. >>>> >>>> I had hacked this together in the vcl_fetch section in Varnish 3.x by >>>> setting the req.http.Cache-Control max-age value as beresp.ttl, but >>>> vcl_fetch is gone in Varnish 4.x. >>>> >>>> I have received a suggestion to use vcl_hit and/or grace [2], but again >>>> -- no examples... >>>> >>>> Could anyone provide some VCL pseudo-code that >>>> uses req.http.Cache-Control value to override TTL? max-age number parsing >>>> not necessary, I have figure that out. 
>>>> >>>> Thanks, >>>> >>>> Martynas >>>> >>>> [1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Header >>>> s/Cache-Control#Cache_request_directives >>>> [2] https://github.com/varnishcache/varnish-cache/issues/201 >>>> 4#issuecomment-319096566 >>>> >>>> _______________________________________________ >>>> varnish-misc mailing list >>>> varnish-misc at varnish-cache.org >>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL:
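One commonly suggested way around the "vcl_hit{} returns miss without busy object" error seen above is not to return miss from vcl_hit at all, but to restart the request and force a miss on the second pass. A sketch along those lines (untested; the X-Refresh marker header name is illustrative, and it assumes Varnish 4/5 with vmod std loaded):

```vcl
import std;

sub vcl_recv {
    if (req.http.X-Refresh) {
        # Second pass after the restart below: skip the cached
        # object so a fresh copy is fetched and cached.
        set req.hash_always_miss = true;
    }
}

sub vcl_hit {
    if (req.http.Cache-Control ~ "max-age=[0-9]+" && !req.http.X-Refresh) {
        # Compare the object's age against the client's max-age,
        # converted to a duration as discussed above.
        if (obj.age > std.duration(regsub(req.http.Cache-Control,
                ".*max-age=([0-9]+).*", "\1") + "s", 0s)) {
            set req.http.X-Refresh = "1";
            return (restart);
        }
    }
}
```

Since this lets any client force backend fetches, restricting it to trusted clients (for example with an ACL) would be advisable, as was already noted for hash_always_miss earlier in this digest.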