From slink at schokola.de Tue Aug 5 10:51:08 2014 From: slink at schokola.de (Nils Goroll) Date: Tue, 05 Aug 2014 12:51:08 +0200 Subject: Patch for Nagios Varnish plugin - add multiinstances In-Reply-To: References: Message-ID: <53E0B71C.9030604@schokola.de> Hi, On 25/07/14 17:34, wolvverine wrote: > - add option for instance name This script is not part of the varnish distribution, I'd suggest you submit a pull request at https://github.com/varnish/varnish-nagios Thanks, Nils From slink at schokola.de Tue Aug 5 11:20:05 2014 From: slink at schokola.de (Nils Goroll) Date: Tue, 05 Aug 2014 13:20:05 +0200 Subject: [PATCH] Make "done" function available in VCL In-Reply-To: <5D103CE839D50E4CBC62C9FD7B83287C8FB399AD@EXCN015.encara.local.ads> References: <5D103CE839D50E4CBC62C9FD7B83287C8FB399AD@EXCN015.encara.local.ads> Message-ID: <53E0BDE5.8070302@schokola.de> Hi Thierry, On 07/03/14 16:37, MAGNIEN, Thierry wrote: > Update : previous patch generates a segfault during varnishtest while building modules. Not sure the fix is perfect but it seems to work. > > Here is the updated patch: I am getting back to your patch because https://www.varnish-cache.org/patchwork/patch/144/ is still marked 'New'. Martin has explained on IRC that you had discussed this and that the future solution to your requirement will be a vmod with PRIV_REQ state. I am thus marking the patch as rejected. Thanks, Nils From thierry.magnien at sfr.com Tue Aug 5 11:24:29 2014 From: thierry.magnien at sfr.com (MAGNIEN, Thierry) Date: Tue, 5 Aug 2014 11:24:29 +0000 Subject: [PATCH] Make "done" function available in VCL In-Reply-To: <53E0BDE5.8070302@schokola.de> References: <5D103CE839D50E4CBC62C9FD7B83287C8FB399AD@EXCN015.encara.local.ads> <53E0BDE5.8070302@schokola.de> Message-ID: <5D103CE839D50E4CBC62C9FD7B83287CC9C0E96E@EXCN015.encara.local.ads> Hi Nils, That's fine, PRIV_REQ is indeed what I need. 
Regards, Thierry -----Original Message----- From: Nils Goroll [mailto:slink at schokola.de] Sent: Tuesday, 5 August 2014 13:20 To: MAGNIEN, Thierry; Varnish Development Subject: Re: [PATCH] Make "done" function available in VCL Hi Thierry, On 07/03/14 16:37, MAGNIEN, Thierry wrote: > Update : previous patch generates a segfault during varnishtest while building modules. Not sure the fix is perfect but it seems to work. > > Here is the updated patch: I am getting back to your patch because https://www.varnish-cache.org/patchwork/patch/144/ is still marked 'New'. Martin has explained on IRC that you had discussed this and that the future solution to your requirement will be a vmod with PRIV_REQ state. I am thus marking the patch as rejected. Thanks, Nils From slink at schokola.de Tue Aug 5 12:47:25 2014 From: slink at schokola.de (Nils Goroll) Date: Tue, 05 Aug 2014 14:47:25 +0200 Subject: Non-recursive build system In-Reply-To: <52CFFC6B.7080701@smartjog.com> References: <52CFFC6B.7080701@smartjog.com> Message-ID: <53E0D25D.2040702@schokola.de> Salut Guillaume, I am going through patchwork and noticed https://www.varnish-cache.org/patchwork/patch/138/ still being in status 'New'. As this has not got merged, what would be an appropriate state for it? Nils From guillaume.quintard at smartjog.com Tue Aug 5 12:51:16 2014 From: guillaume.quintard at smartjog.com (Guillaume Quintard) Date: Tue, 5 Aug 2014 14:51:16 +0200 Subject: Non-recursive build system In-Reply-To: <53E0D25D.2040702@schokola.de> References: <52CFFC6B.7080701@smartjog.com> <53E0D25D.2040702@schokola.de> Message-ID: <53E0D344.6080607@smartjog.com> On 08/05/2014 02:47 PM, Nils Goroll wrote: > Salut Guillaume, > > I am going through patchwork and noticed > https://www.varnish-cache.org/patchwork/patch/138/ still being in status 'New'. > > As this has not got merged, what would be an appropriate state for it?
> > Nils Hi Nils, I think you can reject it, as it hasn't been touched for a while, didn't attract any attention, and after all, autotools work well enough. -- Guillaume Quintard From slink at schokola.de Tue Aug 5 15:59:49 2014 From: slink at schokola.de (Nils Goroll) Date: Tue, 05 Aug 2014 17:59:49 +0200 Subject: [PATCH] Rework autocrap configuration for libedit/libreadline Message-ID: <53E0FF75.1030008@schokola.de> After an embarrassingly failing initial attempt, here's a patch to (hopefully) fix #1555 and other build issues regarding libedit vs. libreadline. @fgs: As discussed on irc, would be cool if you could check this on the build farm @all: would appreciate more tests on different configs, especially more exotic ones like bsd, macos and the like Nils From slink at schokola.de Tue Aug 5 16:02:20 2014 From: slink at schokola.de (Nils Goroll) Date: Tue, 05 Aug 2014 18:02:20 +0200 Subject: [PATCH] Rework autocrap configuration for libedit/libreadline Message-ID: <53E1000C.7080207@schokola.de> (not as embarrassing as breaking master, but still I forgot the attachment this time) After an embarrassingly failing initial attempt, here's a patch to (hopefully) fix #1555 and other build issues regarding libedit vs. libreadline. @fgs: As discussed on irc, would be cool if you could check this on the build farm @all: would appreciate more tests on different configs, especially more exotic ones like bsd, macos and the like Nils _______________________________________________ varnish-dev mailing list varnish-dev at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev -------------- next part -------------- A non-text attachment was scrubbed...
Name: 0001-Rework-autocrap-configuration-for-libedit-libreadlin.patch Type: text/x-patch Size: 2871 bytes Desc: not available URL: From phk at phk.freebsd.dk Wed Aug 6 09:33:33 2014 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 06 Aug 2014 09:33:33 +0000 Subject: Non-recursive build system In-Reply-To: <53E0D344.6080607@smartjog.com> References: <52CFFC6B.7080701@smartjog.com> <53E0D25D.2040702@schokola.de> <53E0D344.6080607@smartjog.com> Message-ID: <91402.1407317613@critter.freebsd.dk> -------- In message <53E0D344.6080607 at smartjog.com>, Guillaume Quintard writes: >On 08/05/2014 02:47 PM, Nils Goroll wrote: >I think you can reject it, as it hasn't been touched for a while, didn't >attract any attention, and after all, autotools work well enough. Well, I wouldn't put it that way... :-) But I'm also not happy enough with the Makefile.phk stuff at this point in time to say that it is obviously better. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From slink at schokola.de Mon Aug 11 10:47:50 2014 From: slink at schokola.de (Nils Goroll) Date: Mon, 11 Aug 2014 12:47:50 +0200 Subject: VRT_CacheReqBody without inline-C? In-Reply-To: <32B2E540-68AE-460A-B544-0FEF4EAF29CB@gmail.com> References: <53D692F4.7030904@schokola.de> <16357.1406707032@critter.freebsd.dk> <32B2E540-68AE-460A-B544-0FEF4EAF29CB@gmail.com> Message-ID: <53E89F56.8060105@schokola.de> Thank you for the patch. We have discussed it on irc and as we'd like to be able to have a "size" argument (to be able to specify std.cache_req_body(50m);), phk will add an appropriate type to the vmod compiler and integrate your patch. On 30/07/14 17:00, Meng Zhang wrote: > > Just created one, please review.
> > From e6f95f0bafa656de41c25ecd74f6d626ab70c25f Mon Sep 17 00:00:00 2001 > From: ijammy > > Date: Wed, 30 Jul 2014 22:56:40 +0800 > Subject: [PATCH] Add std.cache_req_body for VRT_CacheReqBody From slink at schokola.de Mon Aug 11 21:33:25 2014 From: slink at schokola.de (Nils Goroll) Date: Mon, 11 Aug 2014 23:33:25 +0200 Subject: restarting for bad synchronous responses - Re: PATCH: stale-while-revalidate support In-Reply-To: References: Message-ID: <53E936A5.2050900@schokola.de> Hi, I have started writing up a suggestion following an irc discussion about Federico's patch. I wanted to suggest to also add stale-if-error (s-i-e) support but really I don't see a way at the moment to achieve this without added functionality in vcl - am I missing something? For s-i-e, I think we need to handle the case of a synchronous backend fetch (ie one with one or more frontend requests waiting for the result) resulting in "an error". We'd have to restart the frontend request once and check for a stale object as in vcl_hit { // got here not because of grace but because of // keep or some other parameter if (req.restarts == 1) { return (deliver); } } The options I see at the moment to achieve this would all require additions to VCL: - something along the lines of a beresp.fetched ("this object originated from a fetch initiated by this req"): vcl_deliver { if (beresp.fetched && req.restarts == 0 && beresp.status >= 500) { return (restart); } } OR - some way to determine in v_b_e and v_b_r if the bereq is synchronous and restart the req.
sub vcl_backend_(response|error) { if (bereq.synchronous.req && beresp.status > 500) { restart(bereq.synchronous.req); } } Thanks, Nils From slink at schokola.de Mon Aug 11 21:40:38 2014 From: slink at schokola.de (Nils Goroll) Date: Mon, 11 Aug 2014 23:40:38 +0200 Subject: PATCH: stale-while-revalidate support In-Reply-To: References: Message-ID: <53E93856.6050200@schokola.de> Federico, after our irc discussion, I could not come up with a more precise suggestion on how to implement s-i-e due to the problems I have just posted in "restarting for bad synchronous responses", so unless there is or will be a way to achieve a restart (not a retry) for the "error response case", I agree now that we can't get s-i-e support. So what remains is that I think we should support the case where the object we receive has age > max-age and put it into the cache with 0 ttl and appropriate grace. The rfc is clear about this (second example). We might need to touch more code to get ttl == 0 working. Thanks, Nils From nils.goroll at uplex.de Mon Aug 11 22:05:44 2014 From: nils.goroll at uplex.de (Nils Goroll) Date: Tue, 12 Aug 2014 00:05:44 +0200 Subject: [PATCH] Skip NULL arguments when hashing. Hash all arguments. Message-ID: <53E93E38.60006@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 When multiple arguments were passed to vmod_hash_backend, only the first argument was re-used for the number of arguments passed. Fixes #1568 Re phk on irc: I don't see how we should handle NULL differently from "". If no bytes are processed, the hash does not change.
Nils - -- ** * * UPLEX - Nils Goroll Systemoptimierung Scheffelstra?e 32 22301 Hamburg tel +49 40 28805731 mob +49 170 2723133 fax +49 40 42949753 xmpp://slink at jabber.ccc.de/ http://uplex.de/ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iEYEARECAAYFAlPpPjYACgkQYhlPtokm/hv19gCfXpqBNwfB2QxNIW+OrWqHENEk B9cAoJzKjz2G2pTErNDMnUMvyI0brGUl =9zp+ -----END PGP SIGNATURE----- -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-Skip-NULL-arguments-when-hashing.-Hash-all-arguments.patch Type: text/x-patch Size: 2021 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-Skip-NULL-arguments-when-hashing.-Hash-all-arguments.patch.sig Type: application/pgp-signature Size: 72 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4528 bytes Desc: S/MIME Cryptographic Signature URL: From slink at schokola.de Tue Aug 12 11:10:09 2014 From: slink at schokola.de (Nils Goroll) Date: Tue, 12 Aug 2014 13:10:09 +0200 Subject: [PATCH] Quote our etags used in tests. Message-ID: <53E9F611.4090102@schokola.de> A non-text attachment was scrubbed... Name: 0001-Quote-our-etags-used-in-tests.patch Type: text/x-patch Size: 10555 bytes Desc: not available URL: From fgsch at lodoss.net Wed Aug 13 22:50:09 2014 From: fgsch at lodoss.net (Federico Schwindt) Date: Wed, 13 Aug 2014 23:50:09 +0100 Subject: PATCH: Split ungzip counters Message-ID: Hi, As discussed on irc the attached diff adds a new counter, n_gunzip_test, to decouple the existing counter from testing and actually uncompressing at delivering time. Not terribly thrilled about the diff but I've tried a few variations and I liked this best. Comments? OKs? PS: pedantic question of the day: shouldn't all these be ungzip and not gunzip? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 000-split_n_gunzip_counter.patch Type: text/x-patch Size: 3445 bytes Desc: not available URL: From fgsch at lodoss.net Wed Aug 13 22:55:32 2014 From: fgsch at lodoss.net (Federico Schwindt) Date: Wed, 13 Aug 2014 23:55:32 +0100 Subject: PATCH: stale-while-revalidate support In-Reply-To: <53E93856.6050200@schokola.de> References: <53E93856.6050200@schokola.de> Message-ID: Comments inline. On Mon, Aug 11, 2014 at 10:40 PM, Nils Goroll wrote: > [..] So what remains is that I think we should support the case where the object > we > receive has age > max-age and put it into the cache with 0 ttl and > appropriate > grace. The rfc is clear about this (second example). > Where is this? I cannot really find/see it in the rfc. > We might need to touch more code to get ttl == 0 working. > That's an entirely separate discussion that I'd prefer to have outside this. f.- -------------- next part -------------- An HTML attachment was scrubbed...
http://tools.ietf.org/html/rfc5861 the successful response can be returned instead: HTTP/1.1 200 OK Cache-Control: max-age=600, stale-if-error=1200 Age: 900 Content-Type: text/plain From fgsch at lodoss.net Mon Aug 18 08:15:12 2014 From: fgsch at lodoss.net (Federico Schwindt) Date: Mon, 18 Aug 2014 09:15:12 +0100 Subject: PATCH: stale-while-revalidate support In-Reply-To: <53F095ED.20007@schokola.de> References: <53E93856.6050200@schokola.de> <53F095ED.20007@schokola.de> Message-ID: This is for s-i-e and I don't really see how that is related to s-w-r unless I'm missing something. On Sun, Aug 17, 2014 at 12:45 PM, Nils Goroll wrote: > > > On 14/08/14 00:55, Federico Schwindt wrote: > > Comments inline. > > > > On Mon, Aug 11, 2014 at 10:40 PM, Nils Goroll > > wrote: > > > > [..] > > > > So what remains is that I think we should support the case where the > object we > > receive has age > max-age and put it into the cache with 0 ttl and > appropriate > > grace. The rfc is clear about this (second example). > > > > > > Where is this? I cannot really find/see it in the rfc.
http_gzip_vary, to control this but I'm not in favour of adding more knobs. ymmv. Comments? OKs? FWIW, nginx supports this via gzip_vary (1). It's also mentioned in Google's Optimizing caching docs (2) and in maxcdn blog post (3). f.- 1. http://nginx.org/en/docs/http/ngx_http_gzip_module.html#gzip_vary 2. https://developers.google.com/speed/docs/best-practices/caching 3. http://blog.maxcdn.com/accept-encoding-its-vary-important/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 000-gzip_vary.patch Type: text/x-patch Size: 3307 bytes Desc: not available URL: From pada at posteo.de Fri Aug 22 20:41:14 2014 From: pada at posteo.de (Daniel Parthey) Date: Fri, 22 Aug 2014 22:41:14 +0200 Subject: PATCH: vary on gzip In-Reply-To: References: Message-ID: <53F7AAEA.60601@posteo.de> On 22.08.2014 13:45, Federico Schwindt wrote: > Hi, > > I've been pondering on this for a while and I finally convinced myself that > Varnish is doing it wrong. If http_gzip_support is enabled and the object > is gzip'd we must add a Vary on Accept-Encoding on delivery to keep > intermediary caches happy. > Hi Federico, sounds reasonable. Our web applications have been adding the Vary: accept-encoding for years now, probably for the same reason (to make varnish and other caches happy), but for us the webserver still does the compression, because of issues like https://www.varnish-cache.org/trac/ticket/1220 which still seem to be present in Varnish 3.0.4. If the webserver did not compress the content and Varnish adds the gzip layer, a Vary: accept-encoding is surely necessary in order to prevent a mix of compressed and non-compressed content in intermediary caches which are located between varnish and client.
Kind regards Daniel -- https://emailselfdefense.fsf.org https://pgp.mit.edu/pks/lookup?op=get&search=0xB4DD34660B6F0F1B -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From fgsch at lodoss.net Tue Aug 26 10:05:43 2014 From: fgsch at lodoss.net (Federico Schwindt) Date: Tue, 26 Aug 2014 11:05:43 +0100 Subject: PATCH: vary on gzip In-Reply-To: <53F7AAEA.60601@posteo.de> References: <53F7AAEA.60601@posteo.de> Message-ID: Hi, Thanks for checking. To be honest I'm a bit surprised ticket 1220 has been outstanding for so long. I've tried reproducing this in 3.0.[345] without success. Have you checked whether this is fixed in V4? btw, if http_gzip_support is enabled you need the Vary header regardless of where compression takes place since Varnish will uncompress the content on the fly if the client doesn't support gzip. Cheers. On Fri, Aug 22, 2014 at 9:41 PM, Daniel Parthey wrote: > On 22.08.2014 13:45, Federico Schwindt wrote: > > Hi, > > > > I've been pondering on this for a while and I finally convinced myself > that > > Varnish is doing it wrong. If http_gzip_support is enabled and the > object > > is gzip'd we must add a Vary on Accept-Encoding on delivery to keep > > intermediary caches happy. > > > > Hi Federico, > > sounds reasonable. Our web applications have been adding the Vary: > accept-encoding > for years now, probably for the same reason (to make varnish and other > caches happy), > but for us the webserver still does the compression, because of issues like > https://www.varnish-cache.org/trac/ticket/1220 which still seem to be > present in > Varnish 3.0.4. > > If the webserver did not compress the content and Varnish adds the gzip layer, > a > Vary: accept-encoding is surely necessary in order to prevent a mix of > compressed > and non-compressed content in intermediary caches which are located > between varnish and client.
> > Kind regards > Daniel > > -- > https://emailselfdefense.fsf.org > https://pgp.mit.edu/pks/lookup?op=get&search=0xB4DD34660B6F0F1B > > > > _______________________________________________ > varnish-dev mailing list > varnish-dev at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From slink at schokola.de Tue Aug 26 10:46:04 2014 From: slink at schokola.de (Nils Goroll) Date: Tue, 26 Aug 2014 12:46:04 +0200 Subject: PATCH: vary on gzip In-Reply-To: References: Message-ID: <53FC656C.4060402@schokola.de> As discussed on IRC: I do very much appreciate that you've put this back onto the agenda, this is important and Varnish needs to get the default Vary right. But I think this needs to live in fetch (before calling v_b_r): The Vary header does not depend upon the request, so it does not need to live in deliver. Doing things at delivery time is more costly than at fetch time (because we probably cache the result). When backends handle gzip, we also end up with a cache-object with Vary: A-E (at least for sane backends), so we should get the same result when Varnish does the compression. vry_cmp already skips Accept-Encoding, so we should be fine at lookup time. Nils From fgsch at lodoss.net Tue Aug 26 11:16:25 2014 From: fgsch at lodoss.net (Federico Schwindt) Date: Tue, 26 Aug 2014 12:16:25 +0100 Subject: PATCH: vary on gzip In-Reply-To: <53FC656C.4060402@schokola.de> References: <53FC656C.4060402@schokola.de> Message-ID: This cannot happen before v_b_r as the gzip'ness of the object could be changed in there. It needs to happen after, or before and when changing do_gzip and do_gunzip. On Tue, Aug 26, 2014 at 11:46 AM, Nils Goroll wrote: > As discussed on IRC: I do very much appreciate that you've put this back > onto > the agenda, this is important and Varnish needs to get the default Vary > right.
> > But I think this needs to live in fetch (before calling v_b_r): The Vary > header > does not depend upon the request, so it does not need to live in deliver. > Doing > things at delivery time is more costly than at fetch time (because we > probably > cache the result). > > When backends handle gzip, we also end up with a cache-object with Vary: > A-E (at > least for sane backends), so we should get the same result when Varnish > does the > compression. > > vry_cmp already skips Accept-Encoding, so we should be fine at lookup time. > > Nils > -------------- next part -------------- An HTML attachment was scrubbed... URL: From slink at schokola.de Tue Aug 26 11:40:25 2014 From: slink at schokola.de (Nils Goroll) Date: Tue, 26 Aug 2014 13:40:25 +0200 Subject: PATCH: vary on gzip In-Reply-To: <53FC656C.4060402@schokola.de> References: <53FC656C.4060402@schokola.de> Message-ID: <53FC7229.4010109@schokola.de> On 26/08/14 12:46, Nils Goroll wrote: > But I think this needs to live in fetch (before calling v_b_r) After further discussion, it became clear that vfp_gzip_init or at least some code in cache_gzip.c could be an even better place. From phk at phk.freebsd.dk Tue Aug 26 11:49:27 2014 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 26 Aug 2014 11:49:27 +0000 Subject: PATCH: vary on gzip In-Reply-To: <53FC7229.4010109@schokola.de> References: <53FC656C.4060402@schokola.de> <53FC7229.4010109@schokola.de> Message-ID: <18231.1409053767@critter.freebsd.dk> -------- In message <53FC7229.4010109 at schokola.de>, Nils Goroll writes: >On 26/08/14 12:46, Nils Goroll wrote: >> But I think this needs to live in fetch (before calling v_b_r) > >After further discussion, it became clear that vfp_gzip_init or at least some >code in cache_gzip.c could be an even better place. The intent is that VFP's become self-contained entities so that for instance GZIP/GUNZIP and TESTGUNZIP will use their init-> functions to mangle headers as appropriate. 
We're not *quite* there yet, and there is a complication in relation to how we mangle IMS-fetched objects headers which I have not found a good solution to yet. As always: if you spot any wrong behaviour, by all means tell me! -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From fgsch at lodoss.net Tue Aug 26 11:52:06 2014 From: fgsch at lodoss.net (Federico Schwindt) Date: Tue, 26 Aug 2014 12:52:06 +0100 Subject: PATCH: vary on gzip In-Reply-To: <18231.1409053767@critter.freebsd.dk> References: <53FC656C.4060402@schokola.de> <53FC7229.4010109@schokola.de> <18231.1409053767@critter.freebsd.dk> Message-ID: Does this mean that doing it in vfp_gzip_init() or cache_gzip.c would be appropriate or preferred? On Tue, Aug 26, 2014 at 12:49 PM, Poul-Henning Kamp wrote: > -------- > In message <53FC7229.4010109 at schokola.de>, Nils Goroll writes: > >On 26/08/14 12:46, Nils Goroll wrote: > >> But I think this needs to live in fetch (before calling v_b_r) > > > >After further discussion, it became clear that vfp_gzip_init or at least > some > >code in cache_gzip.c could be an even better place. > > The intent is that VFP's become self-contained entities so that > for instance GZIP/GUNZIP and TESTGUNZIP will use their init-> functions > to mangle headers as appropriate. > > We're not *quite* there yet, and there is a complication in relation > to how we mangle IMS-fetched objects headers which I have not found > a good solution to yet. > > As always: if you spot any wrong behaviour, by all means tell me! > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From phk at phk.freebsd.dk Tue Aug 26 12:26:29 2014 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 26 Aug 2014 12:26:29 +0000 Subject: PATCH: vary on gzip In-Reply-To: References: <53FC656C.4060402@schokola.de> <53FC7229.4010109@schokola.de> <18231.1409053767@critter.freebsd.dk> Message-ID: <18500.1409055989@critter.freebsd.dk> -------- In message , Federico Schwindt writes: >Does this mean that doing it in vfp_gzip_init() or cache_gzip.c would be >appropriate or preferred? I probably lost context for "it" here :-) But if you look in -trunk, you'll see for instance vfp_gzip_init() doing: if (vfe->vfp->priv2 == VFP_GUNZIP || vfe->vfp->priv2 == VFP_GZIP) { http_Unset(vc->http, H_Content_Encoding); http_Unset(vc->http, H_Content_Length); RFC2616_Weaken_Etag(vc->http); } -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From slink at schokola.de Tue Aug 26 13:44:48 2014 From: slink at schokola.de (Nils Goroll) Date: Tue, 26 Aug 2014 15:44:48 +0200 Subject: vdd14q3 - pls doodle your dates Message-ID: <53FC8F50.5010908@schokola.de> if you plan to participate: http://doodle.com/6gtscmezq4suht2q#table From slink at schokola.de Tue Aug 26 16:53:35 2014 From: slink at schokola.de (Nils Goroll) Date: Tue, 26 Aug 2014 18:53:35 +0200 Subject: [PATCH] status code overhaul and consistent status codes from VSUBs Message-ID: <53FCBB8F.1020106@schokola.de> This is an attempt to make Varnish status codes more consistent with the documented behaviour (recently added). In particular, varnishd -C should fail with the status code from a failing VSUB. Fixes #1572 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 0001-status-code-overhaul-and-consistent-status-codes-fro.patch Type: text/x-patch Size: 20086 bytes Desc: not available URL: From slink at schokola.de Tue Aug 26 17:59:57 2014 From: slink at schokola.de (Nils Goroll) Date: Tue, 26 Aug 2014 19:59:57 +0200 Subject: what is the ttl? Message-ID: <53FCCB1D.5070707@schokola.de> Hi, "our" Martin noticed confusing VCL behavior which relates to a bug and two commits we have been discussing recently: * https://www.varnish-cache.org/trac/ticket/1578 * https://www.varnish-cache.org/trac/changeset/4e9fb4b339b7df0679609d699aa7f5c31aa32595 * https://www.varnish-cache.org/trac/changeset/160927b690dd076702601b7407eb71bd423c62dd We now have the situation that the ttl actually starts at the point in time corresponding to Age, so if an object with "Age: 120" and "max-age=180" is received from a backend, by default, it will only have one minute left to live in cache. While this makes perfect sense for the default rfc2616 ttl, we get the same behavior when setting the ttl explicitly. With this VCL sub vcl_backend_response { set beresp.ttl = 120s; } if the received object has "Age: 60", it will get cached for one minute only. What might be even more confusing is the fact that reading the ttl a second later in vcl_hit will give 59 (seconds remaining in cache). If Age is larger than the ttl we set in VCL, the object won't get cached at all. I think we should make this consistent and easier to understand. One simple idea to regain consistency would be to make a "set beresp.ttl" do what a read on obj.ttl gives us: the time remaining in cache (ie add the Age to the internal ttl). 
In addition to that (or even replacing the VCL ttl) I like the idea to add two (additional) variables: * (beresp|obj).age : corresponding to the Age header, the current Age of the Object * (beresp|obj).maxage : directly access the internal ttl, irrespective of age Internally, (beresp|obj).maxage and (beresp|obj).ttl would use the same internal ttl, so changing one would change the other. Nils From pada at posteo.de Tue Aug 26 22:52:05 2014 From: pada at posteo.de (Daniel Parthey) Date: Wed, 27 Aug 2014 00:52:05 +0200 Subject: PATCH: vary on gzip In-Reply-To: References: <53F7AAEA.60601@posteo.de> Message-ID: <4113def9-0648-4073-ab57-47d39128e074@email.android.com> On 26. August 2014 12:05:43 MESZ, Federico Schwindt wrote: >To be honest I'm a bit surprised ticket 1220 has been outstanding for >so >long. I've tried reproducing this in 3.0.[345] without success. >Have you checked whether this is fixed in V4? No, we have not tested streaming+gzip with varnish 4 yet. What I know is that gzip errors occurred with both gzip and the experimental streaming enabled in varnish 3. Disabling streaming also made the gzip errors disappear, so we currently run varnish 3 with gzip enabled and streaming disabled. Unfortunately, we are also not running varnish 4 in production yet; one reason is the missing stale-if-error feature (discussed in other threads): our old hacky vcl "restart" code from varnish 3, some inline-C which emulated stale-if-error, didn't work any more in varnish 4, since backend and frontend are separate now and the internal code has changed a lot, so our inline-C did not fit in. We will retry streaming and gzip with varnish 4 as soon as s-i-e is properly implemented in varnish 4.x, so we can throw out the old vcl hacks and make our varnish 3 vcl files work with varnish 4.
Kind regards Daniel From fgsch at lodoss.net Wed Aug 27 00:50:56 2014 From: fgsch at lodoss.net (Federico Schwindt) Date: Wed, 27 Aug 2014 01:50:56 +0100 Subject: PATCH: vary on gzip In-Reply-To: <18500.1409055989@critter.freebsd.dk> References: <53FC656C.4060402@schokola.de> <53FC7229.4010109@schokola.de> <18231.1409053767@critter.freebsd.dk> <18500.1409055989@critter.freebsd.dk> Message-ID: Attached is a new diff based on our discussion on irc. This is now handled within the vfps and thus will happen after vcl_backend_response and the inserted object will include the Vary header. The caveat with this approach, at least in the current implementation, is that disabling http_gzip_support won't have any effect on existing objects but I can't see this being an issue. On Tue, Aug 26, 2014 at 1:26 PM, Poul-Henning Kamp wrote: > -------- > In message HDhAyp8AGwVhRx30_OBdQ10Sp4R1ufA at mail.gmail.com> > , Federico Schwindt writes: > > >Does this mean that doing it in vfp_gzip_init() or cache_gzip.c would be > >appropriate or preferred? > > I probably lost context for "it" here :-) > > But if you look in -trunk, you'll see for instance vfp_gzip_init() > doing: > > if (vfe->vfp->priv2 == VFP_GUNZIP || vfe->vfp->priv2 == VFP_GZIP) { > http_Unset(vc->http, H_Content_Encoding); > http_Unset(vc->http, H_Content_Length); > RFC2616_Weaken_Etag(vc->http); > } > > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 0000-gzip-vary.patch
Type: text/x-patch
Size: 7487 bytes
Desc: not available
URL:

From perbu at varnish-software.com Wed Aug 27 07:23:37 2014
From: perbu at varnish-software.com (Per Buer)
Date: Wed, 27 Aug 2014 09:23:37 +0200
Subject: what is the ttl?
In-Reply-To: <53FCCB1D.5070707@schokola.de>
References: <53FCCB1D.5070707@schokola.de>
Message-ID:

On Tue, Aug 26, 2014 at 7:59 PM, Nils Goroll wrote:
> (..)
> I think we should make this consistent and easier to understand.
>
> One simple idea to regain consistency would be to make a "set beresp.ttl" do
> what a read on obj.ttl gives us: the time remaining in cache (ie add the Age to
> the internal ttl).

While I see the point, the issue is further complicated by the grace and keep timeouts, which in effect extend the TTL. This is much more of a real issue now that the default VCL will serve an object past the TTL using the builtin VCL.

> In addition to that (or even replacing the VCL ttl) I like the idea to add two
> (additional) variables:
>
> * (beresp|obj).age : corresponding to the Age header, the current Age
>   of the Object

If obj.ttl could reflect the real ttl and obj.age is exposed then we could have vcl_hit look like:

    if (obj.ttl - obj.age >= 0s) {
        return (deliver);
    }
    if (obj.ttl + obj.grace - obj.age > 0s) {
        // Object is in grace, deliver it
        // Automatically triggers a background fetch
        return (deliver);
    }

This would have the advantage that everyone would immediately see how TTL and age work together. It would still make sense to have the calculated TTL available of course, but more for convenience.

Per.

--
*Per Buer* CTO | Varnish Software
Phone: +47 958 39 117 | Skype: per.buer
We Make Websites Fly! Winner of the Red Herring Top 100 Global Award 2013

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From phk at phk.freebsd.dk Wed Aug 27 07:38:00 2014
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 27 Aug 2014 07:38:00 +0000
Subject: what is the ttl?
In-Reply-To: <53FCCB1D.5070707@schokola.de>
References: <53FCCB1D.5070707@schokola.de>
Message-ID: <91804.1409125080@critter.freebsd.dk>

--------
In message <53FCCB1D.5070707 at schokola.de>, Nils Goroll writes:

>We now have the situation that the ttl actually starts at the point in time
>corresponding to Age, so if an object with "Age: 120" and "max-age=180" is
>received from a backend, by default, it will only have one minute left to live
>in cache.
>
>While this makes perfect sense for the default rfc2616 ttl, we get the same
>behavior when setting the ttl explicitly. With this VCL
>
>    sub vcl_backend_response {
>        set beresp.ttl = 120s;
>    }
>
>if the received object has "Age: 60", it will get cached for one minute only.

I think this is wrong. If you set the ttl, it should be "ttl from now".

>What might be even more confusing is the fact that reading the ttl a second
>later in vcl_hit will give 59 (seconds remaining in cache).

This is correct.

>If Age is larger than the ttl we set in VCL, the object won't get cached at all.
>I think we should make this consistent and easier to understand.

So this is a point I've been pondering. It used to be that we respected the beresp.ttl for delivery decisions, but we have other ways of saying "don't deliver the object just fetched" now. So now we should probably always deliver the fetched object, no matter what beresp.ttl is set to.

>One simple idea to regain consistency would be to make a "set beresp.ttl" do
>what a read on obj.ttl gives us: the time remaining in cache (ie add the Age to
>the internal ttl).

Yes, it's a bug that it doesn't.

>In addition to that (or even replacing the VCL ttl) I like the idea to add two
>(additional) variables:
>
>* (beresp|obj).age : corresponding to the Age header, the current Age
>  of the Object

This may not correspond to the Age: header, but we can return the value of (now() - exp.t_origin), which would almost always be the same thing.
(We don't store the unadulterated age)

>* (beresp|obj).maxage : directly access the internal ttl, irrespective of
>  age

I don't understand what "internal ttl" means ? I also fear confusion with respect to "max-age" in Cache-Control:

--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From martin at varnish-software.com Wed Aug 27 15:01:13 2014
From: martin at varnish-software.com (Martin Blix Grydeland)
Date: Wed, 27 Aug 2014 17:01:13 +0200
Subject: [PATCH] Add a recycle_maxage attribute to backend definitions.
Message-ID: <1409151673-23375-1-git-send-email-martin@varnish-software.com>

When selecting a backend connection for reuse, the list is first checked from the end for connections that have exceeded their maximum age. Any connection in the list that is found to be too old is closed and removed from the list.

This can be useful to tune when one has e.g. transparent load balancers behind Varnish that will close a connection without sending RST to the Varnish servers, resulting in RST on the first packet sent. These connections will then appear to be ready to be reused. By tuning this slightly lower than the lowest connection timeout setting for the complete backend request path, it is possible to avoid generating 503s.

The maxage is settable as a global parameter, or as a per backend attribute from VCL.
--- bin/varnishd/cache/cache_backend.c | 52 +++++++++++++++++++++++++++++++++++--- bin/varnishd/cache/cache_backend.h | 3 ++- bin/varnishd/cache/cache_dir.c | 2 ++ bin/varnishd/common/params.h | 3 +++ bin/varnishd/mgt/mgt_param_tbl.c | 8 ++++++ bin/varnishtest/tests/c00069.vtc | 47 ++++++++++++++++++++++++++++++++++ doc/sphinx/reference/vcl.rst | 4 +++ include/vrt.h | 1 + lib/libvcc/vcc_backend.c | 7 +++++ 9 files changed, 122 insertions(+), 5 deletions(-) create mode 100644 bin/varnishtest/tests/c00069.vtc diff --git a/bin/varnishd/cache/cache_backend.c b/bin/varnishd/cache/cache_backend.c index 6622f4b..ab638e2 100644 --- a/bin/varnishd/cache/cache_backend.c +++ b/bin/varnishd/cache/cache_backend.c @@ -42,6 +42,7 @@ #include "cache_backend.h" #include "vrt.h" #include "vtcp.h" +#include "vtim.h" static struct mempool *vbcpool; @@ -185,6 +186,26 @@ bes_conn_try(struct busyobj *bo, struct vbc *vc, const struct vdi_simple *vs) } /*-------------------------------------------------------------------- + * Check that the connection hasn't expired the relevant maxage setting. + */ + +static int +vbe_CheckLastUse(const struct vbc *vbc, double now) +{ + struct vdi_simple *vdis; + + CHECK_OBJ_NOTNULL(vbc, VBC_MAGIC); + vdis = vbc->vdis; + CHECK_OBJ_NOTNULL(vdis, VDI_SIMPLE_MAGIC); + AN(vdis->vrt); + + assert(vbc->last_use > 0.); + if (vdis->vrt->recycle_maxage > 0.) + return (now - vbc->last_use < vdis->vrt->recycle_maxage); + return (now - vbc->last_use < cache_param->recycle_maxage); +} + +/*-------------------------------------------------------------------- * Check that there is still something at the far end of a given socket. * We poll the fd with instant timeout, if there are any events we can't * use it (backends are not allowed to pipeline). 
@@ -249,6 +270,8 @@ vbe_GetVbe(struct busyobj *bo, struct vdi_simple *vs) { struct vbc *vc; struct backend *bp; + double now; + int toolate; CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC); CHECK_OBJ_NOTNULL(vs, VDI_SIMPLE_MAGIC); @@ -256,10 +279,23 @@ vbe_GetVbe(struct busyobj *bo, struct vdi_simple *vs) CHECK_OBJ_NOTNULL(bp, BACKEND_MAGIC); /* first look for vbc's we can recycle */ + now = VTIM_real(); while (1) { Lck_Lock(&bp->mtx); - vc = VTAILQ_FIRST(&bp->connlist); + toolate = 0; + vc = VTAILQ_LAST(&bp->connlist, vtqh_vbc); + CHECK_OBJ_ORNULL(vc, VBC_MAGIC); + if (vc != NULL && !vbe_CheckLastUse(vc, now)) { + /* Least recently used is too old */ + VSLb(bo->vsl, SLT_BackendClose, "%d %s toolate", + vc->fd, bp->display_name); + toolate = 1; + } else + /* Pick the most recently used */ + vc = VTAILQ_FIRST(&bp->connlist); + if (vc != NULL) { + CHECK_OBJ_NOTNULL(vc, VBC_MAGIC); bp->refcount++; assert(vc->backend == bp); assert(vc->fd >= 0); @@ -267,9 +303,16 @@ vbe_GetVbe(struct busyobj *bo, struct vdi_simple *vs) VTAILQ_REMOVE(&bp->connlist, vc, list); } Lck_Unlock(&bp->mtx); + if (vc == NULL) break; - if (vbe_CheckFd(vc->fd)) { + if (!toolate && !vbe_CheckFd(vc->fd)) { + VSLb(bo->vsl, SLT_BackendClose, "%d %s poll", + vc->fd, bp->display_name); + toolate = 1; + } + + if (!toolate) { /* XXX locking of stats */ VSC_C_main->backend_reuse += 1; VSLb(bo->vsl, SLT_Backend, "%d %s %s", @@ -277,11 +320,11 @@ vbe_GetVbe(struct busyobj *bo, struct vdi_simple *vs) bp->display_name); vc->vdis = vs; vc->recycled = 1; + vc->last_use = now; return (vc); } + VSC_C_main->backend_toolate++; - VSLb(bo->vsl, SLT_BackendClose, "%d %s toolate", - vc->fd, bp->display_name); /* Checkpoint log to flush all info related to this connection before the OS reuses the FD */ @@ -318,6 +361,7 @@ vbe_GetVbe(struct busyobj *bo, struct vdi_simple *vs) VSLb(bo->vsl, SLT_Backend, "%d %s %s", vc->fd, bo->director->vcl_name, bp->display_name); vc->vdis = vs; + vc->last_use = now; return (vc); } diff --git 
a/bin/varnishd/cache/cache_backend.h b/bin/varnishd/cache/cache_backend.h index c9ce112..adb978b 100644 --- a/bin/varnishd/cache/cache_backend.h +++ b/bin/varnishd/cache/cache_backend.h @@ -117,7 +117,7 @@ struct backend { struct suckaddr *ipv6; unsigned n_conn; - VTAILQ_HEAD(, vbc) connlist; + VTAILQ_HEAD(vtqh_vbc, vbc) connlist; struct vbp_target *probe; unsigned healthy; @@ -142,6 +142,7 @@ struct vbc { struct suckaddr *addr; uint8_t recycled; + double last_use; /* Timeouts */ double first_byte_timeout; diff --git a/bin/varnishd/cache/cache_dir.c b/bin/varnishd/cache/cache_dir.c index 5d374ab..557aa01 100644 --- a/bin/varnishd/cache/cache_dir.c +++ b/bin/varnishd/cache/cache_dir.c @@ -36,6 +36,7 @@ #include "cache_backend.h" #include "vtcp.h" +#include "vtim.h" /* Close a connection ------------------------------------------------*/ @@ -83,6 +84,7 @@ VDI_RecycleFd(struct vbc **vbp, const struct acct_bereq *acct_bereq) CHECK_OBJ_NOTNULL(vc, VBC_MAGIC); CHECK_OBJ_NOTNULL(vc->backend, BACKEND_MAGIC); assert(vc->fd >= 0); + vc->last_use = VTIM_real(); bp = vc->backend; diff --git a/bin/varnishd/common/params.h b/bin/varnishd/common/params.h index 2ce95c6..f1d7615 100644 --- a/bin/varnishd/common/params.h +++ b/bin/varnishd/common/params.h @@ -164,6 +164,9 @@ struct params { double first_byte_timeout; double between_bytes_timeout; + /* Maximum recycled backend connection age */ + double recycle_maxage; + /* CLI buffer size */ unsigned cli_buffer; diff --git a/bin/varnishd/mgt/mgt_param_tbl.c b/bin/varnishd/mgt/mgt_param_tbl.c index 55889bc..2519c65 100644 --- a/bin/varnishd/mgt/mgt_param_tbl.c +++ b/bin/varnishd/mgt/mgt_param_tbl.c @@ -365,6 +365,14 @@ struct parspec mgt_parspec[] = { "and backend request. This parameter does not apply to pipe.", 0, "60", "s" }, + { "recycle_maxage", tweak_timeout, + &mgt_param.recycle_maxage, + "0", NULL, + "Default maximum age of a recycled backend connection. If the " + "age is exceeded the connection will be closed. 
VCL can " + "override this default value for each backend.", + 0, + "60", "s" }, { "acceptor_sleep_max", tweak_timeout, &mgt_param.acceptor_sleep_max, "0", "10", diff --git a/bin/varnishtest/tests/c00069.vtc b/bin/varnishtest/tests/c00069.vtc new file mode 100644 index 0000000..3244853 --- /dev/null +++ b/bin/varnishtest/tests/c00069.vtc @@ -0,0 +1,47 @@ +varnishtest "Check recycle_maxage" + +server s1 { + rxreq + txresp + + rxreq + txresp + + # Expect a close happening on the next backend attempt + delay 1 + expect_close + + accept + rxreq + txresp +} -start + +varnish v1 -vcl { + backend default { + .host = "${s1_addr}"; + .port = "${s1_port}"; + .recycle_maxage = 1s; + } + sub vcl_recv { + return (pass); + } +} -start + +# Make sure that the global parameter is setable and that the per backend +# takes precedence +varnish v1 -cliok "param.set recycle_maxage 10" + +client c1 { + txreq -url "/one" + rxresp + + txreq -url "/two" + rxresp + + # Delay .1 second longer than the server delay, to put us at the time + # the server expects the close + delay 1.1 + + txreq -url "/three" + rxresp +} -run diff --git a/doc/sphinx/reference/vcl.rst b/doc/sphinx/reference/vcl.rst index 6828f99..0d5c660 100644 --- a/doc/sphinx/reference/vcl.rst +++ b/doc/sphinx/reference/vcl.rst @@ -202,6 +202,10 @@ are available: between_bytes_timeout Timeout between bytes. + recycle_maxage + Maximum age allowed for attempts to reuse a recycled backend + connection. + probe Attach a probe to the backend. See Probes. 
diff --git a/include/vrt.h b/include/vrt.h index cb47db9..b193429 100644 --- a/include/vrt.h +++ b/include/vrt.h @@ -163,6 +163,7 @@ struct vrt_backend { double connect_timeout; double first_byte_timeout; double between_bytes_timeout; + double recycle_maxage; unsigned max_connections; const struct vrt_backend_probe *probe; }; diff --git a/lib/libvcc/vcc_backend.c b/lib/libvcc/vcc_backend.c index 7c079a9..18362a7 100644 --- a/lib/libvcc/vcc_backend.c +++ b/lib/libvcc/vcc_backend.c @@ -294,6 +294,7 @@ vcc_ParseHostDef(struct vcc *tl, const struct token *t_be) "?between_bytes_timeout", "?probe", "?max_connections", + "?recycle_maxage", NULL); SkipToken(tl, '{'); @@ -381,6 +382,12 @@ vcc_ParseHostDef(struct vcc *tl, const struct token *t_be) VSB_printf(tl->sb, " at\n"); vcc_ErrWhere(tl, tl->t); return; + } else if (vcc_IdIs(t_field, "recycle_maxage")) { + Fb(tl, 0, "\t.recycle_maxage = "); + vcc_Duration(tl, &t); + ERRCHK(tl); + Fb(tl, 0, "%g,\n", t); + SkipToken(tl, ';'); } else { ErrInternal(tl); return; -- 2.1.0.rc1 From martin at varnish-software.com Wed Aug 27 15:09:12 2014 From: martin at varnish-software.com (Martin Blix Grydeland) Date: Wed, 27 Aug 2014 17:09:12 +0200 Subject: [PATCH] Remove lingering VSL flush calls from when we used FD as log ID. 
Message-ID: <1409152152-31289-1-git-send-email-martin@varnish-software.com> --- bin/varnishd/cache/cache_backend.c | 5 ----- bin/varnishd/cache/cache_dir.c | 8 -------- 2 files changed, 13 deletions(-) diff --git a/bin/varnishd/cache/cache_backend.c b/bin/varnishd/cache/cache_backend.c index ab638e2..d0358bf 100644 --- a/bin/varnishd/cache/cache_backend.c +++ b/bin/varnishd/cache/cache_backend.c @@ -325,11 +325,6 @@ vbe_GetVbe(struct busyobj *bo, struct vdi_simple *vs) } VSC_C_main->backend_toolate++; - - /* Checkpoint log to flush all info related to this connection - before the OS reuses the FD */ - VSL_Flush(bo->vsl, 0); - VTCP_close(&vc->fd); VBE_DropRefConn(bp, NULL); vc->backend = NULL; diff --git a/bin/varnishd/cache/cache_dir.c b/bin/varnishd/cache/cache_dir.c index 557aa01..f90ee96 100644 --- a/bin/varnishd/cache/cache_dir.c +++ b/bin/varnishd/cache/cache_dir.c @@ -57,13 +57,7 @@ VDI_CloseFd(struct vbc **vbp, const struct acct_bereq *acct_bereq) VSLb(vc->vsl, SLT_BackendClose, "%d %s", vc->fd, bp->display_name); - /* - * Checkpoint log to flush all info related to this connection - * before the OS reuses the FD - */ - VSL_Flush(vc->vsl, 0); vc->vsl = NULL; - VTCP_close(&vc->fd); VBE_DropRefConn(bp, acct_bereq); vc->backend = NULL; @@ -90,8 +84,6 @@ VDI_RecycleFd(struct vbc **vbp, const struct acct_bereq *acct_bereq) VSLb(vc->vsl, SLT_BackendReuse, "%d %s", vc->fd, bp->display_name); - /* XXX: revisit this hack */ - VSL_Flush(vc->vsl, 0); vc->vsl = NULL; Lck_Lock(&bp->mtx); -- 2.1.0.rc1 From fgsch at lodoss.net Wed Aug 27 17:11:13 2014 From: fgsch at lodoss.net (Federico Schwindt) Date: Wed, 27 Aug 2014 18:11:13 +0100 Subject: [PATCH] stale-while-revalidate updated Message-ID: Hi, Attached is an updated patch for s-w-r support that I originally sent in June 23. I've kept the code outside RFC2616_Ttl but still adding it into rfc2616_cache.c. It might be worth merging it and/or moving this into rfc5861_cache.c. 
Before this turns into another bike-shedding, a few things worth clarifying:

It only implements stale-while-revalidate, as the subject says. Implementing stale-if-error requires more changes to Varnish and, while it might be implemented in the future, is not in the scope of this patch. I'd rather have something that people can use now than nothing at all for a longer period (this diff was sent over 2 months ago).

If ttl is equal to or lower than 0, this implementation will ignore the stale-while-revalidate value. There are 2 reasons for this that are not related to this diff but to Varnish itself: 1. Varnish won't cache an object with a 0 ttl, and 2. a ttl of -1 indicates that either the response status was uncacheable (as per RFC2616_Ttl) or that the content already expired (again based on RFC2616_Ttl). Should the situation in Varnish change, it's as simple as modifying the check to be `expp->ttl < 0.', so I'd prefer to have that discussion decoupled from this patch.

Thanks.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0000.stale-while-revalidate.patch
Type: text/x-patch
Size: 2793 bytes
Desc: not available
URL:

From slink at schokola.de Wed Aug 27 18:28:36 2014
From: slink at schokola.de (Nils Goroll)
Date: Wed, 27 Aug 2014 20:28:36 +0200
Subject: what is the ttl?
In-Reply-To:
References: <53FCCB1D.5070707@schokola.de>
Message-ID: <53FE2354.1000908@schokola.de>

On 27/08/14 09:23, Per Buer wrote:
>
> If obj.ttl could reflect the real ttl and obj.age is exposed then we could have
> vcl_hit look like:
>
>     if (obj.ttl - obj.age >= 0s) {
>         return (deliver);
>     }
>     if (obj.ttl + obj.grace - obj.age > 0s) {
>         // Object is in grace, deliver it
>         // Automatically triggers a background fetch
>         return (deliver);
>     }

would it work for you if you could write this as

    if (obj.maxage - obj.age >= 0s) {
        return (deliver);
    }
    if (obj.maxage + obj.grace - obj.age > 0s) {
        // Object is in grace, deliver it
        // Automatically triggers a background fetch
        return (deliver);
    }

?

From slink at schokola.de Wed Aug 27 18:35:39 2014
From: slink at schokola.de (Nils Goroll)
Date: Wed, 27 Aug 2014 20:35:39 +0200
Subject: [PATCH] Add a recycle_maxage attribute to backend definitions.
In-Reply-To: <1409151673-23375-1-git-send-email-martin@varnish-software.com>
References: <1409151673-23375-1-git-send-email-martin@varnish-software.com>
Message-ID: <53FE24FB.7030609@schokola.de>

Hi Martin,

it's cool you are addressing this issue, thanks.

I have not tested your patch (yet), but at first sight it looks good to me. If no one else has done a real review and phk thinks one is necessary, I could probably do it on Friday.

While your patch appears to be a real improvement which I'd like to see integrated, I still think we should have some backend connection nanny thread doing this work preventively and out of the context of the backend thread desperately in need of a backend connection.
Cheers, Nils

From slink at schokola.de Wed Aug 27 18:38:18 2014
From: slink at schokola.de (Nils Goroll)
Date: Wed, 27 Aug 2014 20:38:18 +0200
Subject: [PATCH 1/2] Add a function to remove an element/token from a header value (like Accept-Encoding from Vary:)
Message-ID: <53FE259A.30800@schokola.de>

see next patch for context

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0001-Add-a-function-to-remove-an-element-token-from-a-hea.patch
Type: text/x-patch
Size: 3272 bytes
Desc: not available
URL:

From slink at schokola.de Wed Aug 27 18:59:22 2014
From: slink at schokola.de (Nils Goroll)
Date: Wed, 27 Aug 2014 20:59:22 +0200
Subject: [PATCH 2/2] gzip+vary review - Aim for completeness of the Vary+gzip (+ ETag) tests. Remove Vary: Accept-Encoding for do_gunzip
Message-ID: <53FE2A8A.7050705@schokola.de>

As discussed on IRC, I have reviewed the tests from e8e089fc1968ee147fb06c60e3caae568bd11e40 (originally written by Federico) and tried to improve coverage. In particular, I have also included checks for Etag and Content-Encoding. Results are:

* With do_gunzip, we end up with only an uncompressed object in cache and varnish will never deliver it (re)compressed. So there will only be one variant (the plain one), and thus we must remove Accept-Encoding from Vary. If we don't, downstream caches could unnecessarily store two copies. The change to cache_gzip.c implements this using http_RemoveHdrToken from the previous patch. One real-world use case is to work around a misconfigured backend which, for example, gzip-compresses jpg images. In this case we really do want to avoid additional gzip overhead in varnish and the client, so we do want do_gunzip and no Vary on A-E.
* There is a minor issue with do_gzip and Etags: When using do_gzip and delivering to a non-gzip client, we restore the original byte stream by gunzipping, so we could deliver the original (strong) Etag we might have received from the backend - but we weaken the Etag because we have gzipped at fetch time. Because fixing this would probably require additional state in objects just for this exotic case ("this is an object for which we originally received a strong etag and gzipped it, so for this case only we may un-weaken the Etag"), I have simply marked the case XXX in the vtc.

Other than that, while reviewing I noticed bug #1582.

With this patch I am quite confident that we can really claim to have got gzip + Vary + Etags right - finally - and closing #940 should be justified.

Thanks again, Federico

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0002-Aim-for-completeness-of-the-Vary-gzip-ETag-tests.-Re.patch
Type: text/x-patch
Size: 17930 bytes
Desc: not available
URL:

From perbu at varnish-software.com Wed Aug 27 19:19:23 2014
From: perbu at varnish-software.com (Per Buer)
Date: Wed, 27 Aug 2014 21:19:23 +0200
Subject: what is the ttl?
In-Reply-To: <53FE2354.1000908@schokola.de>
References: <53FCCB1D.5070707@schokola.de> <53FE2354.1000908@schokola.de>
Message-ID:

On Wed, Aug 27, 2014 at 8:28 PM, Nils Goroll wrote:
>
> would it work for you if you could write this as
>
>     if (obj.maxage - obj.age >= 0s) {
>         return (deliver);
>     }
>     if (obj.maxage + obj.grace - obj.age > 0s) {
>         // Object is in grace, deliver it
>         // Automatically triggers a background fetch
>         return (deliver);
>     }
>
> ?

Absolutely. I think it would be great to expose this to the user.

I'm not sure if the term "maxage" is crystal clear (also because it is used to denote relative time in Cache-Control), but I cannot say I have better suggestions at this point in time. Deadline, maybe?
--
*Per Buer* CTO | Varnish Software
Phone: +47 958 39 117 | Skype: per.buer
We Make Websites Fly! Winner of the Red Herring Top 100 Global Award 2013

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From geoff at uplex.de Wed Aug 27 21:34:29 2014
From: geoff at uplex.de (Geoff Simmons)
Date: Wed, 27 Aug 2014 23:34:29 +0200
Subject: [PATCH] stale-while-revalidate updated
In-Reply-To:
References:
Message-ID: <53FE4EE5.7050507@uplex.de>

On 8/27/14 7:11 PM, Federico Schwindt wrote:
>
> Attached is an updated patch for s-w-r support that I originally
> sent in June 23.
>
> Before this turns into another bike-shedding, a few things worth
> clarifying: [...]

This might be a bit of bike-shedding, and I know that there's been some discussion going on that I haven't kept up with. So apologies in advance.

I think I'm unsure about what we're striving for in Varnish 4 -- wasn't the goal to move as much caching policy as possible out to VCL, with good defaults in builtin.vcl? The patch implements a solution for s-w-r in the binary, by magic as it were, but it seems to me that it could be done in VCL.

My stab at a VCL solution does turn out to be cumbersome (just typing the idea here, haven't tested it):

    if (beresp.http.Cache-Control ~ "stale-while-revalidate\s*=\s*\d+") {
        set beresp.grace = std.duration(regsub(beresp.http.Cache-Control,
            "^.*stale-while-revalidate\s*=\s*(\d+).*$", "\1s"), 120s);
    }

That is admittedly pretty ugly. (We really need to get that VMOD for easier backref capture upgraded for Varnish 4, then it would be much simpler.) Is the verbosity of a VCL solution the reason for wanting varnishd to take care of s-w-r?

I'm hesitating here because s-w-r would give application developers a way to set grace. It can be hard enough to get them to understand how to set TTL (and do it well), and grace needs even more explanation.
But grace can be critical for getting Varnish to run smoothly and perform well. App developers might get the idea of setting grace to 0, but as an admin in charge of Varnish and the owner of VCL, I think I'd prevent them from doing so. That thought tends to persuade me of the wisdom of implementing as much caching policy in VCL as we can, especially where grace is concerned.

Best,
Geoff

--
UPLEX Systemoptimierung
Scheffelstraße 32
22301 Hamburg
http://uplex.de/
Mob: +49-176-63690917

From slink at schokola.de Thu Aug 28 06:13:39 2014
From: slink at schokola.de (Nils Goroll)
Date: Thu, 28 Aug 2014 08:13:39 +0200
Subject: what is the ttl?
In-Reply-To:
References: <53FCCB1D.5070707@schokola.de> <53FE2354.1000908@schokola.de>
Message-ID: <53FEC893.5060401@schokola.de>

On 27/08/14 21:19, Per Buer wrote:
> I'm not sure if the term "maxage" is crystal clear (also because it is used to
> denote relative time in Cache-control)

I actually think the term "maxage" is good because it is identical to CC max-age for all practical purposes: HTTP Cache-Control max-age denotes the maximum live time of the object from the time corresponding to Age: 0.
This is exactly the semantics I am proposing for (beresp|obj).maxage.

Having said that, I am not so sure if users would expect implicit changes to http headers. IOW, if I had

    set beresp.maxage = 2h;

in my VCL, would anyone expect this to imply something like

    # simplified, should edit the header and not just set it
    set beresp.http.Cache-Control = "max-age=7200"

?

I tend to think that users should really understand that beresp.something and beresp.http.something are different things, and the documentation should make clear when we update headers (I guess there is some work to be done here).

At any rate, we must not mix the internal ttl and the (downstream) caching directives; the question is if we really need substantially different names if they basically have the same semantics (for the cache and downstream).

Nils

From geoff at uplex.de Thu Aug 28 06:35:10 2014
From: geoff at uplex.de (Geoff Simmons)
Date: Thu, 28 Aug 2014 08:35:10 +0200
Subject: what is the ttl?
In-Reply-To: <53FEC893.5060401@schokola.de>
References: <53FCCB1D.5070707@schokola.de> <53FE2354.1000908@schokola.de> <53FEC893.5060401@schokola.de>
Message-ID: <53FECD9E.4040009@uplex.de>

On 8/28/14 8:13 AM, Nils Goroll wrote:
>
> [...] I not so sure if users would expect implicit changes to http
> headers. IOW, if I had
>
>     set beresp.maxage = 2h;
>
> in my VCL, would anyone expect this to imply something like
>
>     # simplified, should edit the header and not just set it
>     set beresp.http.Cache-Control = "max-age=7200"
>
> ?

Would *anyone* expect that? Yes, guaranteed. I think it's likely that many users would expect it, and that they would become confused and angry if it doesn't work that way.

> I tend to think that users should really understand that
> beresp.something and beresp.http.something are different things and
> the documentation should make clear when we update headers (I guess
> there is some work to be done here).
When we find ourselves saying "users should really understand" and "documentation should make it clear", we have unmistakable indicators of strong potential for confusion. %^)

Best,
Geoff

--
UPLEX Systemoptimierung
Scheffelstraße 32
22301 Hamburg
http://uplex.de/
Mob: +49-176-63690917

From slink at schokola.de Thu Aug 28 06:43:12 2014
From: slink at schokola.de (Nils Goroll)
Date: Thu, 28 Aug 2014 08:43:12 +0200
Subject: what is the ttl?
In-Reply-To: <53FECD9E.4040009@uplex.de>
References: <53FCCB1D.5070707@schokola.de> <53FE2354.1000908@schokola.de> <53FEC893.5060401@schokola.de> <53FECD9E.4040009@uplex.de>
Message-ID: <53FECF80.8090506@schokola.de>

On 28/08/14 08:35, Geoff Simmons wrote:
> When we find ourselves saying "users should really understand" and
> "documentation should make it clear", we have unmistakable indicators
> of strong potential for confusion. %^)

Then my suggestion must be really bad. What's yours?

At least Per and myself think that having (beresp|obj).something representing the overall TTL (from the Age=0 point in time) would be helpful. Any suggestions on how "something" should be named?
Nils

From slink at schokola.de Thu Aug 28 06:26:39 2014
From: slink at schokola.de (Nils Goroll)
Date: Thu, 28 Aug 2014 08:26:39 +0200
Subject: [PATCH] stale-while-revalidate updated
In-Reply-To: <53FE4EE5.7050507@uplex.de>
References: <53FE4EE5.7050507@uplex.de>
Message-ID: <53FECB9F.60807@schokola.de>

On 27/08/14 23:34, Geoff Simmons wrote:
> I think I'm unsure about what we're striving for in Varnish 4 --
> wasn't the goal to move as much caching policy as possible out to VCL,
> with good defaults in builtin.vcl?

I see a bit of a tendency that we are moving towards having C code provide good/better defaults, still allowing VCL to modify them. This definitely is the case with fgs' proposed patch: vcl_backend_fetch still has the final word.

But, yes, s-w-r can be done in VCL already (and it really is a good question whether we should just add it to the builtin.vcl). s-i-e, I think, needs additional C support to allow for a VCL implementation (see my post "restarting for bad synchronous responses").

Some header mangling (Vary, Etag) we are doing in fetch processors at the moment is the exact contrary - VCL control is reduced (limited to vcl_deliver) until we get explicit fetch processor pushes (which phk is planning for).

Nils

From phk at phk.freebsd.dk Thu Aug 28 06:49:25 2014
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Thu, 28 Aug 2014 06:49:25 +0000
Subject: [PATCH] stale-while-revalidate updated
In-Reply-To: <53FE4EE5.7050507@uplex.de>
References: <53FE4EE5.7050507@uplex.de>
Message-ID: <9421.1409208565@critter.freebsd.dk>

--------
In message <53FE4EE5.7050507 at uplex.de>, Geoff Simmons writes:

>I think I'm unsure about what we're striving for in Varnish 4 --
>wasn't the goal to move as much caching policy as possible out to VCL,
>with good defaults in builtin.vcl?

Something like that. It's a bit more complicated though.
The first design rule is to not make "the pascal mistake", by which I mean that the writer of the VCL can _always_ do what he wants to do. That means that as a general rule we cannot call VCL and then munge headers afterwards in C-code, we should always do that before VCL or as a result of things done in VCL. That is not _always_ possible and certainly not always desirable. For instance, we don't want to send 'T-E: Chunked' unless we are actually going to deliver the object using our chunked code-path, no matter what the VCL writer might think. The trick here is to try to put a dividing line between stuff which is protocol (T-E: Chunked, Content-Length etc.) and things which are policy (Vary, A-E, etc.) Given how utterly messed up the HTTP "standard" is, that is not anywhere as easy as it should be. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Thu Aug 28 06:50:47 2014 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 28 Aug 2014 06:50:47 +0000 Subject: what is the ttl? In-Reply-To: <53FECD9E.4040009@uplex.de> References: <53FCCB1D.5070707@schokola.de> <53FE2354.1000908@schokola.de> <53FEC893.5060401@schokola.de> <53FECD9E.4040009@uplex.de> Message-ID: <9442.1409208647@critter.freebsd.dk> -------- In message <53FECD9E.4040009 at uplex.de>, Geoff Simmons writes: > >When we find ourselves saying "users should really understand" and >"documentation should make it clear", we have unmistakable indicators >of strong potential for confusion. %^) In particular if we follow the hand-waving up by not actually writing that documentation :-) -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
From phk at phk.freebsd.dk Thu Aug 28 06:55:13 2014 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 28 Aug 2014 06:55:13 +0000 Subject: what is the ttl? In-Reply-To: <53FECF80.8090506@schokola.de> References: <53FCCB1D.5070707@schokola.de> <53FE2354.1000908@schokola.de> <53FEC893.5060401@schokola.de> <53FECD9E.4040009@uplex.de> <53FECF80.8090506@schokola.de> Message-ID: <18146.1409208913@critter.freebsd.dk> -------- In message <53FECF80.8090506 at schokola.de>, Nils Goroll writes: >At least Per and myself think that having (beresp|obj).something representing >the overall TTL (from the Age=0 point in time) would be helpful. > >Any suggestions on how "something" should be named? My problem with this concept is that objects have three lifetimes: age + ttl age + ttl + grace age + ttl + grace + keep Which one are we talking about, and why ? Second question: What do you need the lifetime for ? Third question: Wouldn't it be better to explicitly show which lifetime you want, using one of the three additions above ? -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Thu Aug 28 06:57:59 2014 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 28 Aug 2014 06:57:59 +0000 Subject: what is the ttl? 
In-Reply-To: <53FEC893.5060401@schokola.de> References: <53FCCB1D.5070707@schokola.de> <53FE2354.1000908@schokola.de> <53FEC893.5060401@schokola.de> Message-ID: <18938.1409209079@critter.freebsd.dk> -------- In message <53FEC893.5060401 at schokola.de>, Nils Goroll writes: >On 27/08/14 21:19, Per Buer wrote: >> I'm not sure if the term "maxage" is crystal clear (also because it is used to >> denote relative time in Cache-control) > >I actually think the term "maxage" is good because it is identical >to CC max-age which is exactly what makes me uneasy about the name, because they are not the same thing, and we don't update one from the other. Btw, mildly related to this: Long time ago we talked about being able to work on sub-parts of http headers: if (INT(beresp.http.cache-control.max-age, 0) > 1200) { set beresp.http.cache-control.max-age = 1200; } We may want to reconsider this before we write too intricate VCL -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From fgsch at lodoss.net Thu Aug 28 08:03:25 2014 From: fgsch at lodoss.net (Federico Schwindt) Date: Thu, 28 Aug 2014 09:03:25 +0100 Subject: [PATCH] stale-while-revalidate updated In-Reply-To: <53FE4EE5.7050507@uplex.de> References: <53FE4EE5.7050507@uplex.de> Message-ID: I don't think we have a way to express this in any sensible way in VCL at the moment and I'd argue that it'd be weird to only cover grace and not other variables. Personally I don't want the default/builtin VCL to grow unbounded. Processing in C while allowing to tweak things afterwards in VCL is a good combination. This is the scenario I imagine for s-w-r. This is used as the default value, much like default_grace is used, while you can still change it in VCL if you want. If the consensus is to handle this in VCL that's fine but we need some primitives to do so.
A regex spaghetti will make things harder. So which one should we aim for? On Wed, Aug 27, 2014 at 10:34 PM, Geoff Simmons wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > On 8/27/14 7:11 PM, Federico Schwindt wrote: > > > > Attached is an updated patch for s-w-r support that I originally > > sent in June 23. > > > > Before this turns into another bike-shedding, a few things worth > > clarifying: > [...] > > This might be a bit of bike-shedding, and I know that there's been > some discussion going that I haven't kept up with. So apologies in > advance. > > I think I'm unsure about what we're striving for in Varnish 4 -- > wasn't the goal to move as much caching policy as possible out to VCL, > with good defaults in builtin.vcl? The patch implements a solution for > s-w-r in the binary, by magic as it were, but it seems to me that it > could be done in VCL. > > My stab at a VCL solution does turn out to be cumbersome (just typing > the idea here, haven't tested it). > > if (beresp.http.Cache-Control ~ "stale-while-revalidate\s*=\s*\d+") { > set beresp.grace > = std.duration(regsub(beresp.http.Cache-Control, > "^.*stale-while-revalidate\s*=\s*(\d+).*$", > "\1s"), 120s); > } > > That is admittedly pretty ugly. (We really need to get that VMOD for > easier backref capture upgraded for Varnish 4, then it would be much > simpler.) > > Is the verbosity of a VCL solution the reason for wanting varnishd to > take care of s-w-r? > > I'm hesitating here because s-w-r would give application developers a > way to set grace. It can be hard enough to get them to understand how > to set TTL (and do it well), and grace needs even more explanation. > But grace can be critical for getting Varnish to run smoothly and > perform well. App developers might get the idea of setting grace to 0, > but as an admin in charge of Varnish and the owner of VCL, I think I'd > prevent them from doing so. 
That thought tends to persuade me of the > wisdom of implementing as much caching policy in VCL as we can, > especially where grace is concerned. > > > Best, > Geoff > - -- > UPLEX Systemoptimierung > Scheffelstraße 32 > 22301 Hamburg > http://uplex.de/ > Mob: +49-176-63690917 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From slink at schokola.de Thu Aug 28 08:23:39 2014 From: slink at schokola.de (Nils Goroll) Date: Thu, 28 Aug 2014 10:23:39 +0200 Subject: VCL: edit header tokens/entities In-Reply-To: <18938.1409209079@critter.freebsd.dk> References: <53FCCB1D.5070707@schokola.de> <53FE2354.1000908@schokola.de> <53FEC893.5060401@schokola.de> <18938.1409209079@critter.freebsd.dk> Message-ID: <53FEE70B.10200@schokola.de> On 28/08/14 08:57, Poul-Henning Kamp wrote: > Btw, mildly related to this: Long time ago we talked about being able > to work on sub-parts of http headers: > > if (INT(beresp.http.cache-control.max-age, 0) > 1200) { > set beresp.http.cache-control.max-age = 1200; > } > > We may want to reconsider this before we write too intricate VCL +1 !
I volunteer to work on this, if you think it would make sense, phk. From phk at phk.freebsd.dk Thu Aug 28 08:35:15 2014 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 28 Aug 2014 08:35:15 +0000 Subject: VCL: edit header tokens/entities In-Reply-To: <53FEE70B.10200@schokola.de> References: <53FCCB1D.5070707@schokola.de> <53FE2354.1000908@schokola.de> <53FEC893.5060401@schokola.de> <18938.1409209079@critter.freebsd.dk> <53FEE70B.10200@schokola.de> Message-ID: <1339.1409214915@critter.freebsd.dk> -------- In message <53FEE70B.10200 at schokola.de>, Nils Goroll writes: >On 28/08/14 08:57, Poul-Henning Kamp wrote: >> Btw, mildly related to this: Long time ago we talked about being able >> to work on sub-parts of http headers: >> >> if (INT(beresp.http.cache-control.max-age, 0) > 1200) { >> set beresp.http.cache-contro.max-age = 1200; >> } >> >> We may want to reconsider this before we write too intricate VCL > >+1 ! > >I volunteer to work on this, if you think it would make sense, phk. Go for it! -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Thu Aug 28 08:37:13 2014 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 28 Aug 2014 08:37:13 +0000 Subject: [PATCH] Add a recycle_maxage attribute to backend definitions. In-Reply-To: <1409151673-23375-1-git-send-email-martin@varnish-software.com> References: <1409151673-23375-1-git-send-email-martin@varnish-software.com> Message-ID: <1895.1409215033@critter.freebsd.dk> -------- In message <1409151673-23375-1-git-send-email-martin at varnish-software.com>, Mar tin Blix Grydeland writes: Is there any (sensible) way we can make this auto-tuning ? 
-- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From martin at varnish-software.com Thu Aug 28 09:03:09 2014 From: martin at varnish-software.com (Martin Blix Grydeland) Date: Thu, 28 Aug 2014 11:03:09 +0200 Subject: [PATCH] Add a recycle_maxage attribute to backend definitions. In-Reply-To: <53FE24FB.7030609@schokola.de> References: <1409151673-23375-1-git-send-email-martin@varnish-software.com> <53FE24FB.7030609@schokola.de> Message-ID: Hi, I guess the check of the list tail for expired ones could be moved to the BackendClose()/BackendRecycle() functions which are called after the fetch has finished and as such shouldn't create any delays for anything. Though I wouldn't expect this to have any significant impact on fetch times, it would probably be just one connection being closed at a time and compared to the fetch operation the time spent is negligible. Having a nanny thread for this part is in my opinion too much complexity for little gain. Having a single worker servicing all of the backends would create some interesting locking complexity and resource freeing headaches. Having one per backend would be easy to implement, but would add too much resource usage for little benefit. Nils: A review would be much appreciated. Martin On 27 August 2014 20:35, Nils Goroll wrote: > Hi Martin, > > it's cool you are addressing this issue, thanks. I have not tested your > patch > (yet) but at first sight it looks good to me. If no-one else did a real > review > and phk thinks it is necessary, I could probably do it on Friday. > > While your patch appears to be a real improvement which I'd like to see > integrated, I still think we should have some backend connection nanny > thread > doing this work preventatively and out of context of the backend thread > desperately in need of a backend connection. 
> > Cheers, Nils > -- *Martin Blix Grydeland* Senior Developer | Varnish Software AS Cell: +47 21 98 92 60 We Make Websites Fly! -------------- next part -------------- An HTML attachment was scrubbed... URL: From slink at schokola.de Thu Aug 28 09:09:46 2014 From: slink at schokola.de (Nils Goroll) Date: Thu, 28 Aug 2014 11:09:46 +0200 Subject: what is the ttl? In-Reply-To: <53FCCB1D.5070707@schokola.de> References: <53FCCB1D.5070707@schokola.de> Message-ID: <53FEF1DA.9070405@schokola.de> For anyone not following the commit msgs: phk has just fixed this in f2caddcff6d2018604b2ab83ec9bf0e12ec4d51a On 26/08/14 19:59, Nils Goroll wrote: > What might be even more confusing is the fact that reading the ttl a second > later in vcl_hit will give 59 (seconds remaining in cache). > > If Age is larger than the ttl we set in VCL, the object won't get cached at all. > > > I think we should make this consistent and easier to understand. From martin at varnish-software.com Thu Aug 28 09:16:29 2014 From: martin at varnish-software.com (Martin Blix Grydeland) Date: Thu, 28 Aug 2014 11:16:29 +0200 Subject: [PATCH] Add a recycle_maxage attribute to backend definitions. In-Reply-To: <1895.1409215033@critter.freebsd.dk> References: <1409151673-23375-1-git-send-email-martin@varnish-software.com> <1895.1409215033@critter.freebsd.dk> Message-ID: I guess it would be possible to add a failed flag to the struct vbc, and then have the struct backend keep some running average of (now - last_use) on BackendClose() where failed==true and recycled==true. But for that approach to get close to accurate, the time the connection fails would need to be picked up accurately, which would make the nanny thread keeping a poll() running on the connections necessary. But in the case of these transparent loadbalancers that don't RST an open connection that poll() would not succeed in noticing the connection goes bad, and then the averaging value would not be accurate any more. 
It would be averaging when we happened to notice the bad connection due to a reused backend connection attempt failing instead of when the connection actually went bad. So I guess the answer is no, I can't think of any sensible way we could make this auto-tuning. What the default value should be is another good question. I put it at 60 because it's a round number, and long enough for it to not bother people who don't need the feature. Perhaps we should have a 'disabled' value instead and default to that. And then it can be turned on for those that actually need it. Or even put it at e.g. 10 seconds, and we might sort out some problems for people being hit by this. Input most welcome :) Martin On 28 August 2014 10:37, Poul-Henning Kamp wrote: > -------- > In message <1409151673-23375-1-git-send-email-martin at varnish-software.com>, > Mar > tin Blix Grydeland writes: > > Is there any (sensible) way we can make this auto-tuning ? > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > -- *Martin Blix Grydeland* Senior Developer | Varnish Software AS Cell: +47 21 98 92 60 We Make Websites Fly! -------------- next part -------------- An HTML attachment was scrubbed... URL: From slink at schokola.de Thu Aug 28 09:21:08 2014 From: slink at schokola.de (Nils Goroll) Date: Thu, 28 Aug 2014 11:21:08 +0200 Subject: [PATCH] Add a recycle_maxage attribute to backend definitions. In-Reply-To: <53FE24FB.7030609@schokola.de> References: <1409151673-23375-1-git-send-email-martin@varnish-software.com> <53FE24FB.7030609@schokola.de> Message-ID: <53FEF484.1050105@schokola.de> On 27/08/14 20:35, Nils Goroll wrote: > I still think we should have some backend connection nanny thread could the health check threads do this? 
From phk at phk.freebsd.dk Thu Aug 28 09:22:46 2014 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 28 Aug 2014 09:22:46 +0000 Subject: [PATCH] Add a recycle_maxage attribute to backend definitions. In-Reply-To: References: <1409151673-23375-1-git-send-email-martin@varnish-software.com> <1895.1409215033@critter.freebsd.dk> Message-ID: <10789.1409217766@critter.freebsd.dk> -------- In message , Martin Blix Grydeland writes: >I guess it would be possible to add a failed flag to the struct vbc, and >then have the struct backend keep some running average of (now - last_use) Well, not exactly an average, since our aim is not to have 50% of our connections fail, but an "estimate" of some kind. I was thinking something far more crude and heuristic: if (recycle fails) t_est = max(1 second, this connections idle - .5 sec) else t_est *= (1 + 1e-6) The point here is that the actual timeout as implemented by servers is not precise and not even constant, many servers trim idle connections on other criteria than idle time (anti-DoS) At the very least, we should run some real-life tests to see how well something like the above works... -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From slink at schokola.de Thu Aug 28 10:22:36 2014 From: slink at schokola.de (Nils Goroll) Date: Thu, 28 Aug 2014 12:22:36 +0200 Subject: [PATCH] Add a recycle_maxage attribute to backend definitions. In-Reply-To: References: <1409151673-23375-1-git-send-email-martin@varnish-software.com> <53FE24FB.7030609@schokola.de> Message-ID: <53FF02EC.8070808@schokola.de> On 28/08/14 11:03, Martin Blix Grydeland wrote: > Having a nanny thread for this part is in my opinion too much complexity for > little gain. 
I should have mentioned this in my first reply: The main benefit I'd see is that currently closing BE conns depends on backend requests being issued, so for instance no close will happen on sick backends. Despite the fact that I never experienced any real issues due to this, I really don't like the fact that I have seen hundreds or even thousands of tcp conns in CLOSE_WAIT and this has raised questions and serious worries of sysadmins we work with. At least this is a real waste of file descriptors, which becomes particularly apparent when migrating to different backends (without restarting varnish). I have not looked at the locking complexity question, but as we have one health check thread per backend anyway, I'd be optimistic that the overhead could be kept low to begin with. Nils From geoff at uplex.de Thu Aug 28 11:02:11 2014 From: geoff at uplex.de (Geoff Simmons) Date: Thu, 28 Aug 2014 13:02:11 +0200 Subject: [PATCH] stale-while-revalidate updated In-Reply-To: <53FECB9F.60807@schokola.de> References: <53FE4EE5.7050507@uplex.de> <53FECB9F.60807@schokola.de> Message-ID: <53FF0C33.7080809@uplex.de> On 08/28/2014 08:26 AM, Nils Goroll wrote: > > This definitely is the case with fgs' proposed patch, > vcl_backend_fetch still has the final word. All right, then it really might be bike-shedding to worry too much about whether it's done in the binary or in VCL -- although I'd come down on the side of VCL, considering what phk had to say, because it seems to me that s-w-r is about caching policy, not about the network protocol. But what's crucial in my view is that VCL can veto changes in beresp.grace -- I'd worry a lot if that were left to backends and I couldn't overrule them.
On 08/28/2014 08:57 AM, Poul-Henning Kamp wrote: > -------- Btw, mildly related to this: Long time ago we talked > about being able to work on sub-parts of http headers: > > if (INT(beresp.http.cache-control.max-age, 0) > 1200) { set > beresp.http.cache-control.max-age = 1200; } +1 Cache-Control and other headers like Surrogate-Control are certain to become more sophisticated and relevant for Varnish over time, and this is just what we need to work with them. Best, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Scheffelstraße 32 22301 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de From geoff at uplex.de Thu Aug 28 11:46:19 2014 From: geoff at uplex.de (Geoff Simmons) Date: Thu, 28 Aug 2014 13:46:19 +0200 Subject: what is the ttl?
In-Reply-To: <53FECF80.8090506@schokola.de> References: <53FCCB1D.5070707@schokola.de> <53FE2354.1000908@schokola.de> <53FEC893.5060401@schokola.de> <53FECD9E.4040009@uplex.de> <53FECF80.8090506@schokola.de> Message-ID: <53FF168B.5020405@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 08/28/2014 08:43 AM, Nils Goroll wrote: > > On 28/08/14 08:35, Geoff Simmons wrote: >> When we find ourselves saying "users should really understand" >> and "documentation should make it clear", we have unmistakable >> indicators of strong potential for confusion. %^) > > Then my suggestion must be really bad. > > What's yours? > > At least Per and myself think that having (beresp|obj).something > representing the overall TTL (from the Age=0 point in time) would > be helpful. > > Any suggestions on how "something" should be named? Looks like this will be me agreeing with phk again -- if "something" can be expressed in terms of the other variables, then what was the compelling need for it? Earlier in the thread you brought this up: sub vcl_backend_response { set beresp.ttl = 120s; } > if the received object has "Age: 60", it will get cached for one > minute only. Then how about this? sub vcl_backend_response { # TTL adds another two minutes, regardless of # current Age. set beresp.ttl = 120s + beresp.age; } This would mean exposing (beresp|obj).age -- we seem to have a consensus on that. Then again, someone might really want to set beresp.ttl=120s -- i.e. no more than a total of two minutes, including any current Age. It looks right to me, to be able to express the distinction this way. 
Best, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Scheffelstraße 32 22301 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de From phk at phk.freebsd.dk Thu Aug 28 11:57:42 2014 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 28 Aug 2014 11:57:42 +0000 Subject: what is the ttl? In-Reply-To: <53FF168B.5020405@uplex.de> References: <53FCCB1D.5070707@schokola.de> <53FE2354.1000908@schokola.de> <53FEC893.5060401@schokola.de> <53FECD9E.4040009@uplex.de> <53FECF80.8090506@schokola.de> <53FF168B.5020405@uplex.de> Message-ID: <1316.1409227062@critter.freebsd.dk> -------- In message <53FF168B.5020405 at uplex.de>, Geoff Simmons writes: >Then how about this? > > sub vcl_backend_response { > # TTL adds another two minutes, regardless of > # current Age. > set beresp.ttl = 120s + beresp.age; > } To what your comment says, you would: set beresp.ttl = 120s; because in VCL ttl is always relative to now.
I think the task is how do we express "Give this a total lifetime of two minutes" and that would be: set beresp.ttl = 120s - beresp.age; >This would mean exposing (beresp|obj).age -- we seem to have a >consensus on that. Yes, unless this discussion leads to better results. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Thu Aug 28 12:00:15 2014 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 28 Aug 2014 12:00:15 +0000 Subject: [PATCH] Add a recycle_maxage attribute to backend definitions. In-Reply-To: <53FF02EC.8070808@schokola.de> References: <1409151673-23375-1-git-send-email-martin@varnish-software.com> <53FE24FB.7030609@schokola.de> <53FF02EC.8070808@schokola.de> Message-ID: <1356.1409227215@critter.freebsd.dk> -------- In message <53FF02EC.8070808 at schokola.de>, Nils Goroll writes: >On 28/08/14 11:03, Martin Blix Grydeland wrote: > >> Having a nanny thread for this part is in my opinion too much complexity for >> little gain. > >I should have mentioned this in my first reply: > >The main benefit I'd see is that currently closing BE conns depends on backend >requests being issued, so for instance no close will happen on sick backends. So this discussion raises several questions: 1. Should we default to having a health-check ? 2. Should that thread also nanny the open connections ? (I have some long range points about this related to HTTP/2 but we can disregard them for now)
From slink at schokola.de Thu Aug 28 12:58:13 2014 From: slink at schokola.de (Nils Goroll) Date: Thu, 28 Aug 2014 14:58:13 +0200 Subject: [PATCH] Add a recycle_maxage attribute to backend definitions. In-Reply-To: <1356.1409227215@critter.freebsd.dk> References: <1409151673-23375-1-git-send-email-martin@varnish-software.com> <53FE24FB.7030609@schokola.de> <53FF02EC.8070808@schokola.de> <1356.1409227215@critter.freebsd.dk> Message-ID: <53FF2765.4020809@schokola.de> On 28/08/14 14:00, Poul-Henning Kamp wrote: > 1. Should we default to having a health-check ? Personally, I'd never use a backend without a health check, but I do see a point in not having one by default to lower the barrier for newbies. > 2. Should that thread also nanny the open connections ? yes. And if there is no health check, the health check thread could still exist just to nanny the open connections. Nils From fgsch at lodoss.net Thu Aug 28 17:12:25 2014 From: fgsch at lodoss.net (Federico Schwindt) Date: Thu, 28 Aug 2014 18:12:25 +0100 Subject: [PATCH] minor gzip changes Message-ID: Hi, The attached patch does: 1. Fix 0 Content-Length with gzip+esi. This is already handled in the non-esi case. 2. Make gzip handling with and without esi more in line. 3. Change some checks into asserts as they cannot really happen (the vfp won't get pushed). Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 00.gzip+esi_cl_0.patch Type: text/x-patch Size: 3125 bytes Desc: not available URL: From fgsch at lodoss.net Thu Aug 28 17:14:53 2014 From: fgsch at lodoss.net (Federico Schwindt) Date: Thu, 28 Aug 2014 18:14:53 +0100 Subject: [PATCH] stale-while-revalidate updated In-Reply-To: <53FECB9F.60807@schokola.de> References: <53FE4EE5.7050507@uplex.de> <53FECB9F.60807@schokola.de> Message-ID: You meant vcl_backend_response, right? Actually there is yet another reason to do it in C.
If we were going to do it in the builtin vcl people wanting to override this value would need to either return early or fiddle with the header. On Thu, Aug 28, 2014 at 7:26 AM, Nils Goroll wrote: > On 27/08/14 23:34, Geoff Simmons wrote: > > I think I'm unsure about what we're striving for in Varnish 4 -- > > wasn't the goal to move as much caching policy as possible out to VCL, > > with good defaults in builtin.vcl? > > I see a bit of a tendency that we are moving towards having C code provide > good/better defaults, still allowing VCL to modify them. > > This definitely is the case with fgs' proposed patch, vcl_backend_fetch > still > has the final word. > > But, yes, s-w-r can be done in VCL already (and it really is a good > question if > we shold just add it to the builtin.vcl). s-i-e, I think, needs additional > C > support to allow for a VCL implementation (see my post "restarting for bad > synchronous responses"). > > > Some header mangling (Vary, Etag) we are doing in fetch processors at the > moment > is the exact contrary - VCL control is reduced (limited to vcl_deliver) > until we > get explicit fetch processor pushes (which phk is planning for). > > > Nils > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fgsch at lodoss.net Thu Aug 28 17:18:22 2014 From: fgsch at lodoss.net (Federico Schwindt) Date: Thu, 28 Aug 2014 18:18:22 +0100 Subject: [PATCH] stale-while-revalidate updated In-Reply-To: References: <53FE4EE5.7050507@uplex.de> <53FECB9F.60807@schokola.de> Message-ID: Updated patch after commit f2caddcf. On Thu, Aug 28, 2014 at 6:14 PM, Federico Schwindt wrote: > You meant vcl_backend_response, right? > > Actually there is yet another reason to do it in C. > > If we were going to do it in the builtin vcl people wanting to override > this value would need to either return early or fiddle with the header. 
> > > On Thu, Aug 28, 2014 at 7:26 AM, Nils Goroll wrote: > >> On 27/08/14 23:34, Geoff Simmons wrote: >> > I think I'm unsure about what we're striving for in Varnish 4 -- >> > wasn't the goal to move as much caching policy as possible out to VCL, >> > with good defaults in builtin.vcl? >> >> I see a bit of a tendency that we are moving towards having C code provide >> good/better defaults, still allowing VCL to modify them. >> >> This definitely is the case with fgs' proposed patch, vcl_backend_fetch >> still >> has the final word. >> >> But, yes, s-w-r can be done in VCL already (and it really is a good >> question if >> we shold just add it to the builtin.vcl). s-i-e, I think, needs >> additional C >> support to allow for a VCL implementation (see my post "restarting for bad >> synchronous responses"). >> >> >> Some header mangling (Vary, Etag) we are doing in fetch processors at the >> moment >> is the exact contrary - VCL control is reduced (limited to vcl_deliver) >> until we >> get explicit fetch processor pushes (which phk is planning for). >> >> >> Nils >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0000.stale-while-revalidate.patch Type: text/x-patch Size: 3043 bytes Desc: not available URL: From perbu at varnish-software.com Fri Aug 29 06:32:38 2014 From: perbu at varnish-software.com (Per Buer) Date: Fri, 29 Aug 2014 08:32:38 +0200 Subject: what is the ttl? 
In-Reply-To: <1316.1409227062@critter.freebsd.dk>
References: <53FCCB1D.5070707@schokola.de> <53FE2354.1000908@schokola.de>
	<53FEC893.5060401@schokola.de> <53FECD9E.4040009@uplex.de>
	<53FECF80.8090506@schokola.de> <53FF168B.5020405@uplex.de>
	<1316.1409227062@critter.freebsd.dk>
Message-ID: 

On Thu, Aug 28, 2014 at 1:57 PM, Poul-Henning Kamp wrote:
>
> I think the task is how do we express "Give this a total lifetime
> of two minutes" and that would be:
>
>     set beresp.ttl = 120s - beresp.age;

+1

FWIW, I think this is pretty clear and it seems to be quite easy to explain
to people.

--
*Per Buer*
CTO | Varnish Software AS
Cell: +47 95839117
We Make Websites Fly! www.varnish-software.com
[image: Register now]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ruben at varnish-software.com  Fri Aug 29 12:15:15 2014
From: ruben at varnish-software.com (=?UTF-8?Q?Rub=C3=A9n_Romero?=)
Date: Fri, 29 Aug 2014 14:15:15 +0200
Subject: VDD14Q3 in Oslo on September 17th
Message-ID: 

Hello everyone everywhere,

SSIA. More details> https://www.varnish-cache.org/trac/wiki/VDD14Q3

Best regards,
--
*Rubén Romero*
Community & Sales | Varnish Software AS
Cell: +47 95964088 / Office: +47 21989260
Skype, Twitter & IRC: ruben_varnish
We Make Websites Fly!
[image: Varnish Summits Autumn 2014]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From martin at varnish-software.com  Fri Aug 29 12:48:30 2014
From: martin at varnish-software.com (Martin Blix Grydeland)
Date: Fri, 29 Aug 2014 14:48:30 +0200
Subject: [PATCH] Make random/hash director exhaust the backend list when
	looking for healthy backend.
Message-ID: <1409316510-23073-1-git-send-email-martin@varnish-software.com>

Fixes: #1575
---
 bin/varnishtest/tests/r01575.vtc | 56 ++++++++++++++++++++++++++++++++++++++++
 lib/libvmod_directors/hash.c     |  6 ++---
 lib/libvmod_directors/random.c   |  6 ++---
 lib/libvmod_directors/vdir.c     |  2 +-
 4 files changed, 63 insertions(+), 7 deletions(-)
 create mode 100644 bin/varnishtest/tests/r01575.vtc

diff --git a/bin/varnishtest/tests/r01575.vtc b/bin/varnishtest/tests/r01575.vtc
new file mode 100644
index 0000000..11c82c1
--- /dev/null
+++ b/bin/varnishtest/tests/r01575.vtc
@@ -0,0 +1,56 @@
+varnishtest "#1575 - random director exhaust backend list"
+
+# Add 5 backends to a random director, with the 5th having very low weight.
+# Mark the first 4 sick, and make sure that the 5th will be selected.
+
+server s1 {
+	rxreq
+	txresp
+} -start
+
+server s2 {
+	rxreq
+	txresp
+} -start
+
+server s3 {
+	rxreq
+	txresp
+} -start
+
+server s4 {
+	rxreq
+	txresp
+} -start
+
+server s5 {
+	rxreq
+	txresp
+} -start
+
+varnish v1 -vcl+backend {
+	import ${vmod_directors};
+	sub vcl_init {
+		new rd = directors.random();
+		rd.add_backend(s1, 10000);
+		rd.add_backend(s2, 10000);
+		rd.add_backend(s3, 10000);
+		rd.add_backend(s4, 10000);
+		rd.add_backend(s5, 1);
+	}
+
+	sub vcl_backend_fetch {
+		set bereq.backend = rd.backend();
+	}
+} -start
+
+varnish v1 -cliok "backend.set_health s1 sick"
+varnish v1 -cliok "backend.set_health s2 sick"
+varnish v1 -cliok "backend.set_health s3 sick"
+varnish v1 -cliok "backend.set_health s4 sick"
+
+client c1 {
+	txreq
+	rxresp
+	expect resp.status == 200
+} -run
diff --git a/lib/libvmod_directors/hash.c b/lib/libvmod_directors/hash.c
index afef7ed..090039f 100644
--- a/lib/libvmod_directors/hash.c
+++ b/lib/libvmod_directors/hash.c
@@ -47,7 +47,7 @@ struct vmod_directors_hash {
 	unsigned magic;
 #define VMOD_DIRECTORS_HASH_MAGIC 0xc08dd611
 	struct vdir *vd;
-	unsigned nloops;
+	unsigned n_backend;
 	struct vbitmap *vbm;
 };

@@ -64,7 +64,6 @@ vmod_hash__init(const struct vrt_ctx *ctx, struct vmod_directors_hash **rrp,
 	AN(rr);
 	rr->vbm = vbit_init(8);
 	AN(rr->vbm);
-	rr->nloops = 3; //
 	*rrp = rr;
 	vdir_new(&rr->vd, vcl_name, NULL, NULL, rr);
 }
@@ -90,6 +89,7 @@ vmod_hash_add_backend(const struct vrt_ctx *ctx,
 	CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC);
 	CHECK_OBJ_NOTNULL(rr, VMOD_DIRECTORS_HASH_MAGIC);
 	(void)vdir_add_backend(rr->vd, be, w);
+	rr->n_backend++;
 }

 VCL_BACKEND __match_proto__()
@@ -120,6 +120,6 @@ vmod_hash_backend(const struct vrt_ctx *ctx, struct vmod_directors_hash *rr,
 	r = vbe32dec(sha256);
 	r = scalbn(r, -32);
 	assert(r >= 0 && r <= 1.0);
-	be = vdir_pick_be(rr->vd, r, rr->nloops);
+	be = vdir_pick_be(rr->vd, r, rr->n_backend);
 	return (be);
 }
diff --git a/lib/libvmod_directors/random.c b/lib/libvmod_directors/random.c
index 22f0bb9..8ae36a7 100644
--- a/lib/libvmod_directors/random.c
+++ b/lib/libvmod_directors/random.c
@@ -45,7 +45,7 @@ struct vmod_directors_random {
 	unsigned magic;
 #define VMOD_DIRECTORS_RANDOM_MAGIC 0x4732d092
 	struct vdir *vd;
-	unsigned nloops;
+	unsigned n_backend;
 	struct vbitmap *vbm;
 };

@@ -68,7 +68,7 @@ vmod_rr_getfd(const struct director *dir, struct busyobj *bo)
 	CAST_OBJ_NOTNULL(rr, dir->priv, VMOD_DIRECTORS_RANDOM_MAGIC);
 	r = scalbn(random(), -31);
 	assert(r >= 0 && r < 1.0);
-	be = vdir_pick_be(rr->vd, r, rr->nloops);
+	be = vdir_pick_be(rr->vd, r, rr->n_backend);
 	if (be == NULL)
 		return (NULL);
 	return (be->getfd(be, bo));
@@ -87,7 +87,6 @@ vmod_random__init(const struct vrt_ctx *ctx, struct vmod_directors_random **rrp,
 	AN(rr);
 	rr->vbm = vbit_init(8);
 	AN(rr->vbm);
-	rr->nloops = 3; //
 	*rrp = rr;
 	vdir_new(&rr->vd, vcl_name, vmod_rr_healthy, vmod_rr_getfd, rr);
 }
@@ -113,6 +112,7 @@ vmod_random_add_backend(const struct vrt_ctx *ctx,
 	CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC);
 	CHECK_OBJ_NOTNULL(rr, VMOD_DIRECTORS_RANDOM_MAGIC);
 	(void)vdir_add_backend(rr->vd, be, w);
+	rr->n_backend++;
 }

 VCL_BACKEND __match_proto__()
diff --git a/lib/libvmod_directors/vdir.c b/lib/libvmod_directors/vdir.c
index 12268eb..aae7a14 100644
--- a/lib/libvmod_directors/vdir.c
+++ b/lib/libvmod_directors/vdir.c
@@ -186,7 +186,7 @@ vdir_pick_be(struct vdir *vd, double w, unsigned nloops)
 	nbe = vd->n_backend;
 	assert(w >= 0.0 && w < 1.0);
 	vdir_lock(vd);
-	for (l = 0; nbe > 0 && tw > 0.0 && l
+	for (l = 0; nbe > 0 && tw > 0.0 && l < nloops; l++) {
 		u = vdir_pick_by_weight(vd, w * tw, vbm);
 		be = vd->backend[u];
 		CHECK_OBJ_NOTNULL(be, DIRECTOR_MAGIC);
--
2.1.0.rc1

From fgsch at lodoss.net  Fri Aug 29 13:10:39 2014
From: fgsch at lodoss.net (Federico Schwindt)
Date: Fri, 29 Aug 2014 14:10:39 +0100
Subject: [PATCH] Make random/hash director exhaust the backend list when
	looking for healthy backend.
In-Reply-To: <1409316510-23073-1-git-send-email-martin@varnish-software.com>
References: <1409316510-23073-1-git-send-email-martin@varnish-software.com>
Message-ID: 

Hi,

Any reason not to use rr->vd->n_backend directly as the round-robin
director does?

On Fri, Aug 29, 2014 at 1:48 PM, Martin Blix Grydeland <
martin at varnish-software.com> wrote:

> Fixes: #1575
> ---
>  bin/varnishtest/tests/r01575.vtc | 56 ++++++++++++++++++++++++++++++++++++++++
>  lib/libvmod_directors/hash.c     |  6 ++---
>  lib/libvmod_directors/random.c   |  6 ++---
>  lib/libvmod_directors/vdir.c     |  2 +-
>  4 files changed, 63 insertions(+), 7 deletions(-)
>  create mode 100644 bin/varnishtest/tests/r01575.vtc
>
> diff --git a/bin/varnishtest/tests/r01575.vtc b/bin/varnishtest/tests/r01575.vtc
> new file mode 100644
> index 0000000..11c82c1
> --- /dev/null
> +++ b/bin/varnishtest/tests/r01575.vtc
> @@ -0,0 +1,56 @@
> +varnishtest "#1575 - random director exhaust backend list"
> +
> +# Add 5 backends to a random director, with the 5th having very low weight.
> +# Mark the first 4 sick, and make sure that the 5th will be selected.
> +
> +server s1 {
> +	rxreq
> +	txresp
> +} -start
> +
> +server s2 {
> +	rxreq
> +	txresp
> +} -start
> +
> +server s3 {
> +	rxreq
> +	txresp
> +} -start
> +
> +server s4 {
> +	rxreq
> +	txresp
> +} -start
> +
> +server s5 {
> +	rxreq
> +	txresp
> +} -start
> +
> +varnish v1 -vcl+backend {
> +	import ${vmod_directors};
> +	sub vcl_init {
> +		new rd = directors.random();
> +		rd.add_backend(s1, 10000);
> +		rd.add_backend(s2, 10000);
> +		rd.add_backend(s3, 10000);
> +		rd.add_backend(s4, 10000);
> +		rd.add_backend(s5, 1);
> +	}
> +
> +	sub vcl_backend_fetch {
> +		set bereq.backend = rd.backend();
> +	}
> +} -start
> +
> +varnish v1 -cliok "backend.set_health s1 sick"
> +varnish v1 -cliok "backend.set_health s2 sick"
> +varnish v1 -cliok "backend.set_health s3 sick"
> +varnish v1 -cliok "backend.set_health s4 sick"
> +
> +client c1 {
> +	txreq
> +	rxresp
> +	expect resp.status == 200
> +} -run
> diff --git a/lib/libvmod_directors/hash.c b/lib/libvmod_directors/hash.c
> index afef7ed..090039f 100644
> --- a/lib/libvmod_directors/hash.c
> +++ b/lib/libvmod_directors/hash.c
> @@ -47,7 +47,7 @@ struct vmod_directors_hash {
> 	unsigned magic;
> #define VMOD_DIRECTORS_HASH_MAGIC 0xc08dd611
> 	struct vdir *vd;
> -	unsigned nloops;
> +	unsigned n_backend;
> 	struct vbitmap *vbm;
> };
>
> @@ -64,7 +64,6 @@ vmod_hash__init(const struct vrt_ctx *ctx, struct vmod_directors_hash **rrp,
> 	AN(rr);
> 	rr->vbm = vbit_init(8);
> 	AN(rr->vbm);
> -	rr->nloops = 3; //
> 	*rrp = rr;
> 	vdir_new(&rr->vd, vcl_name, NULL, NULL, rr);
> }
> @@ -90,6 +89,7 @@ vmod_hash_add_backend(const struct vrt_ctx *ctx,
> 	CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC);
> 	CHECK_OBJ_NOTNULL(rr, VMOD_DIRECTORS_HASH_MAGIC);
> 	(void)vdir_add_backend(rr->vd, be, w);
> +	rr->n_backend++;
> }
>
> VCL_BACKEND __match_proto__()
> @@ -120,6 +120,6 @@ vmod_hash_backend(const struct vrt_ctx *ctx, struct vmod_directors_hash *rr,
> 	r = vbe32dec(sha256);
> 	r = scalbn(r, -32);
> 	assert(r >= 0 && r <= 1.0);
> -	be = vdir_pick_be(rr->vd, r, rr->nloops);
> +	be = vdir_pick_be(rr->vd, r, rr->n_backend);
> 	return (be);
> }
> diff --git a/lib/libvmod_directors/random.c b/lib/libvmod_directors/random.c
> index 22f0bb9..8ae36a7 100644
> --- a/lib/libvmod_directors/random.c
> +++ b/lib/libvmod_directors/random.c
> @@ -45,7 +45,7 @@ struct vmod_directors_random {
> 	unsigned magic;
> #define VMOD_DIRECTORS_RANDOM_MAGIC 0x4732d092
> 	struct vdir *vd;
> -	unsigned nloops;
> +	unsigned n_backend;
> 	struct vbitmap *vbm;
> };
>
> @@ -68,7 +68,7 @@ vmod_rr_getfd(const struct director *dir, struct busyobj *bo)
> 	CAST_OBJ_NOTNULL(rr, dir->priv, VMOD_DIRECTORS_RANDOM_MAGIC);
> 	r = scalbn(random(), -31);
> 	assert(r >= 0 && r < 1.0);
> -	be = vdir_pick_be(rr->vd, r, rr->nloops);
> +	be = vdir_pick_be(rr->vd, r, rr->n_backend);
> 	if (be == NULL)
> 		return (NULL);
> 	return (be->getfd(be, bo));
> @@ -87,7 +87,6 @@ vmod_random__init(const struct vrt_ctx *ctx, struct vmod_directors_random **rrp,
> 	AN(rr);
> 	rr->vbm = vbit_init(8);
> 	AN(rr->vbm);
> -	rr->nloops = 3; //
> 	*rrp = rr;
> 	vdir_new(&rr->vd, vcl_name, vmod_rr_healthy, vmod_rr_getfd, rr);
> }
> @@ -113,6 +112,7 @@ vmod_random_add_backend(const struct vrt_ctx *ctx,
> 	CHECK_OBJ_NOTNULL(ctx, VRT_CTX_MAGIC);
> 	CHECK_OBJ_NOTNULL(rr, VMOD_DIRECTORS_RANDOM_MAGIC);
> 	(void)vdir_add_backend(rr->vd, be, w);
> +	rr->n_backend++;
> }
>
> VCL_BACKEND __match_proto__()
> diff --git a/lib/libvmod_directors/vdir.c b/lib/libvmod_directors/vdir.c
> index 12268eb..aae7a14 100644
> --- a/lib/libvmod_directors/vdir.c
> +++ b/lib/libvmod_directors/vdir.c
> @@ -186,7 +186,7 @@ vdir_pick_be(struct vdir *vd, double w, unsigned nloops)
> 	nbe = vd->n_backend;
> 	assert(w >= 0.0 && w < 1.0);
> 	vdir_lock(vd);
> -	for (l = 0; nbe > 0 && tw > 0.0 && l
> +	for (l = 0; nbe > 0 && tw > 0.0 && l < nloops; l++) {
> 		u = vdir_pick_by_weight(vd, w * tw, vbm);
> 		be = vd->backend[u];
> 		CHECK_OBJ_NOTNULL(be, DIRECTOR_MAGIC);
> --
> 2.1.0.rc1
>
>
> _______________________________________________
> varnish-dev mailing list
> varnish-dev at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From slink at schokola.de  Fri Aug 29 13:34:01 2014
From: slink at schokola.de (Nils Goroll)
Date: Fri, 29 Aug 2014 15:34:01 +0200
Subject: what is the ttl?
In-Reply-To: 
References: <53FCCB1D.5070707@schokola.de> <53FE2354.1000908@schokola.de>
	<53FEC893.5060401@schokola.de> <53FECD9E.4040009@uplex.de>
	<53FECF80.8090506@schokola.de> <53FF168B.5020405@uplex.de>
	<1316.1409227062@critter.freebsd.dk>
Message-ID: <54008149.5090408@schokola.de>

After the helpful discussion on the topic I'm with phk now and I think we
should just add (beresp|obj).age and leave the ttl as is.

Thanks, Nils
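
The total-lifetime arithmetic discussed in the "what is the ttl?" thread
above (phk's `set beresp.ttl = 120s - beresp.age;`) can be sketched outside
VCL. The following small Python helper is a hypothetical illustration only,
not Varnish code; the clamp at zero stands in for "already expired":

```python
# Hypothetical sketch mirroring the VCL expression
#   set beresp.ttl = 120s - beresp.age;
# i.e. give an object a fixed total lifetime, counting the time it has
# already spent in an upstream cache (its Age).

def remaining_ttl(total_lifetime, age):
    """Remaining freshness so that age + ttl never exceeds total_lifetime."""
    return max(0.0, total_lifetime - age)

if __name__ == "__main__":
    # Object fetched with Age: 45 and a desired total lifetime of 120s.
    print(remaining_ttl(120.0, 45.0))   # 75.0
    # Object already older than its total lifetime: nothing left to serve.
    print(remaining_ttl(120.0, 150.0))  # 0.0
```

This makes the point Per Buer agreed with: the expression is easy to explain
because the sum of age and ttl is held constant, rather than the ttl itself.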