From viktor.villafuerte at optusnet.com.au Tue Nov 1 03:01:41 2016 From: viktor.villafuerte at optusnet.com.au (Viktor Villafuerte) Date: Tue, 1 Nov 2016 14:01:41 +1100 Subject: Varnish 4.1.3 starting error In-Reply-To: References: Message-ID: <20161101030141.GC22553@optusnet.com.au> Hi, I'm testing 4.1.3 as we speak and [root at linearcdn01.test ~]# find /usr -name '*libvarnish*' /usr/lib64/libvarnishapi.so.1.0.4 /usr/lib64/varnish/libvarnish.so /usr/lib64/varnish/libvarnishcompat.so /usr/lib64/libvarnishapi.so.1 [root at linearcdn01.test ~]# rpm -q --whatprovides /usr/lib64/varnish/libvarnish.so varnish-libs-4.1.3-1.el6.x86_64 [root at linearcdn01.test ~]# Did you check if the library actually exists, as opposed to checking if the pkg is installed? I'd suggest checking the contents of varnish-libs and also checking its MD5 sum to make sure that nothing went wrong when downloading. Do a listing of the files in the pkg. If all that is ok, then turn it off and on again :) remove all Varnish related pkgs, reboot, re-install (make sure you've checked the md5s). Last piece of advice I can give is to re-run ldconfig hope some of this will help v On Fri 28 Oct 2016 01:44:33, Ayberk Kimsesiz wrote: > Hi Ian, > > package varnish-libs-devel-4.1.3-1.el6.x86_64 is already installed > package varnish-libs-4.1.3-1.el6.x86_64 is already installed > > Do you have another suggestion? > > Thanks. > > > > > > 2016-10-28 1:34 GMT+03:00 Ian Macdonald : > > > Hi, > > > > How did you install varnish? If via separate rpms then you are probably > > missing the libs > > > > varnish-libs-4.1.3-1.el6.x86_64.rpm > > > > you will also need jemalloc, if you did go this way > > > > > > cheers > > > > Ian > > > > On 27 October 2016 at 22:25, Ayberk Kimsesiz > > wrote: > > > Hi, > > > > > > How can I solve this?
> > > > > > Starting Varnish Cache: /usr/sbin/varnishd: error while loading shared > > > libraries: libvarnish.so: cannot open shared object file: No such file or > > > directory [FAILED] > > > > > > OS: Centos 6.5 > > > > > > Thanks > > > > > > _______________________________________________ > > > varnish-misc mailing list > > > varnish-misc at varnish-cache.org > > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -- Regards Viktor Villafuerte Optus Internet Engineering t: +61 2 80825265 From ayberk.kimsesiz at gmail.com Tue Nov 1 08:15:21 2016 From: ayberk.kimsesiz at gmail.com (Ayberk Kimsesiz) Date: Tue, 1 Nov 2016 11:15:21 +0300 Subject: Varnish 4.1.3 starting error In-Reply-To: <20161101030141.GC22553@optusnet.com.au> References: <20161101030141.GC22553@optusnet.com.au> Message-ID: Hi, I have solved my problem with the following solution rpm -qa | grep -i varnish rpm -e varnish-release-xxxxxx.noarch rpm -e varnish-libs-xxxxxx.x86_64 yum clean all rpm --nosignature -i http://repo.varnish-cache.org/redhat/varnish-4.1/el6/noarch/varnish-release/varnish-release-4.1-2.el6.noarch.rpm 2016-11-01 6:01 GMT+03:00 Viktor Villafuerte < viktor.villafuerte at optusnet.com.au>: > Hi, > > I'm testing 4.1.3 as we speak and > > [root at linearcdn01.test ~]# find /usr -name '*libvarnish*' > /usr/lib64/libvarnishapi.so.1.0.4 > /usr/lib64/varnish/libvarnish.so > /usr/lib64/varnish/libvarnishcompat.so > /usr/lib64/libvarnishapi.so.1 > [root at linearcdn01.test ~]# rpm -q --whatprovides > /usr/lib64/varnish/libvarnish.so > varnish-libs-4.1.3-1.el6.x86_64 > [root at linearcdn01.test ~]# > > Did you check if the library actually exists.. as opposed to checking if > the pkg is installed? 
> > I'd suggest checking the content of varnish-libs and also checking it's > MD5 sum to make sure that anything didn't go wrong when downloading. Do > listing of files in the pkg.. > > If that all is ok the turn it off and on again :) > > remove all Vanrish related pkgs, reboot, re-install (make sure you've > cheked the md5s). > > Last piece of advice I can give is to re-run ldconfig > > > hope some of this will help > > > v > > > > > > > On Fri 28 Oct 2016 01:44:33, Ayberk Kimsesiz wrote: > > Hi Ian, > > > > package varnish-libs-devel-4.1.3-1.el6.x86_64 is already > installed > > package varnish-libs-4.1.3-1.el6.x86_64 is already installed > > > > Do you have another suggestion? > > > > Thanks. > > > > > > > > > > > > 2016-10-28 1:34 GMT+03:00 Ian Macdonald : > > > > > Hi, > > > > > > How did you install varnish if via separate rpms then you probably are > > > missing the libs > > > > > > varnish-libs-4.1.3-1.el6.x86_64.rpm > > > > > > you will also need jemalloc, if you did go this way > > > > > > > > > cheers > > > > > > Ian > > > > > > On 27 October 2016 at 22:25, Ayberk Kimsesiz < > ayberk.kimsesiz at gmail.com> > > > wrote: > > > > Hi, > > > > > > > > How can I solve this? 
> > > > > > > > Starting Varnish Cache: /usr/sbin/varnishd: error while loading > shared > > > > libraries: libvarnish.so: cannot open shared object file: No such > file or > > > > directory [FAILED] > > > > > > > > OS: Centos 6.5 > > > > > > > > Thanks > > > > > > > > _______________________________________________ > > > > varnish-misc mailing list > > > > varnish-misc at varnish-cache.org > > > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > -- > Regards > > Viktor Villafuerte > Optus Internet Engineering > t: +61 2 80825265 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From slink at schokola.de Tue Nov 1 09:18:52 2016 From: slink at schokola.de (Nils Goroll) Date: Tue, 1 Nov 2016 10:18:52 +0100 Subject: Connections "CLOSE_WAIT" that never close In-Reply-To: <2a54323b-c79b-8629-faa9-71002d6ba7c4@schokola.de> References: <20160925204849.0cfbeaea@R2D2> <2a54323b-c79b-8629-faa9-71002d6ba7c4@schokola.de> Message-ID: Hi, I've fixed what I believe is the most likely cause for this issue and merged the fix to master just now. Nils From zxcvbn4038 at gmail.com Tue Nov 1 21:41:01 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Tue, 1 Nov 2016 17:41:01 -0400 Subject: Cache survived varnish restart? In-Reply-To: References: Message-ID: I didn't think so either but not sure how to explain the lack of disruption from the restarts. On Mon, Oct 31, 2016 at 5:21 AM, Guillaume Quintard < guillaume at varnish-software.com> wrote: > I don't think that's possible :-) That would be nice though. > > -- > Guillaume Quintard > > On Thu, Oct 27, 2016 at 8:41 PM, CJ Ess wrote: > >> I saw something today that I can't explain - I upgraded Varnish 4.1.0 to >> v4.1.3 by restarting varnishd on a server. 
I can see that the inodes of the >> executable and libraries are right for the updated files in the running >> varnishd via /proc/<pid>/maps (on Linux), and I can see that no other varnishd >> processes are running. I expected that restarting varnish would cause the >> cache to be wiped and require some period of warmup, however the hit rate >> held steady at 98% and I don't know why. >> >> I'm very happy to be able to do this, it's going to make the rest of my >> upgrades non-disruptive, but I'd like to be able to explain what's happening >> behind the scenes. Can anyone tell me how the cache seemed to survive the >> restart? >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Thu Nov 3 09:57:38 2016 From: dridi at varni.sh (Dridi Boukelmoune) Date: Thu, 3 Nov 2016 10:57:38 +0100 Subject: varnish5 - how to view what currently is in cache? In-Reply-To: <20161031093651.GY23724@nerd.dk> References: <1a4ea136-7e2c-a429-8f58-759c8dff4517@beckspaced.com> <20161031064220.GX23724@nerd.dk> <5a0af452-abcd-c73c-462d-e94ef14a426f@beckspaced.com> <20161031093651.GY23724@nerd.dk> Message-ID: On Mon, Oct 31, 2016 at 10:36 AM, Andreas Plesner wrote: > On Mon, Oct 31, 2016 at 10:23:50AM +0100, Admin Beckspaced wrote: >> >> but i'm looking more into finding out what URLs there are in the cache to >> find the culprits as i never know what clients out there request. > > You can't. When they're in the cache, the hash is the only key.
Opening a can of worms here, you can do that with a VMOD that has yet to be written ;) Cheers From dridi at varni.sh Thu Nov 3 10:04:39 2016 From: dridi at varni.sh (Dridi Boukelmoune) Date: Thu, 3 Nov 2016 11:04:39 +0100 Subject: BAN - Memory consumption exploding under load In-Reply-To: References: Message-ID: On Thu, Oct 27, 2016 at 6:02 PM, Nils Goroll wrote: > Hi, > > we've added a bunch of ban and ban lurker improvements which did not get > backported to 4.1.3 > > Please upgrade to 5.0 or master. Hi Nils, I was thinking (against the too-many-knobs trend) that maybe we should have some sort of ban_queue_limit parameter to avoid piling up too many bans. It would also play nice with your std.ban() initiative where hitting the queue limit could be one of the reasons returned by the function when it fails. The default could be 0 for unlimited, or a fairly large number. Dridi From admin at beckspaced.com Sat Nov 5 09:10:33 2016 From: admin at beckspaced.com (Admin Beckspaced) Date: Sat, 5 Nov 2016 10:10:33 +0100 Subject: varnish 5 - do you need to normalize accept-encoding? Message-ID: <27c3ed94-7d85-04d7-b7d2-b36407971bbb@beckspaced.com> hello varnish users ;) in varnish 5 ... do I need to normalize the accept-encoding header? or is varnish 5 doing this by itself internally? I looked around on google but couldn't find any up-to-date info, so I thought why not ask the pros here ;) thanks, greetings & all the best becki From jack.xsuperman at gmail.com Mon Nov 7 05:20:12 2016 From: jack.xsuperman at gmail.com (JackDrogon) Date: Mon, 7 Nov 2016 13:20:12 +0800 Subject: How to use varnish to pass data when the backend is ok && return the last normal beresp when the backend is down? Message-ID: Hi All: I need varnish to return data directly and update the cache every time the backend is ok. I also need varnish to return the last normal beresp data from the cache when the backend is down. How should I do this with varnish? Thanks.
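[The behavior asked about here is usually built with Varnish's grace mode. A minimal VCL 4 sketch of that idea follows; the backend address, probe settings, and timings are illustrative assumptions, not taken from the thread, and `std.healthy()` only reports something useful if the backend has a probe:]

```vcl
vcl 4.0;

import std;

probe ping {
    .url = "/";
    .interval = 5s;
}

backend default {
    .host = "127.0.0.1";   # assumed backend address
    .port = "8080";
    .probe = ping;
}

sub vcl_backend_response {
    # Short TTL: objects go stale quickly, so a healthy backend is
    # consulted often and the cache stays up to date.
    set beresp.ttl = 10s;
    # Long grace: stale copies are kept around to cover an outage.
    set beresp.grace = 24h;
}

sub vcl_hit {
    if (obj.ttl >= 0s) {
        # Object still fresh: deliver it.
        return (deliver);
    }
    if (!std.healthy(req.backend_hint)) {
        # Backend down: serve the last good copy while within grace.
        if (obj.ttl + obj.grace > 0s) {
            return (deliver);
        }
    }
    # Backend up (or grace exhausted): fetch a new copy.
    return (miss);
}
```

[The explicit `return (miss)` makes a healthy backend get hit on every stale request, matching the "update the cache every time" requirement at the cost of extra backend traffic; the built-in vcl_hit would instead serve stale within a short grace and refresh in the background.]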
-------------- next part -------------- An HTML attachment was scrubbed... URL: From apj at mutt.dk Mon Nov 7 07:13:48 2016 From: apj at mutt.dk (Andreas Plesner) Date: Mon, 7 Nov 2016 08:13:48 +0100 Subject: varnish 5 - do you need to normalize accept-encoding? In-Reply-To: <27c3ed94-7d85-04d7-b7d2-b36407971bbb@beckspaced.com> References: <27c3ed94-7d85-04d7-b7d2-b36407971bbb@beckspaced.com> Message-ID: <20161107071348.GZ23724@nerd.dk> On Sat, Nov 05, 2016 at 10:10:33AM +0100, Admin Beckspaced wrote: > > in varnish 5 ... do I need to normalize the accept-encoding header? > or is varnish 5 doing this by itself internally? No. Not since 3.0 -- Andreas From admin at beckspaced.com Mon Nov 7 09:14:45 2016 From: admin at beckspaced.com (Admin Beckspaced) Date: Mon, 7 Nov 2016 10:14:45 +0100 Subject: varnish 5 - do you need to normalize accept-encoding? In-Reply-To: <20161107071348.GZ23724@nerd.dk> References: <27c3ed94-7d85-04d7-b7d2-b36407971bbb@beckspaced.com> <20161107071348.GZ23724@nerd.dk> Message-ID: <577320b0-418f-888a-b2f9-235d0ca38800@beckspaced.com> >> in varnish 5 ... do I need to normalize the accept-encoding header? >> or is varnish 5 doing this by itself internally? > No. Not since 3.0 > thanks ;) From mblissett at gbif.org Mon Nov 7 10:48:15 2016 From: mblissett at gbif.org (Matthew Blissett) Date: Mon, 7 Nov 2016 11:48:15 +0100 Subject: http/2 first experiences In-Reply-To: References: Message-ID: <84fdfe91-ff30-e919-73e5-5a6aa9fafe3a@gbif.org> On 04/10/16 16:06, dridi at varni.sh (Dridi Boukelmoune) wrote: > On Tue, Oct 4, 2016 at 1:31 PM, Tom Anheyer wrote: >> Hello, >> >> I've setup a little test environment with varnish5 and hitch as TLS >> offloader. HTTP/2 works for me in FF and Chrome (after upgrading to openssl >> 1.0.2). I've done the same ? Varnish 5 and Hitch ? and we're really pleased to see the initial HTTP/2 support. 
>> 2016 11:26:36 GMT >> "Incomplete code in >> h2_rx_rst_stream(), http2/cache_http2_proto.c line 113: > So apparently you reached one of the things Varnish doesn't implement > yet, in this case RST frames. Your browser tried to cancel a request > and close the related streams and Varnish doesn't support it. > > Spoiler alert, your browser may crash with DATA, PUSH_PROMISE and > CONTINUATION frames. Part of our motivation for HTTP/2 is to serve map tiles, so we'll have to wait until RST frames are handled ? browsers send these when tiles are scrolled/zoomed off the map before they've loaded. I've also hit this error, which may be more interesting as it's a failed assertion: "Incomplete code in h2_rx_rst_stream(), http2/cache_http2_proto.c line 113: thread = (cache-worker) version = varnish-5.0.0 revision 99d036f ident = Linux,3.10.0-327.36.3.el7.x86_64,x86_64,-junix,-smalloc,-smalloc,-hcritbit,epoll Backtrace: 0x4359b6: pan_ic+0x166 0x45bca9: varnishd() [0x45bca9] 0x45d121: h2_new_session+0xd01 0x44d9d1: WRK_Thread+0x481 0x44de3b: pool_thread+0x2b 0x7f79246fedc5: libpthread.so.0(+0x7dc5) [0x7f79246fedc5] 0x7f792442bced: libc.so.6(clone+0x6d) [0x7f792442bced] req = 0x7f7913096020 { vxid = 1, transport = H2 step = 0x0, req_body = R_BODY_INIT, err_code = 1, err_reason = (null), restarts = 0, esi_level = 0, sp = 0x7f7912c0f420 { fd = 20, vxid = 1, t_open = 1478512502.492168, t_idle = 1478512507.188742, transport = H2 { streams { 0x00000000 idle 0x00000003 idle 0x00000005 idle 0x00000007 idle 0x00000009 idle 0x0000000b idle 0x00000019 open 0x00000025 open } } client = 192.38.28.2 46264, }, ws = 0x7f79130961f8 { id = \"req\", {s, f, r, e} = {0x7f7913097ff8, +1048, (nil), +57344}, }, http_conn = 0x7f7913096128 { fd = 20, doclose = NULL, ws = 0x7f791b731e38, {rxbuf_b, rxbuf_e} = {0x7f791b7313f0, 0x7f791b7313fd}, {pipeline_b, pipeline_e} = {(nil), (nil)}, content_length = 0, body_status = none, first_byte_timeout = 0.000000, between_bytes_timeout = 0.000000, }, http[req] = 
0x7f7913096290 { ws[] = (nil), hdrs { }, }, flags = { }, }, But anyway, the server is accessible, should anyone want to fiddle: - https://api2.gbif-uat.org/v2/map/demo1.html - http://api2.gbif-uat.org/v2/map/demo1.html These are both the same instance of Varnish, the first via Hitch, the second not. The server is in Copenhagen, Denmark, sharing a 1Gbit/s connection. I'm following the mailing lists, and will aim to update the server as more of HTTP/2 is implemented. Cheers all, Matt Blissett From slink at schokola.de Mon Nov 7 16:41:23 2016 From: slink at schokola.de (Nils Goroll) Date: Mon, 7 Nov 2016 17:41:23 +0100 Subject: http/2 first experiences In-Reply-To: <84fdfe91-ff30-e919-73e5-5a6aa9fafe3a@gbif.org> References: <84fdfe91-ff30-e919-73e5-5a6aa9fafe3a@gbif.org> Message-ID: I think we could use a bug report for this. Would you open one here? https://github.com/varnishcache/varnish-cache/issues/new On 07/11/16 11:48, Matthew Blissett wrote: > > "Incomplete code in h2_rx_rst_stream(), http2/cache_http2_proto.c line 113: > thread = (cache-worker) > version = varnish-5.0.0 revision 99d036f > ident = > Linux,3.10.0-327.36.3.el7.x86_64,x86_64,-junix,-smalloc,-smalloc,-hcritbit,epoll From slink at schokola.de Mon Nov 7 16:51:48 2016 From: slink at schokola.de (Nils Goroll) Date: Mon, 7 Nov 2016 17:51:48 +0100 Subject: ban_limit parameter // Re: BAN - Memory consumption exploding under load In-Reply-To: References: Message-ID: <48f0e2bf-7eec-d2e3-6c8e-38b6d4f5a6fb@schokola.de> (discussion started on -misc, added -dev) Hi, On 03/11/16 11:04, Dridi Boukelmoune wrote: > I was thinking (against the too-many-knobs trend) that maybe we should > have some sort of ban_queue_limit parameter to avoid piling up too > many bans. Dridi and myself have briefly discussed this and at this point I think, yes, we should have such a parameter. 
It seems that the best option when exceeding the ban list maximum would probably be to have (by the ban lurker) all objects removed which are untested at the tail of the ban list. This way, we would loose the long-tail bit of the cache, but not the hot objects (because they will already have been tested up the ban list) while still retaining the correct ban behavior (because all objects which are supposed to be banned will be banned - implicitly). The alternative to refuse additional bans appears problematic because applications will rely on the ability to invalidate the cache for correctness. Nils From admin at beckspaced.com Mon Nov 7 17:04:17 2016 From: admin at beckspaced.com (Admin Beckspaced) Date: Mon, 7 Nov 2016 18:04:17 +0100 Subject: varnish time it takes for a request Message-ID: <557de56f-71d1-5f79-db39-3c98c870bf35@beckspaced.com> hello again dear varnish community, if i look at the varnishlog I see lots of timing information, e.g. - Timestamp Start: 1478537654.083406 0.000000 0.000000 ... - Timestamp Req: 1478537654.083406 0.000000 0.000000 ... - Timestamp Process: 1478537654.083552 0.000146 0.000146 - Timestamp Resp: 1478537654.149267 0.065862 0.065716 - ReqAcct 456 0 456 297 117701 117998 what would be the best way to find out how long a single request took in total? start from the client request until the delivery is done. how would i find that one out? thanks, greetings & all the best becki From dridi at varni.sh Mon Nov 7 17:57:30 2016 From: dridi at varni.sh (Dridi Boukelmoune) Date: Mon, 7 Nov 2016 18:57:30 +0100 Subject: ban_limit parameter // Re: BAN - Memory consumption exploding under load In-Reply-To: <48f0e2bf-7eec-d2e3-6c8e-38b6d4f5a6fb@schokola.de> References: <48f0e2bf-7eec-d2e3-6c8e-38b6d4f5a6fb@schokola.de> Message-ID: > The alternative to refuse additional bans appears problematic because > applications will rely on the ability to invalidate the cache for correctness. 
To which I suggested that we should maybe have a means to clear a cache completely without relying on a whole-encompassing ban, like a varnish-cli command. Users seeing failing bans could then make the decision to wipe the cache while still maintaining uptime. Same thing when lookups start taking seconds because of the number of bans to test, wiping the cache can be a last-resort solution if turning the hypothetical knob down isn't enough. Dridi From dridi at varni.sh Mon Nov 7 18:12:01 2016 From: dridi at varni.sh (Dridi Boukelmoune) Date: Mon, 7 Nov 2016 19:12:01 +0100 Subject: varnish time it takes for a request In-Reply-To: <557de56f-71d1-5f79-db39-3c98c870bf35@beckspaced.com> References: <557de56f-71d1-5f79-db39-3c98c870bf35@beckspaced.com> Message-ID: On Mon, Nov 7, 2016 at 6:04 PM, Admin Beckspaced wrote: > hello again dear varnish community, > > if i look at the varnishlog I see lots of timing information, e.g. > > - Timestamp Start: 1478537654.083406 0.000000 0.000000 > ... > - Timestamp Req: 1478537654.083406 0.000000 0.000000 > ... > - Timestamp Process: 1478537654.083552 0.000146 0.000146 > - Timestamp Resp: 1478537654.149267 0.065862 0.065716 > - ReqAcct 456 0 456 297 117701 117998 > > what would be the best way to find out how long a single request took in > total? > start from the client request until the delivery is done. > > how would i find that one out? See `man vsl`, the second decimal field is the "Time since start of work unit", so in your case it should be 0.065862 seconds or almost 66ms from the last Timestamp record. But there are other details to consider, like buffering in your TCP stack, or apparently the lack of Timestamp record for the response body. 
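[To make the arithmetic above concrete: the first field of each Timestamp record is an absolute epoch time, so subtracting Start from Resp gives the total wall-clock time varnishd spent on the request. A throwaway awk pass over the two records quoted in the question (the result differs from the logged 0.065862 by at most a microsecond, because the printed epoch fields are rounded):]

```shell
# Subtract the absolute Start timestamp from the absolute Resp timestamp
# to get the total time varnishd spent on the request. The two sample
# records are copied from the varnishlog output quoted in the question.
total=$(printf '%s\n' \
  '- Timestamp Start: 1478537654.083406 0.000000 0.000000' \
  '- Timestamp Resp: 1478537654.149267 0.065862 0.065716' |
awk '
  $2 == "Timestamp" && $3 == "Start:" { start = $4 }
  $2 == "Timestamp" && $3 == "Resp:"  { printf "%.4f", $4 - start }
')
echo "total request time: ${total}s"
```

[In practice you would feed real `varnishlog -i Timestamp` output through the same awk filter instead of the two hard-coded sample lines.]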
Cheers, Dridi From slink at schokola.de Mon Nov 7 19:04:23 2016 From: slink at schokola.de (Nils Goroll) Date: Mon, 7 Nov 2016 20:04:23 +0100 Subject: ban_limit parameter // Re: BAN - Memory consumption exploding under load In-Reply-To: References: <48f0e2bf-7eec-d2e3-6c8e-38b6d4f5a6fb@schokola.de> Message-ID: <67e31074-d4f5-998b-000c-93d7aee095e8@schokola.de> https://github.com/varnishcache/varnish-cache/pull/2131 From slink at schokola.de Mon Nov 7 19:06:05 2016 From: slink at schokola.de (Nils Goroll) Date: Mon, 7 Nov 2016 20:06:05 +0100 Subject: varnish time it takes for a request In-Reply-To: References: <557de56f-71d1-5f79-db39-3c98c870bf35@beckspaced.com> Message-ID: <8daa16a0-954a-671a-9476-8eb73eddbcdc@schokola.de> I'd just like to add to Dridi's perfect explanation that "varnishhist" is a great tool to get a quick overview of response times. Since 5.0, it can also graph backend response times and replay logs. From viktor.villafuerte at optusnet.com.au Thu Nov 10 00:05:51 2016 From: viktor.villafuerte at optusnet.com.au (Viktor Villafuerte) Date: Thu, 10 Nov 2016 11:05:51 +1100 Subject: Hitch discussion Message-ID: <20161110000551.GC21229@optusnet.com.au> Hi all, would this mailing list also be used for Hitch-related discussions? If not, what would you suggest? thanks v -- Regards Viktor Villafuerte Optus Internet Engineering t: +61 2 80825265 From lkarsten at varnish-software.com Thu Nov 10 12:02:09 2016 From: lkarsten at varnish-software.com (Lasse Karstensen) Date: Thu, 10 Nov 2016 13:02:09 +0100 Subject: Hitch discussion In-Reply-To: <20161110000551.GC21229@optusnet.com.au> References: <20161110000551.GC21229@optusnet.com.au> Message-ID: On Thu, Nov 10, 2016 at 1:05 AM, Viktor Villafuerte wrote: > Hi all, > > would this mailing list also be used for Hitch-related discussions? If not, > what would you suggest? Hi Viktor.
For the time being I think using the Github issue tracker is the best way forward: https://github.com/varnish/hitch/issues -- Lasse From miguel_3_gonzalez at yahoo.es Thu Nov 10 18:52:15 2016 From: miguel_3_gonzalez at yahoo.es (=?UTF-8?Q?Miguel_Gonz=c3=a1lez?=) Date: Thu, 10 Nov 2016 19:52:15 +0100 Subject: gzip not working in Varnish 4.1 Message-ID: Dear all, I have read in the docs that enabling in varnishd http_gzip_support: -p http_gzip_support=true it should compress the content but testing with curl -I -H 'Accept-Encoding:gzip' is not reporting gzip. My vcl backend_response is below, do I need to set somewhere beresp.do_gzip = true even when I set it for varnishd? Regards, Miguel ----------------------------------------------------------------------------------------------- sub vcl_backend_response { # Remove some headers we never want to see unset beresp.http.Server; unset beresp.http.X-Powered-By; # For static content strip all backend cookies if (bereq.url ~ "\.(css|js|png|gif|jp(e?)g)|swf|ico") { unset beresp.http.cookie; } # Don't store backend if (bereq.url ~ "wp-(login|admin)" || bereq.url ~ "preview=true") { set beresp.uncacheable = true; set beresp.ttl = 30s; return (deliver); } # Only allow cookies to be set if we're in admin area if (!(bereq.url ~ "(wp-login|cart|my-account|checkout|addons|tienda|carro|wp-admin|preview=true)")) { unset beresp.http.set-cookie; } # don't cache response to posted requests or those with basic auth if ( bereq.method == "POST" || bereq.http.Authorization ) { set beresp.uncacheable = true; set beresp.ttl = 120s; return (deliver); } # don't cache search results if ( bereq.url ~ "\?s=" ){ set beresp.uncacheable = true; set beresp.ttl = 120s; return (deliver); } # only cache status ok if ( beresp.status != 200 ) { set beresp.uncacheable = true; set beresp.ttl = 120s; return (deliver); } # A TTL of 24h set beresp.ttl = 24h; # Define the default grace period to serve cached content #set beresp.grace = 30s; set beresp.grace = 
1h; return (deliver); } From geoff at uplex.de Thu Nov 10 19:22:28 2016 From: geoff at uplex.de (Geoff Simmons) Date: Thu, 10 Nov 2016 20:22:28 +0100 Subject: gzip not working in Varnish 4.1 In-Reply-To: References: Message-ID: <7abbcdce-d695-a439-dc32-252d12e6b064@uplex.de> On 11/10/2016 07:52 PM, Miguel González wrote: > > My vcl backend_response is below, do I need to set somewhere > beresp.do_gzip = true even when I set it for varnishd? Yes. HTH, Geoff -- ** * * UPLEX - Nils Goroll Systemoptimierung Scheffelstraße 32 22301 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de From miguel_3_gonzalez at yahoo.es Thu Nov 10 22:15:48 2016 From: miguel_3_gonzalez at yahoo.es (=?UTF-8?Q?Miguel_Gonz=c3=a1lez?=) Date: Thu, 10 Nov 2016 23:15:48 +0100 Subject: gzip not working in Varnish 4.1 In-Reply-To: <7abbcdce-d695-a439-dc32-252d12e6b064@uplex.de> References: <7abbcdce-d695-a439-dc32-252d12e6b064@uplex.de> Message-ID: <1c9d504b-3492-614e-a79b-17a894b7b062@yahoo.es> On 11/10/16 8:22 PM, Geoff Simmons wrote: > On 11/10/2016 07:52 PM, Miguel González wrote: > >> My vcl backend_response
is below, do I need to set somewhere >> beresp.do_gzip = true even when I set it for varnishd? > > Yes. > > > HTH, > Geoff > Ok, where and how? The example in varnish docs is only for html and I have Wordpress sites behind. Regards, Miguel From lagged at gmail.com Sat Nov 12 09:49:57 2016 From: lagged at gmail.com (Andrei) Date: Sat, 12 Nov 2016 03:49:57 -0600 Subject: Request challenging Message-ID: Hello all, I've been digging into some mitigation techniques to protect the backend, and am currently up to a request challenging feature similar to how Cloudflare introduces a CAPTCHA request in order to let the end user's request through. Has anyone had any success with this sort of method, or can give some suggestions on how to implement without causing too much overhead? Initially I was thinking of routing requests through a different backend who's main purpose would be to "challenge requests" by responding with a captcha, then set temporary global variables to store the IP results, and either cut the request or restart through the domain related backend. Similar to this I was thinking I could just inject the Javascript challenge right into the initial request without rerouting, but I'm not quite sure if that's even possible with Varnish. Any input and suggestions would be greatly appreciated! Have a great weekend everyone! Andrei -------------- next part -------------- An HTML attachment was scrubbed... URL: From admin at beckspaced.com Sat Nov 12 10:52:28 2016 From: admin at beckspaced.com (Admin Beckspaced) Date: Sat, 12 Nov 2016 11:52:28 +0100 Subject: varnishncsa logs split per domain Message-ID: hello there ;) since putting varnish version 5 in front of my apache backend the apache access logs don't fill up and webalizer has no data to process. I know that I can use varnishncsa to create logs in apache format. but since I'm using multiple domains I would need logs per domain. 
I see that there's the -q option I found stuff by google like: varnishncsa -m "RxHeader:^Host: www.domain1.com$" -a -w /var/log/varnish/www.domain1.com -D but I see that the -m param is no longer available in version 5? how could i use that -q param so varnishncsa would produce different logs for different domains? would a regex query on the host like below be possible? varnishncsa -q "ReqHost ~ '^mydomain.com$' " -a -w /var/log/varnish/mydomain.com -D varnishncsa -q "ReqHost ~ '^myotherdomain.com$' " -a -w /var/log/varnish/myotherdomain.com -D where can I find a list of variables I can use in the vsl-query language? a RTFM with the proper link would be perfect ;) thanks & greetings becki From admin at beckspaced.com Sat Nov 12 15:38:18 2016 From: admin at beckspaced.com (Admin Beckspaced) Date: Sat, 12 Nov 2016 16:38:18 +0100 Subject: varnishncsa logs split per domain In-Reply-To: References: Message-ID: <976bff90-7c3e-e9c2-3517-6bc30a937de5@beckspaced.com> Hello Andrei, thanks a lot for your reply and thoughts ... perhaps I used the wrong words in the subject? I'm actually not looking to split a varnish logfile into different domains and log files ... instead I was looking to create single log files for each domain with varnishncsa similar to this: varnishncsa -q "ReqHost ~ '^mydomain.com$' " -a -w /var/log/varnish/mydomain.com -D varnishncsa -q "ReqHost ~ '^myotherdomain.com$' " -a -w /var/log/varnish/myotherdomain.com -D is something like this possible? thanks & greetings becki Am 12.11.2016 um 13:30 schrieb Andrei: > Hello, > > I suggest looking into splitlogs with piped logging: > > http://httpd.apache.org/docs/current/logs.html#piped > https://httpd.apache.org/docs/2.4/programs/split-logfile.html > > This is similarly used with cPanel/WHM > (https://documentation.cpanel.net/display/ALD/The+splitlogs+Binary) > which you can likely pipe to directly from varnishncsa granted the > expected format is used. 
I haven't tried this personally, but now that > you've mentioned it this would seem pretty useful. Hope this helps! > > On 11/12/2016 12:52 PM, Admin Beckspaced wrote: >> hello there ;) >> >> since putting varnish version 5 in front of my apache backend the >> apache access logs don't fill up and webalizer has no data to process. >> >> I know that I can use varnishncsa to create logs in apache format. >> >> but since I'm using multiple domains I would need logs per domain. >> I see that there's the -q option >> >> I found stuff by google like: >> >> varnishncsa -m "RxHeader:^Host: www.domain1.com$" -a -w >> /var/log/varnish/www.domain1.com -D >> >> but I see that the -m param is no longer available in version 5? >> >> how could i use that -q param so varnishncsa would produce >> different logs for different domains? >> >> would a regex query on the host like below be possible? >> >> varnishncsa -q "ReqHost ~ '^mydomain.com$' " -a -w >> /var/log/varnish/mydomain.com -D >> varnishncsa -q "ReqHost ~ '^myotherdomain.com$' " -a -w >> /var/log/varnish/myotherdomain.com -D >> >> where can I find a list of variables I can use in the vsl-query language? 
>> a RTFM with the proper link would be perfect ;) >> >> thanks & greetings >> becki >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > From miguel_3_gonzalez at yahoo.es Sat Nov 12 18:48:06 2016 From: miguel_3_gonzalez at yahoo.es (=?UTF-8?Q?Miguel_Gonz=c3=a1lez?=) Date: Sat, 12 Nov 2016 19:48:06 +0100 Subject: gzip not working in Varnish 4.1 In-Reply-To: <7abbcdce-d695-a439-dc32-252d12e6b064@uplex.de> References: <7abbcdce-d695-a439-dc32-252d12e6b064@uplex.de> Message-ID: On 11/10/16 8:22 PM, Geoff Simmons wrote: > On 11/10/2016 07:52 PM, Miguel Gonz?lez wrote: > >> My vcl backend_response is below, do I need to set somewhere >> beresp.do_gzip = true even when I set it for varnishd? > > Yes. > > > HTH, > Geoff > It seems that adding this to vcl backend_response it worked: if (beresp.http.content-type ~ "text") { set beresp.do_gzip = true; } now works :) Miguel From miguel_3_gonzalez at yahoo.es Sat Nov 12 19:20:08 2016 From: miguel_3_gonzalez at yahoo.es (=?UTF-8?Q?Miguel_Gonz=c3=a1lez?=) Date: Sat, 12 Nov 2016 20:20:08 +0100 Subject: varnishncsa logs split per domain In-Reply-To: <976bff90-7c3e-e9c2-3517-6bc30a937de5@beckspaced.com> References: <976bff90-7c3e-e9c2-3517-6bc30a937de5@beckspaced.com> Message-ID: >> I suggest looking into splitlogs with piped logging: >> >> http://httpd.apache.org/docs/current/logs.html#piped >> https://httpd.apache.org/docs/2.4/programs/split-logfile.html >> >> This is similarly used with cPanel/WHM >> (https://documentation.cpanel.net/display/ALD/The+splitlogs+Binary) >> which you can likely pipe to directly from varnishncsa granted the >> expected format is used. I haven't tried this personally, but now that >> you've mentioned it this would seem pretty useful. Hope this helps! Sorry to sort of hijack the thread but this is interesting to me. I have Varnish 4.1 in front of WHM. 
Is it possible to pipe all varnishncsa logs into one log in Apache (WHM in my case), so that Apache then processes those logs in webalizer or awstats? I would like to avoid having to add one manually every time I add a new virtualhost in WHM. Miguel From admin at beckspaced.com Sun Nov 13 10:18:54 2016 From: admin at beckspaced.com (Admin Beckspaced) Date: Sun, 13 Nov 2016 11:18:54 +0100 Subject: varnishncsa logs split per domain In-Reply-To: References: <976bff90-7c3e-e9c2-3517-6bc30a937de5@beckspaced.com> Message-ID: <4aea1d32-f9b8-ada4-fb9d-532f605ae5da@beckspaced.com> Same hello here ;) I had a more in-depth look at the manual and figured out that varnishncsa does support VSL queries, so one could filter on the Host request header: varnishncsa -q "ReqHeader ~ '^Host: .*example.com'" which would produce a log for a specific domain only. It would then need multiple varnishncsa instances for logging per domain, which I found here: https://kevops.com/2015/11/varnish-logging-per-host-with-init-script/ I use Varnish version 5, so there would be no need for splitlogs and the logs would be created directly. Please correct me if I'm wrong? thanks for your time & help Becki On 12.11.2016 at 17:05, Andrei wrote: > Hello again, > > My apologies for not explaining my thoughts better earlier then. Afaik, > varnishncsa does not have a native method to split output based on > different parameters. The method I was thinking of was based on piping > varnishncsa output through splitlogs (or similar) for the log processing > and writeouts.
Since replying earlier, I've got this working on a cPanel > server with piped logging enabled for Apache using the following two for > example (X-Port is a custom header set in vcl_recv related to SSL > offloading, but you can use a static value or similar custom header): > > varnishncsa -F "%{HOST}i:%{X-Port}i %h %l %u %t \"%m %U%q %H\" %s %b > \"%{Referer}i\" \"%{User-agent}i\""|sed -e > 's#^www\.##g'|/usr/local/cpanel/bin/splitlogs --main=`hostname` > --mainout=/usr/local/apache/logs/access_log > varnishncsa -F "%{HOST}i %{%s}t %b ."|sed -e > 's#^www\.##g'|/usr/local/cpanel/bin/splitlogs --main=`hostname` > --suffix=-bytes_log > > The above pipes the requests to the splitlogs binary which queues then > writes to separate logs per domain, that are later processed by the > cPanel log stats apps. Either way, I believe you need an intermediary > script to queue and write the log entries per domain. While looking into > this process, I ran across this little tidbit which you may find of use > https://gist.github.com/garlandkr/4954272 for logstash style output. > > From lagged at gmail.com Sun Nov 13 15:01:52 2016 From: lagged at gmail.com (Andrei) Date: Sun, 13 Nov 2016 17:01:52 +0200 Subject: varnishncsa logs split per domain In-Reply-To: <4aea1d32-f9b8-ada4-fb9d-532f605ae5da@beckspaced.com> References: <976bff90-7c3e-e9c2-3517-6bc30a937de5@beckspaced.com> <4aea1d32-f9b8-ada4-fb9d-532f605ae5da@beckspaced.com> Message-ID: Hello, By not using splitlogs or an intermediary script, you will be forced to run multiple instances of varnishncsa, which isn't optimal if you host multiple domains. The more traffic/domains you have, the more resources you will consume on parsing the same data across multiple channels. Yes, varnishncsa supports VSL, however I think your approach is a bit off (no offense). As in, you need to shift the log writing process away from varnishncsa. 
By doing so, you only have one instance of varnishncsa using resources to gather the data, which is then fed to a parser that handles the per-domain log splits and writes. That's where 'splitlogs' came into play. As your question does raise some interest in the cPanel community (myself and Miguel González on this list for example), I threw together a quick Perl script that will, in short, pipe and parse data between varnishncsa and the splitlogs binary for cache hits. This lets splitlogs handle the queued log writes, which are later parsed for cPanel bandwidth usage and graphs, webalizer, awstats, logaholic, etc - https://github.com/AndreiG6/vscp On Sun, Nov 13, 2016 at 12:18 PM, Admin Beckspaced wrote: > Same Hello here ;) > > did have a more in depth look in the manual and figured out that > varnishncsa does support VSL query. > so someone could filter on the Request Header and Host > > varnishncsa -q "ReqHeader ~ '^Host: .*example.com'" > > which would produce a log for a specific domain only > > it then would need multiple varnishncsa instances for logging per domain, > which I found here: > > https://kevops.com/2015/11/varnish-logging-per-host-with-init-script/ > > I use varnish version 5 and then there would be no need for splitlog and > the logs would be created directly. > > please correct me if I'm wrong? > > thanks for your time & help > Becki > > > Am 12.11.2016 um 17:05 schrieb Andrei: > >> Hello again, >> >> My apologies for not explaining my thoughts better earlier then. Afaik, >> varnishncsa does not have a native method to split output based on >> different parameters. The method I was thinking of was based on piping >> varnishncsa output through splitlogs (or similar) for the log processing >> and writeouts.
Since replying earlier, I've got this working on a cPanel >> server with piped logging enabled for Apache using the following two for >> example (X-Port is a custom header set in vcl_recv related to SSL >> offloading, but you can use a static value or similar custom header): >> >> varnishncsa -F "%{HOST}i:%{X-Port}i %h %l %u %t \"%m %U%q %H\" %s %b >> \"%{Referer}i\" \"%{User-agent}i\""|sed -e >> 's#^www\.##g'|/usr/local/cpanel/bin/splitlogs --main=`hostname` >> --mainout=/usr/local/apache/logs/access_log >> varnishncsa -F "%{HOST}i %{%s}t %b ."|sed -e >> 's#^www\.##g'|/usr/local/cpanel/bin/splitlogs --main=`hostname` >> --suffix=-bytes_log >> >> The above pipes the requests to the splitlogs binary which queues then >> writes to separate logs per domain, that are later processed by the >> cPanel log stats apps. Either way, I believe you need an intermediary >> script to queue and write the log entries per domain. While looking into >> this process, I ran across this little tidbit which you may find of use >> https://gist.github.com/garlandkr/4954272 for logstash style output. >> >> >> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Sun Nov 13 15:44:18 2016 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Sun, 13 Nov 2016 16:44:18 +0100 Subject: varnishncsa logs split per domain In-Reply-To: References: <976bff90-7c3e-e9c2-3517-6bc30a937de5@beckspaced.com> <4aea1d32-f9b8-ada4-fb9d-532f605ae5da@beckspaced.com> Message-ID: Is the resources argument that compelling? I find it cleaner to have one ncsa per domain, plus the ncsas will read from memory, which is super fast. 
Sure you can then split logs afterwards using awk or whatever, but that's adding an extra layer and serializing a process that doesn't need to be. But I'm not an admin, so I may be off. On Nov 13, 2016 16:27, "Andrei" wrote: > Hello, > > By not using splitlogs or an intermediary script, you will be forced to > run multiple instances of varnishncsa, which isn't optimal if you host > multiple domains. The more traffic/domains you have, the more resources you > will consume on parsing the same data across multiple channels. > Yes, varnishncsa supports VSL, however I think your approach is a bit off > (no offense). As in, you need to shift the log writing process away > from varnishncsa. By doing so, you only have one instance of varnishncsa > using resources to gather the data, which is then fed to a parser that > handles the per domain log splits and writes. That's where 'splitlogs' came > into play. > > As your question does raise some interest in the cPanel community (myself > and Miguel Gonz?lez on this list for example), I threw together a quick > Perl script that will in short, pipe and parse data between varnishncsa and > the splitlogs binary for cache hits. This lets splitlogs handle the queued > log writes which are later parsed for cPanel bandwidth usage and graphs, > webalizer, awstats, logaholic, etc - https://github.com/AndreiG6/vscp > > > > On Sun, Nov 13, 2016 at 12:18 PM, Admin Beckspaced > wrote: > >> Same Hello here ;) >> >> did have a more in depth look in the manual and figured out that >> varnishncsa does support VSL query. 
>> so someone could filter on the Request Header and Host >> >> varnishncsa -q "ReqHeader ~ '^Host: .*example.com'" >> >> which would produce a log for a specific domain only >> >> it then would need multiple varnishncsa instances for logging per domain, >> which I found here: >> >> https://kevops.com/2015/11/varnish-logging-per-host-with-init-script/ >> >> I use varnish version 5 and then there would be no need for splitlog and >> the logs would be created directly. >> >> please correct me if I'm wrong? >> >> thanks for your time & help >> Becki >> >> >> Am 12.11.2016 um 17:05 schrieb Andrei: >> >>> Hello again, >>> >>> My apologies for not explaining my thoughts better earlier then. Afaik, >>> varnishncsa does not have a native method to split output based on >>> different parameters. The method I was thinking of was based on piping >>> varnishncsa output through splitlogs (or similar) for the log processing >>> and writeouts. Since replying earlier, I've got this working on a cPanel >>> server with piped logging enabled for Apache using the following two for >>> example (X-Port is a custom header set in vcl_recv related to SSL >>> offloading, but you can use a static value or similar custom header): >>> >>> varnishncsa -F "%{HOST}i:%{X-Port}i %h %l %u %t \"%m %U%q %H\" %s %b >>> \"%{Referer}i\" \"%{User-agent}i\""|sed -e >>> 's#^www\.##g'|/usr/local/cpanel/bin/splitlogs --main=`hostname` >>> --mainout=/usr/local/apache/logs/access_log >>> varnishncsa -F "%{HOST}i %{%s}t %b ."|sed -e >>> 's#^www\.##g'|/usr/local/cpanel/bin/splitlogs --main=`hostname` >>> --suffix=-bytes_log >>> >>> The above pipes the requests to the splitlogs binary which queues then >>> writes to separate logs per domain, that are later processed by the >>> cPanel log stats apps. Either way, I believe you need an intermediary >>> script to queue and write the log entries per domain. 
While looking into >>> this process, I ran across this little tidbit which you may find of use >>> https://gist.github.com/garlandkr/4954272 for logstash style output. >>> >>> >>> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lagged at gmail.com Sun Nov 13 15:53:32 2016 From: lagged at gmail.com (Andrei) Date: Sun, 13 Nov 2016 17:53:32 +0200 Subject: varnishncsa logs split per domain In-Reply-To: References: <976bff90-7c3e-e9c2-3517-6bc30a937de5@beckspaced.com> <4aea1d32-f9b8-ada4-fb9d-532f605ae5da@beckspaced.com> Message-ID: I haven't done any actual benchmarks for the resource usage, however as an admin I would rather not have numerous processes for a single data channel that needs to be parsed and split. My apologies for sort of hijacking this thread with a cPanel integration. It's just so widely used, that I'm sure others here have had the need or thought at some point and figured I would share my findings. As cPanel servers are known to house hundreds of domains, per domain varnishncsa simply isn't an option, which is why I chose to piggyback on splitlogs as it's already a supported option of theirs. On Nov 13, 2016 17:44, "Guillaume Quintard" wrote: > Is the resources argument that compelling? I find it cleaner to have one > ncsa per domain, plus the ncsas will read from memory, which is super fast. > > Sure you can then split logs afterwards using awk or whatever, but that's > adding an extra layer and serializing a process that doesn't need to be. > > But I'm not an admin, so I may be off. 
> > On Nov 13, 2016 16:27, "Andrei" wrote: > >> Hello, >> >> By not using splitlogs or an intermediary script, you will be forced to >> run multiple instances of varnishncsa, which isn't optimal if you host >> multiple domains. The more traffic/domains you have, the more resources you >> will consume on parsing the same data across multiple channels. >> Yes, varnishncsa supports VSL, however I think your approach is a bit off >> (no offense). As in, you need to shift the log writing process away >> from varnishncsa. By doing so, you only have one instance of varnishncsa >> using resources to gather the data, which is then fed to a parser that >> handles the per domain log splits and writes. That's where 'splitlogs' came >> into play. >> >> As your question does raise some interest in the cPanel community (myself >> and Miguel Gonz?lez on this list for example), I threw together a quick >> Perl script that will in short, pipe and parse data between varnishncsa and >> the splitlogs binary for cache hits. This lets splitlogs handle the queued >> log writes which are later parsed for cPanel bandwidth usage and graphs, >> webalizer, awstats, logaholic, etc - https://github.com/AndreiG6/vscp >> >> >> >> On Sun, Nov 13, 2016 at 12:18 PM, Admin Beckspaced >> wrote: >> >>> Same Hello here ;) >>> >>> did have a more in depth look in the manual and figured out that >>> varnishncsa does support VSL query. >>> so someone could filter on the Request Header and Host >>> >>> varnishncsa -q "ReqHeader ~ '^Host: .*example.com'" >>> >>> which would produce a log for a specific domain only >>> >>> it then would need multiple varnishncsa instances for logging per >>> domain, which I found here: >>> >>> https://kevops.com/2015/11/varnish-logging-per-host-with-init-script/ >>> >>> I use varnish version 5 and then there would be no need for splitlog and >>> the logs would be created directly. >>> >>> please correct me if I'm wrong? 
>>> >>> thanks for your time & help >>> Becki >>> >>> >>> Am 12.11.2016 um 17:05 schrieb Andrei: >>> >>>> Hello again, >>>> >>>> My apologies for not explaining my thoughts better earlier then. Afaik, >>>> varnishncsa does not have a native method to split output based on >>>> different parameters. The method I was thinking of was based on piping >>>> varnishncsa output through splitlogs (or similar) for the log processing >>>> and writeouts. Since replying earlier, I've got this working on a cPanel >>>> server with piped logging enabled for Apache using the following two for >>>> example (X-Port is a custom header set in vcl_recv related to SSL >>>> offloading, but you can use a static value or similar custom header): >>>> >>>> varnishncsa -F "%{HOST}i:%{X-Port}i %h %l %u %t \"%m %U%q %H\" %s %b >>>> \"%{Referer}i\" \"%{User-agent}i\""|sed -e >>>> 's#^www\.##g'|/usr/local/cpanel/bin/splitlogs --main=`hostname` >>>> --mainout=/usr/local/apache/logs/access_log >>>> varnishncsa -F "%{HOST}i %{%s}t %b ."|sed -e >>>> 's#^www\.##g'|/usr/local/cpanel/bin/splitlogs --main=`hostname` >>>> --suffix=-bytes_log >>>> >>>> The above pipes the requests to the splitlogs binary which queues then >>>> writes to separate logs per domain, that are later processed by the >>>> cPanel log stats apps. Either way, I believe you need an intermediary >>>> script to queue and write the log entries per domain. While looking into >>>> this process, I ran across this little tidbit which you may find of use >>>> https://gist.github.com/garlandkr/4954272 for logstash style output. 
>>>> >>>> >>>> >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >>> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.g.dixon at gmail.com Mon Nov 14 17:22:06 2016 From: alan.g.dixon at gmail.com (Alan Dixon) Date: Mon, 14 Nov 2016 12:22:06 -0500 Subject: A small ode to varnish Message-ID: I just posted this, a little light reading: http://homeofficekernel.blogspot.ca/2016/11/varnish-saves-day-in-unexpectedly.html Summary: I'd never considered how Varnish could save the day like this. -- Alan Dixon, Web Developer Drupal and CiviCRM for Canadian nonprofits. http://blackflysolutions.ca/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From admin at beckspaced.com Mon Nov 14 17:33:46 2016 From: admin at beckspaced.com (Admin Beckspaced) Date: Mon, 14 Nov 2016 18:33:46 +0100 Subject: varnishncsa logs split per domain In-Reply-To: References: <976bff90-7c3e-e9c2-3517-6bc30a937de5@beckspaced.com> <4aea1d32-f9b8-ada4-fb9d-532f605ae5da@beckspaced.com> Message-ID: <5aa0e368-0121-8542-70bc-27aeae96e4e9@beckspaced.com> Hello Andrei, No offense taken ;) that's the reason we talk to find the best solution! and I think you are right. It's better to have one instance of varnishncsa and shift the log writing away. I tried piping varnishncsa to the apache split-logfile perl script: /usr/sbin/varnishncsa -f /etc/varnish/varnishncsa-log-format-string -P /var/run/varnishlog.pid | sed -e 's#^www\.##g' | /usr/bin/split-logfile & and split-logfile started creating different log files for the vhosts. 
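To make the intermediary-splitter idea concrete, here is a tiny stand-in sketch. It is NOT the cPanel splitlogs binary or Apache's split-logfile; the host-first input format, the function name and the /tmp output directory are all assumptions for the demo. It only shows the fan-out a single "varnishncsa | splitter" pipeline performs:

```shell
#!/bin/sh
# Toy stand-in for an intermediary log splitter: read "host rest-of-line"
# records on stdin and append each record to a per-host log file, the way
# one varnishncsa process piped to a splitter fans logs out per domain.
split_by_host() {
    outdir=$1
    mkdir -p "$outdir"
    # Strip a leading "www." so www.example.com and example.com share one
    # log, mirroring the sed -e 's#^www\.##g' step used in this thread.
    sed -e 's#^www\.##' | while read -r host rest; do
        printf '%s\n' "$rest" >> "$outdir/$host.log"
    done
}

rm -rf /tmp/vhost-logs
# Feed a few fake NCSA-style lines; in production, stdin would come from
# varnishncsa with the host as the first format field.
split_by_host /tmp/vhost-logs <<'EOF'
www.example.com 1.2.3.4 - - [15/Nov/2016] "GET / HTTP/1.1" 200 512
example.com 1.2.3.4 - - [15/Nov/2016] "GET /a HTTP/1.1" 200 128
other.org 5.6.7.8 - - [15/Nov/2016] "GET /b HTTP/1.1" 404 0
EOF

ls /tmp/vhost-logs        # one log file per normalized host
```

Note that this naive version creates a file for every Host value it sees, and clients can send arbitrary Host headers, so a real splitter would also want a whitelist of known vhosts.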
But I'm still having a bit of an issue with this approach, as most of the data only gets written to the logs once I kill the background process. The different log files for the vhosts get created but are mostly empty, and only start filling up when the main varnishncsa process is killed. Perhaps the Apache split-logfile is not the same as the splitlogs from cPanel? Where could I download a version to test? Another thing is the naming of the vhosts: you never actually know what clients are requesting. The Baidu spider, for example, requests URLs like 333.domain.com ... or ww3.domain.com ... wwq.domain.com. In your pipe command I see sed removing the www. subdomain, but you can never know what comes in. In Apache I can easily filter all those subdomains and redirect to the proper domain, and even in Varnish VCL I can normalize the host. Not sure how to do this in varnishncsa? Is there a way to do that? Anyway, I think a single varnishncsa process and shifting the log writing away is the way to go! But it still needs a bit of tweaking if you ask me ;) thanks & greetings becki On 13.11.2016 at 16:01, Andrei wrote: > Hello, > > By not using splitlogs or an intermediary script, you will be forced > to run multiple instances of varnishncsa, which isn't optimal if you > host multiple domains. The more traffic/domains you have, the more > resources you will consume on parsing the same data across multiple > channels. Yes, varnishncsa supports VSL, however I think your approach > is a bit off (no offense). As in, you need to shift the log writing > process away from varnishncsa. By doing so, you only have one instance > of varnishncsa using resources to gather the data, which is then fed > to a parser that handles the per domain log splits and writes. That's > where 'splitlogs' came into play.
> > As your question does raise some interest in the cPanel community > (myself and Miguel Gonz?lez on this list for example), I threw > together a quick Perl script that will in short, pipe and parse data > between varnishncsa and the splitlogs binary for cache hits. This lets > splitlogs handle the queued log writes which are later parsed for > cPanel bandwidth usage and graphs, webalizer, awstats, logaholic, etc > - https://github.com/AndreiG6/vscp > > > > On Sun, Nov 13, 2016 at 12:18 PM, Admin Beckspaced > > wrote: > > Same Hello here ;) > > did have a more in depth look in the manual and figured out that > varnishncsa does support VSL query. > so someone could filter on the Request Header and Host > > varnishncsa -q "ReqHeader ~ '^Host: .*example.com > '" > > which would produce a log for a specific domain only > > it then would need multiple varnishncsa instances for logging per > domain, which I found here: > > https://kevops.com/2015/11/varnish-logging-per-host-with-init-script/ > > > I use varnish version 5 and then there would be no need for > splitlog and the logs would be created directly. > > please correct me if I'm wrong? > > thanks for your time & help > Becki > > > Am 12.11.2016 um 17:05 schrieb Andrei: > > Hello again, > > My apologies for not explaining my thoughts better earlier > then. Afaik, > varnishncsa does not have a native method to split output based on > different parameters. The method I was thinking of was based > on piping > varnishncsa output through splitlogs (or similar) for the log > processing > and writeouts. 
Since replying earlier, I've got this working > on a cPanel > server with piped logging enabled for Apache using the > following two for > example (X-Port is a custom header set in vcl_recv related to SSL > offloading, but you can use a static value or similar custom > header): > > varnishncsa -F "%{HOST}i:%{X-Port}i %h %l %u %t \"%m %U%q %H\" > %s %b > \"%{Referer}i\" \"%{User-agent}i\""|sed -e > 's#^www\.##g'|/usr/local/cpanel/bin/splitlogs --main=`hostname` > --mainout=/usr/local/apache/logs/access_log > varnishncsa -F "%{HOST}i %{%s}t %b ."|sed -e > 's#^www\.##g'|/usr/local/cpanel/bin/splitlogs --main=`hostname` > --suffix=-bytes_log > > The above pipes the requests to the splitlogs binary which > queues then > writes to separate logs per domain, that are later processed > by the > cPanel log stats apps. Either way, I believe you need an > intermediary > script to queue and write the log entries per domain. While > looking into > this process, I ran across this little tidbit which you may > find of use > https://gist.github.com/garlandkr/4954272 > for logstash style > output. > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > From phk at phk.freebsd.dk Mon Nov 14 17:36:02 2016 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 14 Nov 2016 17:36:02 +0000 Subject: A small ode to varnish In-Reply-To: References: Message-ID: <22066.1479144962@critter.freebsd.dk> -------- In message , Alan Dixon w rites: >--===============4499531035030001701== >Content-Type: multipart/alternative; boundary=001a114caa9e46fddb0541461430 > >--001a114caa9e46fddb0541461430 >Content-Type: text/plain; charset=UTF-8 > >I just posted this, a little light reading: >http://homeofficekernel.blogspot.ca/2016/11/varnish-saves-day-in-unexpectedly.html > >Summary: I'd never considered how Varnish could save the day like this. > Neat! 
-- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From niall.murphy at sparwelt.de Tue Nov 15 08:42:29 2016 From: niall.murphy at sparwelt.de (Niall Murphy) Date: Tue, 15 Nov 2016 09:42:29 +0100 Subject: Transient storage killing memory Message-ID: <20161115094229.6cddc4e0@insomnia.sparwelt.de> Hi all, Occasionally I encounter the following crash: Nov 13 19:44:03 vc-varnish-02 kernel: [4029833.862916] apt-get[30000]: segfault at ffffffffffffffff ip 00007f3acce7456a sp 00007fffa93b00a0 error 5 in libapt-pkg.so.4.12.0[7f3acce1d000+168000] Nov 13 19:45:33 vc-varnish-02 varnishd[12731]: Child (12732) died signal=6 Nov 13 19:45:33 vc-varnish-02 varnishd[12731]: Child (12732) Panic at: Sun, 13 Nov 2016 18:45:32 GMT#012"Assert error in mpl_alloc(), cache/cache_mempool.c line 79:#012 Condition((mi) != 0) not true.#012errno = 12 (Cannot allocate memory)#012thread = (cache-worker)#012version = varnish-5.0.0 revision 99d036f#012ident = Linux,3.16.0-4-amd64,x86_64,-junix,-smalloc,-smalloc,-hcritbit,epoll#012Backtrace:#012#012" I've included the apt-get segfault in case it's relevant. SMA.s0.c_bytes 1598929523 11755.71 Bytes allocated SMA.s0.c_freed 1473918175 10836.60 Bytes freed SMA.s0.g_bytes 125011348 . Bytes outstanding SMA.s0.g_space 948730476 . Bytes available SMA.Transient.c_bytes 4731618771 34787.99 Bytes allocated SMA.Transient.c_freed 3838779376 28223.62 Bytes freed SMA.Transient.g_bytes 892839395 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available I'm wondering if someone could put me in the right direction in how to further investigate the content of transient storage, and why it's larger than the regular cache. The parameter itself is the default of 10 seconds, and I set a ttl of 5 minutes. It's Varnish 5.0.0-1 on Debian jessie. 
Never experienced it with 4.1.3, but there were probably conf changes since we upgraded. Regards, -- Niall Murphy From dridi at varni.sh Tue Nov 15 09:54:47 2016 From: dridi at varni.sh (Dridi Boukelmoune) Date: Tue, 15 Nov 2016 10:54:47 +0100 Subject: varnishncsa logs split per domain In-Reply-To: <5aa0e368-0121-8542-70bc-27aeae96e4e9@beckspaced.com> References: <976bff90-7c3e-e9c2-3517-6bc30a937de5@beckspaced.com> <4aea1d32-f9b8-ada4-fb9d-532f605ae5da@beckspaced.com> <5aa0e368-0121-8542-70bc-27aeae96e4e9@beckspaced.com> Message-ID: > even in varnish VCL I can normalize the host > > not sure how to do this in varnishncsa? is there a way to do that? In VCL you can std.log the normalized host, and use a custom pattern in varnishncsa to pick it up. Dridi From admin at beckspaced.com Tue Nov 15 15:31:21 2016 From: admin at beckspaced.com (Admin Beckspaced) Date: Tue, 15 Nov 2016 16:31:21 +0100 Subject: varnishncsa logs split per domain In-Reply-To: References: <976bff90-7c3e-e9c2-3517-6bc30a937de5@beckspaced.com> <4aea1d32-f9b8-ada4-fb9d-532f605ae5da@beckspaced.com> <5aa0e368-0121-8542-70bc-27aeae96e4e9@beckspaced.com> Message-ID: On 15.11.2016 at 10:54, Dridi Boukelmoune wrote: >> even in varnish VCL I can normalize the host >> >> not sure how to do this in varnishncsa? is there a way to do that? > In VCL you can std.log the normalized host, and use a custom pattern > in varnishncsa to pick it up. > > Dridi > > Hello Dridi, thanks a lot for your reply. How would I do that? I already normalize the host in vcl_recv: //normalize the req.http.host set req.http.host = regsub(req.http.Host, "^www\.", ""); but I'm not seeing that normalized host in varnishncsa ...
a bit more hints and info would be nice ;) thanks & greetings becki From guillaume at varnish-software.com Tue Nov 15 15:45:01 2016 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Tue, 15 Nov 2016 16:45:01 +0100 Subject: varnishncsa logs split per domain In-Reply-To: References: <976bff90-7c3e-e9c2-3517-6bc30a937de5@beckspaced.com> <4aea1d32-f9b8-ada4-fb9d-532f605ae5da@beckspaced.com> <5aa0e368-0121-8542-70bc-27aeae96e4e9@beckspaced.com> Message-ID: Probably because varnishncsa sees the ReqHeader before it is normalized. Try putting the normalized host in req.http.x-host, and filtering on that. -- Guillaume Quintard On Tue, Nov 15, 2016 at 4:31 PM, Admin Beckspaced wrote: > > Am 15.11.2016 um 10:54 schrieb Dridi Boukelmoune: > >> even in varnish VCL I can normalize the host >>> >>> not sure how to do this in varnishncsa? is there a way to do that? >>> >> In VCL you can std.log the normalized host, and use a custom pattern >> in varnishncsa to pick it up. >> >> Dridi >> >> >> hello Dridi, > > thanks a lot for your reply. and how would i do that? > > I already normalize the host in vcl_recv > > //normalize the req.http.host > set req.http.host = regsub(req.http.Host, "^www\.", ""); > > but not seeing that normalized host in varnishncsa ... > > a bit more hints and info would be nice ;) > > thanks & greetings > becki > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dridi at varni.sh Tue Nov 15 15:51:10 2016 From: dridi at varni.sh (Dridi Boukelmoune) Date: Tue, 15 Nov 2016 16:51:10 +0100 Subject: varnishncsa logs split per domain In-Reply-To: References: <976bff90-7c3e-e9c2-3517-6bc30a937de5@beckspaced.com> <4aea1d32-f9b8-ada4-fb9d-532f605ae5da@beckspaced.com> <5aa0e368-0121-8542-70bc-27aeae96e4e9@beckspaced.com> Message-ID: > a bit more hints and info would be nice ;) man vmod_std man varnishncsa That's how much "nice" I'm willing to do :p Dridi From admin at beckspaced.com Tue Nov 15 16:57:57 2016 From: admin at beckspaced.com (Admin Beckspaced) Date: Tue, 15 Nov 2016 17:57:57 +0100 Subject: varnishncsa logs split per domain In-Reply-To: References: <976bff90-7c3e-e9c2-3517-6bc30a937de5@beckspaced.com> <4aea1d32-f9b8-ada4-fb9d-532f605ae5da@beckspaced.com> <5aa0e368-0121-8542-70bc-27aeae96e4e9@beckspaced.com> Message-ID: <912a8f40-e02b-d578-585c-e3d9b8fdd0a2@beckspaced.com> On 15.11.2016 at 16:51, Dridi Boukelmoune wrote: >> a bit more hints and info would be nice ;) > man vmod_std > man varnishncsa > > That's how much "nice" I'm willing to do :p > > Dridi > > OK, first I want to say thanks for being nice and pointing me to the man pages. After a bit of reading I finally found the parts I was looking for: import std; sub vcl_recv { std.log("myhost:" + regsub(req.http.Host, "^www\.", "")); } and in varnishncsa: # varnishncsa -F "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-agent}i\" %{VCL_Log:myhost}x" Not yet tested, but I think this is what Dridi was pointing to? OK ... I also do understand that people need to read manuals a.k.a. RTFM ;) But is a mailing list ONLY here to get the finger pointed to MAN and RTFM??? I'm part of quite a few mailing lists, and if I see others having problems I already encountered and solved earlier, I'm always happy to give them the bits and pieces they need. Why just point them to MAN or RTFM if I know that specific info? Isn't that what a mailing list is there for? To help others?
Your thoughts please ;) thanks & greetings becki From stardothosting at gmail.com Tue Nov 15 17:32:02 2016 From: stardothosting at gmail.com (Star Dot) Date: Tue, 15 Nov 2016 12:32:02 -0500 Subject: Dynamically set ttl based on cache-control header Message-ID: We have a Django site that will be determining the cache times by setting the Cache-Control header. I want Varnish to read this header and dynamically set the TTLs for the cache time based on that header. Is there an easy way of doing this, or is this even possible at all? Thanks! -- StackStar Managed Hosting Services : https://www.stackstar.com Shift8 Web Design in Toronto : https://www.shift8web.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Tue Nov 15 17:31:34 2016 From: dridi at varni.sh (Dridi Boukelmoune) Date: Tue, 15 Nov 2016 18:31:34 +0100 Subject: varnishncsa logs split per domain In-Reply-To: <912a8f40-e02b-d578-585c-e3d9b8fdd0a2@beckspaced.com> References: <976bff90-7c3e-e9c2-3517-6bc30a937de5@beckspaced.com> <4aea1d32-f9b8-ada4-fb9d-532f605ae5da@beckspaced.com> <5aa0e368-0121-8542-70bc-27aeae96e4e9@beckspaced.com> <912a8f40-e02b-d578-585c-e3d9b8fdd0a2@beckspaced.com> Message-ID: > ok. first I want to say thanks for being nice and pointing me to the man > pages. Sometimes it's hard to even know that certain man pages exist (I'm looking at vsl and vsl-query in particular), so I tend to do that a lot ;) > not yet tested but I think this is what Dridi was pointing to? Yes, that was the suggestion. > Ok ... I also do understand that people need to read manuals a.k.a. RTFM ;) > > But is a mailing list ONLY here to get the finger pointed to MAN and RTFM??? No, but it saves some time and typing for lazy me. > Isn't that what a mailing list is there for? To help others?
It can be weeks between two peeks at the misc list for me, so sometimes I get extra lazy. I'm also sure you learned more than just how to pick the normalized host by looking at varnishncsa's manual.

I could have been very nasty and not mentioned vmod_std(3) because, after all, I had already hinted at "std.log" in a previous email ;)

I gave you a minimalist solution [1], you didn't know how to do it, so I pointed to the right manuals since you already had the recipe. The little push that should get the ball rolling.

Then if it doesn't work out as expected I can still try helping further. I'm usually reluctant to give a turn-key solution (except in the docs), and that's probably a habit I got from being a trainer. DIY and I will back you up, sort of thing.

Cheers

[1] https://www.varnish-cache.org/lists/pipermail/varnish-misc/2016-November/025402.html

From dridi at varni.sh Tue Nov 15 18:03:34 2016 From: dridi at varni.sh (Dridi Boukelmoune) Date: Tue, 15 Nov 2016 19:03:34 +0100 Subject: Dynamically set ttl based on cache-control header In-Reply-To: References: Message-ID:

On Tue, Nov 15, 2016 at 6:32 PM, Star Dot wrote:
> We have a django site that will be determining the cache times via setting
> the cache-control header. I want varnish to read this header and dynamically
> set the TTLs for the cache time based on that header.
>
> Is there an easy way of doing this or is this even possible at all?

It's the default; you should see it with varnishlog. Varnish being a proxy, s-maxage has precedence over max-age.
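For instance, a backend response carrying "Cache-Control: s-maxage=300, max-age=60" gets a 300-second TTL with no VCL at all. VCL is only needed to override the computed value; a minimal sketch, where the one-hour cap is purely illustrative and not a recommendation:

```vcl
sub vcl_backend_response {
    # beresp.ttl has already been derived from Cache-Control (s-maxage
    # over max-age) or Expires by the time this subroutine runs; adjust
    # it only if the backend's value needs to be constrained.
    if (beresp.ttl > 1h) {
        set beresp.ttl = 1h;
    }
}
```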
Dridi

From admin at beckspaced.com Wed Nov 16 07:17:21 2016 From: admin at beckspaced.com (Admin Beckspaced) Date: Wed, 16 Nov 2016 08:17:21 +0100 Subject: varnishncsa logs split per domain In-Reply-To: References: <976bff90-7c3e-e9c2-3517-6bc30a937de5@beckspaced.com> <4aea1d32-f9b8-ada4-fb9d-532f605ae5da@beckspaced.com> <5aa0e368-0121-8542-70bc-27aeae96e4e9@beckspaced.com> <912a8f40-e02b-d578-585c-e3d9b8fdd0a2@beckspaced.com> Message-ID: <66f19432-97c5-cbbf-f093-bd2e8e694a11@beckspaced.com>

>> Isn't that what a mailing list is there for? To help others?
>>
>> Your thoughts please ;)
> Now that the finger is pointed at me, while I agree that I often point
> to the manual, I'm rarely saying to RTFM. It can be weeks between two
> peeks at the misc list for me, so sometimes I get extra lazy. I'm also
> sure you learned more than just how to pick the normalized host by
> looking at varnishncsa's manual.
>
> I could have been very nasty and not mentioned vmod_std(3) because,
> after all, I had already hinted at "std.log" in a previous email ;)
>
> I gave you a minimalist solution [1], you didn't know how to do it, so I
> pointed to the right manuals since you already had the recipe. The
> little push that should get the ball rolling.
>
> Then if it doesn't work out as expected I can still try helping further.
> I'm usually reluctant to give a turn-key solution (except in the docs),
> and that's probably a habit I got from being a trainer. DIY and I will
> back you up, sort of thing.
>
> Cheers

Thanks, lazy you, glad you gave me the right hints and pointers. Good to know that it's only your extra laziness ;)

Anyway, no offense taken! In general I'm in the same boat with you: DIY instead of turn-key solutions. Sometimes I just wish for a short explanation to get the ball rolling better, instead of a MAN page and then having to ask for further help.

Anyway, thanks again for mentioning std.log, which took me on the right path ...
wish you lots of health & happiness so you can perhaps kick that extra bit of laziness and just be plain lazy, and perhaps add a tiny explanation / example the next time ;)

greetings & all the best
Becki

From ayberk.kimsesiz at gmail.com Wed Nov 16 09:37:08 2016 From: ayberk.kimsesiz at gmail.com (Ayberk Kimsesiz) Date: Wed, 16 Nov 2016 12:37:08 +0300 Subject: 503 Backend fetch failed / (again) Message-ID:

Hi,

We were previously receiving Error 503 in Varnish, and then it was fixed after a fresh reinstall. But we are getting the same error again. I have observed that this appears when Ajax requests increase.

*http://.com/wp-admin/admin-ajax.php Failed to load resource: the server responded with a status of 503 (Backend fetch failed)*

Is there an error in the default.vcl file?

/* SET THE HOST AND PORT OF WORDPRESS * *********************************************************/ vcl 4.0; import std; backend default { .host = "******"; .port = "8080"; } # SET THE ALLOWED IP OF PURGE REQUESTS # ########################################################## acl purge { "localhost"; "127.0.0.1"; "******"; } #THE RECV FUNCTION # ########################################################## sub vcl_recv { if (req.http.Host == "***.com/forum" || req.url ~ "forum") { return (pass); } # set realIP by trimming CloudFlare IP which will be used for various checks set req.http.X-Actual-IP = regsub(req.http.X-Forwarded-For, "[, ].*$", ""); # FORWARD THE IP OF THE REQUEST if (req.restarts == 0) { if (req.http.x-forwarded-for) { set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip; } else { set req.http.X-Forwarded-For = client.ip; } } # Purge request check sections for hash_always_miss, purge and ban # BLOCK IF NOT IP is not in purge acl # ########################################################## # Enable smart refreshing using hash_always_miss if (req.http.Cache-Control ~ "no-cache") { if (client.ip ~ purge || !std.ip(req.http.X-Actual-IP, "1.2.3.4") ~ purge) { set
req.hash_always_miss = true; } } if (req.method == "PURGE") { if (!client.ip ~ purge || !std.ip(req.http.X-Actual-IP, "1.2.3.4") ~ purge) { return(synth(405,"Not allowed.")); } return (purge); } if (req.method == "BAN") { # Same ACL check as above: if (!client.ip ~ purge || !std.ip(req.http.X-Actual-IP, "1.2.3.4") ~ purge) { return(synth(403, "Not allowed.")); } ban("req.http.host == " + req.http.host + " && req.url == " + req.url); # Throw a synthetic page so the # request won't go to the backend. return(synth(200, "Ban added")); } # Unset cloudflare cookies # Remove has_js and CloudFlare/Google Analytics __* cookies. set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(_[_a-z]+|has_js)=[^;]*", ""); # Remove a ";" prefix, if present. set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", ""); # For Testing: If you want to test with Varnish passing (not caching) uncomment # return( pass ); # DO NOT CACHE RSS FEED if (req.url ~ "/feed(/)?") { return ( pass ); } ## Do not cache search results, comment these 3 lines if you do want to cache them if (req.url ~ "/\?s\=") { return ( pass ); } # CLEAN UP THE ENCODING HEADER. # SET TO GZIP, DEFLATE, OR REMOVE ENTIRELY. WITH VARY ACCEPT-ENCODING # VARNISH WILL CREATE SEPARATE CACHES FOR EACH # DO NOT ACCEPT-ENCODING IMAGES, ZIPPED FILES, AUDIO, ETC. 
# ########################################################## if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") { # No point in compressing these unset req.http.Accept-Encoding; } elsif (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } elsif (req.http.Accept-Encoding ~ "deflate") { set req.http.Accept-Encoding = "deflate"; } else { # unknown algorithm unset req.http.Accept-Encoding; } } # PIPE ALL NON-STANDARD REQUESTS # ########################################################## if (req.method != "GET" && req.method != "HEAD" && req.method != "PUT" && req.method != "POST" && req.method != "TRACE" && req.method != "OPTIONS" && req.method != "DELETE") { return (pipe); } # ONLY CACHE GET AND HEAD REQUESTS # ########################################################## if (req.method != "GET" && req.method != "HEAD") { return (pass); } # OPTIONAL: DO NOT CACHE LOGGED IN USERS (THIS OCCURS IN FETCH TOO, EITHER # COMMENT OR UNCOMMENT BOTH # ########################################################## if ( req.http.cookie ~ "wordpress_logged_in" ) { return( pass ); } # IF THE REQUEST IS NOT FOR A PREVIEW, WP-ADMIN OR WP-LOGIN # THEN UNSET THE COOKIES # ########################################################## if (!(req.url ~ "wp-(login|admin)") && !(req.url ~ "&preview=true" ) ){ unset req.http.cookie; } # IF BASIC AUTH IS ON THEN DO NOT CACHE # ########################################################## if (req.http.Authorization || req.http.Cookie) { return (pass); } # IF YOU GET HERE THEN THIS REQUEST SHOULD BE CACHED # ########################################################## return (hash); # This is for phpmyadmin if (req.http.Host == "ki1.org") { return (pass); } if (req.http.Host == "mysql.ki1.org") { return (pass); } } # HIT FUNCTION # ########################################################## sub vcl_hit { # IF THIS IS A PURGE REQUEST THEN DO THE PURGE # 
########################################################## if (req.method == "PURGE") { # # This is now handled in vcl_recv. # # purge; return (synth(200, "Purged.")); } return (deliver); } # MISS FUNCTION # ########################################################## sub vcl_miss { if (req.method == "PURGE") { # # This is now handled in vcl_recv. # # purge; return (synth(200, "Purged.")); } return (fetch); } # FETCH FUNCTION # ########################################################## sub vcl_backend_response { # I SET THE VARY TO ACCEPT-ENCODING, THIS OVERRIDES W3TC # TENDANCY TO SET VARY USER-AGENT. YOU MAY OR MAY NOT WANT # TO DO THIS # ########################################################## set beresp.http.Vary = "Accept-Encoding"; # IF NOT WP-ADMIN THEN UNSET COOKIES AND SET THE AMOUNT OF # TIME THIS PAGE WILL STAY CACHED (TTL) # ########################################################## if (!(bereq.url ~ "wp-(login|admin)|forum") && !bereq.http.cookie ~ "wordpress_logged_in" && !bereq.http.host == "***.com/forum" ) { unset beresp.http.set-cookie; set beresp.ttl = 52w; # set beresp.grace =1w; } if (beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") { set beresp.ttl = 120 s; # set beresp.ttl = 120s; set beresp.uncacheable = true; return (deliver); } return (deliver); } # DELIVER FUNCTION # ########################################################## sub vcl_deliver { # IF THIS PAGE IS ALREADY CACHED THEN RETURN A 'HIT' TEXT # IN THE HEADER (GREAT FOR DEBUGGING) # ########################################################## if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; # IF THIS IS A MISS RETURN THAT IN THE HEADER # ########################################################## } else { set resp.http.X-Cache = "MISS"; } } -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From apj at mutt.dk Wed Nov 16 09:56:52 2016 From: apj at mutt.dk (Andreas Plesner) Date: Wed, 16 Nov 2016 10:56:52 +0100 Subject: 503 Backend fetch failed / (again) In-Reply-To: References: Message-ID: <20161116095652.GB22017@nerd.dk>

On Wed, Nov 16, 2016 at 12:37:08PM +0300, Ayberk Kimsesiz wrote:
>
> We were previously receiving Error 503 in Varnish and then it has fixed
> after the fresh reinstall. But we are getting the same error again. I have
> observed that this appears when Ajax requests increase.

It's hard to troubleshoot anything without a varnishlog.

-- Andreas

From viktor.villafuerte at optusnet.com.au Thu Nov 17 23:54:49 2016 From: viktor.villafuerte at optusnet.com.au (Viktor Villafuerte) Date: Fri, 18 Nov 2016 10:54:49 +1100 Subject: Varnish 4.1.3 (RPM rhel6) not listening on all interfaces Message-ID: <20161117235449.GA17739@optusnet.com.au>

Hi all,

I've been upgrading Varnish to 4.1.3 using an RPM: varnish-4.1.3-1.el6.x86_64

All is good, except that Varnish cannot be configured to listen on all interfaces via the sysconfig file?! I read the docs and they claim that the sysconfig works as before - e.g., :80 will make Varnish listen on port 80 on all interfaces. However, I cannot achieve this! Varnish errors out with

Starting Varnish Cache: Error: Cannot open socket: :80: Address family not supported by protocol

Currently I *must* specify all the IPs I want Varnish to listen on in the sysconfig file. This is particularly annoying when trying to automate the config with a mix of physical and virtual IPs.. I'm assuming I'm doing something wrong; does anybody have any suggestions?
thanks v -- Regards Viktor Villafuerte Optus Internet Engineering t: +61 2 80825265 From dridi at varni.sh Fri Nov 18 09:46:18 2016 From: dridi at varni.sh (Dridi Boukelmoune) Date: Fri, 18 Nov 2016 10:46:18 +0100 Subject: Varnish 4.1.3 (RPM rhel6) not listening on all interfaces In-Reply-To: <20161117235449.GA17739@optusnet.com.au> References: <20161117235449.GA17739@optusnet.com.au> Message-ID: On Fri, Nov 18, 2016 at 12:54 AM, Viktor Villafuerte wrote: > Hi all, > > I've been upgrading Varnish to 4.1.3 using an RPM > varnish-4.1.3-1.el6.x86_64 > > All is good, except of Varnish cannot be configured to listen on all > interfaces via sysconfig file?! I read the docs and they claim that the > sysconfig works as before - EG, :80 will make Varnish listen on port 80 > on all interfaces. However, I cannot achieve this! Varnish errors out > with > > Starting Varnish Cache: Error: Cannot open socket: :80: Address > family not supported by protocol Is your kernel started with IPv6 disabled? I think this is a known issue. Dridi From dridi at varni.sh Fri Nov 18 14:38:08 2016 From: dridi at varni.sh (Dridi Boukelmoune) Date: Fri, 18 Nov 2016 15:38:08 +0100 Subject: Varnish 4.1.3 (RPM rhel6) not listening on all interfaces In-Reply-To: <20161117235449.GA17739@optusnet.com.au> References: <20161117235449.GA17739@optusnet.com.au> Message-ID: > Currently I *must* specify all the IPs I want Varnish to listen to on > in the sysconfig file. If IPv6 is disabled as I suspect, you could start with -a 0.0.0.0:80 to listen to all IPv4 interfaces. Does that work? 
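On the el6 packaging, that amounts to pinning the listen address in the sysconfig file (variable names as shipped in /etc/sysconfig/varnish by the RPM; check your local copy):

```shell
# An empty VARNISH_LISTEN_ADDRESS expands to "-a :80", which asks for
# both address families and fails when the kernel has IPv6 disabled.
# Pinning it to 0.0.0.0 restricts varnishd to all IPv4 interfaces.
VARNISH_LISTEN_ADDRESS=0.0.0.0
VARNISH_LISTEN_PORT=80
```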
Dridi

From viktor.villafuerte at optusnet.com.au Sun Nov 20 23:05:34 2016 From: viktor.villafuerte at optusnet.com.au (Viktor Villafuerte) Date: Mon, 21 Nov 2016 10:05:34 +1100 Subject: Varnish 4.1.3 (RPM rhel6) not listening on all interfaces In-Reply-To: References: <20161117235449.GA17739@optusnet.com.au> Message-ID: <20161120230534.GA7511@optusnet.com.au>

Hi Dridi,

yes, the kernel has IPv6 disabled, and 0.0.0.0 does seem to fix this. :)

thanks

v

On Fri 18 Nov 2016 10:46:18, Dridi Boukelmoune wrote:
> On Fri, Nov 18, 2016 at 12:54 AM, Viktor Villafuerte
> wrote:
> > Hi all,
> >
> > I've been upgrading Varnish to 4.1.3 using an RPM
> > varnish-4.1.3-1.el6.x86_64
> >
> > All is good, except of Varnish cannot be configured to listen on all
> > interfaces via sysconfig file?! I read the docs and they claim that the
> > sysconfig works as before - EG, :80 will make Varnish listen on port 80
> > on all interfaces. However, I cannot achieve this! Varnish errors out
> > with
> >
> > Starting Varnish Cache: Error: Cannot open socket: :80: Address
> > family not supported by protocol
>
> Is your kernel started with IPv6 disabled? I think this is a known issue.
>
> Dridi

-- Regards Viktor Villafuerte Optus Internet Engineering t: +61 2 80825265

From ayberk.kimsesiz at gmail.com Mon Nov 21 09:34:57 2016 From: ayberk.kimsesiz at gmail.com (Ayberk Kimsesiz) Date: Mon, 21 Nov 2016 12:34:57 +0300 Subject: 503 Backend fetch failed / (again) In-Reply-To: <20161116095652.GB22017@nerd.dk> References: <20161116095652.GB22017@nerd.dk> Message-ID:

Hi,

Varnishlog: *FetchError http first read error: EOF*

* << BeReq >> 3244175
- Begin bereq 3244174 pass
- Timestamp Start: 1479720397.535252 0.000000 0.000000
- BereqMethod GET
- BereqURL /forum/js/brivium/CustomNodeStyle/custom_node.js?_v=00bdfbe8
- BereqProtocol HTTP/1.1
- BereqHeader Host: ******.com
- BereqHeader User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1003.1 Safari/535.19 Awesomium/1.7.1
- BereqHeader Accept: */*
- BereqHeader Referer: https://facebook.com/
- BereqHeader Accept-Encoding: gzip,deflate
- BereqHeader Accept-Language: en-us,en
- BereqHeader Accept-Charset: iso-8859-1,*,utf-8
- BereqHeader Cookie: _gat=1; pps_show_100=Tue%20Nov%2022%202016%2012%3A31%3A52%20GMT+0300%20%28Turkey%20Standard%20Time%29; pps_times_showed_100=2; _ga=GA1.2.1950747215.1479806983; xf_session=20756c36c3041eb1b77c10d43b913cd8
- BereqHeader X-Forwarded-For: 176.240.98.52
- BereqHeader X-Varnish: 3244175
- VCL_call BACKEND_FETCH
- VCL_return fetch
- BackendOpen 160 boot.default 176.53.126.10 8080 176.53.126.10 47362
- BackendStart 176.53.126.10 8080
- Timestamp Bereq: 1479720397.535321 0.000070 0.000070
*- FetchError http first read error: EOF*
- BackendClose 160 boot.default
- Timestamp Beresp: 1479720457.535409 60.000158 60.000088
- Timestamp Error: 1479720457.535417 60.000165 0.000008
- BerespProtocol HTTP/1.1
- BerespStatus 503
- BerespReason Service Unavailable
- BerespReason Backend fetch failed
- BerespHeader Date: Mon, 21 Nov 2016 09:27:37 GMT
- BerespHeader Server: Varnish
- VCL_call BACKEND_ERROR
- BerespHeader Content-Type: text/html; charset=utf-8
- BerespHeader Retry-After: 5
- VCL_return deliver
- Storage malloc Transient
- ObjProtocol HTTP/1.1
- ObjStatus 503
- ObjReason Backend fetch failed
- ObjHeader Date: Mon, 21 Nov 2016 09:27:37 GMT
- ObjHeader Server: Varnish
- ObjHeader Content-Type: text/html; charset=utf-8
- ObjHeader Retry-After: 5
- Length 284
- BereqAcct 643 0 643 0 0 0
- End

2016-11-16 12:56 GMT+03:00 Andreas Plesner :
> On Wed, Nov 16, 2016 at 12:37:08PM +0300, Ayberk Kimsesiz wrote:
> >
> > We were previously receiving Error 503 in Varnish and then it has fixed
> > after the fresh reinstall. But we are getting the same error again. I
> > have observed that this appears when Ajax requests increase.
>
> It's hard to troubleshoot anything without a varnishlog.
>
> --
> Andreas
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From geoff at uplex.de Mon Nov 21 10:53:33 2016 From: geoff at uplex.de (Geoff Simmons) Date: Mon, 21 Nov 2016 11:53:33 +0100 Subject: 503 Backend fetch failed / (again) In-Reply-To: References: <20161116095652.GB22017@nerd.dk> Message-ID: <6e0b6765-02bf-ace0-a1ee-ccd64477e6cb@uplex.de>

On 11/21/2016 10:34 AM, Ayberk Kimsesiz wrote:
>
> Varnishlog: *FetchError http first read error: EOF*

That's a first byte timeout.

> *- FetchError http first read error: EOF*
> - BackendClose 160 boot.default
> - Timestamp Beresp: 1479720457.535409 60.000158 60.000088
> - Timestamp Error: 1479720457.535417 60.000165 0.000008

The Timestamp is telling you (2nd and 3rd numbers after Timestamp:Beresp) that your backend "boot.default" didn't send a response for 60 seconds before Varnish gave up, presumably because first_byte_timeout=60s.
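That arithmetic can be sketched in a few lines of plain Python (the three numbers on a Timestamp record are the absolute epoch time, the time since the start of the transaction, and the time since the previous timestamp; variable names here are just for illustration):

```python
# Timestamp record from the log above: event label, absolute epoch time,
# time since the start of the backend request, time since the last event.
line = "Timestamp Beresp: 1479720457.535409 60.000158 60.000088"

label, fields = line.split(": ", 1)
absolute, since_start, since_last = (float(f) for f in fields.split())

# ~60s elapsed between sending the request (Bereq) and giving up on the
# response (Beresp): the backend never sent its first byte in time.
print(f"{since_last:.1f}s waiting on the backend")  # 60.0s waiting on the backend
```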
man vsl(7) tells you how to interpret the Timestamp log entries:

https://www.varnish-cache.org/docs/trunk/reference/vsl.html#timestamps
https://www.varnish-cache.org/docs/trunk/reference/vsl.html#backend-fetch-timestamps

So you need to fix your backend.

HTH,
Geoff
--
** * * UPLEX - Nils Goroll Systemoptimierung

Scheffelstraße 32
22301 Hamburg

Tel +49 40 2880 5731
Mob +49 176 636 90917
Fax +49 40 42949753

http://uplex.de

-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL:

From ayberk.kimsesiz at gmail.com Mon Nov 21 11:05:54 2016 From: ayberk.kimsesiz at gmail.com (Ayberk Kimsesiz) Date: Mon, 21 Nov 2016 14:05:54 +0300 Subject: 503 Backend fetch failed / (again) In-Reply-To: <6e0b6765-02bf-ace0-a1ee-ccd64477e6cb@uplex.de> References: <20161116095652.GB22017@nerd.dk> <6e0b6765-02bf-ace0-a1ee-ccd64477e6cb@uplex.de> Message-ID:

Hi Geoff,

Thank you for your suggestion. My current settings:

backend default {
    .host = "IP";
    .port = "8080";
    .connect_timeout = 60s;
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 60s;
    .max_connections = 800;
}

The log screen: http://i.imgur.com/RixRgM3.png

And the *http first read error: EOF* continues.

Thanks.

2016-11-21 13:53 GMT+03:00 Geoff Simmons :
> On 11/21/2016 10:34 AM, Ayberk Kimsesiz wrote:
> >
> > Varnishlog: *FetchError http first read error: EOF*
>
> That's a first byte timeout.
>
> > *- FetchError http first read error: EOF*
> > - BackendClose 160 boot.default
> > - Timestamp Beresp: 1479720457.535409 60.000158 60.000088
> > - Timestamp Error: 1479720457.535417 60.000165 0.000008
>
> The Timestamp is telling you (2nd and 3rd numbers after
> Timestamp:Beresp) that your backend "boot.default" didn't send a
> response for 60 seconds before Varnish gave up, presumably because
> first_byte_timeout=60s.
> > man vsl(7) tells you how to interpret the Timestamp log entries:
> >
> > https://www.varnish-cache.org/docs/trunk/reference/vsl.html#timestamps
> > https://www.varnish-cache.org/docs/trunk/reference/vsl.html#backend-fetch-timestamps
> >
> > So you need to fix your backend.
> >
> > HTH,
> > Geoff
> > --
> > ** * * UPLEX - Nils Goroll Systemoptimierung
> >
> > Scheffelstraße 32
> > 22301 Hamburg
> >
> > Tel +49 40 2880 5731
> > Mob +49 176 636 90917
> > Fax +49 40 42949753
> >
> > http://uplex.de

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ad.png Type: image/png Size: 69204 bytes Desc: not available URL:

From ayberk.kimsesiz at gmail.com Mon Nov 21 16:25:56 2016 From: ayberk.kimsesiz at gmail.com (Ayberk Kimsesiz) Date: Mon, 21 Nov 2016 19:25:56 +0300 Subject: FetchError :http first read error: EOF Message-ID:

Hi,

I have been trying to solve this problem for a long time. I reset the default.vcl settings, but it did not work. What is your suggestion?
* << BeReq >> 1277958
- Begin bereq 1277957 fetch
- Timestamp Start: 1479744699.658494 0.000000 0.000000
- BereqMethod GET
- BereqURL /
- BereqProtocol HTTP/1.1
- BereqHeader Host: www.*****.com
- BereqHeader User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36
- BereqHeader Accept: */*
- BereqHeader Accept-Language: tr-TR,tr;q=0.8,en-US;q=0.6,en;q=0.4
- BereqHeader X-Actual-IP: 50.60.135.57
- BereqHeader X-Forwarded-For: 50.60.135.57, 50.60.135.57
- BereqHeader Accept-Encoding: gzip
- BereqHeader X-Varnish: 1277958
- VCL_call BACKEND_FETCH
- VCL_return fetch
- BackendOpen 132 boot.default 176.53.126.10 8080 176.53.126.10 51776
- BackendStart 176.53.126.10 8080
- Timestamp Bereq: 1479744702.659311 3.000817 3.000817
- FetchError http first read error: EOF
- BackendClose 132 boot.default
- Timestamp Beresp: 1479744811.501441 111.842947 108.842130
- Timestamp Error: 1479744811.501446 111.842952 0.000005
- BerespProtocol HTTP/1.1
- BerespStatus 503
- BerespReason Service Unavailable
- BerespReason Backend fetch failed
- BerespHeader Date: Mon, 21 Nov 2016 16:13:31 GMT
- BerespHeader Server: Varnish
- VCL_call BACKEND_ERROR
- BerespHeader Content-Type: text/html; charset=utf-8
- BerespHeader Retry-After: 5
- VCL_return deliver
- Storage malloc s0
- ObjProtocol HTTP/1.1
- ObjStatus 503
- ObjReason Backend fetch failed
- ObjHeader Date: Mon, 21 Nov 2016 16:13:31 GMT
- ObjHeader Server: Varnish
- ObjHeader Content-Type: text/html; charset=utf-8
- ObjHeader Retry-After: 5
- Length 284
- BereqAcct 356 0 356 0 0 0
- End

-------------- next part -------------- An HTML attachment was scrubbed...
URL: 

From geoff at uplex.de Mon Nov 21 16:53:46 2016 From: geoff at uplex.de (Geoff Simmons) Date: Mon, 21 Nov 2016 17:53:46 +0100 Subject: FetchError :http first read error: EOF In-Reply-To: References: Message-ID: <1d179404-5f28-f1ef-4cd9-fe18c00e3a0a@uplex.de>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 11/21/2016 05:25 PM, Ayberk Kimsesiz wrote:
>
> I'm trying to solve this problem for a long time. I reset the
> default.vcl settings, but it did not work. What is your
> suggestion?
>
> * << BeReq >> 1277958
> - FetchError http first read error: EOF
> - BackendClose 132 boot.default
> - Timestamp Beresp: 1479744811.501441 111.842947 108.842130

You're misunderstanding the problem; it has nothing to do with Varnish. And there's nothing Varnish can do about it, no matter what you do with the settings.

Your backend is not responding. In this case, it took over 108 seconds. Varnish has to time out and give up, eventually.

Your backend named "boot.default", presumably an Apache or nginx or Tomcat or something like that, is the cause of the problem here. *That* is what you have to fix, not Varnish. Find out why it's taking so long, and fix that; or, as the case may be, yell at the application developers until they get it fixed.
HTH,
Geoff
- --
** * * UPLEX - Nils Goroll Systemoptimierung

Scheffelstraße 32
22301 Hamburg

Tel +49 40 2880 5731
Mob +49 176 636 90917
Fax +49 40 42949753

http://uplex.de

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQIcBAEBCAAGBQJYMyaaAAoJEOUwvh9pJNURGykQAKYVg0FEK0ZbPv2AlUBuD+Yc
W4+D637le8ubL5L7QXjH1B54WBzwWDH/IYSMZii46OnU+pSn06hhKLDFUJzPEzEH
FAY9mPa951D1Fh5GVW8QhxYyA1tL3f3NmJw/qkOOmKoy1pr452x/xwCP+nFHvQfw
DoQ+v29Qu+VOJD4Kcsi80AphODgLIkgvebs6LWF8XK6iu4kcxFVQFJnSGsLU+ZPp
6N9VtdmLx5pvULXBb7R4IALe4NH2m7ft6fLKYMgAHRqvT3hp2asnFCdypjZlxD68
q0TULy2wNf3cM2bbD4OFl2L/DHrRk+QYrJQm8crNwdbXNQSavRCMy17n84cVXDr3
KJnWyPTUQEegs7DCmbK8/nPNVY/qth1ErPDk9zuqxykdD9qzTqzaBcArIJfui1L9
erD969I8rJl1ixMyKVXMhxt6WMDjGBZmMy6T2Q4oLT3Ck5HPiHmicgzJMrRSXrv1
OXPm/OFpV51bfY8FhQljBe7kgd8TMXbT2r/mfAJRRRytXumeQNuHsYDDgmR09m+y
8ngfe3MCMYtYYn57y7IeD+oM+kxwZrz9+z4em1xHRuHNYkDRkC0esWjodOvGyQov
Ts6k3yzRhCaRtq2rBXThbT9JZDx3ZqztYcpzwNweKo9z1FIKUaHcsw9cHpuRMHeb
nRVsUMIWq5uiMjdVvbda
=bkTG
-----END PGP SIGNATURE-----

From ayberk.kimsesiz at gmail.com Mon Nov 21 17:21:50 2016 From: ayberk.kimsesiz at gmail.com (Ayberk Kimsesiz) Date: Mon, 21 Nov 2016 20:21:50 +0300 Subject: FetchError :http first read error: EOF In-Reply-To: <1d179404-5f28-f1ef-4cd9-fe18c00e3a0a@uplex.de> References: <1d179404-5f28-f1ef-4cd9-fe18c00e3a0a@uplex.de> Message-ID:

I see very few *Error 503 Backend fetch failed* errors during the day. But when I follow the logs, there are hundreds of errors. Where exactly do I need to look at Apache?

2016-11-21 19:53 GMT+03:00 Geoff Simmons :
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> On 11/21/2016 05:25 PM, Ayberk Kimsesiz wrote:
> >
> > I'm trying to solve this problem for a long time. I reset the
> > default.vcl settings, but it did not work. What is your
> > suggestion?
> >
> > * << BeReq >> 1277958
> > - FetchError http first read error: EOF
> > - BackendClose 132 boot.default
> > - Timestamp Beresp: 1479744811.501441 111.842947 108.842130
>
> You're misunderstanding the problem; it has nothing to do with Varnish. And
> there's nothing Varnish can do about it, no matter what you do with
> the settings.
>
> Your backend is not responding. In this case, it took over 108
> seconds. Varnish has to time out and give up, eventually.
>
> Your backend named "boot.default", presumably an Apache or nginx or
> Tomcat or something like that, is the cause of the problem here.
> *That* is what you have to fix, not Varnish. Find out why it's taking
> so long, and fix that; or, as the case may be, yell at the application
> developers until they get it fixed.
>
> HTH,
> Geoff
> - --
> ** * * UPLEX - Nils Goroll Systemoptimierung
>
> Scheffelstraße 32
> 22301 Hamburg
>
> Tel +49 40 2880 5731
> Mob +49 176 636 90917
> Fax +49 40 42949753
>
> http://uplex.de

-------------- next part -------------- An HTML attachment was scrubbed...
URL: 

From mark.staudinger at nyi.net Mon Nov 21 20:09:08 2016 From: mark.staudinger at nyi.net (Mark Staudinger) Date: Mon, 21 Nov 2016 15:09:08 -0500 Subject: Transient storage killing memory In-Reply-To: <20161115094229.6cddc4e0@insomnia.sparwelt.de> References: <20161115094229.6cddc4e0@insomnia.sparwelt.de> Message-ID:

On Tue, 15 Nov 2016 03:42:29 -0500, Niall Murphy wrote:

> I'm wondering if someone could put me in the right direction in how
> to further investigate the content of transient storage, and why it's
> larger than the regular cache.

Hi Niall,

This is longstanding Varnish behavior: if one is not specified, a Transient storage pool is created as type "malloc" with no limit.

https://www.varnish-cache.org/docs/5.0/users-guide/storage-backends.html#transient-storage

The short-term solution to put a limit on this is to create a storage pool named "Transient" when starting varnishd. For example, this will create a 128M pool - add your existing "-s pool=settings" along with this one on the command line or startup script.

"-s Transient=malloc,128m"

It might also be useful to capture some varnishlog output and determine what objects are being stored in the Transient pool, and whether or not your "shortlived" parameter or default grace value needs to be adjusted.

Even if you determine you need to do some things differently here to prevent the Transient pool from growing beyond your ideal limit, IMO it's a good idea to keep this limited anyway.

Cheers,
-=Mark

From apj at mutt.dk Tue Nov 22 06:14:54 2016 From: apj at mutt.dk (Andreas Plesner) Date: Tue, 22 Nov 2016 07:14:54 +0100 Subject: FetchError :http first read error: EOF In-Reply-To: References: <1d179404-5f28-f1ef-4cd9-fe18c00e3a0a@uplex.de> Message-ID: <20161122061454.GD22017@nerd.dk>

On Mon, Nov 21, 2016 at 08:21:50PM +0300, Ayberk Kimsesiz wrote:

> Where exactly do I need to look at Apache?

We can't tell you where. We can only tell you that it is taking it a very long time to respond.
--
Andreas

From ayberk.kimsesiz at gmail.com Tue Nov 22 06:47:29 2016 From: ayberk.kimsesiz at gmail.com (Ayberk Kimsesiz) Date: Tue, 22 Nov 2016 09:47:29 +0300 Subject: FetchError :http first read error: EOF In-Reply-To: <20161122061454.GD22017@nerd.dk> References: <1d179404-5f28-f1ef-4cd9-fe18c00e3a0a@uplex.de> <20161122061454.GD22017@nerd.dk> Message-ID:

This problem started in October. According to the reports in July, everything was in order. From July to October, we only set up tools like Memcached, Redis, APC and then removed them. Could it be related to them?

2016-11-22 9:14 GMT+03:00 Andreas Plesner :
> On Mon, Nov 21, 2016 at 08:21:50PM +0300, Ayberk Kimsesiz wrote:
>
> > Where exactly do I need to look at Apache?
>
> We can't tell you where. We can only tell you that it is taking it a very
> long time to respond.
>
> --
> Andreas
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From lagged at gmail.com Tue Nov 22 07:56:10 2016 From: lagged at gmail.com (Andrei) Date: Tue, 22 Nov 2016 01:56:10 -0600 Subject: FetchError :http first read error: EOF In-Reply-To: References: <1d179404-5f28-f1ef-4cd9-fe18c00e3a0a@uplex.de> <20161122061454.GD22017@nerd.dk> Message-ID:

These are concerns which you will want to take up with your sysadmin/developer. We can only speculate on your Varnish issues, not your entire stack.

On Tue, Nov 22, 2016 at 12:47 AM, Ayberk Kimsesiz wrote:
> This problem started in October. According to the reports in July,
> everything was in order. From July to October, we only set up tools like
> Memcached, Redis, APC and then removed them. Could it be related to them?
>
> 2016-11-22 9:14 GMT+03:00 Andreas Plesner :
>> On Mon, Nov 21, 2016 at 08:21:50PM +0300, Ayberk Kimsesiz wrote:
>>
>> > Where exactly do I need to look at Apache?
>> >> We can't tell you where. We can only tell you that it is taking it a very >> long >> time to respond. >> >> -- >> Andreas >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.staudinger at nyi.net Tue Nov 22 16:00:15 2016 From: mark.staudinger at nyi.net (Mark Staudinger) Date: Tue, 22 Nov 2016 11:00:15 -0500 Subject: How to use varnish to pass data when the backend is ok && return the last normal beresq when the backend is down? In-Reply-To: References: Message-ID: On Mon, 07 Nov 2016 00:20:12 -0500, JackDrogon wrote: > Hi, All: > I need varnish to return data directly and update cache at every time > when the backend is ok. I also need varnish to return the last normal > beresq data from >the cache when the backend is down. How shuold I do > with varnish? > >> Thanks. Hi Jack, There are a couple of ways to approach this (as there almost always are with a tool as flexible as Varnish). If traffic is generally light, you could get away with something like this: in vcl_backend_response, set beresp.ttl = 1ms; beresp.keep = 0s; beresp.grace = 1h; # or as long as you would want to serve from cache during an outage The very short TTL should translate into no hits as long as you don't get a lot of requests within a 1ms window. But this might not be exactly the behavior you want. If you have a heavily loaded scenario where you would get multiple hits in this period, then you could simply put some logic into vcl_hit that looks at the backend health, and forces a cache miss if the backend is healthy, and otherwise honors the TTL/grace periods. 
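Written out as a complete subroutine, a minimal sketch of that first short-TTL approach (the 1h grace value is illustrative, not a recommendation):

```vcl
sub vcl_backend_response {
    # Near-zero TTL: almost every request triggers a backend refresh
    # while the backend is up.
    set beresp.ttl = 1ms;
    # No keep: don't retain expired objects for conditional requests.
    set beresp.keep = 0s;
    # Long grace: serve the last good copy if the backend goes down.
    set beresp.grace = 1h;
}
```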
In this scenario it wouldn't matter as much what the TTL value is, but beresp.keep should stay set to 0s.

in vcl_recv:

  if ( std.healthy(req.backend_hint) ) {
      set req.http.allow-caching = "no";
  } else {
      set req.http.allow-caching = "yes";
  }

in vcl_hit:

  if ( req.http.allow-caching == "no" ) {
      return (fetch);
  }
  //... otherwise, carry on with the rest of the normal vcl_hit comparisons of ttl, grace, and keep...

Of course these examples don't take into account the behavior of what to do with set-cookie and other headers on the response, but for simplicity I'm assuming here that it's ok to cache the responses as-is. You could always strip headers out during vcl_deliver on the basis of the req.http.allow-caching header, or more generically on the obj.hits counter, if necessary.

Best, Mark

-------------- next part -------------- An HTML attachment was scrubbed... URL: From vlad.rusu at lola.tech Tue Nov 22 19:57:41 2016 From: vlad.rusu at lola.tech (Vlad Rusu) Date: Tue, 22 Nov 2016 21:57:41 +0200 Subject: Varnish 4.1 - manager and cacher processes owned by "varnish" user Message-ID: <08D05DD1-3657-41F1-A070-9FC8B0B8468E@lola.tech>

Hi everyone, I noticed the user owning both varnishd processes (parent + child) is now "varnish" (or whatever user we specify in the config). I was previously using Varnish 3 in RHEL 6 and the parent process was owned by root, as the book also describes. Looking at the Varnish 4.0 book (can't find a 4.1 one), it still says that's how it should be -> http://book.varnish-software.com/4.0/chapters/Tuning.html#the-parent-process-the-manager Before I start testing diff Varnish versions on different OS versions, can you tell me if this is expected? Is it safe.. ?
======= OS: Centos 7.2 Varnish: 4.1.3 from the Varnish repo [root at xxx varnish]# cat /etc/redhat-release CentOS Linux release 7.2.1511 (Core) [root at xxx varnish]# rpm -qi varnish Name : varnish Version : 4.1.3 Release : 1.el7 Architecture: x86_64 Install Date: Tue 22 Nov 2016 07:16:30 PM UTC Group : System Environment/Daemons Size : 1131779 License : BSD Signature : RSA/SHA1, Wed 06 Jul 2016 12:39:52 PM UTC, Key ID 60e7c096c4deffeb Source RPM : varnish-4.1.3-1.el7.src.rpm Build Date : Wed 06 Jul 2016 12:30:55 PM UTC Build Host : centos7.varnish-software.com Relocations : (not relocatable) URL : https://www.varnish-cache.org/ Summary : High-performance HTTP accelerator [root at xxx varnish]# ps auxf | grep varnish varnish 14899 0.0 0.0 133080 1292 ? Ss 19:32 0:00 /usr/sbin/varnishd -P /var/run/varnish.pid -f /etc/varnish/default.vcl -a :6081 -T 127.0.0.1:6082 -S /etc/varnish/secret -s malloc,256M varnish 14901 0.0 4.5 314788 85248 ? Sl 19:32 0:00 \_ /usr/sbin/varnishd -P /var/run/varnish.pid -f /etc/varnish/default.vcl -a :6081 -T 127.0.0.1:6082 -S /etc/varnish/secret -s malloc,256M ======= Thanks! -- Vlad Rusu skypeid: rusu.h.vlad | cell: +40758066019 Lola Tech | lola.tech -------------- next part -------------- An HTML attachment was scrubbed... URL: From vedarad at gmail.com Wed Nov 23 14:59:51 2016 From: vedarad at gmail.com (Sreenath Kodedala) Date: Wed, 23 Nov 2016 09:59:51 -0500 Subject: varnish 4 to cache from multiple servers with different content Message-ID: http://stackoverflow.com/questions/40751646/varnish-4-to-cache-from-multiple-servers-with-different-content Using varnish 4 to cache different content of same request from multiple servers. It looks like it caches the first request from one server and keeps giving the same content for every subsequent request. doing curl gives response with two caches and different age. Are there any factors like load or anything else for stickiness behaviour? 
Used Jmeter and apache benchmark with load but still got the same behaviour. Is my vcl_hash good? Want to save the object with hash combination of url and ip of backend server. am I missing anything? using round robin and hash_data. below is my config.vcl

  backend s1 {
      .host = "190.120.90.1";
  }

  backend s2 {
      .host = "190.120.90.2";
  }

  sub vcl_init {
      new vms = directors.round_robin();
      vms.add_backend(s1);
      vms.add_backend(s2);
  }

  sub vcl_recv {
      set req.backend_hint = vms.backend();
  }

  sub vcl_hash {
      hash_data(req.url);
      if (req.http.host) {
          hash_data(req.http.host);
      } else {
          hash_data(server.ip);
      }
      return (lookup);
  }

Thanks, Sreenath

-------------- next part -------------- An HTML attachment was scrubbed... URL: From t.cramer at beslist.nl Fri Nov 25 09:12:42 2016 From: t.cramer at beslist.nl (Thijs Cramer) Date: Fri, 25 Nov 2016 10:12:42 +0100 Subject: Varnish Caching based on Cookie Message-ID: <1480065162.3711.2.camel@beslist.nl>

Hey Guys, Simple question: Will varnish' cache increase in size when we add a specific cookie to the hash? Or will varnish see that some pages overlap in content and (eventually) evict duplicate pages from the page cache? I'm asking because we segment users into 20 segments by using a cookie, but we don't want to increase our page cache 20-fold. Thanks in advance! - Thijs

From thomas.lecomte at virtual-expo.com Fri Nov 25 09:26:15 2016 From: thomas.lecomte at virtual-expo.com (Thomas Lecomte) Date: Fri, 25 Nov 2016 10:26:15 +0100 Subject: Varnish Caching based on Cookie In-Reply-To: <1480065162.3711.2.camel@beslist.nl> References: <1480065162.3711.2.camel@beslist.nl> Message-ID: On Fri, Nov 25, 2016 at 10:12 AM, Thijs Cramer wrote: > I'm asking because we segment users into 20 segments by using a cookie, > but we don't want to increase our page cache 20-fold. Hello, If you add a cookie to the hash, and this cookie can have 20 different values, then you could end up with up to 20 different objects in cache.
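For illustration, a sketch of adding only that one cookie's value to the hash, rather than the whole Cookie header (the cookie name "segment" is an assumption, not from the thread; the built-in vcl_hash still runs afterwards and appends req.url and the Host header):

```vcl
sub vcl_hash {
    # Hash only the value of the hypothetical "segment" cookie so
    # unrelated cookies don't fragment the cache further.
    if (req.http.Cookie ~ "(^|;\s*)segment=") {
        hash_data(regsub(req.http.Cookie,
            ".*(?:^|;\s*)segment=([^;]*).*", "\1"));
    }
}
```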
Varnish doesn't look into the content of the object and won't "de-duplicate" objects or their content. Also, beware of adding user data (cookie value, set by the client's browser) into objects' hashes, as it could enable a client to saturate the cache by filling it until exhaustion.

-- Thomas Lecomte / Sysadmin +33 4 86 13 48 65 / Virtual Expo, Marseille

From guillaume at varnish-software.com Fri Nov 25 16:22:14 2016 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Fri, 25 Nov 2016 17:22:14 +0100 Subject: varnish 4 to cache from multiple servers with different content In-Reply-To: References: Message-ID:

Hi, There's no stickiness involved with the round-robin director. But from what you are describing, Varnish is doing what you ask it to do: it caches from one backend and keeps serving the cached content (i.e. it doesn't need to go to the backends anymore). If you wish to have a non-caching loadbalancer, then add "return (pass);" at the end of vcl_recv and you'll be ok. Otherwise, what behavior do you wish to obtain?

-- Guillaume Quintard

On Wed, Nov 23, 2016 at 3:59 PM, Sreenath Kodedala wrote: > http://stackoverflow.com/questions/40751646/varnish-4- > to-cache-from-multiple-servers-with-different-content > > > Using varnish 4 to cache different content of same request from multiple > servers. It looks like it caches the first request from one server and > keeps giving the same content for every subsequent request. > > doing curl gives response with two caches and different age. > > Are there any factors like load or anything else for stickiness behaviour? > Used Jmeter and apache benchmark with load but still got the same behaviour. > > Is my vcl_hash good? Want to save the object with hash combination of > url and ip of backend server. > > am I missing anything? > > using round robin and hash_data.
below is my config.vcl > > backend s1{ > .host = "190.120.90.1"; > } > > backend s2{ > .host = "190.120.90.2"; > } > > sub vcl_init { > new vms = directors.round_robin(); > vms.add_backend(s1); > vms.add_backend(s2); > } > > sub vcl_recv { > set req.backend_hint = vms.backend(); > } > > sub vcl_hash { > hash_data(req.url); > if (req.http.host) { > hash_data(req.http.host); > } else { > hash_data(server.ip); > } > return(lookup); > } > > > Thanks, > Sreenath > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >

-------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Fri Nov 25 17:20:43 2016 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Fri, 25 Nov 2016 18:20:43 +0100 Subject: varnish 4 to cache from multiple servers with different content In-Reply-To: References: Message-ID:

Please keep the mailing list in the loop, this is an open conversation. OK, so your vcl is wrong and the round-robin is not what you want. Get rid of your vcl_init (wrong in your case) and vcl_hash (this is the default one, no need for it, at least for now), and use something like:

  sub vcl_recv {
      if (req.http.host == "foo.com") {
          set req.backend_hint = s1;
      } else {
          set req.backend_hint = s2;
      }
  }

On Nov 25, 2016 17:41, "Sreenath Kodedala" wrote: I might be asking something different. My two backends will return two different contents and should be able to cache both the objects depending on backend hostname or ip address. And should return the appropriate cached object depending on what backend will be used while load-balancing. Not sure if I was clear. -- Sreenath On Fri, Nov 25, 2016 at 11:22 AM, Guillaume Quintard < guillaume at varnish-software.com> wrote: > Hi, > > There's no stickiness involved with the round-robin director.
But from > what you are describing, Varnish is doing what you ask it to do: it caches > from one backend and keep serving the cached content (ie. it doesn't need > to go to the backends anymore). > > If you wish to have a non-caching loadbalancer, then then "return (pass);" > at the end of vcl_recv and you'll be ok. > > Otherwise, what behavior do you wish to obtain? > > -- > Guillaume Quintard > > On Wed, Nov 23, 2016 at 3:59 PM, Sreenath Kodedala > wrote: > >> http://stackoverflow.com/questions/40751646/varnish-4-to-cac >> he-from-multiple-servers-with-different-content >> >> >> Using varnish 4 to cache different content of same request from multiple >> servers. It looks like it caches the first request from one server and >> keeps giving the same content for every subsequent request. >> >> doing curl gives response with two caches and different age. >> >> Are there any factors like load or anything else for stickiness >> behaviour? Used Jmeter and apache benchmark with load but still got the >> same behaviour. >> >> Is my vcl_hash is good? Want to save the object with hash combination of >> url and ip of backend server. >> >> am I missing anything? >> >> using round robin and hash_data. below is my config.vcl >> >> backend s1{ >> .host = "190.120.90.1"; >> } >> >> backend s2{ >> .host = "190.120.90.2"; >> } >> >> sub vcl_init { >> new vms = directors.round_robin(); >> vms.add_backend(s1); >> vms.add_backend(s2); >> } >> >> sub vcl_recv { >> set req.backend_hint = vms.backend(); >> } >> >> sub vcl_hash { >> hash_data(req.url); >> if (req.http.host) { >> hash_data(req.http.host); >> } else { >> hash_data(server.ip); >> } >> return(lookup); >> } >> >> >> Thanks, >> Sreenath >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lagged at gmail.com Sat Nov 26 14:22:14 2016 From: lagged at gmail.com (Andrei) Date: Sat, 26 Nov 2016 08:22:14 -0600 Subject: Geolocation vmods Message-ID:

Hello all, I was wondering if there were any preferences among the community on which vmod (from https://varnish-cache.org/vmods) to use for geolocation, and why. My main concerns are of course speed, resources and accuracy. The ones I'm looking over from the vmods page are: ip2location - https://github.com/thlc/libvmod-ip2location geoip2 - https://github.com/fgsch/libvmod-geoip2 maxminddb - https://github.com/simonvik/libvmod_maxminddb -- Andrei

-------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Sun Nov 27 20:55:59 2016 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Sun, 27 Nov 2016 21:55:59 +0100 Subject: Geolocation vmods In-Reply-To: References: Message-ID:

I'd say: look at the APIs and pick based on what's the most convenient for you, and also, check the underlying libraries and supported databases. The vmods don't do much but rather leverage libmaxminddb and libgeoip, so they are the real point of focus here in terms of speed and resources.

-- Guillaume Quintard

On Sat, Nov 26, 2016 at 3:22 PM, Andrei wrote: > Hello all, > > I was wondering if there were any preferences among the community on which > vmod (from https://varnish-cache.org/vmods) to use for geolocation, and > why. My main concerns are of course speed, resources and accuracy.
The ones > I'm looking over from the vmods page are: > > ip2location - https://github.com/thlc/libvmod-ip2location > geoip2 - https://github.com/fgsch/libvmod-geoip2 > maxminddb - https://github.com/simonvik/libvmod_maxminddb > > -- Andrei > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >

-------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.lecomte at virtual-expo.com Mon Nov 28 07:16:58 2016 From: thomas.lecomte at virtual-expo.com (Thomas Lecomte) Date: Mon, 28 Nov 2016 08:16:58 +0100 Subject: Geolocation vmods In-Reply-To: References: Message-ID: On Sat, Nov 26, 2016 at 3:22 PM, Andrei wrote: > Hello all, > > I was wondering if there were any preferences among the community on which > vmod (from https://varnish-cache.org/vmods) to use for geolocation, and why. > My main concerns are of course speed, resources and accuracy. The ones I'm > looking over from the vmods page are: > > ip2location - https://github.com/thlc/libvmod-ip2location

Hello Andrei, I will talk only about ip2location since I'm the developer behind it. As it only implements some bindings to the C library provided by ip2location, the overhead is pretty minimal.

The accuracy depends on the freshness of your IP2location database. Its UNIX mtime is checked at each call and the database is reloaded on the fly if the file has been updated. Our varnish servers don't handle a lot of requests per second so I can't say whether it is able to cope with more than 1000 req/s without increasing the CPU load.

I only implemented the bindings we needed, that's why I don't provide full bindings to the whole library yet. I can add some if you need them.
Thanks, -- Thomas Lecomte | Sysadmin @ Virtual Expo From lagged at gmail.com Mon Nov 28 07:34:38 2016 From: lagged at gmail.com (Andrei) Date: Mon, 28 Nov 2016 09:34:38 +0200 Subject: Geolocation vmods In-Reply-To: References: Message-ID: Thank you both for the awesome input. Running stat on a file each time a request comes in is something I would like to avoid. Would it be possible to go by the varnishd uptime instead, and trigger database checks every N seconds instead? We're looking to push around 5k req/s on average which is why I'm trying to avoid the added syscalls. On Mon, Nov 28, 2016 at 9:16 AM, Thomas Lecomte < thomas.lecomte at virtual-expo.com> wrote: > On Sat, Nov 26, 2016 at 3:22 PM, Andrei wrote: > > Hello all, > > > > I was wondering if there were any preferences among the community on > which > > vmod (from https://varnish-cache.org/vmods) to use for geolocation, and > why. > > My main concerns are of course speed, resources and accuracy. The ones > I'm > > looking over from the vmods page are: > > > > ip2location - https://github.com/thlc/libvmod-ip2location > > Hello Andrei, > > I will talk only about ip2location since I'm the developer behind it. > As it only implements some bindings to the C library provided by > ip2location, the overhead is pretty minimal. > > The accuracy depends on the freshness of you IP2location database. Its > UNIX mtime is checked at each call and the database is reloaded on the > fly if the file has been updated. Our varnish servers don't handle a > lot of requests per second so I can't say wether it is able to cope > with more than 1000 req/s without increasing the CPU load. > > I only implemented the bindings we needed, that's why I don't provide > full bindings to the whole library yet. I can add some if you need > them. > > Thanks, > > -- > Thomas Lecomte | Sysadmin @ Virtual Expo > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thomas.lecomte at virtual-expo.com Mon Nov 28 07:41:48 2016 From: thomas.lecomte at virtual-expo.com (Thomas Lecomte) Date: Mon, 28 Nov 2016 08:41:48 +0100 Subject: Geolocation vmods In-Reply-To: References: Message-ID: On Mon, Nov 28, 2016 at 8:34 AM, Andrei wrote: > Thank you both for the awesome input. Running stat on a file each time a > request comes in is something I would like to avoid. Would it be possible to > go by the varnishd uptime instead, and trigger database checks every N > seconds instead? We're looking to push around 5k req/s on average which is > why I'm trying to avoid the added syscalls.

Indeed it would make more sense. If you are interested in ip2location and plan to use this vmod, I can change its behavior as you suggested, it should be pretty straightforward.

-- Thomas Lecomte | Sysadmin @ Virtual Expo

From lagged at gmail.com Mon Nov 28 07:53:07 2016 From: lagged at gmail.com (Andrei) Date: Mon, 28 Nov 2016 09:53:07 +0200 Subject: Geolocation vmods In-Reply-To: References: Message-ID:

This change would definitely help! From what I'm seeing both ip2loc and maxmind are just as accurate when it comes to country tags, which is what we're aiming for.

On Mon, Nov 28, 2016 at 9:41 AM, Thomas Lecomte < thomas.lecomte at virtual-expo.com> wrote: > On Mon, Nov 28, 2016 at 8:34 AM, Andrei wrote: > > Thank you both for the awesome input. Running stat on a file each time a > > request comes in is something I would like to avoid. Would it be > possible to > > go by the varnishd uptime instead, and trigger database checks every N > > seconds instead? We're looking to push around 5k req/s on average which > is > > why I'm trying to avoid the added syscalls. > > Indeed it would make more sense. If you are interested in ip2location > and plan to use this vmod, I can change its behavior as you > suggested, it should be pretty straightforward.
> > -- > Thomas Lecomte | Sysadmin @ Virtual Expo > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brettgfitzgerald at gmail.com Mon Nov 28 12:41:20 2016 From: brettgfitzgerald at gmail.com (Brett Fitzgerald) Date: Mon, 28 Nov 2016 12:41:20 +0000 Subject: Geolocation vmods In-Reply-To: References: Message-ID: Hi Andrel and Thomas, I also wrote an implementation of ip2location as a vmod: https://github.com/controversy187/libvmod-ip2location I'm not well versed in C, and this is my first vmod, but maybe it could be helpful. And I'm also open to suggestions for improvement :) Brett On Mon, Nov 28, 2016 at 3:38 AM Andrei wrote: > This change would definitely help! From what I'm seeing both ip2loc and > maxmind are just as accurate when it comes to country tags, which is what > we're aiming for. > > On Mon, Nov 28, 2016 at 9:41 AM, Thomas Lecomte < > thomas.lecomte at virtual-expo.com> wrote: > > On Mon, Nov 28, 2016 at 8:34 AM, Andrei wrote: > > Thank you both for the awesome input. Running stat on a file each time a > > request comes in is something I would like to avoid. Would it be > possible to > > go by the varnishd uptime instead, and trigger database checks every N > > seconds instead? We're looking to push around 5k req/s on average which > is > > why I'm trying to avoid the added syscalls. > > Indeed it would make more sense. If you are interested in ip2location > and plan to use this vmod, I can change the its behavior as you > suggested, it should be pretty straightforward. > > -- > Thomas Lecomte | Sysadmin @ Virtual Expo > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lagged at gmail.com Mon Nov 28 15:56:58 2016 From: lagged at gmail.com (Andrei) Date: Mon, 28 Nov 2016 09:56:58 -0600 Subject: Geolocation vmods In-Reply-To: References: Message-ID: Hi Brett, Thanks for the heads up! Looks pretty lightweight and the db is only loaded on init, which is also preferable in my case to avoid the numerous syscalls. On Mon, Nov 28, 2016 at 6:41 AM, Brett Fitzgerald < brettgfitzgerald at gmail.com> wrote: > Hi Andrel and Thomas, > > I also wrote an implementation of ip2location as a vmod: > https://github.com/controversy187/libvmod-ip2location > > I'm not well versed in C, and this is my first vmod, but maybe it could be > helpful. And I'm also open to suggestions for improvement :) > > Brett > > On Mon, Nov 28, 2016 at 3:38 AM Andrei wrote: > >> This change would definitely help! From what I'm seeing both ip2loc and >> maxmind are just as accurate when it comes to country tags, which is what >> we're aiming for. >> >> On Mon, Nov 28, 2016 at 9:41 AM, Thomas Lecomte < >> thomas.lecomte at virtual-expo.com> wrote: >> >> On Mon, Nov 28, 2016 at 8:34 AM, Andrei wrote: >> > Thank you both for the awesome input. Running stat on a file each time a >> > request comes in is something I would like to avoid. Would it be >> possible to >> > go by the varnishd uptime instead, and trigger database checks every N >> > seconds instead? We're looking to push around 5k req/s on average which >> is >> > why I'm trying to avoid the added syscalls. >> >> Indeed it would make more sense. If you are interested in ip2location >> and plan to use this vmod, I can change the its behavior as you >> suggested, it should be pretty straightforward. 
>> >> -- >> Thomas Lecomte | Sysadmin @ Virtual Expo >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From niall.murphy at sparwelt.de Tue Nov 29 16:09:04 2016 From: niall.murphy at sparwelt.de (Niall Murphy) Date: Tue, 29 Nov 2016 17:09:04 +0100 Subject: Transient storage killing memory In-Reply-To: References: <20161115094229.6cddc4e0@insomnia.sparwelt.de> Message-ID: <20161129170904.2fa848d6@insomnia.sparwelt.de> On Mon, 21 Nov 2016 15:09:08 -0500 Mark Staudinger wrote: > It might also be useful to capture some varnishlog output and > determine what objects are being stored in the Transient pool, and > whether or not your "shortlived" parameter, or default grace value > needs to be adjusted. Even if you do determine you need to do some > things differently here that will prevent the Transient pool from > growing beyond your ideal limit, IMO it's a good idea to keep this > limited anyway. >

Hi Mark, I took a look into ttl < 10s objects and saw that they are requests we intentionally apply either "max-age=0, private" or "no-cache, private" to. However their storage field is still "malloc Transient", and transient storage usage only appears to be going up.

Any ideas how to investigate further? It's varnish 5.0.0-1, and this didn't happen with 4.1, though there may well have been configuration changes since then.
Regards, -- Niall

From mark.staudinger at nyi.net Tue Nov 29 16:26:50 2016 From: mark.staudinger at nyi.net (Mark Staudinger) Date: Tue, 29 Nov 2016 11:26:50 -0500 Subject: Transient storage killing memory In-Reply-To: <20161129170904.2fa848d6@insomnia.sparwelt.de> References: <20161115094229.6cddc4e0@insomnia.sparwelt.de> <20161129170904.2fa848d6@insomnia.sparwelt.de> Message-ID: On Tue, 29 Nov 2016 11:09:04 -0500, Niall Murphy wrote: > On Mon, 21 Nov 2016 15:09:08 -0500 > Mark Staudinger wrote: > >> It might also be useful to capture some varnishlog output and >> determine what objects are being stored in the Transient pool, and >> whether or not your "shortlived" parameter, or default grace value >> needs to be adjusted. Even if you do determine you need to do some >> things differently here that will prevent the Transient pool from >> growing beyond your ideal limit, IMO it's a good idea to keep this >> limited anyway. >> > Hi Mark, > > I took a look into ttl < 10s objects and saw that they are requests we > intentionally apply either "max-age=0, private" or "no-cache, private" > to. However their storage field is still "malloc Transient", and > transient storage usage only appears to be going up. > > Any ideas how to investigate further? > > It's varnish 5.0.0-1, and this didn't happen with 4.1, though there > may well have been configuration changes since then. > > Regards, > > -- > Niall

Hi Niall, It's not clear if you actually wish to cache these requests. The best way to proceed would be to look at the output of varnishlog for a few sample requests, and see what the values are for the "TTL" log entry, and make sure they match the desired settings/behavior. Note that if you change TTL/grace/keep settings during the request, there will be multiple entries in the log.
Here's a sample entry for an object that was not cached:

-- TTL VCL 0 0 0 1480436502

Best, -=Mark

From dridi at varni.sh Tue Nov 29 16:32:17 2016 From: dridi at varni.sh (Dridi Boukelmoune) Date: Tue, 29 Nov 2016 17:32:17 +0100 Subject: Transient storage killing memory In-Reply-To: <20161129170904.2fa848d6@insomnia.sparwelt.de> References: <20161115094229.6cddc4e0@insomnia.sparwelt.de> <20161129170904.2fa848d6@insomnia.sparwelt.de> Message-ID: > I took a look into ttl < 10s objects and saw that they are requests we > intentionally apply either "max-age=0, private" or "no-cache, private" > to. However their storage field is still "malloc Transient", and > transient storage usage only appears to be going up.

Transient storage is used for short-lived objects, but also for pass'ed transactions. The storage is freed when the response is consumed by the client. Which counters appear to only go up?

> Any ideas how to investigate further? > > It's varnish 5.0.0-1, and this didn't happen with 4.1, thought there > may well have been configuration changes since then.

It could be caused by a change of behavior from your backend, or a different access pattern from your clients. Dridi

From michael.lee at zerustech.com Wed Nov 30 07:32:02 2016 From: michael.lee at zerustech.com (Michael Lee) Date: Wed, 30 Nov 2016 15:32:02 +0800 Subject: Any chance to support "max-stale" request header in Varnish? Message-ID: <583E8072.1020002@zerustech.com>

Hi Guys, I am working on a tutorial of HTTP/1.1 caching and I'd like to use Varnish as the reverse proxy to demonstrate how HTTP/1.1 caching works. However, I found there is no way to support the max-stale request header ("Cache-Control: max-stale=..."). The problem is that "a call to return(deliver)" always triggers a backend fetch, explicitly, if the object's ttl has been exceeded, so it's impossible to deliver a stale object for a single request.
The workaround might be shutting down the backend server to force a failure of the backend fetch, and thus get the stale object delivered. But besides that, is there any better solution to support the max-stale header?

In fact, the "max-stale" header is almost identical to "grace", except that the "max-stale" header is controlled by the client, while the "grace" is controlled by varnish. But still, unless we shut down the backend server, there is no way to demonstrate the "grace" for a single request either.

Kind regards, Michael

-- Michael Lee / Managing Director / ZerusTech Ltd Tel: +86 (21) 6107 3305 Mobile: +86 186 021 03818 Skype: zerustech Email: michael.lee at zerustech.com www.zerustech.com Suite 9208 Building No. 9, 4361 HuTai Road Shanghai P.R.China 201906

From jllach at agilecontents.com Wed Nov 30 09:22:09 2016 From: jllach at agilecontents.com (Jordi Llach) Date: Wed, 30 Nov 2016 10:22:09 +0100 Subject: Varnish could build an Etag based on all collected Etags from the backend Message-ID:

When using Varnish in conjunction with ESIs, no one but Varnish could build a global Etag that really summarizes the response sent to the client. I am wondering if Varnish could build a global Etag based on all Etags received from the backend. I understand that creating this Etag from multiple chunks (ESIs) is cpu intensive and thus I am wondering if Varnish could build this Etag based on all ESI Etags received, instead of doing it from the body/html itself. Jordi

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From dridi at varni.sh Wed Nov 30 09:53:52 2016 From: dridi at varni.sh (Dridi Boukelmoune) Date: Wed, 30 Nov 2016 10:53:52 +0100 Subject: Varnish could build an Etag based on all collected Etags from the backend In-Reply-To: References: Message-ID: On Wed, Nov 30, 2016 at 10:22 AM, Jordi Llach wrote: > When using Varnish in conjunction with ESIs, no one but Varnish could build a > global Etag that really summarizes the response sent to the client. > > I am wondering if Varnish could build a global Etag based on all Etags > received from the backend. > I understand that creating this Etag from multiple chunks (ESIs) is cpu > intensive and thus I am wondering if Varnish could build this Etag based on > all ESI Etags received, instead of doing it from the body/html itself

Short answer is no, the game is over before you get to the response headers of ESI sub-requests. Dridi

From dridi at varni.sh Wed Nov 30 09:59:18 2016 From: dridi at varni.sh (Dridi Boukelmoune) Date: Wed, 30 Nov 2016 10:59:18 +0100 Subject: Any chance to support "max-stale" request header in Varnish? In-Reply-To: <583E8072.1020002@zerustech.com> References: <583E8072.1020002@zerustech.com> Message-ID: > In fact, the "max-stale" header is almost identical to "grace", except that > the "max-stale" header is controlled by the client, while the "grace" is

You can't trust clients, so Varnish will not likely honor clients' claims, like no-cache or max-stale. If you want to give control to the clients (which I don't recommend for the general case) you have to painfully do it yourself in VCL. It's a bit tedious to implement.

> controlled by varnish. But still, unless we shut down the backend server, > there is no way to demonstrate the "grace" for a single request either.

There is, you can see the Age header going above the Cache-Control's max-age when it happens.
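For anyone curious what that do-it-yourself route could look like, here is a rough VCL sketch (the X-Max-Stale header name is an invention for this example, std.duration comes from vmod_std, and the object must still be held in cache — its grace/keep must be long enough — for any of this to matter):

```vcl
import std;

sub vcl_recv {
    # Copy the client's max-stale value (in seconds) into a header.
    if (req.http.Cache-Control ~ "max-stale=[0-9]+") {
        set req.http.X-Max-Stale =
            regsub(req.http.Cache-Control, ".*max-stale=([0-9]+).*", "\1");
    } else {
        unset req.http.X-Max-Stale;
    }
}

sub vcl_hit {
    if (obj.ttl >= 0s) {
        # Object is still fresh.
        return (deliver);
    }
    # Object is stale: deliver only within the client's max-stale window.
    if (req.http.X-Max-Stale &&
        obj.ttl + std.duration(req.http.X-Max-Stale + "s", 0s) > 0s) {
        return (deliver);
    }
    return (miss);
}
```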
From niall.murphy at sparwelt.de Wed Nov 30 10:37:30 2016
From: niall.murphy at sparwelt.de (Niall Murphy)
Date: Wed, 30 Nov 2016 11:37:30 +0100
Subject: Transient storage killing memory
In-Reply-To:
References: <20161115094229.6cddc4e0@insomnia.sparwelt.de> <20161129170904.2fa848d6@insomnia.sparwelt.de>
Message-ID: <20161130113730.49fbfc33@insomnia.sparwelt.de>

On Tue, 29 Nov 2016 11:26:50 -0500 Mark Staudinger wrote:
>
> It's not clear if you actually wish to cache these requests.
>
> The best way to proceed would be to look at the output of varnishlog
> for a few sample requests, and see what the values are for the "TTL"
> log entry, and make sure they match the desired settings/behavior.
> Note that if you change TTL/grace/keep settings during the request,
> there will be multiple entries in the log. Here's a sample entry for
> an object that was not cached:
>
> -- TTL VCL 0 0 0 1480436502
>

Hi,

I've observed that the requests I don't want to cache have ttl and max-age = 0 as intended, but that they're stored in transient storage regardless.

"Hit-For-Pass is now actually Hit-For-Miss"
https://varnish-cache.org/docs/5.0/whats-new/changes-5.0.html

I don't understand the finer repercussions of this, but I thought it would result in less transient storage, not more.

Also I have

if ( beresp.http.Pragma ~ "no-cache" ||
     beresp.http.Cache-Control ~ "no-cache" ||
     beresp.http.Cache-Control ~ "private") {
    set beresp.uncacheable = true;
    set beresp.ttl = 120s;
    return (deliver);
} else {
    unset beresp.http.set-cookie;
}

in vcl_backend_response. So I'm wondering if .uncacheable even applies to transient storage. Despite the counters going up, "varnishlog -q HitPass" returns nothing.

--
Niall Murphy
System Engineer
Telefon: +49 30 / 92 10 64-23
Telefax: +49 30 / 92 10 64-323
SPARWELT GmbH
Wöhlertstr.
12-13 10115 Berlin
SPARWELT Webseiten: www.sparwelt.de - www.deals.de - www.promoszop.pl
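A side note for anyone finding this thread in the archives: Transient is by default an unbounded malloc stevedore, so one known mitigation for runaway transient memory is to declare it explicitly at startup, which caps it. The sizes below are placeholders, not recommendations:

# Cap the Transient storage (unbounded by default) alongside the
# main malloc storage; example sizes only.
varnishd -f /etc/varnish/default.vcl \
    -s malloc,768m \
    -s Transient=malloc,256m

Be aware that with a capped Transient, allocation failures there can turn into failed fetches for passed and short-lived objects.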
From niall.murphy at sparwelt.de Wed Nov 30 11:02:19 2016
From: niall.murphy at sparwelt.de (Niall Murphy)
Date: Wed, 30 Nov 2016 12:02:19 +0100
Subject: Transient storage killing memory
In-Reply-To:
References: <20161115094229.6cddc4e0@insomnia.sparwelt.de> <20161129170904.2fa848d6@insomnia.sparwelt.de>
Message-ID: <20161130120219.2b37d3db@insomnia.sparwelt.de>

On Tue, 29 Nov 2016 17:32:17 +0100 Dridi Boukelmoune wrote:
> > I took a look into ttl < 10s objects and saw that they are requests
> > we intentionally apply either "max-age=0, private" or "no-cache,
> > private" to. However their storage field is still "malloc
> > Transient", and transient storage usage only appears to be going
> > up.
>
> Transient storage is used for short-lived objects, but also for pass'ed
> transactions. The storage is freed when the response is consumed by
> the client.
>
> Which counters appear to only go up?
>

SMA.s0.c_req                 210375         2.65 Allocator requests
SMA.s0.c_fail                     0         0.00 Allocator failures
SMA.s0.c_bytes            976439262     12317.58 Bytes allocated
SMA.s0.c_freed            888098949     11203.19 Bytes freed
SMA.s0.g_alloc                22425          .   Allocations outstanding
SMA.s0.g_bytes             88340313          .   Bytes outstanding
SMA.s0.g_space            698091687          .   Bytes available
SMA.Transient.c_req          622804         7.86 Allocator requests
SMA.Transient.c_fail              0         0.00 Allocator failures
SMA.Transient.c_bytes    2780556626     35076.15 Bytes allocated
SMA.Transient.c_freed    2224419585     28060.60 Bytes freed
SMA.Transient.g_alloc        337153          .   Allocations outstanding
SMA.Transient.g_bytes     556137041          .   Bytes outstanding
SMA.Transient.g_space             0          .   Bytes available

Transient, I thought, but it appears it _is_ being freed at least. Still, it's unusually large compared to s0 when you consider that only 7% of requests have a ttl < shortlived, wouldn't you say?
Synthetic error responses are low, but hit-pass (return(miss) in v5) are high:

MAIN.client_req       597833    7.52 Good client requests received
MAIN.cache_hit        289103    3.64 Cache hits
MAIN.cache_hitpass    284475    3.58 Cache hits for pass
MAIN.cache_miss       307426    3.87 Cache misses

But I only tell Varnish to miss on the above 7%.

From michael.lee at zerustech.com Wed Nov 30 11:25:28 2016
From: michael.lee at zerustech.com (Michael Lee)
Date: Wed, 30 Nov 2016 19:25:28 +0800
Subject: Any chance to support "max-stale" request header in Varnish?
In-Reply-To:
References: <583E8072.1020002@zerustech.com>
Message-ID: <583EB728.9070008@zerustech.com>

Thanks for your feedback.

On 11/30/16 5:59 PM, Dridi Boukelmoune wrote:
>> In fact, the "max-stale" header is almost identical to "grace", except that
>> the "max-stale" header is controlled by the client, while the "grace" is
> You can't trust clients, so Varnish will likely not honor clients'
> claims, like no-cache or max-stale. If you want to give control to the
> clients (which I don't recommend for the general case) you have to
> painfully do it yourself in VCL. It's a bit tedious to implement.
>
>> controlled by Varnish. But still, unless we shut down the backend server,
>> there is no way to demonstrate the "grace" for a single request either.
> There is: you can see the Age header going above the Cache-Control's
> max-age when it happens.
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>

--
Michael Lee / Managing Director / ZerusTech Ltd
Tel: +86 (21) 6107 3305 Mobile: +86 186 021 03818 Skype: zerustech
Email: michael.lee at zerustech.com www.zerustech.com
Suite 9208 Building No.
9, 4361 HuTai Road Shanghai P.R.China 201906

From jllach at agilecontents.com Wed Nov 30 14:40:03 2016
From: jllach at agilecontents.com (Jordi Llach)
Date: Wed, 30 Nov 2016 15:40:03 +0100
Subject: Varnish could build an Etag based on all collected Etags from the backend
In-Reply-To:
References:
Message-ID:

From my ignorance it sounds strange, as the final response is a composition of multiple sub-requests, which in turn are cached individually.

Thanks for the feedback

2016-11-30 10:53 GMT+01:00 Dridi Boukelmoune:
> On Wed, Nov 30, 2016 at 10:22 AM, Jordi Llach wrote:
> > When using Varnish in conjunction with ESIs, no one but Varnish could build a
> > global Etag that really summarizes the response sent to the client.
> >
> > I am wondering if Varnish could build a global Etag based on all Etags
> > received from the backend.
> > I understand that creating this Etag from multiple chunks (ESIs) is CPU
> > intensive, and thus I am wondering if Varnish could build this Etag based on
> > all ESI Etags received, instead of doing it from the body/html itself
>
> Short answer is no, the game is over before you get to the response
> headers of ESI sub-requests.
>
> Dridi
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dridi at varni.sh Wed Nov 30 14:43:47 2016
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Wed, 30 Nov 2016 15:43:47 +0100
Subject: Varnish could build an Etag based on all collected Etags from the backend
In-Reply-To:
References:
Message-ID:

On Wed, Nov 30, 2016 at 3:40 PM, Jordi Llach wrote:
> From my ignorance it sounds strange, as the final response is a composition of
> multiple sub-requests, which in turn are cached individually.
> Thanks for the feedback

I know, but this is not an area of Varnish where I'm very knowledgeable, so I'll stick to the short answer... It's basically the same problem as collecting Set-Cookie headers from sub-requests to add them in the top response. That's not possible, unfortunately.
From apj at mutt.dk Wed Nov 30 14:54:16 2016
From: apj at mutt.dk (Andreas Plesner)
Date: Wed, 30 Nov 2016 15:54:16 +0100
Subject: Varnish could build an Etag based on all collected Etags from the backend
In-Reply-To:
References:
Message-ID: <20161130145416.GF22017@nerd.dk>

On Wed, Nov 30, 2016 at 03:40:03PM +0100, Jordi Llach wrote:
> From my ignorance it sounds strange, as the final response is a composition of
> multiple sub-requests, which in turn are cached individually.

But the response body isn't rendered until after the headers have already been sent. So Varnish doesn't know about the full list of ESI includes until after the body is sent.

--
Andreas