From caiotedim at gmail.com Mon Sep 2 13:30:23 2013 From: caiotedim at gmail.com (Caio Tedim) Date: Mon, 2 Sep 2013 10:30:23 -0300 Subject: [Varnish 3.0.3 - 3.0.4] Logging client source port Message-ID: Hi guys, This is my first post in this group. I'm facing a problem logging the client source port in varnishncsa. Eg: httpd: %{remote}p nginx: $remote_port Anyone know if it's possible to log this? Thanks in advance for the help!! []'s Caio Tedim -------------- next part -------------- An HTML attachment was scrubbed... URL: From jos at dwim.org Mon Sep 2 17:56:31 2013 From: jos at dwim.org (Jos Boumans) Date: Mon, 2 Sep 2013 10:56:31 -0700 Subject: [Varnish 3.0.3 - 3.0.4] Logging client source port In-Reply-To: References: Message-ID: On 2 Sep 2013, at 06:30, Caio Tedim wrote: > I'm facing a problem logging the client source port in varnishncsa. > > Eg: httpd: %{remote}p > nginx: $remote_port > > Anyone know if it's possible to log this? Yes it is; you want %h See here for the full list of variables: https://www.varnish-cache.org/docs/3.0/reference/varnishncsa.html From caiotedim at gmail.com Mon Sep 2 19:12:05 2013 From: caiotedim at gmail.com (Caio Tedim) Date: Mon, 2 Sep 2013 16:12:05 -0300 Subject: [Varnish 3.0.3 - 3.0.4] Logging client source port In-Reply-To: References: Message-ID: Thanks for replying, but %h works for the remote host, not the remote port. I'm using the following line: %t %h %m "/%{Host}i"%U%q %b %s %{Varnish:hitmiss}x %b %{Referer}i I want to log the remote port, using varnishncsa. Below follows a line that contains the information in varnishlog 10 SessionOpen c 10.133.133.110 50225 *:80 []'s Caio Tedim On Mon, Sep 2, 2013 at 2:56 PM, Jos Boumans wrote: > > On 2 Sep 2013, at 06:30, Caio Tedim wrote: > > > I'm facing a problem logging the client source port in varnishncsa. > > > > Eg: httpd: %{remote}p > > nginx: $remote_port > > > > Anyone know if it's possible to log this? 
> > Yes it is; you want %h > > See here for the full list of variables: > > https://www.varnish-cache.org/docs/3.0/reference/varnishncsa.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From magnus at hagander.net Thu Sep 5 18:26:23 2013 From: magnus at hagander.net (Magnus Hagander) Date: Thu, 5 Sep 2013 20:26:23 +0200 Subject: varnishtop incorrect help Message-ID: When you run "varnishtop --help" it clearly states that -m is a supported switch, but if you try to actually use it there's a special case branch that says it's unsupported. That's quite user-unfriendly. I realize it comes from the fact that -m is part of the "standard VSL arguments", and is as such processed by VSL_Arg(). But ISTM that if it's not actually in all the frontends, it can't really be called a standard argument, and should not be in that list of arguments. Something even better would of course be to *make* it supported, but presumably there is a reason why it isn't... -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ From slink at schokola.de Wed Sep 11 17:24:54 2013 From: slink at schokola.de (Nils Goroll) Date: Wed, 11 Sep 2013 19:24:54 +0200 Subject: HSH_Private Message-ID: <5230A766.5040509@schokola.de> Hi phk, I am working on the patch for the expire superseded objects feature which I had been asked to send. In preparation for this, I am looking at recent changes. 68a424092d589932c782c9514d340f1752f094a5 introduces HSH_Private and private_oh for private objects, which before simply had a NULL objhead and were not inserted into any objhead list. I hope to understand the intention behind the change and see that this could be useful for debugging, but I fail to understand why what it buys is worth the price: - yes, it does remove some special casing in some places, but adds some in others. 
- private_oh->mtx now is a congestion point for all private object creation, unbusy and deref We have got a class of customers who need to pass most requests and, with previous code, see virtually no lock contention. Wouldn't this change quite drastically - in particular as there are probably tens of thousands of objheads when caching, but there is only one private_oh for all passes? - in particular unbusy on an OC_F_PRIVATE will needlessly grab the mtx and move the oc to the front of the private_oh Regarding the actual patch I am working on: This was most efficiently done in HSH_Lookup when (oc->objhead == NULL) was still valid: - remove oc from oh while holding the oh->mtx anyway - NULL oc->objhead - EXP_Rearm outside oh->mtx - deref oh With HSH_Private, IIUC avoiding a second lock of oh->mtx is not possible any more, so I wonder if HSH_Lookup still would be the right place. So, in short: will HSH_Private really persist? Nils From slink at schokola.de Thu Sep 12 05:21:45 2013 From: slink at schokola.de (Nils Goroll) Date: Thu, 12 Sep 2013 07:21:45 +0200 Subject: HSH_Private (resent) Message-ID: <52314F69.8030503@schokola.de> (resent because my initial mail did not make it to the mailing list) Hi phk, I am working on the patch for the expire superseded objects feature which I had been asked to send. In preparation for this, I am looking at recent changes. 68a424092d589932c782c9514d340f1752f094a5 introduces HSH_Private and private_oh for private objects, which before simply had a NULL objhead and were not inserted into any objhead list. I hope to understand the intention behind the change and see that this could be useful for debugging, but I fail to understand why what it buys is worth the price: - yes, it does remove some special casing in some places, but adds some in others. 
- private_oh->mtx now is a congestion point for all private object creation, unbusy and deref We have got a class of customers who need to pass most requests and, with previous code, see virtually no lock contention. Wouldn't this change quite drastically - in particular as there are probably tens of thousands of objheads when caching, but there is only one private_oh for all passes? - in particular unbusy on an OC_F_PRIVATE will needlessly grab the mtx and move the oc to the front of the private_oh Regarding the actual patch I am working on: This was most efficiently done in HSH_Lookup when (oc->objhead == NULL) was still valid: - remove oc from oh while holding the oh->mtx anyway - NULL oc->objhead - EXP_Rearm outside oh->mtx - deref oh With HSH_Private, IIUC avoiding a second lock of oh->mtx is not possible any more, so I wonder if HSH_Lookup still would be the right place. So, in short: will HSH_Private really persist? Nils From phk at phk.freebsd.dk Thu Sep 12 11:15:43 2013 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Thu, 12 Sep 2013 11:15:43 +0000 Subject: HSH_Private In-Reply-To: <5230A766.5040509@schokola.de> References: <5230A766.5040509@schokola.de> Message-ID: <1748.1378984543@critter.freebsd.dk> In message <5230A766.5040509 at schokola.de>, Nils Goroll writes: >- private_oh->mtx now is a congestion point for all private object creation, > unbusy and deref Maybe, maybe not. It is trivial to loosen up that object, at the expense of (up to a lot of) memory, but I want actual numbers to guide that tradeoff. I think you should expect HSH_Private to stay, it is a pretty good simplification of the code, and it helps make a lot of code simpler (including vcl_err/synth) One of the benefits of having a limited number of oh's for private objects is for debugging: We can now find them. Previously they had to be tracked down on some thread or other. >Regarding the actual patch I am working on: [...] 
First of all, we did agree that a successful conditional fetch should nuke the "old" object from cache, (ie: ignore obj.keep) right ? I expect that will solve most of the duplication problem ? >HSH_Lookup when (oc->objhead == NULL) was still valid: > >- remove oc from oh while holding the oh->mtx anyway >- NULL oc->objhead >- EXP_Rearm outside oh->mtx >- deref oh > >With HSH_Private, IIUC avoiding a second lock of oh->mtx is not possible any >more, so I wonder if HSH_Lookup still would be the right place. 1. How could de-dup'ing an oh's list ever be relevant to pass/private objects ? (ie: I don't understand what HSH_Private() changes for you.) 2. I thought the way to do it was - spot duplicate - nuke its ttl,keep,grace and let EXP reap it 3. The three places I can see it happen are: a) HSH_Lookup b) HSH_Unbusy c) EXP The reason I have pointed to HSH_Lookup() is that the code will not affect pure hits, and the code to walk the list is there already. In certain environments this may still degenerate, so it has to be cheap. A credible case can be made for Unbusy and EXP, but it will take more code, since they don't traverse the list today. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From slink at schokola.de Thu Sep 12 13:50:45 2013 From: slink at schokola.de (Nils Goroll) Date: Thu, 12 Sep 2013 15:50:45 +0200 Subject: HSH_Private & expire superseded In-Reply-To: <1748.1378984543@critter.freebsd.dk> References: <5230A766.5040509@schokola.de> <1748.1378984543@critter.freebsd.dk> Message-ID: <5231C6B5.6090200@schokola.de> > having a limited number of oh's for private > objects is for debugging: We can now find them. Previously they > had to be tracked down on some thread or other. Yes, I agree on this point (also mentioned in my first email) >> Regarding the actual patch I am working on: [...] 
> > First of all, we did agree that a successful conditional fetch > should nuke the "old" object from cache, (ie: ignore obj.keep) > right ? My understanding of what should be done for this feature was basically this: - add storage steal feature for conditional backend fetch 304 response: - if stealing is ok (determined by stevedores involved) - the new obj from the 304 response steals from expired obj - expired obj gets a reference on the new obj - expired obj gets a "my storage has been ripped off" flag - otherwise copy storage BUT: I am not working on conditional fetches (IMS) in master, my understanding was that this is something you still want to do yourself, right? - Context on the "expire superseded" feature: Several patches following this idea had been posted to this list before, here are the references I am aware of: an initial "poc" patch which I had never taken into production, doing work in Lookup: https://www.varnish-cache.org/lists/pipermail/varnish-dev/2010-November/006600.html Doc Wilco's implementation of the same idea, but working in Unbusy: https://www.varnish-cache.org/lists/pipermail/varnish-dev/2012-January/007042.html What I have at this point (and do successfully use in production for one month now) is a variant of my initial patch for Varnish2, but _much_ simpler. > I expect that will solve most of the duplication problem ? The duplication effect we would like to address is simply this: When using a long beresp.grace time with a short ttl, many copies of the same object will accumulate until they expire, so usually one will run out of cache and lru nuke will kick in. But LRU might first expire objects we could still use, so selectively expiring old "grace" copies (I like to call them superseded objects) is a much better idea. 
>> HSH_Lookup when (oc->objhead == NULL) was still valid: >> >> - remove oc from oh while holding the oh->mtx anyway >> - NULL oc->objhead >> - EXP_Rearm outside oh->mtx >> - deref oh >> >> With HSH_Private, IIUC avoiding a second lock of oh->mtx is not possible any >> more, so I wonder if HSH_Lookup still would be the right place. > > 1. How could de-dup'ing an oh's list ever be relevant to pass/private > objects ? (ie: I don't understand what HSH_Private() changes for you. I don't want to de-dupe the private_oh. I was mainly asking about HSH_Private because of the performance implications and because this changes the implementation required for the expire superseded feature to what I have now (or, put it this way: one implementation which I thought was particularly efficient is not possible any more). > In certain environments this may still degenerate, so it has to be > cheap. That's what I am after. For illustrative purposes _only_, you might want to have a look at the varnish2 patch I am using at the moment (attached) Nils -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: expire_superseded_varnish2.patch URL: From phk at phk.freebsd.dk Fri Sep 13 07:43:49 2013 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 13 Sep 2013 07:43:49 +0000 Subject: HSH_Private & expire superseded In-Reply-To: <5231C6B5.6090200@schokola.de> References: <5230A766.5040509@schokola.de> <1748.1378984543@critter.freebsd.dk> <5231C6B5.6090200@schokola.de> Message-ID: <3319.1379058229@critter.freebsd.dk> Nils and I discussed this briefly on IRC yesterday, and I promised to ponder the issue some more. This is the result of said ponderage: I think HSH_Private() is here to stay, for a host of reasons, but Nils' point about being able to pull stuff from the hash-lists with just one lock-operation is both valid and pertinent. 
We already have some other troubles in that space, specifically that the EXPire thread cannot keep up etc. So bearing in mind that I have just come up with this in the shower and not built a prototype yet, here is my current idea: We instantiate an expiry-thread per thread-pool. This moves us a little bit closer to NUMA and gives us more expiry bandwidth. Each EXP instance will have a "you should look at this object" inbox-list, and whenever we add something to or yank something from a hash-list, or when we change the obj.keep value, we append the objcore to the thread-pool's EXP-inbox. The EXP thread will sort through its inbox, keep the binheap updated and dispose of the objcores which died. The major benefit of this scheme is that we have someplace to dump unwanted objects when we are in a hurry in HSH_Lookup(), and somebody else will clean them up out of the critical path. Yanking an object from a live hash-list will look something like: (we have the oh->mtx locked) ... VTAILQ_REMOVE(...) oc->objhead = NULL; EXP_Feed(oc); lock(EXP-mtx) if (!(oc->flags & OC_F_EXP_QUEUED)) { VTAILQ_INSERT_TAIL(EXP->queue, oc, exp_list) oc->flags |= OC_F_EXP_QUEUED; } unlock(EXP-mtx) ... It may also be beneficial to push LRU updates to the EXP thread, since it could batch process all EXP updates in the inbox and move it out of the critical path. Comments, critique etc. welcome... -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
From slink at schokola.de Fri Sep 13 23:39:19 2013 From: slink at schokola.de (Nils Goroll) Date: Sat, 14 Sep 2013 01:39:19 +0200 Subject: HSH_Private & expire superseded In-Reply-To: <3319.1379058229@critter.freebsd.dk> References: <5230A766.5040509@schokola.de> <1748.1378984543@critter.freebsd.dk> <5231C6B5.6090200@schokola.de> <3319.1379058229@critter.freebsd.dk> Message-ID: <5233A227.9070609@schokola.de> On 09/13/13 09:43 AM, Poul-Henning Kamp wrote: > We instantiate an expiry-thread per thread-pool. This sounds like an interesting idea and I hope to get around to devouring it in detail. For the time being: This does not change the fact that the exp thread will still need to lock the objhead list, right? Nils From phk at phk.freebsd.dk Sat Sep 14 05:55:54 2013 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Sat, 14 Sep 2013 05:55:54 +0000 Subject: HSH_Private & expire superseded In-Reply-To: <5233A227.9070609@schokola.de> References: <5230A766.5040509@schokola.de> <1748.1378984543@critter.freebsd.dk> <5231C6B5.6090200@schokola.de> <3319.1379058229@critter.freebsd.dk> <5233A227.9070609@schokola.de> Message-ID: <7783.1379138154@critter.freebsd.dk> In message <5233A227.9070609 at schokola.de>, Nils Goroll writes: >On 09/13/13 09:43 AM, Poul-Henning Kamp wrote: >For the time being: This does not change the fact that the exp thread will still >need to lock the objhead list, right? In the case where somebody else nukes an object (de-dup, length restriction) the objcore is taken off the objhdr list by that thread and delivered to EXP on its "inbox" list, so no further objhdr locking will be required. The trick is that the objcore can hang out on the exp-inbox list until its refcount becomes zero. For LRU and obj.keep expiry, there's no way around locking objhead if we want to recover the storage in a timely way. 
-- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From slink at schokola.de Mon Sep 16 11:26:45 2013 From: slink at schokola.de (Nils Goroll) Date: Mon, 16 Sep 2013 13:26:45 +0200 Subject: per-workpool exp threads In-Reply-To: <3319.1379058229@critter.freebsd.dk> References: <5230A766.5040509@schokola.de> <1748.1378984543@critter.freebsd.dk> <5231C6B5.6090200@schokola.de> <3319.1379058229@critter.freebsd.dk> Message-ID: <5236EAF5.3040204@schokola.de> First a side note on the private_oh: Couldn't the code handle a second magic private_oh which demands "don't queue here" and is always empty? This way we would only need to take on the additional sync overhead when needed (for debugging). (side note on the side note: why is this worrying me so much? Probably a benchmark would help putting this into perspective...) On 09/13/13 09:43 AM, Poul-Henning Kamp wrote: > Yanking an object from a live hash-list will look something like: > > (we have the oh->mtx locked) > ... > VTAILQ_REMOVE(...) > oc->objhead = NULL; > EXP_Feed(oc); > lock(EXP-mtx) > if (!(oc->flags & OC_F_EXP_QUEUED)) { > VTAILQ_INSERT_TAIL(EXP->queue, oc, exp_list) > oc->flags |= OC_F_EXP_QUEUED; > } > unlock(EXP-mtx) > ... Hm, reflecting on this verbosely: The per-workerpool exp thread could minimize the critical section on EXP-mtx by just taking the whole queue onto its own private head for processing, so there really should be virtually no contention on this lock. Also I find it quite appealing to take all of the expire / free handling out of the delivery code path with the exp thread mailbox concept. Could we hand off additional functions to async processing in the exp thread, like - EXP_Rearm - EXP_Insert - all of HSH_Deref* which is not related to the oh - BAN_DestroyObj - freeing stuff ? 
On the other hand, I am not so sure if keeping objs on the exp->q until their refcount becomes 0 is the best solution. Maybe one could make them private (optionally lockless private as proposed above) and let Deref hand them over to Exp. Other than that, yes, I do understand that LRU has to rip objects out of the oh, and there is the special case that LRU needs to make room "now". Do I understand correctly that there currently is no "proactive LRU nuke" (trying to keep free space above a threshold per stevedore)? Thanks, Nils From martin at varnish-software.com Thu Sep 19 12:45:20 2013 From: martin at varnish-software.com (Martin Blix Grydeland) Date: Thu, 19 Sep 2013 14:45:20 +0200 Subject: [PATCH] Make all VSL records null terminated Message-ID: <1379594720-6512-1-git-send-email-martin@varnish-software.com> Make all VSL records null terminated so that it is safe to use string functions on the SHMLOG directly. --- bin/varnishd/cache/cache_shmlog.c | 27 ++++++++++++++++----------- 1 file changed, 16 insertions(+), 11 deletions(-) diff --git a/bin/varnishd/cache/cache_shmlog.c b/bin/varnishd/cache/cache_shmlog.c index 0093c74..6a80e9a 100644 --- a/bin/varnishd/cache/cache_shmlog.c +++ b/bin/varnishd/cache/cache_shmlog.c @@ -218,13 +218,14 @@ VSL(enum VSL_tag_e tag, uint32_t vxid, const char *fmt, ...) 
if (strchr(fmt, '%') == NULL) { - vslr(tag, vxid, fmt, strlen(fmt)); + vslr(tag, vxid, fmt, strlen(fmt) + 1); } else { va_start(ap, fmt); n = vsnprintf(buf, mlen, fmt, ap); va_end(ap); - if (n > mlen) - n = mlen; + if (n > mlen - 1) + n = mlen - 1; + buf[n++] = '\0'; vslr(tag, vxid, buf, n); } } @@ -261,6 +262,7 @@ void VSLbt(struct vsl_log *vsl, enum VSL_tag_e tag, txt t) { unsigned l, mlen; + char *p; Tcheck(t); if (vsl_tag_is_masked(tag)) @@ -269,16 +271,18 @@ VSLbt(struct vsl_log *vsl, enum VSL_tag_e tag, txt t) /* Truncate */ l = Tlen(t); - if (l > mlen) - l = mlen; + if (l > mlen - 1) + l = mlen - 1; assert(vsl->wlp < vsl->wle); /* Flush if necessary */ - if (VSL_END(vsl->wlp, l) >= vsl->wle) + if (VSL_END(vsl->wlp, l + 1) >= vsl->wle) VSL_Flush(vsl, 1); - assert(VSL_END(vsl->wlp, l) < vsl->wle); - memcpy(VSL_DATA(vsl->wlp), t.b, l); + assert(VSL_END(vsl->wlp, l + 1) < vsl->wle); + p = VSL_DATA(vsl->wlp); + memcpy(p, t.b, l); + p[l++] = '\0'; vsl->wlp = vsl_hdr(tag, vsl->wlp, l, vsl->wid); assert(vsl->wlp < vsl->wle); vsl->wlr++; @@ -321,15 +325,16 @@ VSLb(struct vsl_log *vsl, enum VSL_tag_e tag, const char *fmt, ...) mlen = cache_param->shm_reclen; /* Flush if we cannot fit a full size record */ - if (VSL_END(vsl->wlp, mlen) >= vsl->wle) + if (VSL_END(vsl->wlp, mlen + 1) >= vsl->wle) VSL_Flush(vsl, 1); p = VSL_DATA(vsl->wlp); va_start(ap, fmt); n = vsnprintf(p, mlen, fmt, ap); va_end(ap); - if (n > mlen) - n = mlen; /* we truncate long fields */ + if (n > mlen - 1) + n = mlen - 1; /* we truncate long fields */ + p[n++] = '\0'; vsl->wlp = vsl_hdr(tag, vsl->wlp, n, vsl->wid); assert(vsl->wlp < vsl->wle); vsl->wlr++; -- 1.7.10.4 From leifj at mnt.se Thu Sep 19 13:24:50 2013 From: leifj at mnt.se (Leif Johansson) Date: Thu, 19 Sep 2013 15:24:50 +0200 Subject: how many backends are "a lot"? 
Message-ID: <523AFB22.4080000@mnt.se> Are there any significant limits in varnish that make it hard to keep dozens or even 100s of backends, assuming they don't serve up huge amounts of cacheable data each? Cheers Leif From dridi.boukelmoune at zenika.com Thu Sep 19 13:46:01 2013 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Thu, 19 Sep 2013 15:46:01 +0200 Subject: how many backends are "a lot"? In-Reply-To: <523AFB22.4080000@mnt.se> References: <523AFB22.4080000@mnt.se> Message-ID: "So imagine you have 1000 backends in your VCL, not an unreasonable number, each configured with health-polling." From: https://www.varnish-cache.org/docs/trunk/phk/backends.html Regards, Dridi On Thu, Sep 19, 2013 at 3:24 PM, Leif Johansson wrote: > > Are there any significant limits in varnish that make it hard to > keep dozens or even 100s of backends, assuming they don't serve > up huge amounts of cacheable data each? > > Cheers Leif > > _______________________________________________ > varnish-dev mailing list > varnish-dev at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev From slink at schokola.de Thu Sep 19 13:52:09 2013 From: slink at schokola.de (Nils Goroll) Date: Thu, 19 Sep 2013 15:52:09 +0200 Subject: how many backends are "a lot"? In-Reply-To: <523AFB22.4080000@mnt.se> References: <523AFB22.4080000@mnt.se> Message-ID: <523B0189.1060400@schokola.de> On 09/19/13 03:24 PM, Leif Johansson wrote: > Are there any significant limits in varnish that make it hard to > keep dozens or even 100s of backends No, just keep an eye on your file descriptor limit. > assuming they don't serve > up huge amounts of cacheable data each? I don't see how cacheable data would relate to the number of backends. Nils From perbu at varnish-software.com Thu Sep 19 13:56:05 2013 From: perbu at varnish-software.com (Per Buer) Date: Thu, 19 Sep 2013 15:56:05 +0200 Subject: how many backends are "a lot"? 
In-Reply-To: <523AFB22.4080000@mnt.se> References: <523AFB22.4080000@mnt.se> Message-ID: Hi, On Thu, Sep 19, 2013 at 3:24 PM, Leif Johansson wrote: > > Are there any significant limits in varnish that make it hard to > keep dozens or even 100s of backends, assuming they don't serve > up huge amounts of cacheable data each? > No. Go wild. -- *Per Buer* CTO | Varnish Software AS Phone: +47 958 39 117 | Skype: per.buer We Make Websites Fly! Winner of the Red Herring Top 100 Europe Award 2013 -------------- next part -------------- An HTML attachment was scrubbed... URL: From leifj at mnt.se Thu Sep 19 14:15:10 2013 From: leifj at mnt.se (Leif Johansson) Date: Thu, 19 Sep 2013 16:15:10 +0200 Subject: how many backends are "a lot"? In-Reply-To: References: <523AFB22.4080000@mnt.se> Message-ID: <523B06EE.6080107@mnt.se> On 09/19/2013 03:56 PM, Per Buer wrote: > Hi, > > > On Thu, Sep 19, 2013 at 3:24 PM, Leif Johansson > wrote: > > > Are there any significant limits in varnish that make it hard to > keep dozens or even 100s of backends, assuming they don't serve > up huge amounts of cacheable data each? > > > No. Go wild. > > > > schweet! thx for the quick and very clear response :-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From lkarsten at varnish-software.com Mon Sep 23 14:24:07 2013 From: lkarsten at varnish-software.com (Lasse Karstensen) Date: Mon, 23 Sep 2013 16:24:07 +0200 Subject: Getting ready for 4.0; Time-to-first-byte measurements Message-ID: <20130923142400.GA24671@immer.varnish-software.com> [ tl;dr: master starts sending bytes quicker than 3.0. ] Hello all. I've been running some benchmarks to see how master stands up to the earlier version. One of the metrics is time-to-first-byte, which I've got (preliminary) results for now. 
First something to compare with; Apache (worker mpm) from ubuntu 12.04: localhost:80 (2013-09-23 15:42:43.971109 on fryer1) -------------------------------------------------- mean: 508.937µs (stdev 0.000030301) median: 505.162µs mean: 492.875µs (stdev 0.000028883) median: 492.782µs mean: 513.742µs (stdev 0.000026687) median: 514.409µs mean: 512.047µs (stdev 0.000032193) median: 510.670µs mean: 510.367µs (stdev 0.000031974) median: 510.384µs Numbers for current 3.0 branch: (0a7e6ca) localhost:6081 (2013-09-23 15:30:58.719143 on fryer1) -------------------------------------------------- mean: 317.403µs (stdev 0.000032792) median: 320.449µs mean: 262.953µs (stdev 0.000037567) median: 243.901µs mean: 339.497µs (stdev 0.000031443) median: 344.593µs mean: 344.804µs (stdev 0.000005411) median: 342.978µs mean: 320.578µs (stdev 0.000032696) median: 326.909µs and finally for current master: (988cd77) localhost:6081 (2013-09-23 15:44:48.687103 on fryer1) -------------------------------------------------- mean: 234.249µs (stdev 0.000013593) median: 225.852µs mean: 272.104µs (stdev 0.000049566) median: 263.452µs mean: 234.516µs (stdev 0.000033360) median: 226.136µs mean: 287.847µs (stdev 0.000050505) median: 285.355µs mean: 243.210µs (stdev 0.000048766) median: 226.050µs Method is a single connect/GET/read(1) loop, 5 runs of 100 connections with 2s sleep between. One warmup run before measurements are done. The object fetched was 177 bytes. 
Software used is up on github, if anyone feels like replicating the results or improving the benchmark: https://github.com/lkarsten/httpttfb -- With regards, Lasse Karstensen Varnish Software AS From lkarsten at varnish-software.com Mon Sep 23 14:38:38 2013 From: lkarsten at varnish-software.com (Lasse Karstensen) Date: Mon, 23 Sep 2013 16:38:38 +0200 Subject: Getting ready for 4.0; Time-to-first-byte measurements In-Reply-To: <20130923142400.GA24671@immer.varnish-software.com> References: <20130923142400.GA24671@immer.varnish-software.com> Message-ID: <20130923143836.GB24671@immer.varnish-software.com> On Mon, Sep 23, 2013 at 04:24:07PM +0200, Lasse Karstensen wrote: > I've been running some benchmarks to see how master stands up to the earlier > version. > One of the metrics is time-to-first-byte, which I've got (preliminary) > results for now. I should also mention that I'm running this on 4 year old hardware, so the numbers are primarily useful for comparing new vs old. fryer1 hw: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz kernel: 3.2.0-51-generic On my 2 year old X220 Thinkpad the numbers for Varnish 3.0 are much better: (Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz) with debian wheezy: (3.2.0-4-amd64) localhost:6081 (2013-09-23 14:35:16.566675 on immer) -------------------------------------------------- mean: 93.045µs (stdev 0.000001108) median: 92.959µs mean: 136.669µs (stdev 0.000039922) median: 147.365µs mean: 145.904µs (stdev 0.000005585) median: 144.362µs mean: 135.787µs (stdev 0.000012227) median: 136.225µs mean: 137.303µs (stdev 0.000008220) median: 138.007µs -- With regards, Lasse Karstensen Varnish Software AS From phk at phk.freebsd.dk Tue Sep 24 07:23:07 2013 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 24 Sep 2013 07:23:07 +0000 Subject: Getting ready for 4.0; Time-to-first-byte measurements In-Reply-To: <20130923142400.GA24671@immer.varnish-software.com> References: <20130923142400.GA24671@immer.varnish-software.com> Message-ID: 
<1789.1380007387@critter.freebsd.dk> Are these cache hits or cache misses ? -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From lkarsten at varnish-software.com Tue Sep 24 08:23:53 2013 From: lkarsten at varnish-software.com (Lasse Karstensen) Date: Tue, 24 Sep 2013 10:23:53 +0200 Subject: Getting ready for 4.0; Time-to-first-byte measurements In-Reply-To: <1789.1380007387@critter.freebsd.dk> References: <20130923142400.GA24671@immer.varnish-software.com> <1789.1380007387@critter.freebsd.dk> Message-ID: <20130924082352.GC24671@immer.varnish-software.com> On Tue, Sep 24, 2013 at 07:23:07AM +0000, Poul-Henning Kamp wrote: > Are these cache hits or cache misses ? The numbers are for GET-ing a single cached object from a client on localhost. -- With regards, Lasse Karstensen Varnish Software AS From phk at phk.freebsd.dk Tue Sep 24 09:24:19 2013 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 24 Sep 2013 09:24:19 +0000 Subject: Getting ready for 4.0; Time-to-first-byte measurements In-Reply-To: <20130924082352.GC24671@immer.varnish-software.com> References: <20130923142400.GA24671@immer.varnish-software.com> <1789.1380007387@critter.freebsd.dk> <20130924082352.GC24671@immer.varnish-software.com> Message-ID: <2180.1380014659@critter.freebsd.dk> In message <20130924082352.GC24671 at immer.varnish-software.com>, Lasse Karstensen writes: >On Tue, Sep 24, 2013 at 07:23:07AM +0000, Poul-Henning Kamp wrote: >> Are these cache hits or cache misses ? > >The numbers are for GET-ing a single cached object from >a client on localhost. and you get ~ 200 msec numbers ??? That should be 10 microseconds... 
From perbu at varnish-software.com Tue Sep 24 10:09:11 2013 From: perbu at varnish-software.com (Per Buer) Date: Tue, 24 Sep 2013 12:09:11 +0200 Subject: Getting ready for 4.0; Time-to-first-byte measurements In-Reply-To: <2180.1380014659@critter.freebsd.dk> References: <20130923142400.GA24671@immer.varnish-software.com> <1789.1380007387@critter.freebsd.dk> <20130924082352.GC24671@immer.varnish-software.com> <2180.1380014659@critter.freebsd.dk> Message-ID: On Tue, Sep 24, 2013 at 11:24 AM, Poul-Henning Kamp wrote: > In message <20130924082352.GC24671 at immer.varnish-software.com>, Lasse > Karstensen writes: > >On Tue, Sep 24, 2013 at 07:23:07AM +0000, Poul-Henning Kamp wrote: > >> Are these cache hits or cache misses ? > > > >The numbers are for GET-ing a single cached object from > >a client on localhost. > > and you get ~ 200 msec numbers ??? > His post states µs, not msec. -- *Per Buer* CTO | Varnish Software AS Phone: +47 958 39 117 | Skype: per.buer We Make Websites Fly! Winner of the Red Herring Top 100 Europe Award 2013 -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Tue Sep 24 10:16:55 2013 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 24 Sep 2013 10:16:55 +0000 Subject: Getting ready for 4.0; Time-to-first-byte measurements In-Reply-To: References: <20130923142400.GA24671@immer.varnish-software.com> <1789.1380007387@critter.freebsd.dk> <20130924082352.GC24671@immer.varnish-software.com> <2180.1380014659@critter.freebsd.dk> Message-ID: <2317.1380017815@critter.freebsd.dk> In message , Per Buer writes: >> and you get ~ 200 msec numbers ??? > >His post states =ECs, not msec. Oh, unicode-confusion then... -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
From geoff at uplex.de Mon Sep 30 19:11:58 2013 From: geoff at uplex.de (Geoff Simmons) Date: Mon, 30 Sep 2013 21:11:58 +0200 Subject: [PATCH] Option to choose another root tmp-directory in varnishtest & make check Message-ID: <5249CCFE.6070208@uplex.de> Hello all, The attached patch adds an option '-m tmpdir' to varnishtest, which sets the directory within which the vtc.* temp directories are created for each test, previously hard-wired to /tmp (-t was already taken). This became necessary for me because Varnish was being built on a build server for which /tmp was mounted with the noexec option. That caused 'make check' to fail, because some of the tests save shared objects into a temp directory and then attempt to load them. The Makefile for varnishtest is patched so that if you have TMPDIR defined in the environment before 'make check' is called, then varnishtest uses that, otherwise it uses /tmp. Best, Geoff -- ** * * UPLEX - Nils Goroll Systemoptimierung Scheffelstraße 32 22301 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-Option-to-choose-another-root-tmp-directory-in-varni.patch Type: text/x-patch Size: 2625 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 900 bytes Desc: OpenPGP digital signature URL: