From rainer at ultra-secure.de Tue Oct 1 07:40:24 2013 From: rainer at ultra-secure.de (Rainer Duffner) Date: Tue, 1 Oct 2013 09:40:24 +0200 Subject: Is there a way to invalidate a cached item on multiple varnish servers at once? In-Reply-To: <1380573765-sup-3917@geror.local> References: <316A162B-065D-4C24-9C74-730F3F7ABF67@ultra-secure.de> <1380573765-sup-3917@geror.local> Message-ID: <20131001094024.6d4548f4@suse3> Am Mon, 30 Sep 2013 13:44:34 -0700 schrieb James Pearson : > Excerpts from Rainer Duffner's message of 2013-09-29 13:45:51 -0700: > > Hi, > > > > suppose you've got a couple of load-balanced varnish-servers, is > > there a way to have an item removed from the cache on multiple > > servers at the same time? > > How small of a window do you need? I'd probably just fire off a > PURGE request to each server, but that'd give a few seconds where > some servers would have a cache entry and others would not. It's not a question of a certain time-window. More a question of transparency. E.g. in Drupal, you can only enter a single IP. It's unreasonable to assume a user would be able to issue a purge on a 2nd (or 3rd) server himself - as he may not know the IPs of these servers anyway. From dridi.boukelmoune at zenika.com Tue Oct 1 08:47:23 2013 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Tue, 1 Oct 2013 10:47:23 +0200 Subject: Is there a way to invalidate a cached item on multiple varnish servers at once? In-Reply-To: <20131001094024.6d4548f4@suse3> References: <316A162B-065D-4C24-9C74-730F3F7ABF67@ultra-secure.de> <1380573765-sup-3917@geror.local> <20131001094024.6d4548f4@suse3> Message-ID: Well, this is a bit twisted, but you could have your varnish instances as back-ends of themselves, and with the use of PURGE/BAN methods, add a restart logic that would push the invalidation request to the next varnish. Example: V1 -> B1 V2 -> B2 V3 -> B3 Your back-ends B1 to B3 could purge their respective varnish instances. 
The purge request would be first interpreted and then restarted to use another Varnish as the backend: V1 -> V2 -> V3 -> V1 With the proper use of a guard variable, probably a [be]req header, you can propagate the invalidation without falling into an infinite loop. But this is just theory, I wouldn't go down that path :p I haven't heard of any tool doing general-purpose http replication (or multiple forwarding), but in the Varnish ecosystem, the super fast purger is probably what you're looking for. Regards, Dridi On Tue, Oct 1, 2013 at 9:40 AM, Rainer Duffner wrote: > Am Mon, 30 Sep 2013 13:44:34 -0700 > schrieb James Pearson : > >> Excerpts from Rainer Duffner's message of 2013-09-29 13:45:51 -0700: >> > Hi, >> > >> > suppose you've got a couple of load-balanced varnish-servers, is >> > there a way to have an item removed from the cache on multiple >> > servers at the same time? >> >> How small of a window do you need? I'd probably just fire off a >> PURGE request to each server, but that'd give a few seconds where >> some servers would have a cache entry and others would not. > > It's not a question of a certain time-window. > More a question of transparency. > E.g. in Drupal, you can only enter a single IP. > > It's unreasonable to assume a user would be able to issue a purge on a > 2nd (or 3rd) server himself - as he may not know the IPs of these > servers anyway. > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From daniel.carrillo at gmail.com Tue Oct 1 08:48:22 2013 From: daniel.carrillo at gmail.com (Daniel Carrillo) Date: Tue, 1 Oct 2013 10:48:22 +0200 Subject: Is there a way to invalidate a cached item on multiple varnish servers at once? 
In-Reply-To: <20131001094024.6d4548f4@suse3> References: <316A162B-065D-4C24-9C74-730F3F7ABF67@ultra-secure.de> <1380573765-sup-3917@geror.local> <20131001094024.6d4548f4@suse3> Message-ID: 2013/10/1 Rainer Duffner > > Am Mon, 30 Sep 2013 13:44:34 -0700 > schrieb James Pearson : > > > Excerpts from Rainer Duffner's message of 2013-09-29 13:45:51 -0700: > > > Hi, > > > > > > suppose you've got a couple of load-balanced varnish-servers, is > > > there a way to have an item removed from the cache on multiple > > > servers at the same time? > > > > How small of a window do you need? I'd probably just fire off a > > PURGE request to each server, but that'd give a few seconds where > > some servers would have a cache entry and others would not. > > It's not a question of a certain time-window. > More a question of transparency. > E.g. in Drupal, you can only enter a single IP. I think you can set multiple servers by entering them separated by spaces in the text field. Did you try that? Best regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul at krischer.nl Tue Oct 1 10:23:17 2013 From: paul at krischer.nl (Paul Krischer) Date: Tue, 1 Oct 2013 12:23:17 +0200 Subject: Is there a way to invalidate a cached item on multiple varnish servers at once? In-Reply-To: <20131001094024.6d4548f4@suse3> References: <316A162B-065D-4C24-9C74-730F3F7ABF67@ultra-secure.de> <1380573765-sup-3917@geror.local> <20131001094024.6d4548f4@suse3> Message-ID: Hi James, I wrote the Purge module for Drupal ( http://drupal.org/project/purge ), and the "stable" 1.x branch allows you to enter multiple proxy addresses, separated by spaces, like: "http://10.0.1.10 http://10.0.1.11". This will fire off all purge requests to both of the servers as a single burst of http requests (using curl_multi), preserving your site's domain name. Admittedly, the current state of the module isn't that great, but I expect to work on it some more soon.
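[Editor's note: the fan-out approach Paul describes can be sketched outside Drupal as well. A minimal Python sketch, assuming the servers accept a PURGE method as discussed in this thread; the server list and "example.com" are placeholders, not values from these mails:]

```python
# Hedged sketch: fire one PURGE per Varnish server in parallel, keeping the
# site's own Host header so every cache sees the same object key.
# Server addresses and the Host value below are illustrative placeholders.
import concurrent.futures
import http.client

def purge_all(servers, path, host_header, timeout=2.0):
    """Send PURGE <path> to every (host, port); return {server: status or error}."""
    def purge_one(server):
        host, port = server
        try:
            conn = http.client.HTTPConnection(host, port, timeout=timeout)
            conn.request("PURGE", path, headers={"Host": host_header})
            status = conn.getresponse().status
            conn.close()
            return status
        except OSError as exc:
            return repr(exc)

    with concurrent.futures.ThreadPoolExecutor() as pool:
        return dict(zip(servers, pool.map(purge_one, servers)))
```

Each server still answers (or fails) independently, so, as James noted earlier in the thread, there remains a small window where one cache has purged and another has not.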
Paul Krischer "SqyD" On Tue, Oct 1, 2013 at 9:40 AM, Rainer Duffner wrote: > Am Mon, 30 Sep 2013 13:44:34 -0700 > schrieb James Pearson : > > > Excerpts from Rainer Duffner's message of 2013-09-29 13:45:51 -0700: > > > Hi, > > > > > > suppose you've got a couple of load-balanced varnish-servers, is > > > there a way to have an item removed from the cache on multiple > > > servers at the same time? > > > > How small of a window do you need? I'd probably just fire off a > > PURGE request to each server, but that'd give a few seconds where > > some servers would have a cache entry and others would not. > > It's not a question of a certain time-window. > More a question of transparency. > E.g. in Drupal, you can only enter a single IP. > > It's unreasonable to assume a user would be able to issue a purge on a > 2nd (or 3rd) server himself - as he may not know the IPs of these > servers anyway. > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Travis.Crowder at penton.com Tue Oct 1 17:29:19 2013 From: Travis.Crowder at penton.com (Crowder, Travis) Date: Tue, 1 Oct 2013 17:29:19 +0000 Subject: Sizing Varnish RAM after over allocation Message-ID: I am trying to properly size our Varnish cache size after it's been allocated way too much memory. The system is using about 10GB of RAM for Varnish. Here is what I am using to calculate the total needed: ~ 100MB = Varnish Master Process ~ 80MB = Varnish Shared Memory Log ~ 2KB * n objects = Object storage overhead ~ yKB * n objects = Object storage With this, I am estimating about 4.2GB for Varnish using a 32KB average object size. What I am having trouble with is the grace (set for 1 hour) and how that comes into play with storing objects. 
Are they stored for 1 hour after expiry in case a backend goes down and after the 1 hour available to be evicted? When exactly do objects free memory? I've found different information, but it seems the prevailing thought is only when a purge command occurs. I am under the impression the expired objects just sit in memory until the cache is full (just before nuking objects) and then LRU eviction occurs. Is this accurate? I mean, even though I need about 5GB for Varnish, since I allocate 16GB, will the entire 16GB be used by expired objects? Is there a way to tell exactly how much RAM is being used for VALID objects? TIA, Travis client_conn 7668521 58.59 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 15846658 121.06 Client requests received cache_hit 10757881 82.19 Cache hits cache_hitpass 32698 0.25 Cache hits for pass cache_miss 4011878 30.65 Cache misses backend_conn 4465314 34.11 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 0 0.00 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 0 0.00 Backend conn. recycles backend_retry 0 0.00 Backend conn. retry fetch_head 89 0.00 Fetch head fetch_length 2928665 22.37 Fetch with Length fetch_chunked 1416614 10.82 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 95492 0.73 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 8 0.00 Fetch failed fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 22537 0.17 Fetch no body (304) n_sess_mem 682 . N struct sess_mem n_sess 369 . N struct sess n_object 119236 . N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 119437 . N struct objectcore n_objecthead 127556 . N struct objecthead n_waitinglist 6994 . 
N struct waitinglist n_vbc 10 . N struct vbc n_wrk 400 . N worker threads n_wrk_create 400 0.00 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_lqueue 0 0.00 work request queue length n_wrk_queued 0 0.00 N queued work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 11 . N backends n_expired 3890039 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_moved 8128646 . N LRU moved objects losthdr 91 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 13494880 103.10 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 7668501 58.59 Total Sessions s_req 15846658 121.06 Total Requests s_pipe 1251 0.01 Total pipe s_pass 453681 3.47 Total pass s_fetch 4462324 34.09 Total fetch s_hdrbytes 7307416930 55826.98 Total header bytes s_bodybytes 435088023618 3323972.25 Total body bytes sess_closed 2239685 17.11 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 13878128 106.03 Session Linger sess_herd 12875482 98.37 Session herd shm_records 916560440 7002.31 SHM records shm_writes 61086120 466.68 SHM writes shm_flushes 14 0.00 SHM flushes due to overflow shm_cont 10334 0.08 SHM MTX contention shm_cycles 405 0.00 SHM cycles through buffer sms_nreq 313389 2.39 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 544043304 . SMS bytes allocated sms_bfree 544043304 . SMS bytes freed backend_req 4463489 34.10 Backend requests made n_vcl 3 0.00 N vcl total n_vcl_avail 3 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_ban 29993 . N total active bans n_ban_gone 29127 . 
N total gone bans n_ban_add 61384 0.47 N new bans added n_ban_retire 31391 0.24 N old bans deleted n_ban_obj_test 16197865 123.75 N objects tested n_ban_re_test 3174198250 24250.14 N regexps tested against n_ban_dups 33565 0.26 N duplicate bans removed hcb_nolock 14802058 113.08 HCB Lookups without lock hcb_lock 3831356 29.27 HCB Lookups with lock hcb_insert 3831344 29.27 HCB Inserts esi_errors 0 0.00 ESI parse errors (unlock) esi_warnings 0 0.00 ESI parse warnings (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 130894 1.00 Client uptime dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache vmods 0 . Loaded VMODs n_gzip 0 0.00 Gzip operations n_gunzip 3226586 24.65 Gunzip operations LCK.sms.creat 1 0.00 Created locks LCK.sms.destroy 0 0.00 Destroyed locks LCK.sms.locks 940167 7.18 Lock Operations LCK.sms.colls 0 0.00 Collisions LCK.smp.creat 0 0.00 Created locks LCK.smp.destroy 0 0.00 Destroyed locks LCK.smp.locks 0 0.00 Lock Operations LCK.smp.colls 0 0.00 Collisions LCK.sma.creat 2 0.00 Created locks LCK.sma.destroy 0 0.00 Destroyed locks LCK.sma.locks 21106576 161.25 Lock Operations LCK.sma.colls 0 0.00 Collisions LCK.smf.creat 0 0.00 Created locks LCK.smf.destroy 0 0.00 Destroyed locks LCK.smf.locks 0 0.00 Lock Operations LCK.smf.colls 0 0.00 Collisions LCK.hsl.creat 0 0.00 Created locks LCK.hsl.destroy 0 0.00 Destroyed locks LCK.hsl.locks 0 0.00 Lock Operations LCK.hsl.colls 0 0.00 Collisions LCK.hcb.creat 1 0.00 Created locks LCK.hcb.destroy 0 0.00 Destroyed locks LCK.hcb.locks 7544189 57.64 Lock Operations LCK.hcb.colls 0 0.00 Collisions LCK.hcl.creat 0 0.00 Created locks LCK.hcl.destroy 0 0.00 Destroyed locks LCK.hcl.locks 0 0.00 Lock Operations LCK.hcl.colls 0 0.00 Collisions LCK.vcl.creat 1 0.00 Created locks LCK.vcl.destroy 0 0.00 Destroyed locks LCK.vcl.locks 
17100 0.13 Lock Operations LCK.vcl.colls 0 0.00 Collisions LCK.stat.creat 1 0.00 Created locks LCK.stat.destroy 0 0.00 Destroyed locks LCK.stat.locks 7668837 58.59 Lock Operations LCK.stat.colls 0 0.00 Collisions LCK.sessmem.creat 1 0.00 Created locks LCK.sessmem.destroy 0 0.00 Destroyed locks LCK.sessmem.locks 7703601 58.85 Lock Operations LCK.sessmem.colls 0 0.00 Collisions LCK.wstat.creat 1 0.00 Created locks LCK.wstat.destroy 0 0.00 Destroyed locks LCK.wstat.locks 146130 1.12 Lock Operations LCK.wstat.colls 0 0.00 Collisions LCK.herder.creat 1 0.00 Created locks LCK.herder.destroy 0 0.00 Destroyed locks LCK.herder.locks 1 0.00 Lock Operations LCK.herder.colls 0 0.00 Collisions LCK.wq.creat 2 0.00 Created locks LCK.wq.destroy 0 0.00 Destroyed locks LCK.wq.locks 30510977 233.10 Lock Operations LCK.wq.colls 0 0.00 Collisions LCK.objhdr.creat 3831455 29.27 Created locks LCK.objhdr.destroy 3703972 28.30 Destroyed locks LCK.objhdr.locks 79026186 603.74 Lock Operations LCK.objhdr.colls 0 0.00 Collisions LCK.exp.creat 1 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 8038327 61.41 Lock Operations LCK.exp.colls 0 0.00 Collisions LCK.lru.creat 2 0.00 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 4017535 30.69 Lock Operations LCK.lru.colls 0 0.00 Collisions LCK.cli.creat 1 0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 1802 0.01 Lock Operations LCK.cli.colls 0 0.00 Collisions LCK.ban.creat 1 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 37966814 290.06 Lock Operations LCK.ban.colls 0 0.00 Collisions LCK.vbp.creat 1 0.00 Created locks LCK.vbp.destroy 0 0.00 Destroyed locks LCK.vbp.locks 286759 2.19 Lock Operations LCK.vbp.colls 0 0.00 Collisions LCK.vbe.creat 1 0.00 Created locks LCK.vbe.destroy 0 0.00 Destroyed locks LCK.vbe.locks 8930660 68.23 Lock Operations LCK.vbe.colls 0 0.00 Collisions LCK.backend.creat 11 0.00 Created locks LCK.backend.destroy 0 0.00 Destroyed locks 
LCK.backend.locks 13425358 102.57 Lock Operations LCK.backend.colls 0 0.00 Collisions SMA.s0.c_req 8642204 66.02 Allocator requests SMA.s0.c_fail 0 0.00 Allocator failures SMA.s0.c_bytes 413441801561 3158600.10 Bytes allocated SMA.s0.c_freed 405211048073 3095719.04 Bytes freed SMA.s0.g_alloc 238498 . Allocations outstanding SMA.s0.g_bytes 8230753488 . Bytes outstanding SMA.s0.g_space 8949115696 . Bytes available SMA.Transient.c_req 884306 6.76 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 31276155463 238942.62 Bytes allocated SMA.Transient.c_freed 31276023159 238941.61 Bytes freed SMA.Transient.g_alloc 2 . Allocations outstanding SMA.Transient.g_bytes 132304 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available VBE.web10(10.2.180.74,,80).vcls 2 . VCL references VBE.web10(10.2.180.74,,80).happy18446744073709551615 . Happy health probes VBE.web11(10.2.180.75,,80).vcls 3 . VCL references VBE.web11(10.2.180.75,,80).happy18446744073709551615 . Happy health probes VBE.web12(10.2.180.185,,80).vcls 3 . VCL references VBE.web12(10.2.180.185,,80).happy 18446744073709551615 . Happy health probes VBE.web13(10.2.180.186,,80).vcls 3 . VCL references VBE.web13(10.2.180.186,,80).happy 18446744073709551615 . Happy health probes VBE.web14(10.2.180.198,,80).vcls 3 . VCL references VBE.web14(10.2.180.198,,80).happy 18446744073709551615 . Happy health probes VBE.web15(10.2.180.199,,80).vcls 3 . VCL references VBE.web15(10.2.180.199,,80).happy 18446744073709551615 . Happy health probes VBE.web16(10.2.180.47,,80).vcls 3 . VCL references VBE.web16(10.2.180.47,,80).happy 18446744073709551615 . Happy health probes VBE.web17(10.2.180.48,,80).vcls 3 . VCL references VBE.web17(10.2.180.48,,80).happy 18446744073709551615 . Happy health probes VBE.web18(10.2.180.49,,80).vcls 3 . VCL references VBE.web18(10.2.180.49,,80).happy 18446744073709551615 . Happy health probes VBE.web19(10.2.180.50,,80).vcls 3 . 
VCL references VBE.web19(10.2.180.50,,80).happy 18446744073709551615 . Happy health probes VBE.web20(10.2.180.44,,80).vcls 2 . VCL references VBE.web20(10.2.180.44,,80).happy 18446744073709551615 . Happy health probes -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim.robinson at stanford.edu Wed Oct 2 02:47:06 2013 From: jim.robinson at stanford.edu (James A. Robinson) Date: Tue, 1 Oct 2013 19:47:06 -0700 Subject: client and fallback director questions. Message-ID: Hello, I've got a question about varnish's director capabilities, and I was hoping someone here could help me understand them. Right now we've got a varnish server acting as a load balancer for three backends that are supposed to contain identical content. In reality, small differences can exist as an update process visits each backend in turn, updating its content before proceeding to the next. To minimize the possibility of a request seeing an inconsistent view of the backend, we're grouping requests by path to route them to a particular backend. E.g., requests with path /a/ get sent to backend #1, requests with path /b/ get sent to backend #2. So we have backend configurations similar to this: backend b1 { .host = "b1.example.org"; .port = "80"; } backend b2 { .host = "b2.example.org"; .port = "80"; } backend b3 { .host = "b3.example.org"; .port = "80"; } And our vcl_recv section has something like this in it: if (req.url ~ "^/(a|b|c|d)[\.\/]") { set req.backend = b1; } else if (req.url ~ "^/(e|f|g|h)[\.\/]") { set req.backend = b2; } else { set req.backend = b3; } The problem with our setup is that we would like to introduce failover capabilities, but it's not clear how to introduce a director that routes all traffic for path // to backend # except in the situation where we detect that backend is down. The documentation discusses a client director, and it seems like it might be possible to use that to enable our fallback logic.
Basically, to set an identity of X based on the path prefix and consistently route to an available backend. But I'm unclear on whether or not the client director will perform any sort of fallback logic if a backend that it *was* routing to fails its probe? Say we define a .probe: backend b1 { .host = "b1.example.org"; .port = "80"; .probe = { .url = "/"; .timeout = 34 ms; .interval = 1s; .window = 10; .threshold = 8; } } backend b2 { .host = "b2.example.org"; .port = "80"; .probe = { .url = "/"; .timeout = 34 ms; .interval = 1s; .window = 10; .threshold = 8; } } backend b3 { .host = "b3.example.org"; .port = "80"; .probe = { .url = "/"; .timeout = 34 ms; .interval = 1s; .window = 10; .threshold = 8; } } and we set up a client director: director member client { { .backend = b1; .weight = 1; } { .backend = b2; .weight = 1; } { .backend = b3; .weight = 1; } } and then we set client.identity based on our regexes: if (req.url ~ "^/(a|b|c|d)[\.\/]") { set client.identity = "group-1"; } else if (req.url ~ "^/(e|f|g|h)[\.\/]") { set client.identity = "group-2"; } else { set client.identity = "group-3"; } If the backend pool shrinks from 3 to 2 because one of the backend servers fails its probe, will traffic that was getting routed to that backend get rerouted? Someone has also pointed out to me the fallback director, and how it might be possible for us to create several pools to indicate fail-over order. E.g., set b1 and b2 as one group, and b2 and b3 as another group, but I'm not grasping exactly how that would look. It appears as though I'd need to set up multiple directors and use restart to trigger fail over if a request to one director failed? Jim -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gabster at lelutin.ca Wed Oct 2 05:52:12 2013 From: gabster at lelutin.ca (Gabriel Filion) Date: Wed, 02 Oct 2013 01:52:12 -0400 Subject: Sizing Varnish RAM after over allocation In-Reply-To: References: Message-ID: <524BB48C.5040906@lelutin.ca> Hey there, On 01/10/13 01:29 PM, Crowder, Travis wrote: > I am trying to properly size our Varnish cache size after it's been > allocated way too much memory. The system is using about 10GB of RAM > for Varnish. I'm going to venture an answer that you might consider boring, but if your configuration is "too big", then it's quite easy to approximate how much memory you need by graphing the server's actual cache usage. the longer you've been graphing for, the better the estimate. plan to give a little more memory than the maximum consumption and, although not perfect, it "should" be good enough. -- Gabriel Filion -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 291 bytes Desc: OpenPGP digital signature URL: From apj at mutt.dk Wed Oct 2 06:12:54 2013 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Wed, 2 Oct 2013 08:12:54 +0200 Subject: Sizing Varnish RAM after over allocation In-Reply-To: References: Message-ID: <20131002061254.GY19694@nerd.dk> On Tue, Oct 01, 2013 at 05:29:19PM +0000, Crowder, Travis wrote: > > With this, I am estimating about 4.2GB for Varnish using a 32KB average > object size. What I am having trouble with is the grace (set for 1 hour) and > how that comes into play with storing objects. Are they stored for 1 hour > after expiry in case a backend goes down and after the 1 hour available to be > evicted? When exactly do objects free memory? I've found different > information, but it seems the prevailing thought is only when a purge command > occurs. Content will be evicted when ttl, grace (and keep) expires. You'll see an ExpKill log entry in varnishlog when it happens. 
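[Editor's note: Travis's sizing formula from earlier in the thread can be cross-checked numerically. This sketch assumes his figures (~100MB master process, ~80MB shared memory log, ~2KB per-object overhead, assumed 32KB average object size) and plugs in the n_object count (119236) from his varnishstat dump:]

```python
# Rough cross-check of the estimate quoted in the thread; all constants are
# the thread's stated assumptions, not measured values.
MASTER = 100 * 2**20          # ~100MB Varnish master process
SHMLOG = 80 * 2**20           # ~80MB shared memory log
PER_OBJ_OVERHEAD = 2 * 2**10  # ~2KB object storage overhead per object
AVG_OBJ_SIZE = 32 * 2**10     # assumed 32KB average object size

def estimate_bytes(n_objects):
    """Total estimated RAM for n cached objects, per the thread's formula."""
    return MASTER + SHMLOG + n_objects * (PER_OBJ_OVERHEAD + AVG_OBJ_SIZE)

print(round(estimate_bytes(119236) / 2**30, 1))  # -> 4.0 (GiB)
```

That lands near the ~4.2GB figure in the original mail; for comparison, the SMA.s0.g_bytes counter in the same dump shows about 8.2GB outstanding in storage, which is consistent with expired objects lingering until eviction.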
-- Andreas From dridi.boukelmoune at zenika.com Wed Oct 2 08:28:16 2013 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Wed, 2 Oct 2013 10:28:16 +0200 Subject: Sizing Varnish RAM after over allocation In-Reply-To: <20131002061254.GY19694@nerd.dk> References: <20131002061254.GY19694@nerd.dk> Message-ID: Hi Travis, I don't see the worker threads (pool and session) workspaces and stacks in your list. Defaults aren't that high but with a lot of threads it might become significant. Also, is your system really using 10GB (and not just virtually) ? Best Regards, Dridi On Wed, Oct 2, 2013 at 8:12 AM, Andreas Plesner Jacobsen wrote: > On Tue, Oct 01, 2013 at 05:29:19PM +0000, Crowder, Travis wrote: >> >> With this, I am estimating about 4.2GB for Varnish using a 32KB average >> object size. What I am having trouble with is the grace (set for 1 hour) and >> how that comes into play with storing objects. Are they stored for 1 hour >> after expiry in case a backend goes down and after the 1 hour available to be >> evicted? When exactly do objects free memory? I've found different >> information, but it seems the prevailing thought is only when a purge command >> occurs. > > Content will be evicted when ttl, grace (and keep) expires. You'll see an > ExpKill log entry in varnishlog when it happens. > > -- > Andreas > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From jim.robinson at stanford.edu Thu Oct 3 23:01:09 2013 From: jim.robinson at stanford.edu (James A. Robinson) Date: Thu, 3 Oct 2013 16:01:09 -0700 Subject: client and fallback director questions. In-Reply-To: References: Message-ID: On Tue, Oct 1, 2013 at 7:47 PM, James A. 
Robinson wrote: > if (req.url ~ "^/(a|b|c|d)[\.\/]") { > set client.identity = "group-1" > } > else if (req.url ~ "^/(e|f|g|h)[\.\/]") { > set client.identity = "group-2" > } > else > set client.identity = "group-3" > } > I forgot that I'd need to set set req.backend = member within each of those, but testing out the rest of the configuration as I outlined seems to work as I expected. If I simulate a failure by using .probe that will never succeed, I see traffic for one group get routed from its original server to a new one, and when the .probe is reset to recover I see the traffic move back. Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim.robinson at stanford.edu Thu Oct 3 23:54:58 2013 From: jim.robinson at stanford.edu (James A. Robinson) Date: Thu, 3 Oct 2013 16:54:58 -0700 Subject: client and fallback director questions. In-Reply-To: References: Message-ID: On Thu, Oct 3, 2013 at 4:01 PM, James A. Robinson wrote: > I forgot that I'd need to set > > set req.backend = member > > within each of those, but testing out the rest of the > configuration as I outlined seems to work as I > expected. > > If I simulate a failure by using .probe that will > never succeed, I see traffic for one group get > routed from its original server to a new one, > and when the .probe is reset to recover I see > the traffic move back. > Last note: Setting up multiple director pools and weighting a "primary" node higher than the others appears to allow for fine grained control over which host serves the content. If the primary dies then the remaining nodes in the pool take over. 
So for a given set of backends: backend b1 { .host = "b1.example.org"; .port = "80"; .probe = { .url = "/"; .timeout = 34 ms; .interval = 1s; .window = 10; .threshold = 8; } } backend b2 { .host = "b2.example.org"; .port = "80"; .probe = { .url = "/"; .timeout = 34 ms; .interval = 1s; .window = 10; .threshold = 8; } } backend b3 { .host = "b3.example.org"; .port = "80"; .probe = { .url = "/"; .timeout = 34 ms; .interval = 1s; .window = 10; .threshold = 8; } } define n pools, weighting a "primary" backend and having the remaining evenly weighted if the primary dies: director b1_pool client { { .backend = b1; .weight = 10; } { .backend = b2; .weight = 1; } { .backend = b3; .weight = 1; } } director b2_pool client { { .backend = b1; .weight = 1; } { .backend = b2; .weight = 10; } { .backend = b3; .weight = 1; } } director b3_pool client { { .backend = b1; .weight = 1; } { .backend = b2; .weight = 1; } { .backend = b3; .weight = 10; } } and in sub vcl_recv { ... } we set if (req.url ~ "^/(a|b|c|d)[\.\/]") { set req.backend = b1_pool; set client.identity = "b1"; } else if (req.url ~ "^/(e|f|g|h)[\.\/]") { set req.backend = b2_pool; set client.identity = "b2"; } else { set req.backend = b3_pool; set client.identity = "b3"; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From imanandshah at gmail.com Fri Oct 4 08:49:24 2013 From: imanandshah at gmail.com (Anand Shah) Date: Fri, 4 Oct 2013 08:49:24 +0000 Subject: http first read error Message-ID: Hi All, Recently I am seeing too many http first read errors for one of my varnish-driven domains. To me this looks like the backend does not answer inside the "first_byte" timeout. But the timeout mentioned is 60 seconds and it times out in one second. To troubleshoot it more I ran a tcpdump to understand whether the backend is not closing the connection, but unfortunately I never received a request at the backend for one such request which went unattended. Any clue on how this could be traced down further.
first_byte_timeout 60.000000 [s] connect_timeout = 1s; Regards, Anand -------------- next part -------------- An HTML attachment was scrubbed... URL: From nikhil at lanjewar.com Fri Oct 4 09:28:31 2013 From: nikhil at lanjewar.com (Nikhil Lanjewar) Date: Fri, 4 Oct 2013 14:58:31 +0530 Subject: [ESI] Best practice for caching user specific content Message-ID: Hi, I have been playing with Varnish for some time now. I will try to explain the scenario in a lengthy email that follows. However, here's the summary too. Please bear with me and take a look at the detailed scenario in case the summary doesn't make enough sense. *Summary*: I would like to pass some query parameters from a source URL to an ESI URL that was returned while fetching the source URL object itself. Is there a way to do this? *Detailed Explanation:* My web application is a typical consumer web application with the following features: 1. Consumer pages do not need Login/Signup. User identification takes place via a unique value passed as a query parameter. 2. Consumer pages show user's email in the header in case user identity was present in the URL's query parameters. These are absent in case the pages are being accessed without a user's identity. 3. Some links on the page are also appended with the user's identity in case it was present with the parent page. 4. Every other piece of information on these pages is the same for every user, except for the links (3) and email shown in the header (2). 5. Listing pages contain pagination controls. Paged listings contain page number as one of the query parameters in the URL. 6. The whole system is RESTful and does not rely on sessions or even Cookies as of now. Given the above, I have come up with the following Varnish setup: 1. vcl_recv - Extracts user identity and sets it on a custom request header such as X-UserIdentity. 
Cleans req.url by removing the user-specific query parameter and other front-end-related query parameters, such as Google Analytics query parameters used by GA's Javascript. 2. Backend always receives a request without the user's identity so that a single copy of the parent page can be cached. 3. Internal links on the parent page that require the user's identity are rendered via <esi:include> tags. Similarly, the email shown in the header is also replaced by an <esi:include> which calls an HTTP end-point capable of returning a user's email if an identity was provided. Problem: 1. ESI URLs can't contain any user-specific parameters. If they do, the first user hit will be cached and every subsequent request will result in reflecting the first user's identity. This is not desired at all. 2. Since ESI URLs do not contain any user-specific parameters, the onus of adding user context lies with Varnish (since Varnish vcl_recv is where the user's identity is removed). 3. I tried setting user identity in a request header for the parent URL (as explained above in 1. vcl_recv). This header is lost while fetching ESI URLs. 4. I tried setting a cookie in vcl_deliver. However, the cookie is inaccessible to the ESI URL fetch initiated during the same cycle. This is probably because the cookie is actually set *after* the first parent request completes and a response is sent back to the browser. I would want to make the cookie accessible *during* the parent request's ESI processing activity. -- Nikhil Lanjewar http://twitter.com/rhetonik -------------- next part -------------- An HTML attachment was scrubbed... URL: From shokri.md at gmail.com Fri Oct 4 19:56:04 2013 From: shokri.md at gmail.com (Mohammad Shokri) Date: Fri, 4 Oct 2013 23:26:04 +0330 Subject: New Varnish-Cache 3.0 Modular Configuration Templates Message-ID: Hi!
I just want to let you all varnishy folks know that I've created MIT-licensed modular configuration templates for varnish-cache 3.0; it's still a work in progress but we are already using it on production sites. Any kind of comments are welcome. Repo URL: https://github.com/slashsBin/nuCache P.S.: The documentation is not complete but feel free to contact me regarding anything related to it ______________________________________________________________________ *Mohammad* [/s?in] { ? http://slashsbin.com } cat /dev/infinity/mysteries | /s?in/cyberRoze.md -0 > /dev/null 2>&1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From PaytonK at email.chop.edu Tue Oct 8 17:05:02 2013 From: PaytonK at email.chop.edu (Payton, Karen J) Date: Tue, 8 Oct 2013 17:05:02 +0000 Subject: Forcing backend http requests to go thru varnish Message-ID: Hi all, I have some js in a site on my backend that does a jQuery get on an ajax page within the same site (so in theory all the work takes place in the same webserver). The code looks something like this: jQuery.get('/ajax?reqtype=providersearch&location='+mylocation+'&maxdistance='+maxdistance+'&cat=,'+mycat); Of course I'd like to repopulate the Varnish cache for a long list of zip codes, so I set up a wget that would hit this ajax page for any number of location variables. The only problem is that from within the site, it doesn't know about Varnish and doesn't go thru Varnish as the front end, so my prepopulated cache is worthless when my page hits that same ajax from within a web session. At least I think that's what's going on. I thought that forcing a fully qualified url might cause the jQuery.get to go outside the server and come thru the Varnish front door, so I coded this: jQuery.get('http://[my.website.address.com]/programs/roadmap/ajax/?reqtype=providersearch&location='+mylocation+'&maxdistance='+maxdistance+'&cat=,'+mycat); Still no dice.
Anyone have experience getting an ajaxed chunk of data cached effectively using Varnish? Thanks for your help, I'm new so if this is obvious please forgive my ignorance! Joy Payton Web Services X 67658 great service --> great discoveries Have I provided you with great service that powers your great discoveries? Tell me about it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From russ at eatnumber1.com Wed Oct 9 23:05:04 2013 From: russ at eatnumber1.com (Russell Harmon) Date: Wed, 9 Oct 2013 16:05:04 -0700 Subject: New VMod, libvmod-json Message-ID: Hi all, Excuse me if this is the incorrect place to announce this, but it seems the most appropriate place. I've been working on a varnish module which allows you to create json strings from within a vcl file. I'd like to announce the availability of my first revision of it. It can be found here: https://github.com/academia-edu/libvmod-json Thanks, Russell Harmon -------------- next part -------------- An HTML attachment was scrubbed... URL: From stef at scaleengine.com Tue Oct 15 19:57:59 2013 From: stef at scaleengine.com (Stefan Caunter) Date: Tue, 15 Oct 2013 15:57:59 -0400 Subject: adding a double quote into a regsub in vcl_fetch Message-ID: howdy (been a while) in fetch i am trying to set a Link: header to normalize a bunch of urls for the google i put this if (req.url ~ "^/blah") { set beresp.http.Link = regsub(req.url, "^/(blah).*", "; rel=\" + "canonical" + " "); return(deliver); } which gets me Link: ; rel=canonical this is close to what the rfc wants, but what i need is Link: ; rel="canonical" and i cannot get canonical in double quotes vcl complains that the ')' is unterminated, and other such unpleasantness does anyone know how to output the " character in a regsub? ---- Stefan Caunter E: stef at scaleengine.com Skype: stefan.caunter Toronto: +1 647 459 9475 +1 800 224 0192 From pprocacci at datapipe.com Tue Oct 15 21:00:13 2013 From: pprocacci at datapipe.com (Paul A. 
Procacci) Date: Tue, 15 Oct 2013 16:00:13 -0500 Subject: adding a double quote into a regsub in vcl_fetch In-Reply-To: References: Message-ID: <20131015210013.GD4386@nat.myhome> > if (req.url ~ "^/blah") { > set beresp.http.Link = regsub(req.url, "^/(blah).*", > "; rel=\" + "canonical" + " "); > return(deliver); > } > set beresp.http.Link = regsub(req.url, '^/blah.*', '; rel="canonical" '); This maybe? ________________________________ This message may contain confidential or privileged information. If you are not the intended recipient, please advise us immediately and delete this message. See http://www.datapipe.com/legal/email_disclaimer/ for further information on confidentiality and the risks of non-secure electronic communication. If you cannot access these links, please notify us by reply message and we will send the contents to you. Unless otherwise specified in a written agreement, this e-mail neither constitutes an agreement to conduct transactions by electronic means nor creates any legally binding contract or enforceable agreement. From stef at scaleengine.com Tue Oct 15 22:32:07 2013 From: stef at scaleengine.com (Stefan Caunter) Date: Tue, 15 Oct 2013 18:32:07 -0400 Subject: adding a double quote into a regsub in vcl_fetch In-Reply-To: <20131015210013.GD4386@nat.myhome> References: <20131015210013.GD4386@nat.myhome> Message-ID: On Tue, Oct 15, 2013 at 5:00 PM, Paul A. Procacci wrote: >> if (req.url ~ "^/blah") { >> set beresp.http.Link = regsub(req.url, "^/(blah).*", >> "; rel=\" + "canonical" + " "); >> return(deliver); >> } >> > > set beresp.http.Link = regsub(req.url, '^/blah.*', '; rel="canonical" '); > > This maybe? 
i tried single quotes initially, vcl seems to dislike single quoting in regsub, and backslash escaping is also ineffective when trying to print out a " character Syntax error at ('input' Line 149 Pos 56) set beresp.http.Link = regsub(req.url, '^/blah.*', '; rel="canonical" '); -------------------------------------------------------#------------------------------------------------------------- Running VCC-compiler failed, exit 1 if i put "^/blah.*", in double quotes, it shifts the error over to ('input' Line 149 Pos 71) set beresp.http.Link = regsub(req.url, "^/couples.*", '; rel="canonical" '); which is where the single quote tries to protect the double quote From pprocacci at datapipe.com Tue Oct 15 23:03:40 2013 From: pprocacci at datapipe.com (Paul A. Procacci) Date: Tue, 15 Oct 2013 18:03:40 -0500 Subject: adding a double quote into a regsub in vcl_fetch In-Reply-To: References: <20131015210013.GD4386@nat.myhome> Message-ID: <20131015230340.GF4386@nat.myhome> Yep, I see that now. Didn't realize the parser was that picky. &quot; Can you use the above in place of a double quote? I.E: set beresp.http.Link = regsub(req.url, "^/blah.*", "; rel=&quot;canonical&quot; "); On Tue, Oct 15, 2013 at 06:32:07PM -0400, Stefan Caunter wrote: > On Tue, Oct 15, 2013 at 5:00 PM, Paul A. Procacci > wrote: > >> if (req.url ~ "^/blah") { > >> set beresp.http.Link = regsub(req.url, "^/(blah).*", > >> "; rel=\" + "canonical" + " "); > >> return(deliver); > >> } > >> > > > > set beresp.http.Link = regsub(req.url, '^/blah.*', '; rel="canonical" '); > > > > This maybe?
> > i tried single quotes initially, vcl seems to dislike single quoting > in regsub, and backslash escaping is also ineffective when trying to > print out a " character > > Syntax error at > ('input' Line 149 Pos 56) > set beresp.http.Link = regsub(req.url, '^/blah.*', > '; rel="canonical" '); > -------------------------------------------------------#------------------------------------------------------------- > > Running VCC-compiler failed, exit 1 > > if i put "^/blah.*", in double quotes, it shifts the error over to > > ('input' Line 149 Pos 71) > set beresp.http.Link = regsub(req.url, "^/couples.*", > '; rel="canonical" '); > > which is where the single quote tries to protect the double quote From stef at scaleengine.com Tue Oct 15 23:16:45 2013 From: stef at scaleengine.com (Stefan Caunter) Date: Tue, 15 Oct 2013 19:16:45 -0400 Subject: adding a double quote into a regsub in vcl_fetch In-Reply-To: <20131015230340.GF4386@nat.myhome> References: <20131015210013.GD4386@nat.myhome> <20131015230340.GF4386@nat.myhome> Message-ID: On Tue, Oct 15, 2013 at 7:03 PM, Paul A. Procacci wrote: > Yep, > > I see that now. Didn't realize the parser was that picky. > > " > > Can you use the above in place of a double quote?
> > I.E: > set beresp.http.Link = regsub(req.url, "^/blah.*", "; rel=&quot;canonical&quot; "); > it faithfully prints that out Link: ; rel=&quot;canonical&quot; > > > On Tue, Oct 15, 2013 at 06:32:07PM -0400, Stefan Caunter wrote: > On Tue, Oct 15, 2013 at 5:00 PM, Paul A. Procacci > wrote: > >> if (req.url ~ "^/blah") { > >> set beresp.http.Link = regsub(req.url, "^/(blah).*", > >> "; rel=\" + "canonical" + " "); > >> return(deliver); > >> } > >> > > > > set beresp.http.Link = regsub(req.url, '^/blah.*', '; rel="canonical" '); > > > > This maybe? > > i tried single quotes initially, vcl seems to dislike single quoting > in regsub, and backslash escaping is also ineffective when trying to > print out a " character > > Syntax error at > ('input' Line 149 Pos 56) > set beresp.http.Link = regsub(req.url, '^/blah.*', > '; rel="canonical" '); > -------------------------------------------------------#------------------------------------------------------------- > > Running VCC-compiler failed, exit 1 > > if i put "^/blah.*", in double quotes, it shifts the error over to > > ('input' Line 149 Pos 71) > set beresp.http.Link = regsub(req.url, "^/couples.*", > '; rel="canonical" '); > > which is where the single quote tries to protect the double quote
From pprocacci at datapipe.com Tue Oct 15 23:44:54 2013 From: pprocacci at datapipe.com (Paul A. Procacci) Date: Tue, 15 Oct 2013 18:44:54 -0500 Subject: adding a double quote into a regsub in vcl_fetch In-Reply-To: References: <20131015210013.GD4386@nat.myhome> <20131015230340.GF4386@nat.myhome> Message-ID: <20131015234454.GG4386@nat.myhome> On Tue, Oct 15, 2013 at 07:16:45PM -0400, Stefan Caunter wrote: > On Tue, Oct 15, 2013 at 7:03 PM, Paul A. Procacci > wrote: > > Yep, > > > > I see that now. Didn't realize the parser was that picky. > > > > " > > > > Can you use the above in place of a double quote? > > > > I.E: > > set beresp.http.Link = regsub(req.url, "^/blah.*", "; rel="canonical" "); > > > > it faithfully prints that out > > Link: ; rel="canonical" Instead of the back and forth, how about from the documentation: (https://www.varnish-cache.org/trac/wiki/VCLSyntaxStrings) "Given that we have never been able to come up with a valid need for any escaped characters apart from that, we decided that %22 for " and %25 for %, was much less suffering than doubling all backslashes in regexp/regsub contexts."
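For reference, here is a minimal sketch (Varnish 3.0 VCL) of a rule along the lines being discussed, written with VCL's long-string form {"..."}, which lets literal double quotes appear in the replacement without any escaping. The host name, path prefix, and capture group are illustrative placeholders, not the poster's actual configuration:

```vcl
# Hypothetical example: emit a canonical Link header with a properly
# double-quoted rel parameter. Long strings ({"..."}) may contain
# literal double quotes, so no escape sequence is needed.
sub vcl_fetch {
    if (req.url ~ "^/blah") {
        set beresp.http.Link =
            regsub(req.url, "^/(blah).*",
                {"<http://www.example.com/\1>; rel="canonical""});
        return (deliver);
    }
}
```

For a request to /blah/anything this produces a header of the form Link: <http://www.example.com/blah>; rel="canonical", with the rel value quoted as the RFC wants.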
From stef at scaleengine.com Tue Oct 15 23:55:38 2013 From: stef at scaleengine.com (Stefan Caunter) Date: Tue, 15 Oct 2013 19:55:38 -0400 Subject: adding a double quote into a regsub in vcl_fetch In-Reply-To: <20131015234454.GG4386@nat.myhome> References: <20131015210013.GD4386@nat.myhome> <20131015230340.GF4386@nat.myhome> <20131015234454.GG4386@nat.myhome> Message-ID: On Tue, Oct 15, 2013 at 7:44 PM, Paul A. Procacci wrote: > On Tue, Oct 15, 2013 at 07:16:45PM -0400, Stefan Caunter wrote: >> On Tue, Oct 15, 2013 at 7:03 PM, Paul A. Procacci >> wrote: >> > Yep, >> > >> > I see that now. Didn't realize the parser was that picky. >> > >> > " >> > >> > Can you use the above in place of a double quote? >> > >> > I.E: >> > set beresp.http.Link = regsub(req.url, "^/blah.*", "; rel="canonical" "); >> > >> >> it faithfully prints that out >> >> Link: ; rel="canonical" > > Instead of the back and forth, how about from the documenation: > (https://www.varnish-cache.org/trac/wiki/VCLSyntaxStrings) > > "Given that we have never been able to come up with a valid need for any escaped characters apart from that, we decided that %22 for " and %25 for %, was much less suffering than doubling all backslashes in regexp/regsub contexts." > ah, from that page, the long string syntax works, i.e. {" payload including "double quotes" "} set beresp.http.Link = regsub(req.url, "^/blah.*", {"; rel="canonical""}); thanks for getting me there S From apj at mutt.dk Wed Oct 16 07:13:03 2013 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Wed, 16 Oct 2013 09:13:03 +0200 Subject: adding a double quote into a regsub in vcl_fetch In-Reply-To: <20131015234454.GG4386@nat.myhome> References: <20131015210013.GD4386@nat.myhome> <20131015230340.GF4386@nat.myhome> <20131015234454.GG4386@nat.myhome> Message-ID: <20131016071302.GA19694@nerd.dk> On Tue, Oct 15, 2013 at 06:44:54PM -0500, Paul A. 
Procacci wrote: > > Instead of the back and forth, how about from the documentation: > (https://www.varnish-cache.org/trac/wiki/VCLSyntaxStrings) That's not the documentation. That's the user-editable wiki. The real docs are here: https://www.varnish-cache.org/docs/3.0/reference/vcl.html#syntax > "Given that we have never been able to come up with a valid need for any escaped characters apart from that, we decided that %22 for " and %25 for %, was much less suffering than doubling all backslashes in regexp/regsub contexts." %-escapes went away with 3.0. You can only use long strings now, i.e. {"..."} I've updated the wiki. -- Andreas From beradrian at yahoo.com Wed Oct 16 09:54:12 2013 From: beradrian at yahoo.com (Adrian Ber) Date: Wed, 16 Oct 2013 02:54:12 -0700 (PDT) Subject: Varnish + Tomcat vs Apache + mod_cache + mod_jk +Tomcat Message-ID: <1381917252.75605.YahooMailNeo@web125802.mail.ne1.yahoo.com> Does anyone have some comparison data in terms of performance for using in front of Tomcat either Varnish or Apache with mod_jk? I know that the AJP connector is supposed to be faster than HTTP, but I was thinking that Varnish, which is lighter and highly optimized, could perform better in combination. There is also the distinction between static resources (which I think will be served faster by Varnish than by Apache, even with mod_cache) and dynamic pages. I asked this question on ServerFault too: http://serverfault.com/questions/545793/varnish-tomcat-vs-apache-mod-jk-tomcat Which configuration would be advisable: Varnish + Tomcat or Apache + mod_cache + mod_jk + Tomcat? Thanks, Adrian Ber. -------------- next part -------------- An HTML attachment was scrubbed...
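To make the Varnish + Tomcat option concrete, a minimal sketch of the Varnish side (VCL 3.0 syntax): Varnish speaks plain HTTP to Tomcat's HTTP connector, with no AJP in the path. The host, port, and timeout values below are illustrative assumptions, not recommendations:

```vcl
# Hypothetical setup: Varnish in front of a local Tomcat HTTP
# connector (default port 8080); adjust host/port/timeouts to taste.
backend tomcat {
    .host = "127.0.0.1";
    .port = "8080";
    .connect_timeout = 1s;
    .first_byte_timeout = 60s;
}

sub vcl_recv {
    set req.backend = tomcat;
}
```

Static resources are then cached by Varnish on first fetch, while dynamic pages pass through to Tomcat according to the usual vcl_recv/vcl_fetch rules.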
URL: From dridi.boukelmoune at zenika.com Wed Oct 16 10:53:49 2013 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Wed, 16 Oct 2013 11:53:49 +0100 Subject: Varnish + Tomcat vs Apache + mod_cache + mod_jk +Tomcat In-Reply-To: <1381917252.75605.YahooMailNeo@web125802.mail.ne1.yahoo.com> References: <1381917252.75605.YahooMailNeo@web125802.mail.ne1.yahoo.com> Message-ID: Hi, I have no data to show, but since I use all three tools, I can give you my two cents :) Varnish + Tomcat is definitely the simplest architecture, because it does not involve AJP. I would also consider changing the default (blocking) http connector on the Tomcat side and measuring performance improvements (non blocking, native...). I'm also a big fan of the VCL which feels a lot more natural than httpd's configuration to me. As I trust Varnish not to be the bottleneck, I am not keen on adding a new indirection (httpd) for a binary protocol that is not relevant to me anymore. I believe (still no data) having a 10Gb/s connection between Varnish and Tomcat (I assume they're not sitting too far from each other) outperforms the compactness of AJP (serialization involved). Best Regards, Dridi On Wed, Oct 16, 2013 at 10:54 AM, Adrian Ber wrote: > Does anyone have some comparison data in terms of performance for using in > front of Tomcat either Varnish or Apache with mod_jk. I know that AJ > connector suppose to be faster than HTTP, but I was thinking that in > combination Varnish which is lighter and highly optimized could perform > better. There is also the discussion between static resources (which I think > will perform faster with Varnish than Apache, even with mod_cache) and > dynamic pages. > I asked this question on ServerFault too > http://serverfault.com/questions/545793/varnish-tomcat-vs-apache-mod-jk-tomcat > Which configuration would be advisable Varnish + Tomcat or Apache + > mod_cache + mod_jk +Tomcat? > > Thanks, > Adrian Ber. 
> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From beradrian at yahoo.com Wed Oct 16 11:06:24 2013 From: beradrian at yahoo.com (Adrian Ber) Date: Wed, 16 Oct 2013 04:06:24 -0700 (PDT) Subject: Varnish + Tomcat vs Apache + mod_cache + mod_jk +Tomcat In-Reply-To: References: <1381917252.75605.YahooMailNeo@web125802.mail.ne1.yahoo.com> Message-ID: <1381921584.31914.YahooMailNeo@web125801.mail.ne1.yahoo.com> Definitely VCL is easier to use than Apache config. And Varnish/Apache and Tomcat will be sitting on the same (cloud) machine. Then practically I would be interested in a comparison of the overhead added by Apache vs Varnish in terms of non-cached requests. Thanks, Adrian. On Wednesday, 16 October 2013, 11:54, Dridi Boukelmoune wrote: Hi, > >I have no data to show, but since I use all three tools, I can give >you my two cents :) > >Varnish + Tomcat is definitely the simplest architecture, because it >does not involve AJP. I would also consider changing the default >(blocking) http connector on the Tomcat side and measuring performance >improvements (non blocking, native...). I'm also a big fan of the VCL >which feels a lot more natural than httpd's configuration to me. > >As I trust Varnish not to be the bottleneck, I am not keen on adding a >new indirection (httpd) for a binary protocol that is not relevant to >me anymore. I believe (still no data) having a 10Gb/s connection >between Varnish and Tomcat (I assume they're not sitting too far from >each other) outperforms the compactness of AJP (serialization >involved). > >Best Regards, >Dridi > > >On Wed, Oct 16, 2013 at 10:54 AM, Adrian Ber wrote: >> Does anyone have some comparison data in terms of performance for using in >> front of Tomcat either Varnish or Apache with mod_jk. 
I know that AJ >> connector suppose to be faster than HTTP, but I was thinking that in >> combination Varnish which is lighter and highly optimized could perform >> better. There is also the discussion between static resources (which I think >> will perform faster with Varnish than Apache, even with mod_cache) and >> dynamic pages. >> I asked this question on ServerFault too >> http://serverfault.com/questions/545793/varnish-tomcat-vs-apache-mod-jk-tomcat >> Which configuration would be advisable Varnish + Tomcat or Apache + >> mod_cache + mod_jk +Tomcat? >> >> Thanks, >> Adrian Ber. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jennings at internetseer.com Wed Oct 16 15:26:12 2013 From: jennings at internetseer.com (Jeff Jennings) Date: Wed, 16 Oct 2013 11:26:12 -0400 Subject: unsubscribe References: <1381917252.75605.YahooMailNeo@web125802.mail.ne1.yahoo.com> <1381921584.31914.YahooMailNeo@web125801.mail.ne1.yahoo.com> Message-ID: <1EB03B824DDD9C4D84B976527234AEFC011C4B03@mymail.minddrivers.com> unsubscribe -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruben at varnish-software.com Wed Oct 16 20:44:11 2013 From: ruben at varnish-software.com (Rubén Romero) Date: Wed, 16 Oct 2013 22:44:11 +0200 Subject: Welcome to #VUG8 week in Berlin, Germany - November 25th to 29th, 2013 - http://varnish.org/VUG8 Message-ID: Hallo, Alles gut? Schön. Das ist gut. ** Please help us to spread the word to all Varnish & Web Performance enthusiasts you know ** On behalf of the Varnish Team, it is my pleasure to announce to you the next Varnish User Group Meeting, the eighth of its kind, which is to be held in Berlin, Germany.
#VUG8 is possible thanks to the sponsorship of Axel Springer, Uplex and Varnish Software. *** Short story: Registration is open for the Varnish User Day @ Axel Springer HQ on Thursday, November 28th, 2013. Get your free ticket now: < http://vug8.eventbrite.com> *** *** Long story *** * Varnish User Day is open for everyone and it will be at Axel Springer on Thursday, November 28th, 2013. If you want to share your Varnish & VCL experience with the crowd please let me know; these meetings are, after all, about how YOU use Varnish ;-) For agenda and general information [1] * Varnish Developer meeting: will happen on Wednesday, November 27th, 2013. This event is invite only. See and use the wiki for coordinating details [2] * Hacking Day / Community Roundtable: this is new for #VUG8. It's open for everyone and takes place on Friday, November 29th, 2013. If you want to participate keep an eye on the VUG8 page [3] for updates or reply to this email. Also, we need a venue for this one. If you know someone who can help us with a meeting room for that day, please let me know. * For the first time in Berlin: 2-day Varnish Administration Certification Class starting on Monday, November 25th, 2013. More information, pricing and registration [4] We are very much looking forward to meeting the Varnish community in Berlin and spending time with you while making Varnish Cache better. And before I let you go: do not forget to let your colleagues and friends know about #VUG8. Wir sehen uns in Berlin! Links: [1] If you register for the User Day, I reserve the right to ask you if you can hold a presentation :-) [2] [3] [4] Mit freundlichen Grüßen, -- *Rubén Romero* Global Sales Executive & Community Liaison | Varnish Software AS Cell: +47 95964088 / Office: +47 21989260 Skype & Twitter: ruben_varnish We Make Websites Fly! Winner of the 2013 Red Herring Top 100 Europe Awards -------------- next part -------------- An HTML attachment was scrubbed...
URL: From adam.arrowood at oit.gatech.edu Thu Oct 17 14:10:46 2013 From: adam.arrowood at oit.gatech.edu (Adam Arrowood) Date: Thu, 17 Oct 2013 10:10:46 -0400 Subject: "straight insufficient bytes" 503 error Message-ID: Hi, I am having a problem with a varnish cache installation that I cannot find the answer to... I'm getting 503 (Service Unavailable) errors when trying to view an image served from a drupal 7 backend. Here are details (ips, names are changed for privacy): varnish cache -> foo.gatech.edu (192.168.0.1), RHEL6, varnish-3.0.4-4.el6.art.x86_64 from the atomicorp repo drupal server -> bar.gatech.edu (192.168.0.2), RHEL6, apache 2.2, drupal 7, defined in vcl as backend 'bar', has 'foo.gatech.edu' as a ServerAlias web client (Chrome) -> 10.10.10.1 Users can access the site via http://foo.gatech.edu/ , which goes through the varnish cache, and works well, generally. The admins of the site can access the site via http://bar.gatech.edu/ , which does not go through the cache. When I use my web browser to try and fetch a JPEG image from http://foo.gatech.edu/home/images/50384 , I get a 503 error from varnish. This is odd for several reasons: - I can see a successful (HTTP 200 result) request for the image both in the apache logs of bar.gatech.edu, and in the back-end (b) entries from the varnishlog on foo.gatech.edu - It happens for any default-sized image being served from the drupal application on bar.gatech.edu, so it's not specific to one image. Images in the /home/images/ directory are served via the "Adaptive Image Styles" varnish module - I can always successfully fetch the image via http://bar.gatech.edu/home/images/50384 , bypassing the cache - It does *not* happen for smaller versions of the same image. So, http://foo.gatech.edu/home/images/50384/thumbnail_scaled returns a smaller jpeg version of the image, through the cache, successfully every time.
Using HTTPFox (similar to HTTP Live Headers) on Firefox shows nothing significantly different between fetching the original image and the thumbnail version of the image from bar.gatech.edu directly Things I've tried: - restarting varnish, rebooting the box, etc. - using both file and malloc storage options in /etc/sysconfig/varnish - upping the backend timeouts to very large values - eliminating any vhosts/hostname issues by changing my /etc/hosts so that foo.gatech.edu resolves to 192.168.0.2, and then verifying http://foo.gatech.edu/home/images/50384 returns a good image to my browser Here is the varnishlog output from a failed request... The key line is "11 FetchError c straight insufficient bytes":

-----------------------------------
13 BackendOpen b bar 192.168.0.1 44950 192.168.0.2 80
13 TxRequest b GET
13 TxURL b /home/images/50384
13 TxProtocol b HTTP/1.1
13 TxHeader b Host: foo.gatech.edu
13 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
13 TxHeader b Pragma: no-cache
13 TxHeader b User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36
13 TxHeader b DNT: 1
13 TxHeader b Accept-Language: en-US,en;q=0.8
13 TxHeader b X-Forwarded-For: 10.10.10.1
13 TxHeader b X-Varnish: 1892649961
13 TxHeader b Accept-Encoding: gzip
13 RxProtocol b HTTP/1.1
13 RxStatus b 200
13 RxResponse b OK
13 RxHeader b Date: Wed, 16 Oct 2013 20:17:23 GMT
13 RxHeader b Server: Apache/2.2.3 (Red Hat)
13 RxHeader b X-Powered-By: PHP/5.3.14 ZendServer/5.0
13 RxHeader b X-Drupal-Cache: MISS
13 RxHeader b Expires: Sun, 19 Nov 1978 05:00:00 GMT
13 RxHeader b Last-Modified: Wed, 16 Oct 2013 20:17:23 +0000
13 RxHeader b Cache-Control: public, max-age=21600
13 RxHeader b ETag: "1381954643-1"
13 RxHeader b Content-Length: 55293
13 RxHeader b Content-Language: en
13 RxHeader b Vary: Accept-Encoding
13 RxHeader b Content-Encoding: gzip
13 RxHeader b Connection: close
13 RxHeader b Content-Type: image/jpeg
13 Fetch_Body b 4(length) cls -1 mklen 1
13 BackendClose b bar
11 SessionOpen c 10.10.10.1 64106 :80
11 ReqStart c 10.10.10.1 64106 1892649961
11 RxRequest c GET
11 RxURL c /home/images/50384
11 RxProtocol c HTTP/1.1
11 RxHeader c Host: foo.gatech.edu
11 RxHeader c Connection: keep-alive
11 RxHeader c Cache-Control: no-cache
11 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
11 RxHeader c Pragma: no-cache
11 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36
11 RxHeader c DNT: 1
11 RxHeader c Accept-Encoding: gzip,deflate,sdch
11 RxHeader c Accept-Language: en-US,en;q=0.8
11 VCL_call c recv
11 VCL_return c lookup
11 VCL_call c hash
11 Hash c /home/images/50384
11 Hash c foo.gatech.edu
11 VCL_return c hash
11 VCL_call c miss fetch
11 Backend c 13 bar bar
11 TTL c 1892649961 RFC 21600 -1 -1 1381954643 0 1381954643 280299600 21600
11 VCL_call c fetch deliver
11 ObjProtocol c HTTP/1.1
11 ObjResponse c OK
11 ObjHeader c Date: Wed, 16 Oct 2013 20:17:23 GMT
11 ObjHeader c Server: Apache/2.2.3 (Red Hat)
11 ObjHeader c X-Powered-By: PHP/5.3.14 ZendServer/5.0
11 ObjHeader c X-Drupal-Cache: MISS
11 ObjHeader c Expires: Sun, 19 Nov 1978 05:00:00 GMT
11 ObjHeader c Last-Modified: Wed, 16 Oct 2013 20:17:23 +0000
11 ObjHeader c Cache-Control: public, max-age=21600
11 ObjHeader c ETag: "1381954643-1"
11 ObjHeader c Content-Language: en
11 ObjHeader c Vary: Accept-Encoding
11 ObjHeader c Content-Encoding: gzip
11 ObjHeader c Content-Type: image/jpeg
11 ObjHeader c X-Varnish-IP: 192.168.0.1
11 FetchError c straight insufficient bytes
11 Gzip c u F - 29537 55293 80 80 236227
11 VCL_call c error deliver
11 VCL_call c deliver deliver
11 TxProtocol c HTTP/1.1
11 TxStatus c 503
11 TxResponse c Service Unavailable
11 TxHeader c Server: Varnish
11 TxHeader c Content-Type: text/html; charset=utf-8
11 TxHeader c Content-Length: 829
11 TxHeader c Accept-Ranges: bytes
11 TxHeader c Date: Wed, 16 Oct 2013 20:17:23 GMT
11 TxHeader c X-Varnish: 1892649961
11 TxHeader c Age: 0
11 TxHeader c Via: 1.1 varnish
11 TxHeader c Connection: close
11 TxHeader c X-Cache: MISS
11 Length c 829
11 ReqEnd c 1892649961 1381954643.203719378 1381954643.275374174 0.000027418 0.071626902 0.000027895
11 SessionClose c error
11 StatSess c 10.10.10.1 64106 0 1 1 0 0 0 256 829
-----------------------------------

As I mentioned, I've Googled around for this error "straight insufficient bytes" and didn't find any solution. I can find the line that returns that error in the varnish cache source code, but without knowing the internals of the src, it's Greek to me. Anyone got any guesses/suggestions? Any help will be greatly appreciated. Thanks, Adam A -- :: Adam Arrowood :: adam.arrowood at oit.gatech.edu :: Office of Information Technology/A&I :: Georgia Institute of Technology, Atlanta, GA USA -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry.magnien at sfr.com Thu Oct 17 15:16:56 2013 From: thierry.magnien at sfr.com (MAGNIEN, Thierry) Date: Thu, 17 Oct 2013 15:16:56 +0000 Subject: "straight insufficient bytes" 503 error In-Reply-To: References: Message-ID: <5D103CE839D50E4CBC62C9FD7B83287C274A4AFD@EXCN015.encara.local.ads> Hi, From the source code it seems that there are two main reasons for this error to be raised: - either storage could not be obtained, which would be strange for a 55k object - or the content-length header and the real content length do not match: varnish expects more data than what is given on the network connection. If you have a possibility to make a network dump to check whether Drupal sends as much as it tells in content-length, this would probably help. Hope this helps, Thierry De : varnish-misc-bounces+thierry.magnien=sfr.com at varnish-cache.org [mailto:varnish-misc-bounces+thierry.magnien=sfr.com at varnish-cache.org] De la part de Adam Arrowood Envoyé
: jeudi 17 octobre 2013 16:11 ? : varnish-misc at varnish-cache.org Objet : "straight insufficient bytes" 503 error Hi, I am having a problem with a varnish cache installation that I cannot find the answer to... I'm getting 503 (Service Unavailable) errors when trying to view an image served from a drupal 7 backend. Here are details (ips, names are changed for privacy): varnish cache -> foo.gatech.edu (192.168.0.1), RHEL6, varnish-3.0.4-4.el6.art.x86_64 from the atomicorp repo drupal server -> bar.gatech.edu (192.168.0.2), RHEL6, apache 2.2, drupal 7, defined in vcl as backend 'bar', has 'foo.gatech.edu' as a ServerAlias web client (Chrome) -> 10.10.10.1 Users can access the site via http://foo.gatech.edu/ , which goes through the varnish cache, and works well, generally. The admins of the site can access the site via http://bar.gatech.edu/ , which does not go through the cache. When I use my web browser to try an fetch a JPEG image from http://foo.gatech.edu/home/images/50384 , I get a 503 error from varnish. This is odd for several reasons: - I can see a successful (HTTP 200 result) request for the image both in apache logs of bar.gatech.edu, and in the back-end (b) entries from the varnishlog on foo.gatech.edu - It happens for any default-sized being served from the drupal application on bar.gatech.edu, so it's not specific to one image. Images in the /home/images/ directory are served via the "Adaptive Image Styles" varnish module - I can always successfully fetch the image via http://bar.gatech.edu/home/images/50384 , bypassing the cache - It does *not* happen for smaller versions of the same image. So, http://foo.gatech.edu/home/images/50384/thumbnail_scaled returns a smaller jpeg version of the image, through the cache, successfully every time. 
Using HTTPFox (similar to HTTP Live Headers) on Firefox shows nothing significantly different between fetching the original image and the thumbnail version of the image from bar.gatech.edu directly.

Things I've tried:
- restarting varnish, rebooting the box, etc.
- using both file and malloc storage options in /etc/sysconfig/varnish
- upping the backend-timeouts to very large values
- eliminating any vhosts/hostname issues by changing my /etc/hosts so that foo.gatech.edu resolves to 192.168.0.2, and then verifying http://foo.gatech.edu/home/images/50384 returns a good image to my browser

Here is the varnishlog output from a failed request... The key line is "11 FetchError c straight insufficient bytes":

-----------------------------------
13 BackendOpen b bar 192.168.0.1 44950 192.168.0.2 80
13 TxRequest b GET
13 TxURL b /home/images/50384
13 TxProtocol b HTTP/1.1
13 TxHeader b Host: foo.gatech.edu
13 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
13 TxHeader b Pragma: no-cache
13 TxHeader b User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36
13 TxHeader b DNT: 1
13 TxHeader b Accept-Language: en-US,en;q=0.8
13 TxHeader b X-Forwarded-For: 10.10.10.1
13 TxHeader b X-Varnish: 1892649961
13 TxHeader b Accept-Encoding: gzip
13 RxProtocol b HTTP/1.1
13 RxStatus b 200
13 RxResponse b OK
13 RxHeader b Date: Wed, 16 Oct 2013 20:17:23 GMT
13 RxHeader b Server: Apache/2.2.3 (Red Hat)
13 RxHeader b X-Powered-By: PHP/5.3.14 ZendServer/5.0
13 RxHeader b X-Drupal-Cache: MISS
13 RxHeader b Expires: Sun, 19 Nov 1978 05:00:00 GMT
13 RxHeader b Last-Modified: Wed, 16 Oct 2013 20:17:23 +0000
13 RxHeader b Cache-Control: public, max-age=21600
13 RxHeader b ETag: "1381954643-1"
13 RxHeader b Content-Length: 55293
13 RxHeader b Content-Language: en
13 RxHeader b Vary: Accept-Encoding
13 RxHeader b Content-Encoding: gzip
13 RxHeader b Connection: close
13 RxHeader b Content-Type: image/jpeg
13 Fetch_Body b 4(length) cls -1 mklen 1
13 BackendClose b bar
11 SessionOpen c 10.10.10.1 64106 :80
11 ReqStart c 10.10.10.1 64106 1892649961
11 RxRequest c GET
11 RxURL c /home/images/50384
11 RxProtocol c HTTP/1.1
11 RxHeader c Host: foo.gatech.edu
11 RxHeader c Connection: keep-alive
11 RxHeader c Cache-Control: no-cache
11 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
11 RxHeader c Pragma: no-cache
11 RxHeader c User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36
11 RxHeader c DNT: 1
11 RxHeader c Accept-Encoding: gzip,deflate,sdch
11 RxHeader c Accept-Language: en-US,en;q=0.8
11 VCL_call c recv
11 VCL_return c lookup
11 VCL_call c hash
11 Hash c /home/images/50384
11 Hash c foo.gatech.edu
11 VCL_return c hash
11 VCL_call c miss fetch
11 Backend c 13 bar bar
11 TTL c 1892649961 RFC 21600 -1 -1 1381954643 0 1381954643 280299600 21600
11 VCL_call c fetch deliver
11 ObjProtocol c HTTP/1.1
11 ObjResponse c OK
11 ObjHeader c Date: Wed, 16 Oct 2013 20:17:23 GMT
11 ObjHeader c Server: Apache/2.2.3 (Red Hat)
11 ObjHeader c X-Powered-By: PHP/5.3.14 ZendServer/5.0
11 ObjHeader c X-Drupal-Cache: MISS
11 ObjHeader c Expires: Sun, 19 Nov 1978 05:00:00 GMT
11 ObjHeader c Last-Modified: Wed, 16 Oct 2013 20:17:23 +0000
11 ObjHeader c Cache-Control: public, max-age=21600
11 ObjHeader c ETag: "1381954643-1"
11 ObjHeader c Content-Language: en
11 ObjHeader c Vary: Accept-Encoding
11 ObjHeader c Content-Encoding: gzip
11 ObjHeader c Content-Type: image/jpeg
11 ObjHeader c X-Varnish-IP: 192.168.0.1
11 FetchError c straight insufficient bytes
11 Gzip c u F - 29537 55293 80 80 236227
11 VCL_call c error deliver
11 VCL_call c deliver deliver
11 TxProtocol c HTTP/1.1
11 TxStatus c 503
11 TxResponse c Service Unavailable
11 TxHeader c Server: Varnish
11 TxHeader c Content-Type: text/html; charset=utf-8
11 TxHeader c Content-Length: 829
11 TxHeader c Accept-Ranges: bytes
11 TxHeader c Date: Wed, 16 Oct 2013 20:17:23 GMT
11 TxHeader c X-Varnish: 1892649961
11 TxHeader c Age: 0
11 TxHeader c Via: 1.1 varnish
11 TxHeader c Connection: close
11 TxHeader c X-Cache: MISS
11 Length c 829
11 ReqEnd c 1892649961 1381954643.203719378 1381954643.275374174 0.000027418 0.071626902 0.000027895
11 SessionClose c error
11 StatSess c 10.10.10.1 64106 0 1 1 0 0 0 256 829
-----------------------------------

As I mentioned, I've Googled around for this error "straight insufficient bytes" and didn't find any solution. I can find the line that returns that error in the varnish cache source code, but without knowing the internals of the src, it's Greek to me.

Anyone got any guesses/suggestions? Any help will be greatly appreciated.

Thanks,
Adam A

--
:: Adam Arrowood
:: adam.arrowood at oit.gatech.edu
:: Office of Information Technology/A&I
:: Georgia Institute of Technology, Atlanta, GA USA
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From beradrian at yahoo.com  Tue Oct 22 17:31:58 2013
From: beradrian at yahoo.com (Adrian Ber)
Date: Tue, 22 Oct 2013 10:31:58 -0700 (PDT)
Subject: Varnish + Tomcat vs Apache + mod_cache + mod_jk +Tomcat
In-Reply-To: <1381921584.31914.YahooMailNeo@web125801.mail.ne1.yahoo.com>
References: <1381917252.75605.YahooMailNeo@web125802.mail.ne1.yahoo.com> <1381921584.31914.YahooMailNeo@web125801.mail.ne1.yahoo.com>
Message-ID: <1382463118.79467.YahooMailNeo@web125801.mail.ne1.yahoo.com>

Hi Dridi,

If you post your answer on ServerFault, I want to accept it.

Thanks,
Adrian.

On Wednesday, 16 October 2013, 12:06, Adrian Ber wrote:

Definitely VCL is easier to use than Apache config. And Varnish/Apache and Tomcat will be sitting on the same (cloud) machine.
> > > > >On Wednesday, 16 October 2013, 11:54, Dridi Boukelmoune wrote: > >Hi, >> >>I have no data to show, but since I use all three tools, I can give >>you my two cents :) >> >>Varnish + Tomcat is definitely the simplest architecture, because it >>does not involve AJP. I would also consider changing the default >>(blocking) http connector on the Tomcat side and measuring performance >>improvements (non blocking, native...). I'm also a big fan of the VCL >>which feels a lot more natural than httpd's configuration to me. >> >>As I trust Varnish not to be the bottleneck, I am not keen on adding a >>new indirection (httpd) for a binary protocol that is not relevant to >>me anymore. I believe (still no data) having a 10Gb/s connection >>between Varnish and Tomcat (I assume they're not sitting too far from >>each other) outperforms the compactness of AJP (serialization >>involved). >> >>Best Regards, >>Dridi >> >> >>On Wed, Oct 16, 2013 at 10:54 AM, Adrian Ber wrote: >>> Does anyone have some comparison data in terms of performance for using in >>> front of Tomcat either Varnish or Apache with mod_jk. I know that AJ >>> connector suppose to be faster than HTTP, but I was thinking that in >>> combination Varnish which is lighter and highly optimized could perform >>> better. There is also the discussion between static resources (which I think >>> will perform faster with Varnish than Apache, even with mod_cache) and >>> dynamic pages. >>> I asked this question on ServerFault too >>> http://serverfault.com/questions/545793/varnish-tomcat-vs-apache-mod-jk-tomcat >>> Which configuration would be advisable Varnish + Tomcat or Apache + >>> mod_cache + mod_jk +Tomcat? >>> >>> Thanks, >>> Adrian Ber. 
>>> >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi.boukelmoune at zenika.com Tue Oct 22 18:10:17 2013 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Tue, 22 Oct 2013 19:10:17 +0100 Subject: Varnish + Tomcat vs Apache + mod_cache + mod_jk +Tomcat In-Reply-To: <1382463118.79467.YahooMailNeo@web125801.mail.ne1.yahoo.com> References: <1381917252.75605.YahooMailNeo@web125802.mail.ne1.yahoo.com> <1381921584.31914.YahooMailNeo@web125801.mail.ne1.yahoo.com> <1382463118.79467.YahooMailNeo@web125801.mail.ne1.yahoo.com> Message-ID: Hi Adrian, Sorry I missed your previous mail. I don't use ServerFault, StackOverflow and the likes, only anonymously when it shows up in my search results. Feel free to reuse my answer on ServerFault, I'm glad it helped you :) Cheers, Dridi On Tue, Oct 22, 2013 at 6:31 PM, Adrian Ber wrote: > Hi Dridi, > > If you post your answer on ServerFault, I want to accept it. > > Thanks, > Adrian. > > > > On Wednesday, 16 October 2013, 12:06, Adrian Ber > wrote: > > Definitely VCL is easier to use than Apache config. And Varnish/Apache and > Tomcat will be sitting on the same (cloud) machine. > Then practically I would be interested in a comparison of the overhead added > by Apache vs Varnish in terms of non-cached requests. > > Thanks, > Adrian. > > > On Wednesday, 16 October 2013, 11:54, Dridi Boukelmoune > wrote: > > Hi, > > I have no data to show, but since I use all three tools, I can give > you my two cents :) > > Varnish + Tomcat is definitely the simplest architecture, because it > does not involve AJP. I would also consider changing the default > (blocking) http connector on the Tomcat side and measuring performance > improvements (non blocking, native...). 
I'm also a big fan of the VCL
> which feels a lot more natural than httpd's configuration to me.
>
> As I trust Varnish not to be the bottleneck, I am not keen on adding a
> new indirection (httpd) for a binary protocol that is not relevant to
> me anymore. I believe (still no data) having a 10Gb/s connection
> between Varnish and Tomcat (I assume they're not sitting too far from
> each other) outperforms the compactness of AJP (serialization
> involved).
>
> Best Regards,
> Dridi
>
> On Wed, Oct 16, 2013 at 10:54 AM, Adrian Ber wrote:
>> Does anyone have some comparison data in terms of performance for using in
>> front of Tomcat either Varnish or Apache with mod_jk. I know that AJ
>> connector suppose to be faster than HTTP, but I was thinking that in
>> combination Varnish which is lighter and highly optimized could perform
>> better. There is also the discussion between static resources (which I
>> think
>> will perform faster with Varnish than Apache, even with mod_cache) and
>> dynamic pages.
>> I asked this question on ServerFault too
>> http://serverfault.com/questions/545793/varnish-tomcat-vs-apache-mod-jk-tomcat
>> Which configuration would be advisable Varnish + Tomcat or Apache +
>> mod_cache + mod_jk +Tomcat?
>>
>> Thanks,
>> Adrian Ber.
>>
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc at varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From dererk at deadbeef.com.ar  Wed Oct 23 22:06:26 2013
From: dererk at deadbeef.com.ar (Dererk)
Date: Wed, 23 Oct 2013 19:06:26 -0300
Subject: Our experience with the persistent branch
Message-ID: <52684862.6060208@deadbeef.com.ar>

Hi there!

I thought it would be useful for some people out there to share a "success case" with the persistent branch of Varnish Cache at a large production scale.
Ours is the replacement of the so-long-time-appreciated Squid Cache for delivering (mostly) user-provided content, like images and other kinds of documents, but also static objects belonging to our platform, OLX, like Javascript and style sheets.

- TL;DR Version

The development around the persistent branch rocks, *big time*!

- Full Version

// *Disclaimer*
// Our configuration is not what one could call "conventional", and we can't really afford losing large amounts of cache in short periods of time.

- The Business

Our company, OnLine eXchange, or simply OLX for short, has built its core around the free classifieds business, enabling sellers and buyers to perform peer-to-peer selling transactions with no commission required from either party. Our portal allows sellers to upload images that better describe their items on our platform, making them more attractive to potential buyers and giving them better chances of selling. Just like Craigslist does in the US, or Allegro in Poland.

Although some particular countries are more popular than others, we have a large amount of traffic world-wide, due to an extensive net of operations and localization. To sum up, this translates into having to tune every single piece of software we run for literally delivering several Gbps of traffic out to the internet.

- Massive Delivering

Even though we rely heavily on content delivery networks (CDNs) for last-mile optimizations, geo-caching and network-optimized delivery, they have limits on what they can handle, and they also run cache rotation algorithms over objects (say, something like LRU). They do, all the time, and more often than you would actually want. On the other hand, because of their geo-distributed nature, in reality several requests are performed from different locations before they effectively cache objects, and then only for some small period of time.
It might be true that on average +80% of our traffic is offloaded, but in reality even the remaining portion averages 400 mbit/s, with frequent 500 mbit/s spikes at business-as-usual levels.

- Ancient History

We used to run this internal caching tier on Squid Cache, with the help of tons of optimizations and an extremely experimental backend called the Cyclic Object Storage System, or COSS. This very experimental backend made use of very granular storage configurations to get the most out of every IOP we could save our loaded storage backends. Due to its immaturity, some particular operations sucked, like a *30-minute downtime* per instance when restarting the Squid engine, caused by COSS data rebuild & consistency checks.

We also leaned very heavily on a sibling-cache relationship configured through HTCP, as a way to maximize cache availability and spare some objects a trip to the origin, which, as I said, was heavily loaded almost all the time. Squid is a great tool, and believe me when I say we loved it with all the downsides it had; we ran it for several years up until very recently, and we knew it backwards and forwards.

- Modern History

Things started to change for the worse recently, when our long-standing, loaded storage backend tier, featuring a vendor supposedly standing in the storage top-five ranking, could not face our business-as-usual operations any longer. Performance fell apart once CPU usage on this well-known storage provider's solution started to hit +95%; it became unable to serve objects in form and shape once 95% was reached. We were forced to start diving into new and radical alternatives on very short notice.

We had been using Varnish internally for some years by then, for boosting our SOLR backend and some other HTTP caching roles, but not for delivering static content up to that moment.
Now was the time to give it a chance, "for real" (we have several Gbps of traffic internally too, but you know what I mean).

- First Steps

We started by using the same malloc backend configurations as we were using in the other areas where Varnish was deployed, with some performance tuning around sessions (VARNISH_SESSION_LINGER) and thread capacity (VARNISH_MAX_THREADS && VARNISH_THREAD_POOLS && VARNISH_THREAD_TIMEOUT).

The server profiles handling these tasks were some huge boxes with 128M RAM, but since the 50-ish terabyte dataset didn't actually fit in RAM, once all the memory was allocated we started to suffer random panics at random periods of time. Unable to replicate them on demand or to produce any debugging, due to the amount of traffic these huge devils handled in production, things started to get really ugly really fast.

- The Difficult Choice

At this point we started to consider every possible option available out there, even switching away to other caching alternatives that provided persistence for cached objects, like trafficserver, and we decided to give the persistent backend a shot. But there was a catch: the persistent backend was (and currently still is) considered experimental (so was COSS, btw!).

Effectively, as far as the 3.0.4 release on the main stable branch is concerned, the persistent backend had many bugs that showed up within minutes, many of them producing panics that crashed the child process. But, fortunately for us, the persistence itself worked so well that the first crashes went by totally unnoticed, which, compared with experiencing a 100mb cache meltdown, was an amazing improvement by itself. Now things started to look better for Varnish. Of course, we learned that we were in fact losing some cached objects to some broken silos along the way, but doing a side-by-side comparison, things looked as different as night and day, and the best was yet to come.
We were advised, in case we were to stick to a persistent backend, to use the persistent development branch, given that more improvements were being developed there and major stability changes had been introduced. But, think again: proposing something that has the "experimental" and "in development" tags hanging from it usually sells horribly to the management people on the other side of the table.

- Summary

In the end, with the help of a hash-based balancing algorithm at the load-balancing tier in front of our Varnish caches, we were able to *almost cut in half* the CPU usage at our storage solution tier, that is, getting 60% instead of +90% CPU usage, something similar to a sunny-day walk through the park, even while serving the +2,000 requests/second arriving at our datacenters.

We got there by offloading content at *up to +70% cache hits*, something that was totally inconceivable to anyone at the very beginning of the migration, given that we used to get less than 30% in the past with Squid.

We were able to get to this point with lots of patience and research, but particularly with the help of the Varnish core developers, who constantly supported us on the IRC channels and mailing lists.

Thanks a lot guys, you and Varnish rock, big time!

A happy user!

Dererk

--
BOFH excuse #274:
It was OK before you touched it.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: OpenPGP digital signature
URL: 

From tousif1988 at gmail.com  Thu Oct 24 02:07:00 2013
From: tousif1988 at gmail.com (tousif baig)
Date: Thu, 24 Oct 2013 07:37:00 +0530
Subject: Our experience with the persistent branch
In-Reply-To: <52684862.6060208@deadbeef.com.ar>
References: <52684862.6060208@deadbeef.com.ar>
Message-ID: 

Hey Dererk,

Nice to see that Varnish is now part of OLX; I have been a happy user of it. Keep up the good work.
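The hash-based balancing Dererk describes in his summary is worth spelling out, since it is a big part of why the hit rate jumped: hashing on the URL pins every object to exactly one Varnish node, so the pool's combined cache holds no duplicates and a miss for a given object only ever happens on one machine. A toy sketch of the idea (node names are made up; real load balancers typically use consistent hashing so that losing a node does not remap the whole keyspace):

```python
import hashlib

def pick_cache(url: str, caches: list) -> str:
    """URL-hash balancing: hash the URL and map it onto one cache node,
    so each object lives on exactly one Varnish instance."""
    digest = hashlib.md5(url.encode("utf-8")).digest()
    return caches[int.from_bytes(digest[:4], "big") % len(caches)]

nodes = ["varnish-1", "varnish-2", "varnish-3"]  # hypothetical pool
# The same URL always lands on the same node,
assert pick_cache("/images/50384.jpg", nodes) == pick_cache("/images/50384.jpg", nodes)
# while different URLs spread out across the whole pool.
print(sorted({pick_cache("/images/%d.jpg" % i, nodes) for i in range(100)}))
```

The same property is also why purging is simpler in this setup: an invalidation for a URL only needs to reach the one node that can hold it.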
Another simple tip from my experience would be to distribute the load across different servers, and not just one, which is also possible with Varnish, and maybe to get down to some browser specifics too. But since you are using a hash-based balancing algorithm in front, I am not sure if that will be possible. Do let me know if you need more specifics on those.

Good Day!
Tousif

On Thu, Oct 24, 2013 at 3:36 AM, Dererk wrote: > Hi there! > > I thought It would be useful for some people out there to mention a > "case of successful" on the Persistent branch of Varnish Cache on a > large production scale. > Ours is the replacement of the so-long-time-appreciated Squid Cache for > delivering (mostly) user-provided content, like Images and other kinds > of documents, but static objects like Javascripts and style sheets > belonging to our platform, OLX, as well. > > - TL;DR Version > The development around the persistent branch rocks, *big time*! > > - Full Version > // *Disclaimer* > // Our configuration is not what one could call "conventional", and we > can't really afford loosing large amounts of cache in short periods of > time. > > > - The Business > > Our company, OnLine eXchange or simply OLX for short, has established > its core around the Free Classifieds business, enabling Sellers and > Buyers to perform peer-to-peer selling transactions with no commission > required to any parts. Our portal allows sellers to upload images for > better describing their items into our platform and being more > attractive to potential buyers and having better chances of selling > them. Just like Craiglist does in the US or Allegro in Poland. > > Although some particular countries are more popular than others, we have > a large amount of traffic world-wide, due to an extensive net of > operations and localization. > To sum up, this translates into having to tune every single piece of > software we run out there for literally delivering several gbps of > traffic out to the internet.
> > > - Massive Delivering > > Even though we heavily rest on content delivery networks (CDNs) for > performing last-mile optimizations, geo-caching and network optimized > delivery, they have limits on what they can handle and they also run > caching rotation algorithms over objects as well (say, something like > LRU). The do all the time, more often that you do actually want. > On the other hand, because of their geo-distribution nature, in reality > several request are performed from different locations before they > effectively cache objects for some small period of time. > > It might be true that we are on average offloaded +80% of the traffic, > but again, in reality even that small portion has average 400 mbits/s, > with frequent 500mbps spikes in business-as-usual levels. > > > - Ancient History > > We used to run this internal caching tier with Squid Cache backing the > cache up, with the help of tons of optimizations and an extremely > experimental backend called Cyclic Object storage system or COSS. This > very very experimental backend made use of a very granular storage > configurations to get the most out of every IOP we could save to our > loaded storage backends. > Due to its immaturity, some particular operations sucked, like a *30 > minutes downtime* per instance by restarting the squid engine because of > COSS data rebuild & consistency checks. > > We also rested very heavily on a sibling caching relation ship > configuration by using HTCP, on a way to maximize the caching > availability and saving some objects to get to the origin, which, as I > said, was heavily loaded almost all the time. > Squid is a great tool and believe me when I say, we loved it with all > the downsides it had, we have had it for several years up to very > recently, we knew it backwards and forwards until then. 
> > > - Modern History > > Things started to change for the worse recently when our > long-standing-loaded storage backend tier, featuring a vendor supposedly > standing at the Storage Top Five ranking, could not face our > business-as-usual operations any longer. > Performance started to fall apart in pieces once we started to hit +95% > on this well-known storage provider's solution CPU usage, unable to > serve objects in form and shape once 95% was reached. > We were forced to start diving into new and radical alternatives on a > very short term. > > We have been using Varnish internally since some years now for boosting > our SOLR backend and some other HTTP caching roles, but not for > delivering static content up to that moment. > Now was the time to give it a chance, "for real" (we have several gbps > of traffic internally too, but you know what I mean). > > > - First Steps > > We started by using the same malloc backend configurations as we were > using on this other areas were Varnish was deployed, with some > performance tunes around sessions (VARNISH_SESSION_LINGER) and threads > capacity (VARNISH_MAX_THREADS &&VARNISH_THREAD_POOLS && > VARNISH_THREAD_TIMEOUT). > > The server profiles handling this tasks were some huge boxes with 128M > RAM, but since the 50-ish terabytes dataset didn't actually fit on RAM, > once all the memory was allocated we started to suffer same random > panics at random periods of time. Unable to replicate them on demand or > to produce any debugging due to the amount of traffic this huge devils > handled on production, things started to get really ugly really fast. 
> > > - The Difficult Choice > > At this point we started to consider every possible option available out > there, even switching away to other caching alternatives, like > trafficserver, that provided persistence for cached objects, and we > decided to give the persistent backend a shoot, but, there was a catch: > the persistent backend was (and also currently is) considered > experimental (so did COSS, btw!). > > Effectively, as for what 3.0.4 release on the main stable branch > respects, the persistent backend had many bugs that raised up within > minutes, many of them produced panics that crashed the child process, > but, fortunately for us, the persistence itself worked so well that the > first times went through totally unnoticed, which, compared with > experiencing a 100mb cache melt down was just an amazing improvement by > itself. Now things started to look better for Varnish. > Of course, we learned that in fact we were loosing some cached objects > by some broken silos in the way, but, doing a side-by-side comparison > things looked different as night and day, and the best was yet to come. > > We were advised, in case we were to stick to a persistent backend, on > using the persistent development branch, giving that more improvements > were developed in there and major stability changes were introduced. > But, think again, proposing something that has the "experimental" and > "on development" tags hanging from it usually sells horribly to the > management people on the other side of the table. > > > - Summary > > At the end, with the help of a hash-based balancing algorithm at the > load balancing tier in front of our Varnish caches, we were able to > *almost cut half* of the CPU usage at our storage solution tier, that is > by geting 60% insead of +90% of CPU usage, something similar to a sunny > day for a walk through the park, even for serving the +2.000 > request/second arriving at our datacenters. 
> > We got there by offloading content on a *up to +70% cache hits*, > something that was totally unconceivable for anyone at the very > beginning of the migration, giving that we used to get less than 30% in > the past with Squid. > > We were able to get up to this point with lots of patience and research, > but particularly with the help of the Varnish core development that > constantly supported us at the IRC channels and mailing lists. > > Thanks a lot guys, you and Varnish rock, big time! > > > > A happy user! > > Dererk > > -- > BOFH excuse #274: > It was OK before you touched it. > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rlane at ahbelo.com  Mon Oct 28 19:34:02 2013
From: rlane at ahbelo.com (Lane, Richard)
Date: Mon, 28 Oct 2013 14:34:02 -0500
Subject: Varnish Purge - Ban, custom hash
Message-ID: 

I am using multiple hash_data calls to customize the hash key. I need to cache potentially three different versions of the page. My problem is how to purge the items. I have tried implementing purge; in the recv, hit and miss functions like the Varnish docs show, but it is still not working. I have tried implementing ban using a regex or == on the URL.

Below is my VCL hash function. How do I purge a URL that is hashed with the key... "/mysection/mystory.htmlwww.example.com_preview_mobile"

I had this working prior to Varnish 3, but it seems to have stopped working after that.

## VCL hash function below
sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    #if (req.http.Accept-Encoding) {
    #    set req.hash += req.http.Accept-Encoding;
    #}
    ## /* Normalize Accept-Encoding to reduce effects of Vary: Accept-Encoding
    ##    (cf. http://varnish-cache.org/wiki/FAQ/Compression)
    ##    Also note that Vary: User-Agent is considered harmful */
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
            // Don't compress already-compressed files
            remove req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            // unknown algorithm
            remove req.http.Accept-Encoding;
        }
    }
    if (req.http.x-preview == "yes" && (req.url ~ ".shtml" || req.url ~ ".html")) {
        hash_data("_preview");
        ## Add device view to hash
        hash_data("_" + req.http.x-preferred-device);
    } else {
        ## Add device view to hash
        hash_data("_" + req.http.x-device);
    }
    return (hash);
}

# Purge pieces
sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ localaddr) {
            error 405 "Method not allowed";
        }
        ban("obj.http.x-url ~ " + req.url +
            " && obj.http.x-host == " + req.http.host +
            " && obj.http.x-pd ~ .*");
        std.log("Purge : yes really purge.");
        return (lookup);
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}

sub vcl_fetch {
    # setup for banning
    set beresp.http.x-url = req.url;
    set beresp.http.x-host = req.http.host;
    set beresp.http.x-pd = req.http.x-device;
}

sub vcl_deliver {
    # Clean up banning
    unset resp.http.x-url;
    unset resp.http.x-host;
    unset resp.http.x-pd;
}

Thanks in advance,
Richard
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hugues at betabrand.com  Thu Oct 31 05:50:27 2013
From: hugues at betabrand.com (Hugues Alary)
Date: Wed, 30 Oct 2013 22:50:27 -0700
Subject: varnishstat reporting weird numbers
Message-ID: 

Hi there,

I've got a problem with varnishstat and was hoping to find an answer here. Basically, today, my varnish server (3.0.2) started acting very weirdly.
I suspect a misconfiguration on my part was at the origin of some of the weirdness (like pages randomly being flushed from the cache, despite the cache being half empty), but our website was under low traffic (nothing Varnish can't handle), and somehow everything went crazy:

- varnishadm would tell me that perfectly valid commands like ban.list, vcl.list, vcl.load were wrong, returning a 105 (?) error, preventing me from loading a configuration without doing a varnish reload/restart.
- varnishstat would report a highly improbable number of "duplicate bans removed", but the rest of the reported numbers (hit, miss, etc.) seemed pretty normal.

The panicked admin that I am decided that, if I was to restart varnish, I should at least use the occasion to upgrade it to 3.0.4; my dev server and personal machine have been running 3.0.4 for the past few months with no problems. So I ran apt-get to upgrade varnish, and got the new version.

Unfortunately, this made varnishstat even weirder. It now reports 0 hits, 0 misses, 0 hits for pass. I have also seen negative numbers for n_wrk_*. I tried restarting varnishd and made sure that varnishstat is 3.0.4; everything is fine, my website is mostly in cache, but still varnishstat reports what I believe are wrong numbers.

Has someone ever run into this problem? I can provide my configuration file if needed.

Here's a varnishstat -1:

client_conn                  0         0.00 Client connections accepted
client_drop                  0         0.00 Connection dropped, no sess/wrk
client_req                   0         0.00 Client requests received
cache_hit                    0         0.00 Cache hits
cache_hitpass                0         0.00 Cache hits for pass
cache_miss                   0         0.00 Cache misses
backend_conn                 0         0.00 Backend conn. success
backend_unhealthy            0         0.00 Backend conn. not attempted
backend_busy            271072        50.69 Backend conn. too many
backend_fail                 0         0.00 Backend conn. failures
backend_reuse           560452       104.80 Backend conn. reuses
backend_toolate         656588       122.77 Backend conn. was closed
backend_recycle              0         0.00 Backend conn. recycles
backend_retry            53208         9.95 Backend conn. retry
fetch_head                9701         1.81 Fetch head
fetch_length                 0         0.00 Fetch with Length
fetch_chunked                0         0.00 Fetch chunked
fetch_eof                    0         0.00 Fetch EOF
fetch_bad                66843        12.50 Fetch had bad headers
fetch_close               8116         1.52 Fetch wanted close
fetch_oldhttp            74960        14.02 Fetch pre HTTP/1.1 closed
fetch_zero                  16         0.00 Fetch zero len
fetch_failed                12         0.00 Fetch failed
fetch_1xx                75267        14.07 Fetch no body (1xx)
fetch_204                  114         0.02 Fetch no body (204)
fetch_304                    0         0.00 Fetch no body (304)
n_sess_mem                   0          .   N struct sess_mem
n_sess                      81          .   N struct sess
n_object                     0          .   N struct object
n_vampireobject              0          .   N unresurrected objects
n_objectcore                 0          .   N struct objectcore
n_objecthead                 0          .   N struct objecthead
n_waitinglist                0          .   N struct waitinglist
n_vbc                      772          .   N struct vbc
n_wrk                      299          .   N worker threads
n_wrk_create                28         0.01 N worker threads created
n_wrk_failed             14365         2.69 N worker threads not created
n_wrk_max                    0         0.00 N worker threads limited
n_wrk_lqueue             14394         2.69 work request queue length
n_wrk_queued             14644         2.74 N queued work requests
n_wrk_drop                  29         0.01 N dropped work requests
n_backend                    1          .   N backends
n_expired                   29          .   N expired objects
n_lru_nuked                485          .   N LRU nuked objects
n_lru_moved                  0          .   N LRU moved objects
losthdr                      0         0.00 HTTP header overflows
n_objsendfile                0         0.00 Objects sent with sendfile
n_objwrite                2751         0.51 Objects sent with write
n_objoverflow                0         0.00 Objects overflowing workspace
s_sess                       1         0.00 Total Sessions
s_req                    38843         7.26 Total Requests
s_pipe                       0         0.00 Total pipe
s_pass                  530752        99.24 Total pass
s_fetch                     64         0.01 Total fetch
s_hdrbytes                   0         0.00 Total header bytes
s_bodybytes             689741       128.97 Total body bytes
sess_closed                  0         0.00 Session Closed
sess_pipeline           271072        50.69 Session Pipeline
sess_readahead          560452       104.80 Session Read Ahead
sess_linger                255         0.05 Session Linger
sess_herd                23065         4.31 Session herd
shm_records              76246        14.26 SHM records
shm_writes           203311427     38016.35 SHM writes
shm_flushes        34338009106   6420719.73 SHM flushes due to overflow
shm_cont                 26666         4.99 SHM MTX contention
shm_cycles                1234         0.23 SHM cycles through buffer
sms_nreq                   235         0.04 SMS allocator requests
sms_nobj                541285          .   SMS outstanding allocations
sms_nbytes              566363          .   SMS outstanding bytes
sms_balloc            30696332          .   SMS bytes allocated
sms_bfree              2279693          .   SMS bytes freed
backend_req                  0         0.00 Backend requests made
n_vcl                     2756         0.52 N vcl total
n_vcl_avail                 13         0.00 N vcl available
n_vcl_discard             1447         0.27 N vcl discarded
n_ban                        0          .
N total active bans n_ban_add 0 0.00 N new bans added n_ban_retire 1883994 352.28 N old bans deleted n_ban_obj_test 1883994 352.28 N objects tested n_ban_re_test 76246 14.26 N regexps tested against n_ban_dups 6 0.00 N duplicate bans removed hcb_nolock 6 0.00 HCB Lookups without lock hcb_lock 0 0.00 HCB Lookups with lock hcb_insert 11 0.00 HCB Inserts esi_errors 6 0.00 ESI parse errors (unlock) esi_warnings 6723 1.26 ESI parse warnings (unlock) accept_fail 6712 1.26 Accept failures client_drop_late 2054111 384.09 Connection dropped late uptime 23573960 4408.00 Client uptime dir_dns_lookups 5348 1.00 DNS director lookups dir_dns_failed 709796 132.72 DNS director failed lookups dir_dns_hit 50845 9.51 DNS director cached lookups hit dir_dns_cache_full 50844 9.51 DNS director full dnscache vmods 0 . Loaded VMODs n_gzip 0 0.00 Gzip operations n_gunzip 0 0.00 Gunzip operations LCK.sms.creat 0 0.00 Created locks LCK.sms.destroy 0 0.00 Destroyed locks LCK.sms.locks 0 0.00 Lock Operations LCK.sms.colls 0 0.00 Collisions LCK.smp.creat 0 0.00 Created locks LCK.smp.destroy 0 0.00 Destroyed locks LCK.smp.locks 0 0.00 Lock Operations LCK.smp.colls 0 0.00 Collisions LCK.sma.creat 0 0.00 Created locks LCK.sma.destroy 0 0.00 Destroyed locks LCK.sma.locks 0 0.00 Lock Operations LCK.sma.colls 0 0.00 Collisions LCK.smf.creat 0 0.00 Created locks LCK.smf.destroy 0 0.00 Destroyed locks LCK.smf.locks 0 0.00 Lock Operations LCK.smf.colls 0 0.00 Collisions LCK.hsl.creat 0 0.00 Created locks LCK.hsl.destroy 0 0.00 Destroyed locks LCK.hsl.locks 0 0.00 Lock Operations LCK.hsl.colls 0 0.00 Collisions LCK.hcb.creat 0 0.00 Created locks LCK.hcb.destroy 0 0.00 Destroyed locks LCK.hcb.locks 0 0.00 Lock Operations LCK.hcb.colls 0 0.00 Collisions LCK.hcl.creat 0 0.00 Created locks LCK.hcl.destroy 0 0.00 Destroyed locks LCK.hcl.locks 0 0.00 Lock Operations LCK.hcl.colls 0 0.00 Collisions LCK.vcl.creat 0 0.00 Created locks LCK.vcl.destroy 0 0.00 Destroyed locks LCK.vcl.locks 0 0.00 Lock Operations 
LCK.vcl.colls 0 0.00 Collisions LCK.stat.creat 0 0.00 Created locks LCK.stat.destroy 0 0.00 Destroyed locks LCK.stat.locks 0 0.00 Lock Operations LCK.stat.colls 0 0.00 Collisions LCK.sessmem.creat 0 0.00 Created locks LCK.sessmem.destroy 0 0.00 Destroyed locks LCK.sessmem.locks 0 0.00 Lock Operations LCK.sessmem.colls 0 0.00 Collisions LCK.wstat.creat 0 0.00 Created locks LCK.wstat.destroy 0 0.00 Destroyed locks LCK.wstat.locks 0 0.00 Lock Operations LCK.wstat.colls 0 0.00 Collisions LCK.herder.creat 0 0.00 Created locks LCK.herder.destroy 0 0.00 Destroyed locks LCK.herder.locks 0 0.00 Lock Operations LCK.herder.colls 0 0.00 Collisions LCK.wq.creat 0 0.00 Created locks LCK.wq.destroy 0 0.00 Destroyed locks LCK.wq.locks 0 0.00 Lock Operations LCK.wq.colls 0 0.00 Collisions LCK.objhdr.creat 0 0.00 Created locks LCK.objhdr.destroy 0 0.00 Destroyed locks LCK.objhdr.locks 0 0.00 Lock Operations LCK.objhdr.colls 0 0.00 Collisions LCK.exp.creat 0 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 0 0.00 Lock Operations LCK.exp.colls 0 0.00 Collisions LCK.lru.creat 0 0.00 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 0 0.00 Lock Operations LCK.lru.colls 0 0.00 Collisions LCK.cli.creat 0 0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 0 0.00 Lock Operations LCK.cli.colls 0 0.00 Collisions LCK.ban.creat 0 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 0 0.00 Lock Operations LCK.ban.colls 0 0.00 Collisions LCK.vbp.creat 0 0.00 Created locks LCK.vbp.destroy 0 0.00 Destroyed locks LCK.vbp.locks 0 0.00 Lock Operations LCK.vbp.colls 0 0.00 Collisions LCK.vbe.creat 0 0.00 Created locks LCK.vbe.destroy 0 0.00 Destroyed locks LCK.vbe.locks 0 0.00 Lock Operations LCK.vbe.colls 0 0.00 Collisions LCK.backend.creat 0 0.00 Created locks LCK.backend.destroy 0 0.00 Destroyed locks LCK.backend.locks 0 0.00 Lock Operations LCK.backend.colls 0 0.00 Collisions SMA.memory.c_req 0 0.00 Allocator requests 
SMA.memory.c_fail 0 0.00 Allocator failures SMA.memory.c_bytes 0 0.00 Bytes allocated SMA.memory.c_freed 0 0.00 Bytes freed SMA.memory.g_alloc 0 . Allocations outstanding SMA.memory.g_bytes 0 . Bytes outstanding SMA.memory.g_space 0 . Bytes available SMA.Transient.c_req 0 0.00 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 0 0.00 Bytes allocated SMA.Transient.c_freed 0 0.00 Bytes freed SMA.Transient.g_alloc 0 . Allocations outstanding SMA.Transient.g_bytes 0 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available VBE.default(127.0.0.1,,8080).vcls 0 . VCL references VBE.default(127.0.0.1,,8080).happy 0 . Happy health probes Thanks for any help! -Hugues -------------- next part -------------- An HTML attachment was scrubbed... URL:
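[Editor's note] Counters like the ones quoted above can be sanity-checked
programmatically rather than eyeballed. Below is a minimal sketch, assuming
only the name/value/rate/description column layout of `varnishstat -1` text
output; the embedded sample is an excerpt of the counters quoted in the mail,
and in practice you would feed the parser the live command output instead:

```python
# Parse `varnishstat -1` style text output and derive a cache hit ratio.
# SAMPLE is an excerpt of the output quoted above; live use would capture
# the command output, e.g. via subprocess.run(["varnishstat", "-1"], ...).

SAMPLE = """\
s_req                    38843         7.26 Total Requests
cache_hit                    0         0.00 Cache hits
cache_miss                   0         0.00 Cache misses
n_ban_dups                   6         0.00 N duplicate bans removed
"""

def parse_varnishstat(text):
    """Return {counter_name: value} from lines shaped as:
    name, integer value, per-second rate (or '.'), description."""
    stats = {}
    for line in text.splitlines():
        parts = line.split(None, 3)  # split into at most 4 fields
        if len(parts) >= 3 and parts[1].isdigit():
            stats[parts[0]] = int(parts[1])
    return stats

stats = parse_varnishstat(SAMPLE)
lookups = stats["cache_hit"] + stats["cache_miss"]
hit_ratio = stats["cache_hit"] / lookups if lookups else 0.0
print("requests:", stats["s_req"], "hit ratio:", hit_ratio)
```

A check like this, run before and after an upgrade, makes it easy to spot
counters that reset to zero or go negative, as described in the mail above.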