From duja at torlen.net Mon Jun 1 14:01:56 2009
From: duja at torlen.net (duja at torlen.net)
Date: Mon, 1 Jun 2009 16:01:56 +0200
Subject: Rewrite Url and hash
Message-ID: 

Hi,

I'm using Varnish to cache certain URLs depending on the query string. For example, the URL looks like index.php?var1=123&var2=345&var3=567, but I want the hash to be index.php?var1=123. I have succeeded in doing this, but not 100%.

First I tried to do the regsub replaces in vcl_hash, but I realized this was wrong because it didn't work as expected: the request was hashed correctly, but when the next request came in the object could not be found.

What I'm doing now is rewriting the URL in vcl_recv; with this, the URL is hashed and looked up correctly. But the problem is that the rewritten URL is also sent to the backend, which I don't want :( How do I solve this? Should I set bereq.url before manipulating req.url?

/ D

From lists at zopyx.com Mon Jun 1 17:47:15 2009
From: lists at zopyx.com (Andreas Jung)
Date: Mon, 01 Jun 2009 19:47:15 +0200
Subject: Poor #requests/second performance
Message-ID: <4A241423.4090509@zopyx.com>

Running Varnish 2.0.4 on a Debian installation (dual-core, 2.8 GHz). Apache-Bench gives me a performance of 500-600 requests/second against a cached HTML page - even with an almost empty VCL configuration file. Squid gives me about 4000 requests/second on the same cached page.

What is the best approach for narrowing down the bottleneck?

Andreas
-------------- next part --------------
A non-text attachment was scrubbed...
Name: lists.vcf
Type: text/x-vcard
Size: 316 bytes
Desc: not available
URL: 

From phk at phk.freebsd.dk Mon Jun 1 17:58:00 2009
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Mon, 01 Jun 2009 17:58:00 +0000
Subject: Poor #requests/second performance
In-Reply-To: Your message of "Mon, 01 Jun 2009 19:47:15 +0200."
<4A241423.4090509@zopyx.com>
Message-ID: <3345.1243879080@critter.freebsd.dk>

In message <4A241423.4090509 at zopyx.com>, Andreas Jung writes:
>Running Varnish 2.0.4 on a Debian installation (dual-core, 2.8 GHz).
>
>Apache-Bench gives me a performance of 500-600 requests/second against
>a cached HTML page - even with an almost empty VCL configuration file.
>
>Squid gives me about 4000 requests/second on the same cached page.
>
>What is the best approach for narrowing down the bottleneck?

Examining varnishstat to see what happens.

Varnish employs a kind of "slow-start" algorithm to thread creation which does not play well with synthetic loads like ab. Typically you have to precreate a suitable number of threads to avoid dropped requests while varnish ramps up.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG      | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From lists at zopyx.com Tue Jun 2 04:15:26 2009
From: lists at zopyx.com (Andreas Jung)
Date: Tue, 02 Jun 2009 06:15:26 +0200
Subject: Poor #requests/second performance
In-Reply-To: <3345.1243879080@critter.freebsd.dk>
References: <3345.1243879080@critter.freebsd.dk>
Message-ID: <4A24A75E.1030608@zopyx.com>

On 01.06.09 19:58, Poul-Henning Kamp wrote:
> In message <4A241423.4090509 at zopyx.com>, Andreas Jung writes:
>> Running Varnish 2.0.4 on a Debian installation (dual-core, 2.8 GHz).
>>
>> Apache-Bench gives me a performance of 500-600 requests/second against
>> a cached HTML page - even with an almost empty VCL configuration file.
>>
>> Squid gives me about 4000 requests/second on the same cached page.
>>
>> What is the best approach for narrowing down the bottleneck?
>>
> Examining varnishstat to see what happens.

At what in particular? Looking at varnishstat does not give me a clue about a possible problem.
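For reference, the thread pre-creation phk mentions maps onto varnishd's thread pool parameters. A sketch only; the parameter names are the 2.0.x tunables, but the listen address, backend, and values below are arbitrary examples, not recommendations:

```shell
# Sketch: pre-create worker threads at startup so a synthetic load like
# ab does not run into the thread "slow-start" ramp-up.
# -a/-b and the pool sizes here are made-up example values.
varnishd -a :80 -b localhost:8080 \
    -p thread_pools=2 \
    -p thread_pool_min=100 \
    -p thread_pool_max=1000
```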
> Varnish employs a kind of "slow-start" algorithm to thread creation
> which does not play well with synthetic loads like ab. Typically
> you have to precreate a suitable number of threads to avoid dropped
> requests while varnish ramps up.
>
The test is not synthetic. The cache is "warm" and the slow performance remains the same after 100k requests.

Andreas
-------------- next part --------------
A non-text attachment was scrubbed...
Name: lists.vcf
Type: text/x-vcard
Size: 316 bytes
Desc: not available
URL: 

From phk at phk.freebsd.dk Tue Jun 2 06:04:14 2009
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Tue, 02 Jun 2009 06:04:14 +0000
Subject: Poor #requests/second performance
In-Reply-To: Your message of "Tue, 02 Jun 2009 06:15:26 +0200." <4A24A75E.1030608@zopyx.com>
Message-ID: <66650.1243922654@critter.freebsd.dk>

In message <4A24A75E.1030608 at zopyx.com>, Andreas Jung writes:
>> Examining varnishstat to see what happens.
>>
>At what in particular. Looking at varnishstat does not give me a clue
>about a possible problem.

Dropped requests. A small number is OK, a continuous growth is not.

("overflowed" requests are OK).

Threads not created should be zero.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG      | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From lists at zopyx.com Tue Jun 2 06:10:06 2009
From: lists at zopyx.com (Andreas Jung)
Date: Tue, 02 Jun 2009 08:10:06 +0200
Subject: Poor #requests/second performance
In-Reply-To: <66650.1243922654@critter.freebsd.dk>
References: <66650.1243922654@critter.freebsd.dk>
Message-ID: <4A24C23E.2060104@zopyx.com>

On 02.06.09 08:04, Poul-Henning Kamp wrote:
> In message <4A24A75E.1030608 at zopyx.com>, Andreas Jung writes:
>>> Examining varnishstat to see what happens.
>>>
>> At what in particular. Looking at varnishstat does not give me a clue
>> about a possible problem.
>>
> Dropped requests.
A small number is OK, a continuous growth is not.
>
> ("overflowed" requests are OK).
>
> Threads not created should be zero.
>
Output of varnishstat:

0+00:40:24
diaweb04
Hitrate ratio:       10       83       83
Hitrate avg:     1.0000   0.9912   0.9912

    50050         0.00        20.65 Client connections accepted
    50049         0.00        20.65 Client requests received
    50015         0.00        20.63 Cache hits
       35         0.00         0.01 Cache misses
       35         0.00         0.01 Backend connections success
       28         0.00         0.01 Backend connections reuses
       35         0.00         0.01 Backend connections recycles
        1          .            .   N struct srcaddr
        0          .            .   N active struct srcaddr
       93          .            .   N struct sess_mem
        0          .            .   N struct sess
       17          .            .   N struct object
       17          .            .   N struct objecthead
       28          .            .   N struct smf
        3          .            .   N small free smf
        3          .            .   N large free smf
        7          .            .   N struct vbe_conn
        6          .            .   N struct bereq
       12          .            .   N worker threads
       12         0.00         0.00 N worker threads created
       62         0.00         0.03 N overflowed work requests
        2          .            .   N backends
       30          .            .   N expired objects
       55          .            .   N LRU moved objects
    50048         0.00        20.65 Objects sent with write
    50050         0.00        20.65 Total Sessions
    50050         0.00        20.65 Total Requests
       35         0.00         0.01 Total fetch
 13213046         0.00      5450.93 Total header bytes
423243959         0.00    174605.59 Total body bytes
    50050         0.00        20.65 Session herd

and the result of 'ab':

ajung at blackmoon:~> cat out.txt
Server Software:        Unknown
Server Hostname:        xxxxxxxx.de
Server Port:            80

Document Path:          /logo.jpg
Document Length:        8448 bytes

Concurrency Level:      10
Time taken for tests:   81.439 seconds
Complete requests:      50000
Failed requests:        0
Write errors:           0
Total transferred:      436644839 bytes
HTML transferred:       422400000 bytes
Requests per second:    613.95 [#/sec] (mean)
Time per request:       16.288 [ms] (mean)
Time per request:       1.629 [ms] (mean, across all concurrent requests)
Transfer rate:          5235.93 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   35.5      1    2999
Processing:     2   14   31.3     12    3009
Waiting:        0   11   19.4     10    2895
Total:          3   16   47.3     14    3015

Percentage of the requests served within a certain time (ms)
  50%     14
  66%     16
  75%     18
  80%     19
  90%     23
  95%     26
  98%     31
  99%     36
 100%   3015 (longest request)

Andreas
-------------- next part --------------
A non-text attachment was scrubbed...
Name: lists.vcf
Type: text/x-vcard
Size: 316 bytes
Desc: not available
URL: 

From michael at dynamine.net Tue Jun 2 06:15:33 2009
From: michael at dynamine.net (Michael S. Fischer)
Date: Mon, 1 Jun 2009 23:15:33 -0700
Subject: Poor #requests/second performance
In-Reply-To: <4A24C23E.2060104@zopyx.com>
References: <66650.1243922654@critter.freebsd.dk> <4A24C23E.2060104@zopyx.com>
Message-ID: 

OK, so your average latency is 16 ms. At a concurrency of 10 you can obtain at most 625 r/s (1 request/connection / 0.016 s = 62.5 requests/s/connection * 10 connections = 625 requests/s). Try increasing your benchmark concurrency.

--Michael

On Jun 1, 2009, at 11:10 PM, Andreas Jung wrote:
> On 02.06.09 08:04, Poul-Henning Kamp wrote:
> [...]
> Output of varnishstat:
> [...]
> and the result of 'ab':
> [...]
> Andreas
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc

From phk at phk.freebsd.dk Tue Jun 2 06:22:10 2009
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Tue, 02 Jun 2009 06:22:10 +0000
Subject: Poor #requests/second performance
In-Reply-To: Your message of "Tue, 02 Jun 2009 08:10:06 +0200." <4A24C23E.2060104@zopyx.com>
Message-ID: <1919.1243923730@critter.freebsd.dk>

In message <4A24C23E.2060104 at zopyx.com>, Andreas Jung writes:
>Output of varnishstat:
>[...]
>and the result of 'ab':
>Percentage of the requests served within a certain time (ms)
>  50%     14
>  66%     16
>  75%     18
>  80%     19
>  90%     23
>  95%     26
>  98%     31
>  99%     36
> 100%   3015 (longest request)

No idea...

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG      | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From kristian at redpill-linpro.com Tue Jun 2 07:27:02 2009
From: kristian at redpill-linpro.com (Kristian Lyngstol)
Date: Tue, 2 Jun 2009 09:27:02 +0200
Subject: Poor #requests/second performance
In-Reply-To: <4A24C23E.2060104@zopyx.com>
References: <66650.1243922654@critter.freebsd.dk> <4A24C23E.2060104@zopyx.com>
Message-ID: <20090602072702.GA20863@kjeks.linpro.no>

On Tue, Jun 02, 2009 at 08:10:06AM +0200, Andreas Jung wrote:
> Output of varnishstat:

(greedy-snipping)

Seeing as how you're testing with only a concurrency of 10, I doubt thread startup is a big issue, but nevertheless, 12 threads is too low if you intend to use this for production. I advise setting thread_pool_min to reflect your actual expected load.

I also see a few expiries. It's not really easy to tell from varnishstat what is causing the slowdowns, but I would try setting up grace to ensure that cache misses don't drag down a significant number of threads. However, with only 10 concurrent requests, even one request going to the backend will cause a significant hit to the request rate. To solve this, you'll have to increase the concurrency of the test and/or increase the lifespan of the cached objects.

Other than that, varnishstat -1 and some tinkering with varnishtop could further track down the lousy performance you're seeing.

-- 
Kristian Lyngstøl
Redpill Linpro AS
Tlf: +47 21544179
Mob: +47 99014497
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 835 bytes
Desc: not available
URL: 

From lists at zopyx.com Tue Jun 2 18:12:38 2009
From: lists at zopyx.com (Andreas Jung)
Date: Tue, 02 Jun 2009 20:12:38 +0200
Subject: Poor #requests/second performance
In-Reply-To: <4A241423.4090509@zopyx.com>
References: <4A241423.4090509@zopyx.com>
Message-ID: <4A256B96.3080702@zopyx.com>

Sorry for the noise. The customer was running a transparent proxy within the network without telling me. Now I reach a performance of roughly 3000 requests/second, which is fast enough, though still slower than Squid.

Andreas

On 01.06.09 19:47, Andreas Jung wrote:
> Running Varnish 2.0.4 on a Debian installation (dual-core, 2.8 GHz).
>
> Apache-Bench gives me a performance of 500-600 requests/second against
> a cached HTML page - even with an almost empty VCL configuration file.
>
> Squid gives me about 4000 requests/second on the same cached page.
>
> What is the best approach for narrowing down the bottleneck?
>
> Andreas
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-misc

-- 
ZOPYX Ltd. & Co. KG - Charlottenstr. 37/1 - 72070 Tübingen - Germany
Web: www.zopyx.com - Email: info at zopyx.com - Phone +49 - 7071 - 793376
Registergericht: Amtsgericht Stuttgart, Handelsregister A 381535
Geschäftsführer/Gesellschafter: ZOPYX Limited, Birmingham, UK
------------------------------------------------------------------------
E-Publishing, Python, Zope & Plone development, Consulting
-------------- next part --------------
A non-text attachment was scrubbed...
Name: lists.vcf
Type: text/x-vcard
Size: 316 bytes
Desc: not available
URL: 

From sten at blinkenlights.nl Wed Jun 3 06:58:22 2009
From: sten at blinkenlights.nl (Sten Spans)
Date: Wed, 3 Jun 2009 08:58:22 +0200 (CEST)
Subject: Poor #requests/second performance
In-Reply-To: <4A256B96.3080702@zopyx.com>
References: <4A241423.4090509@zopyx.com> <4A256B96.3080702@zopyx.com>
Message-ID: 

On Tue, 2 Jun 2009, Andreas Jung wrote:
> Sorry for the noise. The customer was running a transparent proxy within
> the network without telling me. Now I reach a performance of roughly 3000
> requests/second, which is fast enough, though still slower than Squid.

If you are testing with 10 open connections then you are not running a realistic test. In the real world a busy site will see roughly 10-100 times more open connections than active requests, because HTTP 1.1 keepalive will keep end-user connections open. I maintained a site doing 200-500 mbit a few years ago; req/s was in the 10k-100k range, but connections were at least an order of magnitude higher.

Furthermore, for a truly realistic test you need to figure out what kind of latency/speed clients will have and add that to the workload. This is why ab is useless for real tests.

The really big problem with HTTP is that even when successfully loading 90%-95% of the objects, a site will feel "broken". If traffic spikes are expected then you really want to have the capacity to handle them.

-- 
Sten Spans

"There is a crack in everything, that's how the light gets in."
Leonard Cohen - Anthem

From duja at torlen.net Wed Jun 3 09:05:37 2009
From: duja at torlen.net (duja at torlen.net)
Date: Wed, 3 Jun 2009 11:05:37 +0200
Subject: Rewrite Url and hash
Message-ID: 

Does anyone have an answer, or can anyone help me with this?

/ D

Original Message -----------------------

Hi,

I'm using Varnish to cache certain URLs depending on the query string.
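One possible shape of the answer the poster is after (a sketch in Varnish 2.0 VCL, untested and not from the thread; the X-Orig-Url header name and the regsub pattern, based on the example URL, are assumptions) is to strip the URL only for hashing and lookup, and restore the original on bereq.url for the backend:

```vcl
sub vcl_recv {
    # Remember the full URL, then strip the parameters that should not
    # be part of the cache key. (X-Orig-Url is a made-up header name;
    # adjust the regex to the real parameter layout.)
    set req.http.X-Orig-Url = req.url;
    set req.url = regsub(req.url, "&var2=[^&]*&var3=[^&]*", "");
}

sub vcl_miss {
    # On a miss, hand the untouched URL to the backend.
    set bereq.url = req.http.X-Orig-Url;
}

sub vcl_pass {
    set bereq.url = req.http.X-Orig-Url;
}
```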
For example, the URL looks like index.php?var1=123&var2=345&var3=567, but I want the hash to be index.php?var1=123. I have succeeded in doing this, but not 100%.

First I tried to do the regsub replaces in vcl_hash, but I realized this was wrong because it didn't work as expected: the request was hashed correctly, but when the next request came in the object could not be found.

What I'm doing now is rewriting the URL in vcl_recv; with this, the URL is hashed and looked up correctly. But the problem is that the rewritten URL is also sent to the backend, which I don't want :( How do I solve this? Should I set bereq.url before manipulating req.url?

/ D

_______________________________________________
varnish-misc mailing list
varnish-misc at projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-misc

From gaute at pht.no Wed Jun 3 13:55:56 2009
From: gaute at pht.no (Gaute Amundsen)
Date: Wed, 3 Jun 2009 15:55:56 +0200
Subject: Converting config to use inline
Message-ID: <200906031555.58014.gaute@pht.no>

Hi,

After removing comments, how would I have to process the normal config file to be able to load it as a string using vcl.inline? I have tried a number of varieties, but as often as not, varnish will hang or crash...

This line from a test I found fails as well:

"\n\tbackend b { .host = \"127.0.0.1\"; }\n\tacl a {\n\t\t\"1.2.3.4\"/31; \n\t\t\"1.2.3.4\"/31;\n\t}\n\tsub vcl_recv { if (client.ip ~ a) { pass; } }\n"

I could make it work by pasting into telnet, but not with varnishadm, or python telnetlib for that matter. Any suggestions?

Regards
Gaute Amundsen
-- 
Programmerer - Pixelhospitalet AS
Prinsessealleen 50, 0276 Oslo
Tlf. 24 12 97 81 - 9074 7344

From alecshenry at gmail.com Fri Jun 5 02:49:18 2009
From: alecshenry at gmail.com (Alecs Henry)
Date: Thu, 4 Jun 2009 23:49:18 -0300
Subject: varnishncsa backend logging
Message-ID: <3c54843f0906041949u7365a0fcmb68fccb9ed44d63c@mail.gmail.com>

Hey guys! I've just run into the most peculiar problem.
Actually, I just now realized this happened. Logging the requests to the backend with varnishncsa generates a log file that is missing the object length field. Actually, not missing, but it always comes out as "-". This does not happen in the client request log, just the backend one.

I thought it could be something related to the VCL, so I tested with an empty VCL (just the backend declarations) and the problem persists. Looking into varnishlog, it shows the object being fetched from the backend all right, but the Content-Length headers are not there (neither in RxHeader nor ObjHeader), yet Varnish knows the size of the object and delivers it to the client with the correct length header. I would think that Varnish does not recompute the size of the object in order to deliver it to the client. So the question is: what is happening?

Running 2.0.4 on Debian lenny amd64. If anyone can confirm that this is the case in their installation, and maybe provide some insight, it would be great!

Thanks!
Alecs
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gaute at pht.no Fri Jun 5 07:20:12 2009
From: gaute at pht.no (Gaute Amundsen)
Date: Fri, 5 Jun 2009 09:20:12 +0200
Subject: Converting config to use inline
In-Reply-To: <200906031555.58014.gaute@pht.no>
References: <200906031555.58014.gaute@pht.no>
Message-ID: <200906050920.13329.gaute@pht.no>

For the record, using python telnetlib and r'raw string', I was able to load the inline test config mentioned below.

Processing the existing config file to load inline, however, is more of a challenge, as I don't have a test instance of varnish, and my attempts seem to crash or hang varnish when I get close. I guess I will have to resort to tempfiles and vcl.load :)

Gaute

On Wednesday 03 June 2009 15:55:56 Gaute Amundsen wrote:
> Hi
>
> After removing comments, how would I have to process the normal config file
> to be able to load it as a string using vcl.inline?
>
> I have tried a number of varieties, but as often as not, varnish will hang
> or crash...
>
> This line from a test I found fails as well:
>
> "\n\tbackend b { .host = \"127.0.0.1\"; }\n\tacl a {\n\t\t\"1.2.3.4\"/31;
> \n\t\t\"1.2.3.4\"/31;\n\t}\n\tsub vcl_recv { if (client.ip ~ a) { pass; }
> }\n"
>
> I could make it work by pasting into telnet, but not with varnishadm, or
> python telnetlib for that matter.
>
> Any suggestions?
>
> Regards
> Gaute Amundsen

-- 
Programmerer - Pixelhospitalet AS
Prinsessealleen 50, 0276 Oslo
Tlf. 24 12 97 81 - 9074 7344

From jauderho at gmail.com Tue Jun 9 16:16:55 2009
From: jauderho at gmail.com (Jauder Ho)
Date: Tue, 9 Jun 2009 09:16:55 -0700
Subject: varnish code swarm visualization
Message-ID: 

This is a little off topic, but I was generating some code swarm videos for other projects and decided to make one for varnish. This shows the evolution of changes to the varnish code base from the beginning to trunk as of around June 1. Enjoy.

http://bit.ly/QZnut

--Jauder
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ssm at fnord.no Tue Jun 9 18:55:33 2009
From: ssm at fnord.no (Stig Sandbeck Mathisen)
Date: Tue, 09 Jun 2009 20:55:33 +0200
Subject: cache purging questions
In-Reply-To: (Matthew Hoopes's message of "Thu, 28 May 2009 13:46:31 -0400")
References: <86zlczir1n.fsf@ds4.des.no>
Message-ID: <87hbypb5ju.fsf@fnord.no>

Matthew Hoopes writes:

> I've tried a whole gang of regular expressions involving backslashes
> and .* everywhere, but I get either "Syntax Error: Illegal backslash
> sequence" or it just doesn't clear the cache of the objects I'm trying
> to clear.
>
> Is it even possible to clear the cache based on hostname?

Yes, see below.

> If someone could show me an example of how to clear the cache of every
> object from a domain (if possible) I'd be very grateful.
When I do "varnishadm -T localhost:6082 help", the "url.purge" mentioned in varnishd(1) is not visible; instead there is a group of commands starting with "purge". After a minute of testing:

varnishadm -T localhost:6082 \
    'purge req.http.host == "www.example.org" && req.url ~ /foo|bar/'

...neat.

varnishadm -T localhost:6082 'purge.list'

...even more neat :)

This is on varnishd 2.0.4, by the way.

-- 
Stig Sandbeck Mathisen

From mperham at onespot.com Thu Jun 11 17:09:45 2009
From: mperham at onespot.com (Mike Perham)
Date: Thu, 11 Jun 2009 12:09:45 -0500
Subject: varnish storage tuning
Message-ID: <3397bf0906111009n5f92ca49u658e597e4b4745cb@mail.gmail.com>

We're using Varnish and finding that Linux runs the OOM killer on the large varnish child process every few days. I'm not sure what's causing the memory to grow, but now I want to tune it so that I know configuration is not an issue. The default config we were using was 10MB. We're using a small 32-bit EC2 instance (Linux 2.6.21.7-2.fc8xen) with 1.75G of RAM and 10GB of disk, so I changed the storage specification to "file,/var/lib/varnish/varnish_storage.bin,1500M". I'd like to be able to give varnish 8GB of disk, but it complains about sizes larger than 2GB. 32-bit limitation?

Side note: I couldn't find any good doc on the various command line parameters for varnishd. The 2.0.4 src only contains a man page for vcl. It would be nice to see a man page for varnishd and its options.

We are using purge_url heavily as we update documents - this shouldn't cause unchecked growth though, right? We aren't using regexps to purge.

Attached is the /var/log/messages output from the oom-killer, and here are a few lines for the lazy. I can't grok the output.

Jun 11 15:35:02 (none) kernel: varnishd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
[...snip...]
Jun 11 15:35:02 (none) kernel: Mem-info: Jun 11 15:35:02 (none) kernel: DMA per-cpu: Jun 11 15:35:02 (none) kernel: CPU 0: Hot: hi: 186, btch: 31 usd: 94 Cold: hi: 62, btch: 15 usd: 60 Jun 11 15:35:02 (none) kernel: HighMem per-cpu: Jun 11 15:35:02 (none) kernel: CPU 0: Hot: hi: 186, btch: 31 usd: 26 Cold: hi: 62, btch: 15 usd: 14 Jun 11 15:35:02 (none) kernel: Active:213349 inactive:210447 dirty:0 writeback:0 unstable:0 Jun 11 15:35:02 (none) kernel: free:1957 slab:1078 mapped:23 pagetables:1493 bounce:13 Jun 11 15:35:02 (none) kernel: DMA free:7324kB min:3440kB low:4300kB high:5160kB active:355572kB inactive:346580kB present:739644kB pages_scanned:1108980 all_unreclaimable? yes Jun 11 15:35:02 (none) kernel: lowmem_reserve[]: 0 0 972 Jun 11 15:35:02 (none) kernel: HighMem free:504kB min:512kB low:1668kB high:2824kB active:497824kB inactive:495208kB present:995688kB pages_scanned:1537436 all_unreclaimable? yes Jun 11 15:35:02 (none) kernel: lowmem_reserve[]: 0 0 0 Jun 11 15:35:02 (none) kernel: DMA: 11*4kB 10*8kB 42*16kB 12*32kB 2*64kB 23*128kB 2*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 7324kB Jun 11 15:35:02 (none) kernel: HighMem: 1*4kB 6*8kB 4*16kB 0*32kB 0*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 500kB Jun 11 15:35:02 (none) kernel: Swap cache: add 1563900, delete 1563890, find 572160/581746, race 3+9 Jun 11 15:35:02 (none) kernel: Free swap = 0kB Jun 11 15:35:02 (none) kernel: Total swap = 917496kB -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- Jun 11 15:35:02 (none) kernel: printk: 98 messages suppressed. 
Jun 11 15:35:02 (none) kernel: varnishd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0 Jun 11 15:35:02 (none) kernel: [] out_of_memory+0x69/0x185 Jun 11 15:35:02 (none) kernel: [] __alloc_pages+0x220/0x2aa Jun 11 15:35:02 (none) kernel: [] __do_page_cache_readahead+0xe2/0x202 Jun 11 15:35:02 (none) kernel: [] delayacct_end+0x70/0x77 Jun 11 15:35:02 (none) kernel: [] sync_page+0x0/0x3b Jun 11 15:35:02 (none) kernel: [] __delayacct_blkio_end+0x5b/0x5e Jun 11 15:35:02 (none) kernel: [] __wait_on_bit_lock+0x4b/0x52 Jun 11 15:35:02 (none) kernel: [] __lock_page+0x58/0x5e Jun 11 15:35:02 (none) kernel: [] filemap_nopage+0x14d/0x319 Jun 11 15:35:02 (none) kernel: [] __handle_mm_fault+0x30f/0x1146 Jun 11 15:35:02 (none) kernel: [] dequeue_task+0x13/0x26 Jun 11 15:35:02 (none) kernel: [] __sched_text_start+0x762/0x83f Jun 11 15:35:02 (none) kernel: [] move_addr_to_user+0x50/0x68 Jun 11 15:35:02 (none) kernel: [] fd_install+0x1e/0x47 Jun 11 15:35:02 (none) kernel: [] inet_getname+0x0/0x7d Jun 11 15:35:02 (none) kernel: [] sys_accept+0x197/0x1d5 Jun 11 15:35:02 (none) kernel: [] rwsem_down_read_failed+0x1a/0x23 Jun 11 15:35:02 (none) kernel: [] do_page_fault+0x72d/0xc24 Jun 11 15:35:02 (none) kernel: [] __sched_text_start+0x762/0x83f Jun 11 15:35:02 (none) kernel: [] sys_socketcall+0xd6/0x261 Jun 11 15:35:02 (none) kernel: [] do_page_fault+0x0/0xc24 Jun 11 15:35:02 (none) kernel: [] error_code+0x35/0x3c Jun 11 15:35:02 (none) kernel: ======================= Jun 11 15:35:02 (none) kernel: Mem-info: Jun 11 15:35:02 (none) kernel: DMA per-cpu: Jun 11 15:35:02 (none) kernel: CPU 0: Hot: hi: 186, btch: 31 usd: 94 Cold: hi: 62, btch: 15 usd: 60 Jun 11 15:35:02 (none) kernel: HighMem per-cpu: Jun 11 15:35:02 (none) kernel: CPU 0: Hot: hi: 186, btch: 31 usd: 26 Cold: hi: 62, btch: 15 usd: 14 Jun 11 15:35:02 (none) kernel: Active:213349 inactive:210447 dirty:0 writeback:0 unstable:0 Jun 11 15:35:02 (none) kernel: free:1957 slab:1078 mapped:23 pagetables:1493 bounce:13 Jun 
11 15:35:02 (none) kernel: DMA free:7324kB min:3440kB low:4300kB high:5160kB active:355572kB inactive:346580kB present:739644kB pages_scanned:1108980 all_unreclaimable? yes
Jun 11 15:35:02 (none) kernel: lowmem_reserve[]: 0 0 972
Jun 11 15:35:02 (none) kernel: HighMem free:504kB min:512kB low:1668kB high:2824kB active:497824kB inactive:495208kB present:995688kB pages_scanned:1537436 all_unreclaimable? yes
Jun 11 15:35:02 (none) kernel: lowmem_reserve[]: 0 0 0
Jun 11 15:35:02 (none) kernel: DMA: 11*4kB 10*8kB 42*16kB 12*32kB 2*64kB 23*128kB 2*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 7324kB
Jun 11 15:35:02 (none) kernel: HighMem: 1*4kB 6*8kB 4*16kB 0*32kB 0*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 500kB
Jun 11 15:35:02 (none) kernel: Swap cache: add 1563900, delete 1563890, find 572160/581746, race 3+9
Jun 11 15:35:02 (none) kernel: Free swap = 0kB
Jun 11 15:35:02 (none) kernel: Total swap = 917496kB
Jun 11 15:35:02 (none) kernel: Free swap: 0kB
Jun 11 15:35:02 (none) kernel: 437248 pages of RAM
Jun 11 15:35:02 (none) kernel: 250882 pages of HIGHMEM
Jun 11 15:35:02 (none) kernel: 5414 reserved pages
Jun 11 15:35:02 (none) kernel: 233 pages shared
Jun 11 15:35:02 (none) kernel: 10 pages swap cached
Jun 11 15:35:02 (none) kernel: 0 pages dirty
Jun 11 15:35:02 (none) kernel: 0 pages writeback
Jun 11 15:35:02 (none) kernel: 23 pages mapped
Jun 11 15:35:02 (none) kernel: 1078 pages slab
Jun 11 15:35:02 (none) kernel: 1493 pages pagetables
[...snip: repeated near-identical oom-killer reports (invoked by dd, varnishd and haproxy)...]
Jun 11 15:35:02 (none) varnishd[17785]: Child (30377) died signal=9
Jun 11 15:35:02 (none) varnishd[17785]: child (22753) Started
Jun 11 15:35:02 (none) varnishd[17785]: Child (22753) said Closed fds: 4 5 6 9 10 12 13
Jun 11 15:35:02 (none) varnishd[17785]: Child (22753) said Child starts
Jun 11 15:35:02 (none) varnishd[17785]: Child (22753) said managed to mmap 10240000 bytes of 10240000
Jun 11 15:35:02 (none) varnishd[17785]: Child (22753) said Ready

From darryl.dixon at winterhouseconsulting.com  Thu Jun 11 21:46:59 2009
From: darryl.dixon at winterhouseconsulting.com (Darryl Dixon - Winterhouse Consulting)
Date: Fri, 12 Jun 2009 09:46:59 +1200 (NZST)
Subject: varnish storage tuning
In-Reply-To: <3397bf0906111009n5f92ca49u658e597e4b4745cb@mail.gmail.com>
References: <3397bf0906111009n5f92ca49u658e597e4b4745cb@mail.gmail.com>
Message-ID: <64254.58.28.124.90.1244756819.squirrel@services.directender.co.nz>

Hi Mike,
Quite possibly the purge_url usage is causing you a problem. I assume this
is something that is being invoked from your VCL, rather than telnet-ing
to the administrative interface or by varnishadm?

My testing showed that with purge_url in the VCL, a 'purge record' was
created every time the rule was struck, and that record never seemed to be
removed, which meant that memory grew without bound nearly continuously
(new memory allocated for each new purge record). See the thread I started
here:
http://www.mail-archive.com/varnish-misc at projects.linpro.no/msg02520.html

Instead, in vcl_hit, if the object should be purged, set obj.ttl to 0 and
then restart the request. This solved the problem for me.

regards,
Darryl Dixon
Winterhouse Consulting Ltd
http://www.winterhouseconsulting.com

> We're using Varnish and finding that Linux runs the OOM killer on the
> large varnish child process every few days. I'm not sure what's causing
> the memory to grow but now I want to tune it so that I know configuration
> is not an issue.
> The default config we were using was 10MB. We're using a small 32-bit EC2
> instance (Linux 2.6.21.7-2.fc8xen) with 1.75G of RAM and 10GB of disk so
> I changed the storage specification to
> "file,/var/lib/varnish/varnish_storage.bin,1500M". I'd like to be able
> give varnish 8GB of disk but it complains about sizes larger than 2GB.
> 32-bit limitation?
>
> Side note: I couldn't find any good doc on the various command line
> parameters for varnishd. The 2.0.4 src only contains a man page for vcl.
> It would be nice to see a man page for varnishd and its options.
>
> We are using purge_url heavily as we update documents - this shouldn't
> cause unchecked grow though, right? We aren't using regexps to purge.
>
> Attached is the /var/log/messages output from the oom-killer and here's a
> few lines for the lazy. I can't grok the output.
>
> Jun 11 15:35:02 (none) kernel: varnishd invoked oom-killer:
> gfp_mask=0x201d2, order=0, oomkilladj=0
> [...snip...]
> Jun 11 15:35:02 (none) kernel: Mem-info:
> Jun 11 15:35:02 (none) kernel: DMA per-cpu:
> Jun 11 15:35:02 (none) kernel: CPU 0: Hot: hi: 186, btch: 31 usd: 94
> Cold: hi: 62, btch: 15 usd: 60
> Jun 11 15:35:02 (none) kernel: HighMem per-cpu:
> Jun 11 15:35:02 (none) kernel: CPU 0: Hot: hi: 186, btch: 31 usd: 26
> Cold: hi: 62, btch: 15 usd: 14
> Jun 11 15:35:02 (none) kernel: Active:213349 inactive:210447 dirty:0
> writeback:0 unstable:0
> Jun 11 15:35:02 (none) kernel: free:1957 slab:1078 mapped:23
> pagetables:1493 bounce:13
> Jun 11 15:35:02 (none) kernel: DMA free:7324kB min:3440kB low:4300kB
> high:5160kB active:355572kB inactive:346580kB present:739644kB
> pages_scanned:1108980 all_unreclaimable? yes
>
> Jun 11 15:35:02 (none) kernel: lowmem_reserve[]: 0 0 972
>
> Jun 11 15:35:02 (none) kernel: HighMem free:504kB min:512kB low:1668kB
> high:2824kB active:497824kB inactive:495208kB present:995688kB
> pages_scanned:1537436 all_unreclaimable?
yes > Jun 11 15:35:02 (none) kernel: lowmem_reserve[]: 0 0 0 > > Jun 11 15:35:02 (none) kernel: DMA: 11*4kB 10*8kB 42*16kB 12*32kB 2*64kB > 23*128kB 2*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 7324kB > Jun 11 15:35:02 (none) kernel: HighMem: 1*4kB 6*8kB 4*16kB 0*32kB 0*64kB > 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 500kB > Jun 11 15:35:02 (none) kernel: Swap cache: add 1563900, delete 1563890, > find > 572160/581746, race 3+9 > Jun 11 15:35:02 (none) kernel: Free swap = 0kB > > Jun 11 15:35:02 (none) kernel: Total swap = 917496kB > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > From mperham at onespot.com Thu Jun 11 22:40:02 2009 From: mperham at onespot.com (Mike Perham) Date: Thu, 11 Jun 2009 17:40:02 -0500 Subject: varnish storage tuning In-Reply-To: <64254.58.28.124.90.1244756819.squirrel@services.directender.co.nz> References: <3397bf0906111009n5f92ca49u658e597e4b4745cb@mail.gmail.com> <64254.58.28.124.90.1244756819.squirrel@services.directender.co.nz> Message-ID: <3397bf0906111540s23d04049k90606c2e766a6a5d@mail.gmail.com> Darryl, that sounds right. Yes, this is in our vcl_recv handler. I was watching varnish in top today and the memory just crept up by 1-2k every few seconds, monotonically increasing. This seems like a major issue - I'm surprised that purge_url doesn't just do that under the covers. I'll see if I can't adjust our VCL logic as you suggest. Thanks. On Thu, Jun 11, 2009 at 4:46 PM, Darryl Dixon - Winterhouse Consulting < darryl.dixon at winterhouseconsulting.com> wrote: > Hi Mike, > > Quite possibly the purge_url usage is causing you a problem. I assume this > is something that is being invoked from your VCL, rather than telnet-ing > to the administrative interface or by varnishadm? 
> > My testing showed that with purge_url in the VCL, a 'purge record' was > created every time the rule was struck, and that record never seemed to be > removed, which meant that memory grew without bound nearly continuously > (new memory allocated for each new purge record). See the thread I started > here: > http://www.mail-archive.com/varnish-misc at projects.linpro.no/msg02520.html > > Instead, in vcl_hit, if the object should be purged, set obj.ttl to 0 and > then restart the request. This solved the problem for me. > > regards, > Darryl Dixon > Winterhouse Consulting Ltd > http://www.winterhouseconsulting.com > > > > > > We're using Varnish and finding that Linux runs the OOM killer on the > > large > > varnish child process every few days. I'm not sure what's causing the > > memory to grow but now I want to tune it so that I know configuration is > > not > > an issue. > > The default config we were using was 10MB. We're using a small 32-bit > EC2 > > instance (Linux 2.6.21.7-2.fc8xen) with 1.75G of RAM and 10GB of disk so > I > > changed the storage specification to > > "file,/var/lib/varnish/varnish_storage.bin,1500M". I'd like to be able > > give > > varnish 8GB of disk but it complains about sizes larger than 2GB. 32-bit > > limitation? > > > > Side note: I couldn't find any good doc on the various command line > > parameters for varnishd. The 2.0.4 src only contains a man page for vcl. > > It would be nice to see a man page for varnishd and its options. > > > > We are using purge_url heavily as we update documents - this shouldn't > > cause > > unchecked grow though, right? We aren't using regexps to purge. > > > > > > > > Attached is the /var/log/messages output from the oom-killer and here's a > > few lines for the lazy. I can't grok the output. > > > > Jun 11 15:35:02 (none) kernel: varnishd invoked oom-killer: > > gfp_mask=0x201d2, order=0, oomkilladj=0 > > [...snip...] 
> > Jun 11 15:35:02 (none) kernel: Mem-info: > > Jun 11 15:35:02 (none) kernel: DMA per-cpu: > > Jun 11 15:35:02 (none) kernel: CPU 0: Hot: hi: 186, btch: 31 usd: > 94 > > Cold: hi: 62, btch: 15 usd: 60 > > Jun 11 15:35:02 (none) kernel: HighMem per-cpu: > > Jun 11 15:35:02 (none) kernel: CPU 0: Hot: hi: 186, btch: 31 usd: > 26 > > Cold: hi: 62, btch: 15 usd: 14 > > Jun 11 15:35:02 (none) kernel: Active:213349 inactive:210447 dirty:0 > > writeback:0 unstable:0 > > Jun 11 15:35:02 (none) kernel: free:1957 slab:1078 mapped:23 > > pagetables:1493 bounce:13 > > Jun 11 15:35:02 (none) kernel: DMA free:7324kB min:3440kB low:4300kB > > high:5160kB active:355572kB inactive:346580kB present:739644kB > > pages_scanned:1108980 all_unreclaimable? yes > > > > Jun 11 15:35:02 (none) kernel: lowmem_reserve[]: 0 0 972 > > > > Jun 11 15:35:02 (none) kernel: HighMem free:504kB min:512kB low:1668kB > > high:2824kB active:497824kB inactive:495208kB present:995688kB > > pages_scanned:1537436 all_unreclaimable? yes > > Jun 11 15:35:02 (none) kernel: lowmem_reserve[]: 0 0 0 > > > > Jun 11 15:35:02 (none) kernel: DMA: 11*4kB 10*8kB 42*16kB 12*32kB 2*64kB > > 23*128kB 2*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 7324kB > > Jun 11 15:35:02 (none) kernel: HighMem: 1*4kB 6*8kB 4*16kB 0*32kB 0*64kB > > 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 500kB > > Jun 11 15:35:02 (none) kernel: Swap cache: add 1563900, delete 1563890, > > find > > 572160/581746, race 3+9 > > Jun 11 15:35:02 (none) kernel: Free swap = 0kB > > > > Jun 11 15:35:02 (none) kernel: Total swap = 917496kB > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at projects.linpro.no > > http://projects.linpro.no/mailman/listinfo/varnish-misc > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mperham at onespot.com Thu Jun 11 22:59:53 2009 From: mperham at onespot.com (Mike Perham) Date: Thu, 11 Jun 2009 17:59:53 -0500 Subject: varnish storage tuning In-Reply-To: <64254.58.28.124.90.1244756819.squirrel@services.directender.co.nz> References: <3397bf0906111009n5f92ca49u658e597e4b4745cb@mail.gmail.com> <64254.58.28.124.90.1244756819.squirrel@services.directender.co.nz> Message-ID: <3397bf0906111559l49accad2g1393e6fb985026d2@mail.gmail.com> Does restart cause the backend to be queried for the page again? I'd prefer if a purge did not re-fetch the URL from the backend as this can caused unwanted content to stay in the cache and my cache is relatively limited in size. Can I do something like: sub vcl_hit { if (req.request == "PURGE") { set obj.ttl = 0; error 200 "Purged"; } } sub vcl_miss { if (req.request == "PURGE") { error 200 "Purged (not in cache)"; } } On Thu, Jun 11, 2009 at 4:46 PM, Darryl Dixon - Winterhouse Consulting < darryl.dixon at winterhouseconsulting.com> wrote: > > Instead, in vcl_hit, if the object should be purged, set obj.ttl to 0 and > then restart the request. This solved the problem for me. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From darryl.dixon at winterhouseconsulting.com Thu Jun 11 23:29:12 2009 From: darryl.dixon at winterhouseconsulting.com (Darryl Dixon - Winterhouse Consulting) Date: Fri, 12 Jun 2009 11:29:12 +1200 (NZST) Subject: varnish storage tuning In-Reply-To: <3397bf0906111559l49accad2g1393e6fb985026d2@mail.gmail.com> References: <3397bf0906111009n5f92ca49u658e597e4b4745cb@mail.gmail.com> <64254.58.28.124.90.1244756819.squirrel@services.directender.co.nz> <3397bf0906111559l49accad2g1393e6fb985026d2@mail.gmail.com> Message-ID: <53399.58.28.124.90.1244762952.squirrel@services.directender.co.nz> Hi Mike, Yes. If all you want to do is just get rid of the object, then all you need to do is set obj.ttl = 0 (as per your example below). 
In my case I wanted to honour the Cache-Control: no-cache header, which involved re-fetching the object from the backend. If you simply want to remove the item, then definitely you don't need to restart. regards, Darryl Dixon Winterhouse Consulting Ltd http://www.winterhouseconsulting.com > Does restart cause the backend to be queried for the page again? I'd > prefer > if a purge did not re-fetch the URL from the backend as this can caused > unwanted content to stay in the cache and my cache is relatively limited > in > size. Can I do something like: > sub vcl_hit { > if (req.request == "PURGE") { > set obj.ttl = 0; > error 200 "Purged"; > } > } > > sub vcl_miss { > if (req.request == "PURGE") { > error 200 "Purged (not in cache)"; > } > } > > > On Thu, Jun 11, 2009 at 4:46 PM, Darryl Dixon - Winterhouse Consulting < > darryl.dixon at winterhouseconsulting.com> wrote: > >> >> Instead, in vcl_hit, if the object should be purged, set obj.ttl to 0 >> and >> then restart the request. This solved the problem for me. >> >> > From anders at fupp.net Fri Jun 12 14:33:50 2009 From: anders at fupp.net (Anders Nordby) Date: Fri, 12 Jun 2009 16:33:50 +0200 Subject: Problems getting Varnish to cache certain things Message-ID: <20090612143350.GA76903@fupp.net> Hi, I had a situation where some cache servers would cache a URL, and others would not. The strange thing is, if I restarted Varnish on the servers that did not cache it, then it would cache the URL. How is that? When Varnish delivers a miss, it should fetch (and if possible) cache the URL on the next attempt? Why does a restart have any significance? I tried some purging, without any success. After all, how can you purge something that is not cached.. 
I wrote a simple script to check: Checking server cache2.xx.no for URI banner.xx.no/rest/foo_rest/foo/prepackage/result?&BAR=200907&sort=1 => HTTP/1.1 200 OK X-Varnish-IP: 192.168.39.142 X-Varnish-Server: cache2 X-Varnish: 942085260 941884380 X-Cache: HIT Sjekker server cache3.xx.no for URI banner.xx.no/rest/foo_rest/foo/prepackage/result?&BAR=200907&sort=1 => HTTP/1.1 200 OK X-Varnish-IP: 192.168.39.143 X-Varnish-Server: cache3 X-Varnish: 3642432563 X-Cache: MISS Miss.. -- Anders. From darryl.dixon at winterhouseconsulting.com Fri Jun 12 19:15:46 2009 From: darryl.dixon at winterhouseconsulting.com (Darryl Dixon - Winterhouse Consulting) Date: Sat, 13 Jun 2009 07:15:46 +1200 (NZST) Subject: Problems getting Varnish to cache certain things In-Reply-To: <20090612143350.GA76903@fupp.net> References: <20090612143350.GA76903@fupp.net> Message-ID: <41184.118.93.77.92.1244834146.squirrel@services.directender.co.nz> Hi Anders, If you perform certain actions in vcl_hit, Varnish will mark the object as a 'hit for pass' which means that from then on it will always simply pass every matching request to the backend. A restart clears this. If I recall correctly, one of the triggers is using 'pass' inside vcl_hit or vcl_fetch. regards, Darryl Dixon Winterhouse Consulting Ltd http://www.winterhouseconsulting.com > Hi, > > I had a situation where some cache servers would cache a URL, and others > would not. The strange thing is, if I restarted Varnish on the servers > that did not cache it, then it would cache the URL. How is that? When > Varnish delivers a miss, it should fetch (and if possible) cache the URL > on the next attempt? Why does a restart have any significance? I tried > some purging, without any success. After all, how can you purge > something that is not cached.. 
> > I wrote a simple script to check: > > Checking server cache2.xx.no for URI > banner.xx.no/rest/foo_rest/foo/prepackage/result?&BAR=200907&sort=1 > => > HTTP/1.1 200 OK > X-Varnish-IP: 192.168.39.142 > X-Varnish-Server: cache2 > X-Varnish: 942085260 941884380 > X-Cache: HIT > > Sjekker server cache3.xx.no for URI > banner.xx.no/rest/foo_rest/foo/prepackage/result?&BAR=200907&sort=1 > => > HTTP/1.1 200 OK > X-Varnish-IP: 192.168.39.143 > X-Varnish-Server: cache3 > X-Varnish: 3642432563 > X-Cache: MISS > > Miss.. > > -- > Anders. > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > From phk at phk.freebsd.dk Fri Jun 12 21:42:39 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 12 Jun 2009 21:42:39 +0000 Subject: Problems getting Varnish to cache certain things In-Reply-To: Your message of "Sat, 13 Jun 2009 07:15:46 +1200." <41184.118.93.77.92.1244834146.squirrel@services.directender.co.nz> Message-ID: <30723.1244842959@critter.freebsd.dk> In message <41184.118.93.77.92.1244834146.squirrel at services.directender.co.nz>, "Darryl Dixon - Winterhouse Consulting" writes: >Hi Anders, > >If you perform certain actions in vcl_hit, Varnish will mark the object as >a 'hit for pass' which means that from then on it will always simply pass >every matching request to the backend. A restart clears this. If I recall >correctly, one of the triggers is using 'pass' inside vcl_hit or >vcl_fetch. Only pass in vcl_fetch will create a "hit for pass" object. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
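For readers following the distinction phk draws, here is a minimal VCL 2.x-style sketch (syntax per that era's VCL; the `beresp.cacheable` test and the `req.http.Authorization` condition are illustrative assumptions, not anyone's production config):

```
sub vcl_fetch {
    if (!beresp.cacheable) {
        # pass here creates a "hit for pass" object: later requests for
        # this URL go straight to the backend, without serializing behind
        # one another, until that object's TTL expires.
        pass;
    }
}

sub vcl_recv {
    # By contrast, pass here bypasses the cache for this one request only
    # and creates no hit-for-pass object.
    if (req.http.Authorization) {
        pass;
    }
}
```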
From anders at fupp.net Mon Jun 15 09:25:09 2009 From: anders at fupp.net (Anders Nordby) Date: Mon, 15 Jun 2009 11:25:09 +0200 Subject: Problems getting Varnish to cache certain things In-Reply-To: <30723.1244842959@critter.freebsd.dk> References: <41184.118.93.77.92.1244834146.squirrel@services.directender.co.nz> <30723.1244842959@critter.freebsd.dk> Message-ID: <20090615092509.GA91218@fupp.net> Hi, On Fri, Jun 12, 2009 at 09:42:39PM +0000, Poul-Henning Kamp wrote: >>If you perform certain actions in vcl_hit, Varnish will mark the object as >>a 'hit for pass' which means that from then on it will always simply pass >>every matching request to the backend. A restart clears this. If I recall >>correctly, one of the triggers is using 'pass' inside vcl_hit or >>vcl_fetch. > Only pass in vcl_fetch will create a "hit for pass" object. Is it possible to clear "hit for pass" objects without restarting? Is there any TTL for "hit for pass" objects? I'd like a short "hit for pass" timeout value, I think. If an object is passed in vcl_fetch due to being !beresp.cacheable, it's nice to have the object cached once you have removed the obstacles to having that URL cached. After all, we often do want to cache even though web developers make mistakes. Regards, -- Anders. From phk at phk.freebsd.dk Mon Jun 15 09:49:59 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 15 Jun 2009 09:49:59 +0000 Subject: Problems getting Varnish to cache certain things In-Reply-To: Your message of "Mon, 15 Jun 2009 11:25:09 +0200." <20090615092509.GA91218@fupp.net> Message-ID: <20150.1245059399@critter.freebsd.dk> In message <20090615092509.GA91218 at fupp.net>, Anders Nordby writes: >Is it possible to clear "hit for pass" objects without restarting? Is >there any TTL for "hit for pass" objects? I'd like a short "hit for >pass" timeout value, I think. You can set the TTL in vcl_fetch{}. The default is the default_ttl parameter (2 minutes?)
>If an object is passed in vcl_fetch due to being !beresp.cacheable, it's >nice to have the object cached one you have removed the obstacles to >have that URL cached. After all, we often do want to cache even though >web developers do mistakes. The reason for the hit-for-pass, is to avoid serializing clients on objects that cannot be cached. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Mon Jun 15 10:07:23 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 15 Jun 2009 10:07:23 +0000 Subject: Time for a Varnish user meeting ? Message-ID: <20229.1245060443@critter.freebsd.dk> Isn't it time we start to plan a Varnish User Meeting ? We can either do it as a stand alone thing, a one day event somewhere convenient (Oslo ? Copenhagen ? London ?) or we can try to piggyback onto some related conference and hold our meeting before/after the conference. Anybody willing to try to organize something ? Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. 
From mperham at onespot.com Mon Jun 15 15:27:59 2009 From: mperham at onespot.com (Mike Perham) Date: Mon, 15 Jun 2009 10:27:59 -0500 Subject: varnish storage tuning In-Reply-To: <53399.58.28.124.90.1244762952.squirrel@services.directender.co.nz> References: <3397bf0906111009n5f92ca49u658e597e4b4745cb@mail.gmail.com> <64254.58.28.124.90.1244756819.squirrel@services.directender.co.nz> <3397bf0906111559l49accad2g1393e6fb985026d2@mail.gmail.com> <53399.58.28.124.90.1244762952.squirrel@services.directender.co.nz> Message-ID: <3397bf0906150827o54224bbci25a8282c3180e349@mail.gmail.com> Just a follow up, I refactored my VCL to look like this: sub vcl_recv { if (req.request == "PURGE") { lookup; } } # Varnish's purge* routines are basically endorsed memory leaks. # Don't ever use them. sub vcl_hit { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged"; } } sub vcl_miss { if (req.request == "PURGE") { error 200 "Purged (404)"; } } And now my Varnish servers have stabilized in terms of RSS and seem to be running great. Thanks for the advice, Darryl! On Thu, Jun 11, 2009 at 6:29 PM, Darryl Dixon - Winterhouse Consulting wrote: > Hi Mike, > > Yes. If all you want to do is just get rid of the object, then all you > need to do is set obj.ttl = 0 (as per your example below). In my case I > wanted to honour the Cache-Control: no-cache header, which involved > re-fetching the object from the backend. If you simply want to remove the > item, then definitely you don't need to restart. > > regards, > Darryl Dixon > Winterhouse Consulting Ltd > http://www.winterhouseconsulting.com > >> Does restart cause the backend to be queried for the page again? ?I'd >> prefer >> if a purge did not re-fetch the URL from the backend as this can caused >> unwanted content to stay in the cache and my cache is relatively limited >> in >> size. ?Can I do something like: >> sub vcl_hit { >> ? ? if (req.request == "PURGE") { >> ? ? ? ? set obj.ttl = 0; >> ? ? ? ? error 200 "Purged"; >> ? ? 
} >> } >> >> sub vcl_miss { >> ? ? if (req.request == "PURGE") { >> ? ? ? ? error 200 "Purged (not in cache)"; >> ? ? } >> } >> >> >> On Thu, Jun 11, 2009 at 4:46 PM, Darryl Dixon - Winterhouse Consulting < >> darryl.dixon at winterhouseconsulting.com> wrote: >> >>> >>> Instead, in vcl_hit, if the object should be purged, set obj.ttl to 0 >>> and >>> then restart the request. This solved the problem for me. >>> >>> >> > > From jauderho at gmail.com Mon Jun 15 18:02:27 2009 From: jauderho at gmail.com (Jauder Ho) Date: Mon, 15 Jun 2009 11:02:27 -0700 Subject: Time for a Varnish user meeting ? In-Reply-To: <20229.1245060443@critter.freebsd.dk> References: <20229.1245060443@critter.freebsd.dk> Message-ID: Well, Velocity is in 2 weeks in San Jose if anyone wants to meet. It's short notice but probably an appropriate conference. http://en.oreilly.com/velocity2009 --Jauder On Mon, Jun 15, 2009 at 3:07 AM, Poul-Henning Kamp wrote: > > Isn't it time we start to plan a Varnish User Meeting ? > > We can either do it as a stand alone thing, a one day event somewhere > convenient (Oslo ? Copenhagen ? London ?) or we can try to piggyback > onto some related conference and hold our meeting before/after the > conference. > > Anybody willing to try to organize something ? > > Poul-Henning > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at dynamine.net Mon Jun 15 18:28:51 2009 From: michael at dynamine.net (Michael S. Fischer) Date: Mon, 15 Jun 2009 11:28:51 -0700 Subject: Time for a Varnish user meeting ? 
In-Reply-To: References: <20229.1245060443@critter.freebsd.dk> Message-ID: I think you mean 1 week :) --Michael On Jun 15, 2009, at 11:02 AM, Jauder Ho wrote: > Well, Velocity is in 2 weeks in San Jose if anyone wants to meet. > It's short notice but probably an appropriate conference. > > http://en.oreilly.com/velocity2009 > > --Jauder > > On Mon, Jun 15, 2009 at 3:07 AM, Poul-Henning Kamp > wrote: > > Isn't it time we start to plan a Varnish User Meeting ? > > We can either do it as a stand alone thing, a one day event somewhere > convenient (Oslo ? Copenhagen ? London ?) or we can try to piggyback > onto some related conference and hold our meeting before/after the > conference. > > Anybody willing to try to organize something ? > > Poul-Henning > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by > incompetence. > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tfheen at redpill-linpro.com Tue Jun 16 10:49:35 2009 From: tfheen at redpill-linpro.com (Tollef Fog Heen) Date: Tue, 16 Jun 2009 12:49:35 +0200 Subject: varnish storage tuning In-Reply-To: <64254.58.28.124.90.1244756819.squirrel@services.directender.co.nz> (Darryl Dixon's message of "Fri, 12 Jun 2009 09:46:59 +1200 (NZST)") References: <3397bf0906111009n5f92ca49u658e597e4b4745cb@mail.gmail.com> <64254.58.28.124.90.1244756819.squirrel@services.directender.co.nz> Message-ID: <87ws7c4fnk.fsf@qurzaw.linpro.no> ]] "Darryl Dixon - Winterhouse Consulting" | My testing showed that with purge_url in the VCL, a 'purge record' was | created every time the rule was struck, and that record never seemed to be | removed, which meant that memory grew without bound nearly continuously | (new memory allocated for each new purge record). See the thread I started | here: | http://www.mail-archive.com/varnish-misc at projects.linpro.no/msg02520.html It gets removed when all the objects in the cache older than the purge either have expired or have been requested. If your cache is big, this can obviously take a while. -- Tollef Fog Heen Redpill Linpro -- Changing the game! 
t: +47 21 54 41 73 From darryl.dixon at winterhouseconsulting.com Tue Jun 16 21:48:33 2009 From: darryl.dixon at winterhouseconsulting.com (Darryl Dixon - Winterhouse Consulting) Date: Wed, 17 Jun 2009 09:48:33 +1200 (NZST) Subject: varnish storage tuning In-Reply-To: <87ws7c4fnk.fsf@qurzaw.linpro.no> References: <3397bf0906111009n5f92ca49u658e597e4b4745cb@mail.gmail.com> <64254.58.28.124.90.1244756819.squirrel@services.directender.co.nz> <87ws7c4fnk.fsf@qurzaw.linpro.no> Message-ID: <54083.58.28.124.90.1245188913.squirrel@services.directender.co.nz> > ]] "Darryl Dixon - Winterhouse Consulting" > > | My testing showed that with purge_url in the VCL, a 'purge record' was > | created every time the rule was struck, and that record never seemed to > be > | removed, which meant that memory grew without bound nearly continuously > | (new memory allocated for each new purge record). See the thread I > started > | here: > | > http://www.mail-archive.com/varnish-misc at projects.linpro.no/msg02520.html > > It gets removed when all the objects in the cache older than the purge > either have expired or have been requested. If your cache is big, this > can obviously take a while. I saw this problem with a cache size of 250MB with about 10% (22MB) occupancy. The log of purge_url records pushed the overall memory envelope of the instance well above 3GB. 
It may well be as you say that eventually the records will be removed, but for all practical intents and purposes they may as well not have been :( regards, Darryl Dixon Winterhouse Consulting Ltd http://www.winterhouseconsulting.com From darryl.dixon at winterhouseconsulting.com Wed Jun 17 04:18:14 2009 From: darryl.dixon at winterhouseconsulting.com (Darryl Dixon - Winterhouse Consulting) Date: Wed, 17 Jun 2009 16:18:14 +1200 (NZST) Subject: Problems getting Varnish to cache certain things In-Reply-To: <30723.1244842959@critter.freebsd.dk> References: <30723.1244842959@critter.freebsd.dk> Message-ID: <57402.58.28.124.90.1245212294.squirrel@services.directender.co.nz> > In message > <41184.118.93.77.92.1244834146.squirrel at services.directender.co.nz>, > "Darryl Dixon - Winterhouse Consulting" writes: >>Hi Anders, >> >>If you perform certain actions in vcl_hit, Varnish will mark the object >> as >>a 'hit for pass' which means that from then on it will always simply pass >>every matching request to the backend. A restart clears this. If I recall >>correctly, one of the triggers is using 'pass' inside vcl_hit or >>vcl_fetch. > > Only pass in vcl_fetch will create a "hit for pass" object. Hrm. When did this change? We tested and confirmed in 2.0.3(?) that pass'ing out of vcl_hit caused the hit-for-pass behaviour. regards, Darryl Dixon Winterhouse Consulting Ltd http://www.winterhouseconsulting.com From phk at phk.freebsd.dk Wed Jun 17 20:32:07 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 17 Jun 2009 20:32:07 +0000 Subject: Problems getting Varnish to cache certain things In-Reply-To: Your message of "Wed, 17 Jun 2009 16:18:14 +1200." <57402.58.28.124.90.1245212294.squirrel@services.directender.co.nz> Message-ID: <8885.1245270727@critter.freebsd.dk> In message <57402.58.28.124.90.1245212294.squirrel at services.directender.co.nz>, "Darryl Dixon - Winterhouse Consulting" writes: >> Only pass in vcl_fetch will create a "hit for pass" object. > >Hrm. 
When did this change? We tested and confirmed in 2.0.3(?) that >pass'ing out of vcl_hit caused the hit-for-pass behaviour. The object will not be marked as hit-for-pass if you do that, but the resulting behaviour is the same, so unless you inspect stats, you cannot tell the difference. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From kb+varnish at slide.com Fri Jun 19 06:20:38 2009 From: kb+varnish at slide.com (Ken Brownfield) Date: Thu, 18 Jun 2009 23:20:38 -0700 Subject: [PATCH] Initial backend health Message-ID: <52C21B1B-13D5-45A0-BC7E-4D3C6A1C23A2@slide.com> [Apologies if this belongs on varnish-dev; this list seemed much more active.] This patch came about from observations in tickets #512 and #518. The attached patch creates a backend flag to change the initial health of backends upon varnishd startup: backend foo { .initial_health = 1; } The backend healthy flag internally is an unsigned int, so I kept the same type. A value > 0 here will cause this backend to default to healthy. Also, the initial "Probe" output at startup will note "initially healthy" if this flag is set. The backend will immediately go "sick" if a single health check fails*, until the window is flushed. This seems like the safest way to implement this option, since a bad host will only receive hits until the first probe can execute against it, which in my testing was nearly immediate. Healthy hosts will continue to pass and stay healthy. I'm not using probed backends in production right now, but I'm running one instance of the patched version at ~1,000 requests per second (at 8% of a single 2.5G Xeon) and it's stable. YMMV, protect yourself, etc. I thought about having this flag apply to the director, and backends would inherit this central flag... 
but while setting this for every backend in the config is a little verbose, it just seems cleaner, conceptually. And a per-backend setting seems cleaner and more flexible than a command-line flag. TODO: I should probably add "-p initial_health=1", as it fits the defaults like between_bytes_timeout. Comments? Thoughts? -- Ken. *(I'm marking the "oldest" .threshold probes in the .window as "pass" and the others "fail", so a single probe fail will cause sickness until the window has flushed. The BITMAP()/vt->happy stuff in bin/ varnishd/cache_backend_poll.[ch] made my face bleed, but look there for specifics of the implementation.) -------------- next part -------------- A non-text attachment was scrubbed... Name: varnish-kb0.patch Type: application/octet-stream Size: 2542 bytes Desc: not available URL: From kb+varnish at slide.com Fri Jun 19 07:12:22 2009 From: kb+varnish at slide.com (Ken Brownfield) Date: Fri, 19 Jun 2009 00:12:22 -0700 Subject: Thread memory allocation question Message-ID: When looking at /proc/map info for varnish threads, I'm seeing the following allocations in numbers that essentially match the child count: 0000000040111000 8192K rw--- [ anon ] And this at almost double the child count: 00007f4d57900000 1024K rw--- [ anon ] For example, for 64 worker threads, I see 69 of the 8192K allocations, and 121 of the 1024K allocations. For 32 worker threads I see 37 and 82, respectively. I noticed some pretty intense memory usage when I had a backend issue and the thread count increased into the hundreds. Obviously the threads will need memory as they scale, but are these large allocations intentional, and are they tunable (beyond the relatively small workspaces?) Many thanks, -- Ken. 
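The arithmetic behind Ken's observation can be sketched in shell. The numbers are taken from his message above (64 worker threads, 8192K mappings); note the result is virtual address space reserved for stacks, not resident memory:

```shell
threads=64        # worker thread count from the example above
stack_kb=8192     # per-thread stack mapping seen in /proc/<pid>/maps
# Total virtual address space reserved for thread stacks. Pages are only
# faulted in as each stack actually grows, so RSS is normally far lower.
echo "$(( threads * stack_kb / 1024 )) MB of stack address space"
# → 512 MB of stack address space
```

At several hundred threads (as during a backend outage) the same arithmetic reaches multiple gigabytes of reservation, which matches the "pretty intense memory usage" described above.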
From tfheen at redpill-linpro.com Fri Jun 19 14:15:29 2009 From: tfheen at redpill-linpro.com (Tollef Fog Heen) Date: Fri, 19 Jun 2009 16:15:29 +0200 Subject: Thread memory allocation question In-Reply-To: (Ken Brownfield's message of "Fri, 19 Jun 2009 00:12:22 -0700") References: Message-ID: <87ws78s41q.fsf@qurzaw.linpro.no> ]] Ken Brownfield | When looking at /proc/map info for varnish threads, I'm seeing the | following allocations in numbers that essentially match the child count: | | 0000000040111000 8192K rw--- [ anon ] Looks like the default stack size. | And this at almost double the child count: | | 00007f4d57900000 1024K rw--- [ anon ] Unsure what this is. | I noticed some pretty intense memory usage when I had a backend issue | and the thread count increased into the hundreds. Obviously the | threads will need memory as they scale, but are these large | allocations intentional, and are they tunable (beyond the relatively | small workspaces?) You should be able to tune it using ulimit -s. If you turn it too low, things will break, though. -- Tollef Fog Heen Redpill Linpro -- Changing the game! t: +47 21 54 41 73 From nick at loman.net Fri Jun 19 14:41:23 2009 From: nick at loman.net (Nick Loman) Date: Fri, 19 Jun 2009 15:41:23 +0100 Subject: Apache DoS - is Varnish affected? Message-ID: <4A3BA393.3010306@loman.net> I would guess that Varnish isn't affected by this, but does anyone know for sure? Does Varnish protect against this attack in all cases if you have Apache as your backend? http://isc.sans.org/diary.html?storyid=6601 Many thanks, Nick. From phk at phk.freebsd.dk Fri Jun 19 15:35:51 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 19 Jun 2009 15:35:51 +0000 Subject: Apache DoS - is Varnish affected? In-Reply-To: Your message of "Fri, 19 Jun 2009 15:41:23 +0100." 
<4A3BA393.3010306@loman.net> Message-ID: <2013.1245425751@critter.freebsd.dk> In message <4A3BA393.3010306 at loman.net>, Nick Loman writes: >I would guess that Varnish isn't affected by this, but does anyone know >for sure? Does Varnish protect against this attack in all cases if you >have Apache as your backend? > >http://isc.sans.org/diary.html?storyid=6601 Varnish will abandon the connection after a fixed number of header lines. This attack is more or less exactly _why_ varnish has a fixed limit on HTTP headers. I won't claim that varnish is immune, but the impact should be manageable. On systems using "http accept filters" (FreeBSD, possibly others), Varnish (or Apache) will never even see these connections in the first place. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From nick at loman.net Fri Jun 19 15:46:41 2009 From: nick at loman.net (Nick Loman) Date: Fri, 19 Jun 2009 16:46:41 +0100 Subject: Apache DoS - is Varnish affected? In-Reply-To: <2013.1245425751@critter.freebsd.dk> References: <2013.1245425751@critter.freebsd.dk> Message-ID: <4A3BB2E1.8090300@loman.net> Poul-Henning Kamp wrote: > In message <4A3BA393.3010306 at loman.net>, Nick Loman writes: > >> I would guess that Varnish isn't affected by this, but does anyone know >> for sure? Does Varnish protect against this attack in all cases if you >> have Apache as your backend? >> >> http://isc.sans.org/diary.html?storyid=6601 >> > > Varnish will abandon the connection after a fixed number of header > lines. > > This attack is more or less exactly _why_ varnish has a fixed limit > on HTTP headers. > Hi Poul-Henning, That's reassuring. Out of interest, what is the limit? Presumably that limit * the read timeout is the length of time a connection could be held open by a rogue client?
I agree that is probably manageable but of course still potentially serious in the context of a significant DoS attempt. Cheers, Nick. From svein-listmail at stillbilde.net Fri Jun 19 16:42:14 2009 From: svein-listmail at stillbilde.net (Svein Skogen (listmail account)) Date: Fri, 19 Jun 2009 18:42:14 +0200 Subject: Apache DoS - is Varnish affected? In-Reply-To: <2013.1245425751@critter.freebsd.dk> References: <2013.1245425751@critter.freebsd.dk> Message-ID: <4A3BBFE6.7070804@stillbilde.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Poul-Henning Kamp wrote: > > Systems using "http accept filters" (FreeBSD possibly others) the Varnish > (or apache) will never even see these connections in the first place. > Does this basically mean that in these uncertain times where kiddiots DoS (Destroying our Sanity) just for the fun of it, using accf_http should go into the checklist for every webserver, or am I reading this wrong? //Svein - -- - --------+-------------------+------------------------------- /"\ |Svein Skogen | svein at d80.iso100.no \ / |Solberg Østli 9 | PGP Key: 0xE5E76831 X |2020 Skedsmokorset | svein at jernhuset.no / \ |Norway | PGP Key: 0xCE96CE13 | | svein at stillbilde.net ascii | | PGP Key: 0x58CD33B6 ribbon |System Admin | svein-listmail at stillbilde.net Campaign|stillbilde.net | PGP Key: 0x22D494A4 +-------------------+------------------------------- |msn messenger: | Mobile Phone: +47 907 03 575 |svein at jernhuset.no | RIPE handle: SS16503-RIPE - --------+-------------------+------------------------------- Picture Gallery: https://gallery.stillbilde.net/v/svein/ - ------------------------------------------------------------ -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAko7v+YACgkQODUnwSLUlKTYbACeKqc2n1UfTGLhkuq1QdP5g+IC PUIAoJrA4eC13T1JGCt2Y7GYIxdYT/Eo =2dIU -----END PGP SIGNATURE----- From phk at phk.freebsd.dk Fri Jun 19 17:08:30 2009 From: phk at
phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 19 Jun 2009 17:08:30 +0000 Subject: Apache DoS - is Varnish affected? In-Reply-To: Your message of "Fri, 19 Jun 2009 16:46:41 +0100." <4A3BB2E1.8090300@loman.net> Message-ID: <2355.1245431310@critter.freebsd.dk> In message <4A3BB2E1.8090300 at loman.net>, Nick Loman writes: >Poul-Henning Kamp wrote: >> Varnish will abandon the connection after a fixed number of header >> lines. > >That's reassuring. Out of interest, what is the limit? 32 - 3 (for the first line fields) >Presumably that limit * the read timeout is the length of time a >connection could be held open by a rogue client? Something like that, I have not tried it. Worst case it would be a timeout for each character. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From phk at phk.freebsd.dk Fri Jun 19 17:09:14 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 19 Jun 2009 17:09:14 +0000 Subject: Apache DoS - is Varnish affected? In-Reply-To: Your message of "Fri, 19 Jun 2009 18:42:14 +0200." <4A3BBFE6.7070804@stillbilde.net> Message-ID: <2369.1245431354@critter.freebsd.dk> In message <4A3BBFE6.7070804 at stillbilde.net>, "Svein Skogen (listmail account)" writes: >> Systems using "http accept filters" (FreeBSD possibly others) the Varnish >> (or apache) will never even see these connections in the first place. > >Does this basically mean that in these uncertain times where kiddiots >DoS (Destroying our Sanity) just for the fun of it, using accf_http >should go into the checklist for every webserver, or am I reading this >wrong? Good question, I don't know how accf_http will handle these attacks. 
-- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From freebsd-listen at fabiankeil.de Fri Jun 19 18:07:01 2009 From: freebsd-listen at fabiankeil.de (Fabian Keil) Date: Fri, 19 Jun 2009 20:07:01 +0200 Subject: Apache DoS - is Varnish affected? In-Reply-To: <2013.1245425751@critter.freebsd.dk> References: <4A3BA393.3010306@loman.net> <2013.1245425751@critter.freebsd.dk> Message-ID: <20090619200701.0dd70975@fabiankeil.de> "Poul-Henning Kamp" wrote: > In message <4A3BA393.3010306 at loman.net>, Nick Loman writes: > >I would guess that Varnish isn't affected by this, but does anyone know > >for sure? Does Varnish protect against this attack in all cases if you > >have Apache as your backend? > > > >http://isc.sans.org/diary.html?storyid=6601 > > Varnish will abandon the connection after a fixed number of header > lines. > > This attack is more or less exactly _why_ varnish has a fixed limit > on HTTP headers. > > I won't claim that varnish is imune, but the impact should be manageable. > > Systems using "http accept filters" (FreeBSD possibly others) the Varnish > (or apache) will never even see these connections in the first place. Actually I think accf_http(9) would only delay the attack. While the man page doesn't mention it, accf_http passes incomplete requests to the userland if its buffer is full. Fabian -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 196 bytes Desc: not available URL: From phk at phk.freebsd.dk Fri Jun 19 18:28:32 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 19 Jun 2009 18:28:32 +0000 Subject: Apache DoS - is Varnish affected? In-Reply-To: Your message of "Fri, 19 Jun 2009 20:07:01 +0200." 
<20090619200701.0dd70975@fabiankeil.de> Message-ID: <3377.1245436112@critter.freebsd.dk> In message <20090619200701.0dd70975 at fabiankeil.de>, Fabian Keil writes: >Actually I think accf_http(9) would only delay the attack. > >While the man page doesn't mention it, accf_http passes >incomplete requests to the userland if its buffer is full. Yeah, but I'm pretty sure the buffer would contain enough junk to make varnish shut the connection immediately, so the fd starvation would not happen. Anyway, if you are interested in this DoS, you can trivially test it yourself with a telnet connection and patience in front of the keyboard. Poul-Henning -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From kb+varnish at slide.com Fri Jun 19 21:17:29 2009 From: kb+varnish at slide.com (Ken Brownfield) Date: Fri, 19 Jun 2009 14:17:29 -0700 Subject: Thread memory allocation question In-Reply-To: <87ws78s41q.fsf@qurzaw.linpro.no> References: <87ws78s41q.fsf@qurzaw.linpro.no> Message-ID: <5C056AE2-7207-42F8-9E4B-0F541DC4B1B2@slide.com> On Jun 19, 2009, at 7:15 AM, Tollef Fog Heen wrote: > | 0000000040111000 8192K rw--- [ anon ] > Looks like the default stack size. Ah, of course. Good find, thanks. I'm thinking it might be nice to have a thread track its stack history and emit its approximate largest size when it's reaped (and the workspaces too I suppose). Would a stack overflow take out the whole child, or just that thread? The 1024K blocks roughly add up to the SMA outstanding bytes, so I'm assuming these are jemalloc block allocations, and not related to thread count: lib/jemalloc/malloc.c: #define CHUNK_2POW_DEFAULT 20 Thanks! -- Ken.
On Jun 19, 2009, at 7:15 AM, Tollef Fog Heen wrote: > ]] Ken Brownfield > > | When looking at /proc/map info for varnish threads, I'm seeing the > | following allocations in numbers that essentially match the child > count: > | > | 0000000040111000 8192K rw--- [ anon ] > > Looks like the default stack size. > > | And this at almost double the child count: > | > | 00007f4d57900000 1024K rw--- [ anon ] > > Unsure what this is. > > | I noticed some pretty intense memory usage when I had a backend > issue > | and the thread count increased into the hundreds. Obviously the > | threads will need memory as they scale, but are these large > | allocations intentional, and are they tunable (beyond the relatively > | small workspaces?) > > You should be able to tune it using ulimit -s. If you turn it too > low, > things will break, though. > > -- > Tollef Fog Heen > Redpill Linpro -- Changing the game! > t: +47 21 54 41 73 From phk at phk.freebsd.dk Fri Jun 19 21:21:52 2009 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Fri, 19 Jun 2009 21:21:52 +0000 Subject: Thread memory allocation question In-Reply-To: Your message of "Fri, 19 Jun 2009 14:17:29 MST." <5C056AE2-7207-42F8-9E4B-0F541DC4B1B2@slide.com> Message-ID: <5039.1245446512@critter.freebsd.dk> In message <5C056AE2-7207-42F8-9E4B-0F541DC4B1B2 at slide.com>, Ken Brownfield writes: >Would a stack overflow take out the whole child, or just that thread? The kernel would try to extend the stack and provided you are not on a 32 bit system, it shouldn't ever have a problem with that. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence.
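For readers hitting the same memory pressure, Tollef's `ulimit -s` suggestion and the back-of-envelope math above can be combined into a startup sketch. All numbers here are illustrative assumptions, not recommendations; as noted in the thread, a stack set too small will break worker threads, so test under real load:

```shell
#!/bin/sh
# Shrink the per-thread stack that varnishd's worker threads will inherit.
# 8192 KB is the default seen in the /proc maps above; 1024 KB is an
# illustrative lower value.
ulimit -s 1024

# Worst-case stack reservation is simply: number of threads * stack size.
threads=500        # e.g. whatever thread_pool_max is set to
stack_kb=1024
echo "worst-case stack memory: $(( threads * stack_kb / 1024 )) MB"

# exec varnishd -a :80 -b localhost:8080 -p thread_pool_max=$threads
```

The varnishd launch line is commented out here; the point is only that the ulimit must be set in the shell that starts the daemon, before the child is forked.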
From me at mgoldman.com Sat Jun 20 02:06:52 2009 From: me at mgoldman.com (Martin Goldman) Date: Fri, 19 Jun 2009 22:06:52 -0400 Subject: "ExpBan: nnn was banned" Message-ID: <24a219a50906191906v59c6588bkeac4747d7547db85@mail.gmail.com> Hi folks, We have a newly deployed Varnish installation (v2.0.4 on Debian 5.0 64-bit). Our hitrate is pretty good (80%+) and the load on the server is consistently 0.00. The cache size is 50G and the TTL is set to basically cache things indefinitely (it's set to like 1000 years, I think). I got a complaint from a user that some percentage of his page views seem too slow to be coming from the cache. I brought up our home page and started refreshing it over and over. Sure enough, while most of the page views are in fact getting cached, once every 5 or 6 times, I get a cache miss. When this happens, varnishlog shows something like this: 32 VCL_call c recv lookup 32 VCL_call c hash hash * 32 ExpBan c 1607291667 was banned * 32 VCL_call c miss fetch I apologize if my google-fu is broken today, but I can't seem to find a description of what the message above means. I would certainly appreciate any information (or links) that would help explain this message and how to prevent the problem going forward. Thanks and regards, Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From lerouxb at gmail.com Mon Jun 22 15:05:09 2009 From: lerouxb at gmail.com (Le Roux Bodenstein) Date: Mon, 22 Jun 2009 17:05:09 +0200 Subject: vcl_error restart Message-ID: <4f00879a0906220805h332a3160s8ee9e3ce0fdd67ef@mail.gmail.com> I'm trying to setup varnish so that it will try one backend and then if there's an error (for example - the backend isn't up at all - not even throwing 500 errors), it tries a fallback backend. I'm trying to get varnish to immediately try another backend if I restart my app. 
So the first backend is my django powered web app and the second one is a smaller, dumb one that just tries to read from a cache that I manage. I only want to use this cache as a fallback - it isn't a max-age based thing. I think backend polling isn't perfect here - I don't want requests to die for seconds before health polling figures it out - the app will be restarted within seconds already anyway. I also don't want to use the grace functionality here - this is only for non-cacheable (not by varnish, anyway) web app urls that need to be re-evaluated all the time. (I'm thinking of sending static, cacheable urls to a third backend - a traditional webserver) So basically, I want to do something like this: backend b1 { .host = "localhost"; .port = "8000"; } backend b2 { .host = "localhost"; .port = "8001"; } sub vcl_recv { if (req.restarts == 0) { set req.backend = b1; } else { set req.backend = b2; } } sub vcl_error { if (req.restarts == 0) { set req.backend = b2; restart; } } But my second backend never gets a request. Is this possible? Has anyone done something like this before? Le Roux From darryl.dixon at winterhouseconsulting.com Mon Jun 22 21:41:27 2009 From: darryl.dixon at winterhouseconsulting.com (Darryl Dixon - Winterhouse Consulting) Date: Tue, 23 Jun 2009 09:41:27 +1200 (NZST) Subject: vcl_error restart In-Reply-To: <4f00879a0906220805h332a3160s8ee9e3ce0fdd67ef@mail.gmail.com> References: <4f00879a0906220805h332a3160s8ee9e3ce0fdd67ef@mail.gmail.com> Message-ID: <53674.58.28.124.90.1245706887.squirrel@services.directender.co.nz> Hi Le Roux, I investigated doing something similar with Varnish a little while ago when the backend health checks were first included. I decided at the time that it couldn't achieve what I wanted, so I left that functionality inside the Pound config one layer down. The way we do this is with Pound's HAPort directive plus its Emergency Backend. 
Thus, Varnish sends all traffic regularly to Pound, and then Pound onwards to your regular Django webapp. Pound monitors the health of your Django webapp with connections to an 'HAPort' that you control (all it has to do is accept connections and close them straight away to indicate that the Django webapp is working). When you want to restart Django, you first shut down the HAPort, then afterward shut down/restart Django. Once it's up again the HAPort is restarted. You can probably automate this (we did with Zope). The key is that when Pound detects that the HAPort goes down, it wants to send the traffic somewhere, so you specify an 'Emergency' backend (backend of last resort) which points to your custom cache server. While the HAPort is down, all traffic will go there. Hope this helps or gives you some ideas anyway. regards, Darryl Dixon Winterhouse Consulting Ltd http://www.winterhouseconsulting.com > I'm trying to setup varnish so that it will try one backend and then > if there's an error (for example - the backend isn't up at all - not > even throwing 500 errors), it tries a fallback backend. I'm trying to > get varnish to immediately try another backend if I restart my app. So > the first backend is my django powered web app and the second one is a > smaller, dumb one that just tries to read from a cache that I manage. > I only want to use this cache as a fallback - it isn't a max-age based > thing. > > I think backend polling isn't perfect here - I don't want requests to > die for seconds before health polling figures it out - the app will be > restarted within seconds already anyway. I also don't want to use the > grace functionality here - this is only for non-cacheable (not by > varnish, anyway) web app urls that need to be re-evaluated all the > time. 
(I'm thinking of sending static, cacheable urls to a third > backend - a traditional webserver) > > So basically, I want to do something like this: > > backend b1 { > .host = "localhost"; > .port = "8000"; > } > > backend b2 { > .host = "localhost"; > .port = "8001"; > } > > sub vcl_recv { > if (req.restarts == 0) { > set req.backend = b1; > } else { > set req.backend = b2; > } > } > > sub vcl_error { > if (req.restarts == 0) { > set req.backend = b2; > restart; > } > } > > > But my second backend never gets a request. Is this possible? Has > anyone done something like this before? > > > Le Roux > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > From lerouxb at gmail.com Tue Jun 23 08:46:09 2009 From: lerouxb at gmail.com (Le Roux Bodenstein) Date: Tue, 23 Jun 2009 10:46:09 +0200 Subject: vcl_error restart In-Reply-To: <53674.58.28.124.90.1245706887.squirrel@services.directender.co.nz> References: <4f00879a0906220805h332a3160s8ee9e3ce0fdd67ef@mail.gmail.com> <53674.58.28.124.90.1245706887.squirrel@services.directender.co.nz> Message-ID: <4f00879a0906230146r4f091304w3c3f69cb13193e31@mail.gmail.com> > I investigated doing something similar with Varnish a little while ago > when the backend health checks were first included. I decided at the time > that it couldn't achieve what I wanted, so I left that functionality > inside the Pound config one layer down. The way we do this is with Pound's > HAPort directive plus its Emergency Backend. Thank you. I will certainly investigate Pound. My main concern is just all the hops and movable parts, but I guess that's what benchmarking is for. I eventually figured out yesterday that the main reason my config didn't work is because I misinterpreted max-restarts. 
I thought 4 was a bit much in this case, so I set it to one thinking that one restart means it will restart once, but it turns out that means it won't restart at all. When I set that to 2 it started working. I also did some more thinking and my backend is actually a webserver (lighttpd) that handles the static files and sends the dynamic requests on to django. So if the django server is gone it already returns a 503 which is easy enough to work with in vcl. I almost never restart lighttpd (in years I restarted it once to enable ssl) so that's pretty stable. I want lighttpd to continue to serve up the static files even when I'm on the "emergency backend", so I don't know if backend monitoring is going to work easily for me. When the django server goes down it is only seconds, so I still think health monitoring and status is a bit overkill for me at this stage, so I'll see how this works out. I'm hoping it will fallback immediately and start using the primary backend as soon as it is up again. I will write up my config/setup if/once I get this working. But basically my config is now: backend b1 { .host = "localhost"; .port = "8000"; } backend b2 { .host = "localhost"; .port = "8001"; } sub vcl_recv { if (req.restarts == 0) { set req.backend = b1; } else { set req.backend = b2; } } sub vcl_fetch { if (obj.status >= 500) { restart; } } sub vcl_error { if (req.restarts == 0) { restart; } } And I set max_restarts to 2 elsewhere. From lars at dcmediahosting.com Tue Jun 23 10:04:26 2009 From: lars at dcmediahosting.com (Lars Erik Dangvard Jensen) Date: Tue, 23 Jun 2009 12:04:26 +0200 Subject: avoid force-refresh to backend Message-ID: <4A40A8AA.3040605@dcmediahosting.com> Hello list, We have a customer still using Varnish 1.1.2, and they experience slow backends caused by a force-refresh every 5 minute, which is the TTL set by the backend itself. Now actually the backend caches a page itself, and this cache is always up to date. 
But when Varnish refreshes its own cache, it makes a force refresh to the backend causing the backend to regenerate a page from scratch using a lot of database-queries each time. Can we avoid Varnish 1.1.2 making a "force-refresh" and instead request/cache the backend with a normal page visit? Thanks. Lars From kb+varnish at slide.com Tue Jun 23 20:21:20 2009 From: kb+varnish at slide.com (Ken Brownfield) Date: Tue, 23 Jun 2009 13:21:20 -0700 Subject: [PATCH] Initial backend health In-Reply-To: <52C21B1B-13D5-45A0-BC7E-4D3C6A1C23A2@slide.com> References: <52C21B1B-13D5-45A0-BC7E-4D3C6A1C23A2@slide.com> Message-ID: On Jun 18, 2009, at 11:20 PM, Ken Brownfield wrote: > [...] > The attached patch creates a backend flag to change the initial > health of backends upon varnishd startup: > > backend foo { > .initial_health = 1; > } > [...] > TODO: I should probably add "-p initial_health=1", as it fits the > defaults like between_bytes_timeout. After looking at this for an hour or two, there's no way to tell whether the backend variable has been specifically set to 0, instead of defaulting to 0. At least without reworking how the be structs work (adding an initialization step that sets them to defaults provided by params). AFAICT. The following are identical in behavior, and therefore params can't be /conditionally/ overridden: .initial_health = 0; # .initial_health = 0; So you can't do "-p initial_health=1" and then set ".initial_health=0". So it's an either/or proposition. I like the per-backend setting myself, but if the general opinion is that a global param is better, I can create a different patch. Cheers, -- Ken. 
From anders at fupp.net Wed Jun 24 13:11:18 2009 From: anders at fupp.net (Anders Nordby) Date: Wed, 24 Jun 2009 15:11:18 +0200 Subject: hit for pass problems (was: Problems getting Varnish to cache certain things) In-Reply-To: <8885.1245270727@critter.freebsd.dk> References: <57402.58.28.124.90.1245212294.squirrel@services.directender.co.nz> <8885.1245270727@critter.freebsd.dk> Message-ID: <20090624131118.GA81913@fupp.net> Hi, On Wed, Jun 17, 2009 at 08:32:07PM +0000, Poul-Henning Kamp wrote: >>> Only pass in vcl_fetch will create a "hit for pass" object. >>Hrm. When did this change? We tested and confirmed in 2.0.3(?) that >>pass'ing out of vcl_hit caused the hit-for-pass behaviour. > The object will not be marked as hit-for-pass if you do that, but > the resulting behaviour is the same, so unless you inspect stats, > you cannot tell the difference. I am trying to set ttl before doing pass in vcl_fetch, at all places in vcl_fetch where I do pass. Like for example: if (!beresp.cacheable) { set beresp.ttl = 1200s; pass; } But still, it happens that Varnish will not cache objects, unless I restart it (I waited more than the above TTL value). If I try to purge those URLs, Varnish answers with HTTP 501 Not Supported. After I restart Varnish, I can purge the same URL again. Would much prefer not to have to restart Varnish once I run into a URL that we have difficulties to get Varnish to cache. :-/ Regards, -- Anders.
From darryl.dixon at winterhouseconsulting.com Wed Jun 24 20:42:33 2009 From: darryl.dixon at winterhouseconsulting.com (Darryl Dixon - Winterhouse Consulting) Date: Thu, 25 Jun 2009 08:42:33 +1200 (NZST) Subject: hit for pass problems (was: Problems getting Varnish to cache certain things) In-Reply-To: <20090624131118.GA81913@fupp.net> References: <57402.58.28.124.90.1245212294.squirrel@services.directender.co.nz> <8885.1245270727@critter.freebsd.dk> <20090624131118.GA81913@fupp.net> Message-ID: <64516.58.28.124.90.1245876153.squirrel@services.directender.co.nz> > Hi, > > On Wed, Jun 17, 2009 at 08:32:07PM +0000, Poul-Henning Kamp wrote: >>>> Only pass in vcl_fetch will create a "hit for pass" object. >>>Hrm. When did this change? We tested and confirmed in 2.0.3(?) that >>>pass'ing out of vcl_hit caused the hit-for-pass behaviour. >> The object will not be marked as hit-for-pass if you do that, but >> the resulting behaviour is the same, so unless you inspect stats, >> you cannot tell the difference. > > I am trying to set ttl before doing pass in vcl_fetch, at all places in > vcl_fetch where I do pass. Like for example: > > if (!beresp.cacheable) { > set beresp.ttl = 1200s; > pass; > } > > But still, it happens that Varnish will not cache objects, unless I > restart it (I waited more than the above TTL value). The problem is the call to 'pass'. You must not do this in vcl_fetch if you want Varnish to be able to cache the item. Instead, restart the request with 'restart'. Darryl Dixon Winterhouse Consulting Ltd http://www.winterhouseconsulting.com From anders at fupp.net Thu Jun 25 09:56:39 2009 From: anders at fupp.net (Anders Nordby) Date: Thu, 25 Jun 2009 11:56:39 +0200 Subject: hit for pass problems - time to change default config? 
In-Reply-To: <64516.58.28.124.90.1245876153.squirrel@services.directender.co.nz> References: <57402.58.28.124.90.1245212294.squirrel@services.directender.co.nz> <8885.1245270727@critter.freebsd.dk> <20090624131118.GA81913@fupp.net> <64516.58.28.124.90.1245876153.squirrel@services.directender.co.nz> Message-ID: <20090625095639.GA20332@fupp.net> Hi, On Thu, Jun 25, 2009 at 08:42:33AM +1200, Darryl Dixon - Winterhouse Consulting wrote: >> I am trying to set ttl before doing pass in vcl_fetch, at all places in >> vcl_fetch where I do pass. Like for example: >> >> if (!beresp.cacheable) { >> set beresp.ttl = 1200s; >> pass; >> } >> >> But still, it happens that Varnish will not cache objects, unless I >> restart it (I waited more than the above TTL value). > The problem is the call to 'pass'. You must not do this in vcl_fetch if > you want Varnish to be able to cache the item. Instead, restart the > request with 'restart'. Thanks, I'll try that. But it leads to obscure config (just like redirection) due to handling and taking action in different places in the config. Do we want it to be this way? in vcl_recv: if (req.restarts == 0) { lookup; } else { pass; } in vcl_fetch: if (!beresp.cacheable) { restart; } If this is the intended behaviour, then at least maybe the default configuration should be changed to reflect that pass in vcl_fetch is bad? Regards, -- Anders. From rodrigo at mercadolibre.com Thu Jun 25 14:55:21 2009 From: rodrigo at mercadolibre.com (Rodrigo Benzaquen) Date: Thu, 25 Jun 2009 11:55:21 -0300 Subject: Strange caching behavior In-Reply-To: <20090624131118.GA81913@fupp.net> References: <57402.58.28.124.90.1245212294.squirrel@services.directender.co.nz> <8885.1245270727@critter.freebsd.dk> <20090624131118.GA81913@fupp.net> Message-ID: <01d301c9f5a4$f820f180$e862d480$@com> Hi guys, today we saw that if our backend servers don't add an HTTP Cache-Control header, Varnish caches the URL for 30s.
We have tested the same using Netcache and we noticed that Netcache doesn't cache the object. Do you guys know how to modify Varnish so it doesn't cache URLs or objects that don't have Cache-Control? Example: as you can see our Backend Server is not sending cache-control: no-cache and Varnish is caching the object for 30s. VARNISH (Request-Line) GET /jm/XXXXXX?go=trans HTTP/1.1 Accept image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/x-ms-application, application/vnd.ms-xpsdocument, application/xaml+xml, application/x-ms-xbap, application/x-shockwave-flash, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */* Accept-Encoding gzip, deflate Accept-Language es-ar Connection Keep-Alive Cookie poison_cookie=313537500; orguseridp=88011902 Host www.XXX.XX Referer http://XXX.XXX.XX UA-CPU x86 User-Agent Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.0.04506) (Status-Line) HTTP/1.1 302 Found Age 28 Connection keep-alive Content-Length 619 Date Thu, 25 Jun 2009 12:38:03 GMT Location https://www.xxxx.xxx?go=trans&redirected=Y&setCKorgpago=89482244&setCK__utma=1.3637389798933187000.1242304103.1242310216.1242349919.3&setCKPMS78987160=PMS78987160&setCKpmsword=ITEM&setCKtr_cookie=187.13.29.68&setCKorgnickp=SELL-SHOP&setCKpmsctx=*****SMEMORIA+DDR400%7CS1+GB+DDR+400%7CS512+DDR+400%7C&setCKorgpms=305868&setCKorghash=062508-S4EZVST0K3VSOVBLL86MY7JOG4K5BU-89482244&setCKorguseridp=89482244&setCKorguserid=h749Th7ZZTT&setCKpoison_cookie=313426119&setCK__utmz=1.1242304103.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)&setCKoasLoConoce=Y Server Resin/3.0.18 via 1.1 Varnish (yblvarnish4) X-Varnish 1935462643 1935458287 Thanks Rodrigo. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From malevo at gmail.com Thu Jun 25 22:14:53 2009 From: malevo at gmail.com (Pablo Garcia Melga) Date: Thu, 25 Jun 2009 19:14:53 -0300 Subject: Strange caching behavior In-Reply-To: <01d301c9f5a4$f820f180$e862d480$@com> References: <57402.58.28.124.90.1245212294.squirrel@services.directender.co.nz> <8885.1245270727@critter.freebsd.dk> <20090624131118.GA81913@fupp.net> <01d301c9f5a4$f820f180$e862d480$@com> Message-ID: <4ec3c3f70906251514l7eb72072uf8b93b9ef9f7314f@mail.gmail.com> Rodrigo, You can either set that by VCL with set obj.ttl = 30s (more powerful and configurable) or in the varnishd command line, with the -t option to specify which is the default TTL. Regards, Pablo On Thu, Jun 25, 2009 at 11:55 AM, Rodrigo Benzaquen wrote: > Hi guys, today we saw that if our backend servers doesn?t add an HTTP > CACHE-CONTROL Varnish cache the URL for 30s. > > > > We have tested the same using Netcache and we noticed that Netcache doesn?t > cache the object. > > > > Do you guys know how to modify Varnish to don?t cache URLs or Objects that > doesn?t have CACHE-CONTROL? > > > > > > Example: as you can see our Backend Server is not sending cache-control: > no-cache and Varnish is caching the object for 30s. 
> > > > > > > > VARNISH > > (Request-Line) > > GET /jm/XXXXXX?go=trans HTTP/1.1 > > Accept > > image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, > application/x-ms-application, application/vnd.ms-xpsdocument, > application/xaml+xml, application/x-ms-xbap, application/x-shockwave-flash, > application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, > */* > > Accept-Encoding > > gzip, deflate > > Accept-Language > > es-ar > > Connection > > Keep-Alive > > Cookie > > poison_cookie=313537500; orguseridp=88011902 > > Host > > www.XXX.XX > > Referer > > http://XXX.XXX.XX > > UA-CPU > > x86 > > User-Agent > > Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR > 2.0.50727; .NET CLR 3.0.04506) > > (Status-Line) > > HTTP/1.1 302 Found > > Age > > 28 > > Connection > > keep-alive > > Content-Length > > 619 > > Date > > Thu, 25 Jun 2009 12:38:03 GMT > > Location > > https://www.xxxx.xxx?go=trans&redirected=Y&setCKorgpago=89482244&setCK__utma=1.3637389798933187000.1242304103.1242310216.1242349919.3&setCKPMS78987160=PMS78987160&setCKpmsword=ITEM&setCKtr_cookie=187.13.29.68&setCKorgnickp=SELL-SHOP&setCKpmsctx=*****SMEMORIA+DDR400%7CS1+GB+DDR+400%7CS512+DDR+400%7C&setCKorgpms=305868&setCKorghash=062508-S4EZVST0K3VSOVBLL86MY7JOG4K5BU-89482244&setCKorguseridp=89482244&setCKorguserid=h749Th7ZZTT&setCKpoison_cookie=313426119&setCK__utmz=1.1242304103.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)&setCKoasLoConoce=Y > > Server > > Resin/3.0.18 > > via > > 1.1 Varnish (yblvarnish4) > > X-Varnish > > 1935462643 1935458287 > > > > > > Thanks > > Rodrigo. > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at projects.linpro.no > http://projects.linpro.no/mailman/listinfo/varnish-misc > >
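Pablo's suggestion can also be sketched in VCL. This is a sketch only: whether vcl_fetch exposes `obj` or `beresp` depends on the Varnish version (the thread above uses both spellings), and the header test and TTL policy are assumptions, not Rodrigo's actual config:

```vcl
sub vcl_fetch {
    # If the backend sent no Cache-Control header at all, do not cache.
    # Note that pass in vcl_fetch creates a hit-for-pass object, as
    # discussed in the "hit for pass problems" thread above.
    if (!beresp.http.Cache-Control) {
        pass;
    }
}
```

Alternatively, as Pablo notes, varnishd's `-t` option changes the default TTL that is applied when the backend gives no caching hints, so setting it to 0 would also stop the 30-second default caching.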