From bluethundr at gmail.com Tue Aug 5 06:13:26 2014
From: bluethundr at gmail.com (Tim Dunphy)
Date: Tue, 5 Aug 2014 02:13:26 -0400
Subject: remove persistence from the vcl
Message-ID:

hey guys..

I've been asked to 'remove persistence' from my varnish default.vcl -- I'm not clear on how to do that. Any tips available? Unfortunately I couldn't get a clear explanation from the guy requesting this to be done, so I was wondering if this rings any bells for anyone.

Thanks!
Tim

--
GPG me!!
gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lcruzero at gmail.com Tue Aug 5 14:27:15 2014
From: lcruzero at gmail.com (L Cruzero)
Date: Tue, 5 Aug 2014 10:27:15 -0400
Subject: VCL continuous integration
Message-ID:

Hi,

Is anyone on the list using any CI tools, or does anyone have ideas about possible options for implementing continuous integration with VCL?

Thanks,
lcruzero

From apj at mutt.dk Tue Aug 5 14:32:12 2014
From: apj at mutt.dk (Andreas Plesner Jacobsen)
Date: Tue, 5 Aug 2014 16:32:12 +0200
Subject: VCL continuous integration
In-Reply-To: References: Message-ID: <20140805143212.GI19870@nerd.dk>

On Tue, Aug 05, 2014 at 10:27:15AM -0400, L Cruzero wrote:
> is anyone on the list using any CI tools or have any ideas about possible
> options for implementing continuous integration with VCL.

Varnish itself is tested in Jenkins using varnishtest (make test):
http://jenkins.varnish-cache.org/

--
Andreas

From pprocacci at datapipe.com Tue Aug 5 16:49:42 2014
From: pprocacci at datapipe.com (Paul A. Procacci)
Date: Tue, 5 Aug 2014 11:49:42 -0500
Subject: remove persistence from the vcl
In-Reply-To: References: Message-ID: <20140805164942.GA22153@workmachine>

On Tue, Aug 05, 2014 at 02:13:26AM -0400, Tim Dunphy wrote:
> hey guys..
I've been asked to 'remove persistence' from my varnish
> default.vcl -- I'm not clear on how to do that. Any tips available?
> Unfortunately I couldn't get a clear explanation from the guy requesting
> this to be done, so I was wondering if this rings any bells for anyone.

Sounds like one of two things to me:

A) He doesn't want to use the client director and would rather use a round-robin director, OR
B) He means the persistent storage mechanism itself.

I'd guess he means the former.

> Thanks!
> Tim

~Paul

From geodni at free.fr Tue Aug 5 18:11:14 2014
From: geodni at free.fr (geodni at free.fr)
Date: Tue, 5 Aug 2014 20:11:14 +0200 (CEST)
Subject: Varnish 3.0.5 RAM/SWAP usage
In-Reply-To: <168586776.19182803.1407260557769.JavaMail.root@zimbra59-e10.priv.proxad.net>
Message-ID: <1105616762.19233020.1407262274765.JavaMail.root@zimbra59-e10.priv.proxad.net>

Hi all,

I am working on a FreeBSD 10 amd64 system with Varnish 3.0.5 from the official port.

The hardware I use is:
- HP Proliant DL360 G8
- one Intel(R) Xeon(R) CPU E5-2667 0 @ 2.90GHz
- 32GB of RAM
- 8GB of swap
- Smart Array P420i with 1GB cache, no BBU
- 1 big RAID5 of 4x4TB Constellation ES.3 SAS2
- dedicated GPT freebsd-ufs partition of 3.6TB for Varnish, mounted on /var/cache/varnish (no atime, no soft updates, no journal at all)

Starting parameters for varnishd are the defaults, except storage:
-s file,/var/cache/varnish/file.bin,90%

I am stress-testing Varnish with a big injection (trying to load 200 million objects of 12k average size, ranging from 51B to 250kB, for a future production environment). It runs fine until all memory becomes full and then all swap is consumed; the system simply runs out of swap. Performance is still acceptable, but the varnishd child process is killed and a new one is started. The injection process filled RAM+swap (28+8) after 15718051 transactions in 1h15. That could be good enough, but as the storage is not persistent, all the work done is unusable.
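[Editor's note on Paul's first reading above: dropping the client (persistence) director in favour of a round-robin one could be sketched like this in VCL 3 syntax. The backend names and addresses are hypothetical, purely for illustration.]

```vcl
# Hypothetical backends -- substitute your real ones.
backend web1 { .host = "192.0.2.10"; .port = "80"; }
backend web2 { .host = "192.0.2.11"; .port = "80"; }

# Before (persistence): the client director pins each client,
# keyed on client.identity, to the same backend across requests.
# director pinned client {
#     { .backend = web1; .weight = 1; }
#     { .backend = web2; .weight = 1; }
# }

# After: plain round-robin, no stickiness.
director balanced round-robin {
    { .backend = web1; }
    { .backend = web2; }
}

sub vcl_recv {
    set req.backend = balanced;
}
```

[If "persistence" instead means the persistent storage backend, that lives in the varnishd `-s persistent,...` start argument, not in default.vcl.]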
The system only runs FreeBSD (65MB used), nginx (35MB, no logging) as the backend server for the tests, and the varnishncsa daemon (105MB).

I tried to find out on the net why all the RAM+swap is consumed so quickly, but I am at a loss. The documentation says 1KB of overhead per object, so 1 million objects means 1GB of RAM used. OK, it stores "only" 15 million objects, so it should consume 15GB, plus several GB for the SHM log and so on, but not all of the 13+8 (RAM+SWAP) on top of that.

Can anyone provide some tips, or point out mistakes I made?

Best regards,
Denis

I can provide more detailed information if required, just let me know.

varnish> param.show
200
acceptor_sleep_decay        0.900000 []
acceptor_sleep_incr         0.001000 [s]
acceptor_sleep_max          0.050000 [s]
auto_restart                on [bool]
ban_dups                    on [bool]
ban_lurker_sleep            0.010000 [s]
between_bytes_timeout       60.000000 [s]
cc_command                  "exec cc -std=gnu99 -O2 -pipe -fno-strict-aliasing -D_THREAD_SAFE -pthread -fpic -shared -Wl,-x -o %o %s"
cli_buffer                  8192 [bytes]
cli_timeout                 10 [seconds]
clock_skew                  10 [s]
connect_timeout             0.700000 [s]
critbit_cooloff             180.000000 [s]
default_grace               10.000000 [seconds]
default_keep                0.000000 [seconds]
default_ttl                 120.000000 [seconds]
diag_bitmap                 0x0 [bitmap]
esi_syntax                  0 [bitmap]
expiry_sleep                1.000000 [seconds]
fetch_chunksize             128 [kilobytes]
fetch_maxchunksize          262144 [kilobytes]
first_byte_timeout          60.000000 [s]
group                       www (80)
gzip_level                  6 []
gzip_memlevel               8 []
gzip_stack_buffer           32768 [Bytes]
gzip_tmp_space              0 []
gzip_window                 15 []
http_gzip_support           on [bool]
http_max_hdr                64 [header lines]
http_range_support          on [bool]
http_req_hdr_len            8192 [bytes]
http_req_size               32768 [bytes]
http_resp_hdr_len           8192 [bytes]
http_resp_size              32768 [bytes]
idle_send_timeout           60 [seconds]
listen_address              :8080
listen_depth                1024 [connections]
log_hashstring              on [bool]
log_local_address           off [bool]
lru_interval                2 [seconds]
max_esi_depth               5 [levels]
max_restarts                4 [restarts]
nuke_limit                  50 [allocations]
pcre_match_limit            10000 []
pcre_match_limit_recursion  10000 []
ping_interval               3 [seconds]
pipe_timeout                60 [seconds]
prefer_ipv6                 off [bool]
queue_max                   100 [%]
rush_exponent               3 [requests per request]
saintmode_threshold         10 [objects]
send_timeout                600 [seconds]
sendfile_threshold          unlimited [bytes]
sess_timeout                5 [seconds]
sess_workspace              65536 [bytes]
session_linger              50 [ms]
session_max                 100000 [sessions]
shm_reclen                  255 [bytes]
shm_workspace               8192 [bytes]
shortlived                  10.000000 [s]
syslog_cli_traffic          on [bool]
thread_pool_add_delay       2 [milliseconds]
thread_pool_add_threshold   2 [requests]
thread_pool_fail_delay      200 [milliseconds]
thread_pool_max             500 [threads]
thread_pool_min             5 [threads]
thread_pool_purge_delay     1000 [milliseconds]
thread_pool_stack           unlimited [bytes]
thread_pool_timeout         300 [seconds]
thread_pool_workspace       65536 [bytes]
thread_pools                2 [pools]
thread_stats_rate           10 [requests]
user                        www (80)
vcc_err_unref               on [bool]
vcl_dir                     /usr/local/etc/varnish
vcl_trace                   off [bool]
vmod_dir                    /usr/local/lib/varnish/vmods
waiter                      default (kqueue, poll)

From themadindian at yahoo.com Tue Aug 5 23:29:40 2014
From: themadindian at yahoo.com (Mad Indian)
Date: Tue, 5 Aug 2014 16:29:40 -0700
Subject: Varnish 3.0.5 RAM/SWAP usage
In-Reply-To: <1105616762.19233020.1407262274765.JavaMail.root@zimbra59-e10.priv.proxad.net>
References: <168586776.19182803.1407260557769.JavaMail.root@zimbra59-e10.priv.proxad.net> <1105616762.19233020.1407262274765.JavaMail.root@zimbra59-e10.priv.proxad.net>
Message-ID: <1407281380.44311.YahooMailNeo@web163906.mail.gq1.yahoo.com>

I run a similar configuration, except instead of file I use

malloc,26G

Have you tried that? Does it react the same?

On Tuesday, August 5, 2014 2:11 PM, "geodni at free.fr" wrote:
> [original message and parameter listing quoted in full; trimmed]

_______________________________________________
varnish-misc mailing list
varnish-misc at varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From geodni at free.fr Wed Aug 6 12:19:00 2014
From: geodni at free.fr (geodni at free.fr)
Date: Wed, 6 Aug 2014 14:19:00 +0200 (CEST)
Subject: Varnish 3.0.5 RAM/SWAP usage
In-Reply-To: <1407281380.44311.YahooMailNeo@web163906.mail.gq1.yahoo.com>
Message-ID: <947378587.21138267.1407327540142.JavaMail.root@zimbra59-e10.priv.proxad.net>

> I run a similar configuration except instead of file I use
>
> malloc,26G
>
> Have you tried that? does it react the same?

Hi,

I tried that and the system became unstable after 3 minutes. I had to reboot because of the processes that appeared after that. No hardware failure, nor anything specific in the system log. After the reboot, I made another injection test with "file" storage.
My conclusion is that Varnish really eats 1GB per 1 million objects, but it also consumes 22GB of RAM for the rest of its activity!!! What the hell is it doing with 22GB of RAM? 80MB for SHM logs, 100MB for threads, OK, but what about the rest?

The platform I use runs at 2k requests per second for one injection job. One thing I can do is limit "file" storage to 100GB to be sure the system won't use swap. I am currently running this test.

Regards,
Denis

From thierry.magnien at sfr.com Wed Aug 6 13:18:18 2014
From: thierry.magnien at sfr.com (MAGNIEN, Thierry)
Date: Wed, 6 Aug 2014 13:18:18 +0000
Subject: Varnish 3.0.5 RAM/SWAP usage
In-Reply-To: <947378587.21138267.1407327540142.JavaMail.root@zimbra59-e10.priv.proxad.net>
References: <1407281380.44311.YahooMailNeo@web163906.mail.gq1.yahoo.com> <947378587.21138267.1407327540142.JavaMail.root@zimbra59-e10.priv.proxad.net>
Message-ID: <5D103CE839D50E4CBC62C9FD7B83287CC9C1025D@EXCN015.encara.local.ads>

Hi,

Can you provide varnishstat output and your VCL? This will help with the diagnosis.

Regards,
Thierry

-----Original Message-----
From: varnish-misc-bounces+thierry.magnien=sfr.com at varnish-cache.org [mailto:varnish-misc-bounces+thierry.magnien=sfr.com at varnish-cache.org] On behalf of geodni at free.fr
Sent: Wednesday, 6 August 2014 14:19
To: varnish-misc at varnish-cache.org
Subject: Re: Varnish 3.0.5 RAM/SWAP usage

> I run a similar configuration except instead of file I use
>
> malloc,26G
>
> Have you tried that? does it react the same?

Hi, I tried that and the system became unstable after 3 minutes. I had to reboot because of the processes that appeared after that. No hardware failure, nor anything specific in the system log. After the reboot, I made another injection test with "file" storage.

My conclusion is that Varnish really eats 1GB per 1 million objects, but it also consumes 22GB of RAM for the rest of its activity!!! What the hell is it doing with 22GB of RAM? 80MB for SHM logs, 100MB for threads, OK, but what about the rest?
The platform I use runs at 2k requests per second for one injection job. One thing I can do is limiting "file" storage to 100GB to be sure the system won't use SWAP. I am currently running this test. Regards, Denis _______________________________________________ varnish-misc mailing list varnish-misc at varnish-cache.org https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From geodni at free.fr Wed Aug 6 13:40:55 2014 From: geodni at free.fr (geodni at free.fr) Date: Wed, 6 Aug 2014 15:40:55 +0200 (CEST) Subject: Varnish 3.0.5 RAM/SWAP usage In-Reply-To: <5D103CE839D50E4CBC62C9FD7B83287CC9C1025D@EXCN015.encara.local.ads> Message-ID: <1774932071.21279433.1407332455409.JavaMail.root@zimbra59-e10.priv.proxad.net> No problem, but the current stats are for a 100GB limited file storage <--- varnishstat -1 ---> client_conn 1627 6.53 Client connections accepted client_drop 0 0.00 Connection dropped, no sess/wrk client_req 858928 3449.51 Client requests received cache_hit 7864 31.58 Cache hits cache_hitpass 0 0.00 Cache hits for pass cache_miss 851064 3417.93 Cache misses backend_conn 8511 34.18 Backend conn. success backend_unhealthy 0 0.00 Backend conn. not attempted backend_busy 0 0.00 Backend conn. too many backend_fail 0 0.00 Backend conn. failures backend_reuse 842558 3383.77 Backend conn. reuses backend_toolate 0 0.00 Backend conn. was closed backend_recycle 842559 3383.77 Backend conn. recycles backend_retry 0 0.00 Backend conn. retry fetch_head 0 0.00 Fetch head fetch_length 851064 3417.93 Fetch with Length fetch_chunked 0 0.00 Fetch chunked fetch_eof 0 0.00 Fetch EOF fetch_bad 0 0.00 Fetch had bad headers fetch_close 0 0.00 Fetch wanted close fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed fetch_zero 0 0.00 Fetch zero len fetch_failed 0 0.00 Fetch failed fetch_1xx 0 0.00 Fetch no body (1xx) fetch_204 0 0.00 Fetch no body (204) fetch_304 0 0.00 Fetch no body (304) n_sess_mem 10 . N struct sess_mem n_sess 0 . N struct sess n_object 851064 . 
N struct object n_vampireobject 0 . N unresurrected objects n_objectcore 851064 . N struct objectcore n_objecthead 851064 . N struct objecthead n_waitinglist 2 . N struct waitinglist n_vbc 1 . N struct vbc n_wrk 10 . N worker threads n_wrk_create 10 0.04 N worker threads created n_wrk_failed 0 0.00 N worker threads not created n_wrk_max 0 0.00 N worker threads limited n_wrk_lqueue 0 0.00 work request queue length n_wrk_queued 0 0.00 N queued work requests n_wrk_drop 0 0.00 N dropped work requests n_backend 1 . N backends n_expired 0 . N expired objects n_lru_nuked 0 . N LRU nuked objects n_lru_moved 7865 . N LRU moved objects losthdr 0 0.00 HTTP header overflows n_objsendfile 0 0.00 Objects sent with sendfile n_objwrite 858934 3449.53 Objects sent with write n_objoverflow 0 0.00 Objects overflowing workspace s_sess 1627 6.53 Total Sessions s_req 858928 3449.51 Total Requests s_pipe 0 0.00 Total pipe s_pass 0 0.00 Total pass s_fetch 851064 3417.93 Total fetch s_hdrbytes 227663896 914312.84 Total header bytes s_bodybytes 9793599268 39331723.97 Total body bytes sess_closed 1626 6.53 Session Closed sess_pipeline 0 0.00 Session Pipeline sess_readahead 0 0.00 Session Read Ahead sess_linger 858927 3449.51 Session Linger sess_herd 0 0.00 Session herd shm_records 58989035 236903.76 SHM records shm_writes 2566131 10305.75 SHM writes shm_flushes 0 0.00 SHM flushes due to overflow shm_cont 0 0.00 SHM MTX contention shm_cycles 19 0.08 SHM cycles through buffer sms_nreq 0 0.00 SMS allocator requests sms_nobj 0 . SMS outstanding allocations sms_nbytes 0 . SMS outstanding bytes sms_balloc 0 . SMS bytes allocated sms_bfree 0 . SMS bytes freed backend_req 851070 3417.95 Backend requests made n_vcl 1 0.00 N vcl total n_vcl_avail 1 0.00 N vcl available n_vcl_discard 0 0.00 N vcl discarded n_ban 1 . N total active bans n_ban_gone 1 . 
N total gone bans n_ban_add 1 0.00 N new bans added n_ban_retire 0 0.00 N old bans deleted n_ban_obj_test 0 0.00 N objects tested n_ban_re_test 0 0.00 N regexps tested against n_ban_dups 0 0.00 N duplicate bans removed hcb_nolock 0 0.00 HCB Lookups without lock hcb_lock 0 0.00 HCB Lookups with lock hcb_insert 0 0.00 HCB Inserts esi_errors 0 0.00 ESI parse errors (unlock) esi_warnings 0 0.00 ESI parse warnings (unlock) accept_fail 0 0.00 Accept failures client_drop_late 0 0.00 Connection dropped late uptime 249 1.00 Client uptime dir_dns_lookups 0 0.00 DNS director lookups dir_dns_failed 0 0.00 DNS director failed lookups dir_dns_hit 0 0.00 DNS director cached lookups hit dir_dns_cache_full 0 0.00 DNS director full dnscache vmods 0 . Loaded VMODs n_gzip 0 0.00 Gzip operations n_gunzip 0 0.00 Gunzip operations sess_pipe_overflow 0 . Dropped sessions due to session pipe overflow LCK.sms.creat 1 0.00 Created locks LCK.sms.destroy 0 0.00 Destroyed locks LCK.sms.locks 0 0.00 Lock Operations LCK.sms.colls 0 0.00 Collisions LCK.smp.creat 0 0.00 Created locks LCK.smp.destroy 0 0.00 Destroyed locks LCK.smp.locks 0 0.00 Lock Operations LCK.smp.colls 0 0.00 Collisions LCK.sma.creat 1 0.00 Created locks LCK.sma.destroy 0 0.00 Destroyed locks LCK.sma.locks 0 0.00 Lock Operations LCK.sma.colls 0 0.00 Collisions LCK.smf.creat 1 0.00 Created locks LCK.smf.destroy 0 0.00 Destroyed locks LCK.smf.locks 1702141 6835.91 Lock Operations LCK.smf.colls 0 0.00 Collisions LCK.hsl.creat 0 0.00 Created locks LCK.hsl.destroy 0 0.00 Destroyed locks LCK.hsl.locks 0 0.00 Lock Operations LCK.hsl.colls 0 0.00 Collisions LCK.hcb.creat 0 0.00 Created locks LCK.hcb.destroy 0 0.00 Destroyed locks LCK.hcb.locks 0 0.00 Lock Operations LCK.hcb.colls 0 0.00 Collisions LCK.hcl.creat 16383 65.80 Created locks LCK.hcl.destroy 0 0.00 Destroyed locks LCK.hcl.locks 866800 3481.12 Lock Operations LCK.hcl.colls 0 0.00 Collisions LCK.vcl.creat 1 0.00 Created locks LCK.vcl.destroy 0 0.00 Destroyed locks LCK.vcl.locks 
4 0.02 Lock Operations LCK.vcl.colls 0 0.00 Collisions LCK.stat.creat 1 0.00 Created locks LCK.stat.destroy 0 0.00 Destroyed locks LCK.stat.locks 1636 6.57 Lock Operations LCK.stat.colls 0 0.00 Collisions LCK.sessmem.creat 1 0.00 Created locks LCK.sessmem.destroy 0 0.00 Destroyed locks LCK.sessmem.locks 1839 7.39 Lock Operations LCK.sessmem.colls 0 0.00 Collisions LCK.wstat.creat 1 0.00 Created locks LCK.wstat.destroy 0 0.00 Destroyed locks LCK.wstat.locks 85891 344.94 Lock Operations LCK.wstat.colls 0 0.00 Collisions LCK.herder.creat 1 0.00 Created locks LCK.herder.destroy 0 0.00 Destroyed locks LCK.herder.locks 1 0.00 Lock Operations LCK.herder.colls 0 0.00 Collisions LCK.wq.creat 2 0.01 Created locks LCK.wq.destroy 0 0.00 Destroyed locks LCK.wq.locks 3749 15.06 Lock Operations LCK.wq.colls 0 0.00 Collisions LCK.objhdr.creat 851070 3417.95 Created locks LCK.objhdr.destroy 0 0.00 Destroyed locks LCK.objhdr.locks 3420011 13734.98 Lock Operations LCK.objhdr.colls 0 0.00 Collisions LCK.exp.creat 1 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 851312 3418.92 Lock Operations LCK.exp.colls 0 0.00 Collisions LCK.lru.creat 2 0.01 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 851070 3417.95 Lock Operations LCK.lru.colls 0 0.00 Collisions LCK.cli.creat 1 0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 94 0.38 Lock Operations LCK.cli.colls 0 0.00 Collisions LCK.ban.creat 1 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 851314 3418.93 Lock Operations LCK.ban.colls 0 0.00 Collisions LCK.vbp.creat 1 0.00 Created locks LCK.vbp.destroy 0 0.00 Destroyed locks LCK.vbp.locks 0 0.00 Lock Operations LCK.vbp.colls 0 0.00 Collisions LCK.vbe.creat 1 0.00 Created locks LCK.vbe.destroy 0 0.00 Destroyed locks LCK.vbe.locks 17021 68.36 Lock Operations LCK.vbe.colls 0 0.00 Collisions LCK.backend.creat 1 0.00 Created locks LCK.backend.destroy 0 0.00 Destroyed locks LCK.backend.locks 1710652 6870.09 
Lock Operations LCK.backend.colls 0 0.00 Collisions SMF.s0.c_req 1702141 6835.91 Allocator requests SMF.s0.c_fail 0 0.00 Allocator failures SMF.s0.c_bytes 21703835648 87163998.59 Bytes allocated SMF.s0.c_freed 0 0.00 Bytes freed SMF.s0.g_alloc 1702142 . Allocations outstanding SMF.s0.g_bytes 21703843840 . Bytes outstanding SMF.s0.g_space 85670338560 . Bytes available SMF.s0.g_smf 1702143 . N struct smf SMF.s0.g_smf_frag 0 . N small free smf SMF.s0.g_smf_large 1 . N large free smf SMA.Transient.c_req 0 0.00 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 0 0.00 Bytes allocated SMA.Transient.c_freed 0 0.00 Bytes freed SMA.Transient.g_alloc 0 . Allocations outstanding SMA.Transient.g_bytes 0 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available VBE.default(10.10.11.28,,80).vcls 1 . VCL references VBE.default(10.10.11.28,,80).happy 0 . Happy health probes

<--- config.vcl --->

backend default {
    .host = "10.10.11.28";
    .port = "80";
}

acl purge {
    "localhost";
    "10.10.11.0"/24;
}

# Request from client
sub vcl_recv {
    # allow PURGE from localhost and 10.10.11.0/24
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 401 "Not allowed.";
        }
        return (lookup);
    }
    if (req.request == "BAN") {
        if (!client.ip ~ purge) {
            error 401 "Not allowed.";
        }
        if (req.http.x-ban-url != "" && req.http.x-ban-host != "") {
            ban("req.url ~ " + req.http.x-ban-url + " && req.http.host ~ " + req.http.x-ban-host);
            error 200 "Banned " + req.http.x-ban-host + req.http.x-ban-url;
        }
        return (lookup);
    }
}

sub vcl_hash {
    ##return (hash);
}

sub vcl_pipe {
    ### Do not keep connection opened after sending response
    set bereq.http.connection = "close";
    return (pipe);
}

sub vcl_pass {
    return (pass);
}

# Object retrieved from backend (origin)
sub vcl_fetch {
    remove beresp.http.Cache-Control;

    ### ROBOTS/SITEMAPS
    if (req.url ~ "^robots.txt") {
        set beresp.http.Cache-Control = "s-maxage=86400";
        set beresp.http.x-cache-type = "robots";
    } elsif (req.url ~ "^sitemap.xml") {
        set beresp.http.Cache-Control = "s-maxage=86400";
        set beresp.http.x-cache-type = "sitemaps";
    } elsif (req.http.host ~ "^corporate") {
        set beresp.http.Cache-Control = "no-cache";
        set beresp.http.Pragma = "no-cache";
    } elsif (req.url ~ "\.(bmp|gif|ico|jpeg|jpg|png)$") {
        set beresp.http.Cache-Control = "s-maxage=604800";
        set beresp.http.x-cache-type = "media-images";
    } elsif (req.url ~ "(avi|flv|mov|mpeg|mpg|swf)$") {
        set beresp.http.Cache-Control = "s-maxage=604800";
        set beresp.http.x-cache-type = "media-videos";
    } elsif (req.url ~ "(doc|docx|docb|xls|xlsx|xlsb|pdf|txt)$") {
        set beresp.http.Cache-Control = "s-maxage=604800";
        set beresp.http.x-cache-type = "media-documents";
    } elsif (req.url ~ "(css|js)$") {
        set beresp.http.Cache-Control = "s-maxage=604800";
        set beresp.http.x-cache-type = "media-webs";
    } elsif (req.http.user-agent ~ "(iphone|itunes|nokia|symbian|android|mobile|opera mini|opera mobi|sony|windows ce)") {
        set beresp.http.Cache-Control = "no-cache";
        set beresp.http.Pragma = "no-cache";
    } else {
        set beresp.http.Cache-Control = "s-maxage=2592000";
        set beresp.http.x-cache-type = "nimportequoi";
    }

    # manipulate RESPONSE HEADERS here
    if (req.url ~ "^robots.txt") {
        set beresp.ttl = 86400s;
    } elsif (req.url ~ "^sitemap.xml") {
        set beresp.ttl = 86400s;
    } elsif (req.url ~ "\.(bmp|gif|ico|jpeg|jpg|png)$") {
        set beresp.ttl = 1800s;
    } elsif (req.url ~ "(avi|flv|mov|mpeg|mpg|swf)$") {
        set beresp.ttl = 1800s;
    } elsif (req.url ~ "(doc|docx|docb|xls|xlsx|xlsb|pdf|txt)$") {
        set beresp.ttl = 1800s;
    } elsif (req.url ~ "(css|js)$") {
        set beresp.ttl = 1800s;
    } else {
        # varnish default TTL is 120s
        set beresp.ttl = 2592000s; # for tests only
    }

    # Allow for Ban Lurker support.
    set beresp.http.x-url = req.url;
    set beresp.http.x-host = req.http.host;

    return (deliver);
}

sub vcl_deliver {
    set resp.http.Server = "Tartempion/1.0.1";
    remove resp.http.x-Varnish;
    remove resp.http.Via;
    remove resp.http.x-Powered-By;
    remove resp.http.x-url;
    remove resp.http.x-host;
    remove resp.http.x-cache-type; # remove x-http specific media qualifier
    set resp.http.Cache-Control = "max-age=2592000";
    return (deliver);
}

# Matching object in cache
sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
    return (deliver);
}

# object was not in cache until now
sub vcl_miss {
    if (req.request == "PURGE") {
        # to PURGE all variants
        purge;
        error 200 "Purged.";
    }
    return (fetch);
}

> Hi,
>
> Can you provide varnishstat output and vcl ? This will help for
> diagnostic.
>
> Regards,
> Thierry
>
> [rest of quoted exchange trimmed]
>
> Regards,
> Denis

From suyogs at rediff-inc.com Thu Aug 7 04:53:06 2014
From: suyogs at rediff-inc.com (Suyog Shirgaonkar)
Date: 7 Aug 2014 04:53:06 -0000
Subject: VCL 4.0 vcl_miss issue
Message-ID: <20140807045306.19793.qmail@pro236-137.mxout.rediffmailpro.com>

Hi,

I am using Varnish Cache server 4.0.1. My infrastructure has a frontend Varnish server, and between that Varnish and the web server there is one more Varnish server placed as midgrace. I want to unset the X-Varnish cache parameter from the backend response, i.e. the midgrace. I have tried to achieve this with the code below in my VCL file, but it is not working:

sub vcl_deliver {
    # Called before a cached object is delivered to the client.
    unset resp.http.Via;
    unset resp.http.Age;
    unset resp.http.X-Varnish;
    unset resp.http.X-Pad;
    unset resp.http.X-Powered-By;
    unset resp.http.Etag;
    unset resp.http.Retry-After;
    unset resp.http.X-Origin-Img-Unavailable;
    if (resp.http.cache-control ~ "max-age=0") {
        set resp.http.Expires = "Thu, 01 Jan 1970 00:00:00 GMT";
    }
    unset resp.http.Server;
    set resp.http.Server = "XXXX/4.0.1";
    #set resp. = std.log("Obj.ttl : " + obj.ttl);
    #if (obj.hits > 0) {
    #    set resp.http.X-Cache = "TCP_HIT";
    #} else {
    #    set resp.http.X-Cache = "TCP_MISS";
    #}
    if (resp.http.X-Varnish ~ " ") {
        set resp.http.x-cache = "TCP_HIT";
    } else {
        set resp.http.x-cache = "TCP_MISS";
    }
    return (deliver);
}

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From apj at mutt.dk Thu Aug 7 07:22:31 2014
From: apj at mutt.dk (Andreas Plesner Jacobsen)
Date: Thu, 7 Aug 2014 09:22:31 +0200
Subject: VCL 4.0 vcl_miss issue
In-Reply-To: <20140807045306.19793.qmail@pro236-137.mxout.rediffmailpro.com>
References: <20140807045306.19793.qmail@pro236-137.mxout.rediffmailpro.com>
Message-ID: <20140807072231.GJ19870@nerd.dk>

On Thu, Aug 07, 2014 at 04:53:06AM -0000, Suyog Shirgaonkar wrote:
> Hi, I am using varnish cache server 4.0.1.

Your mail is unreadable, please send in plain text.

--
Andreas

From csepulveda at mediastre.am Fri Aug 8 14:21:34 2014
From: csepulveda at mediastre.am (César Sepúlveda)
Date: Fri, 8 Aug 2014 10:21:34 -0400
Subject: varnishncsa bad Time taken to serve the request
Message-ID:

Hi guys!

We are using Varnish 3.0.5 and have an issue with the "Time taken to serve the request" parameter (%D). We realized that the varnishncsa log line is written before the download is complete. Example:

The download:

imac-de-cesar:~ csepulveda$ wget --limit-rate=100k http://xxx.xxx.com/assets/img/promo-win.jpg
--2014-08-08 09:55:27-- http://xxxx.xxxx.com/assets/img/promo-win.jpg
Resolving xxx.xxx.com... xx.xx.xx.xx.
Connecting to xxx.xxx.com|xx.xx.xx.xx|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 383810 (375K) [image/jpeg]
Saving to: 'promo-win.jpg.2'

100%[=========================================================================>] 383,810 101KB/s in 3.7s

2014-08-08 09:55:31 (101 KB/s) - 'promo-win.jpg.2' saved [383810/383810]

The log:

xx.xx.xx.xx "xx.xx.xx.xx" - [08/Aug/2014:13:55:33 +0000] "GET http://xxx.xxxx.com/assets/img/promo-win.jpg HTTP/1.0" 200 383810 "-" "Wget/UNKNOWN (darwin12.2.0)" 0.401839018 miss 0.785520

The download takes 3.7 seconds, but varnishncsa shows 0.785520. Watching this with tail -f, we see the log line is written when the download is at 5 or 8 percent.
The varnishncsa format is this: FORMAT="%h \"%{X-Forwarded-For}i\" %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-agent}i\" %{Varnish:time_firstbyte}x %{Varnish:handling}x %D" what we are doing wrong? can you help us? Thanks!. -- C?sar Sep?lveda Jefe de Plataforma Mediastream Chile Email: csepulveda at mediastre.am Tel?fono: +56 2 24029750 -------------- next part -------------- An HTML attachment was scrubbed... URL: From csepulveda at mediastre.am Fri Aug 8 14:31:27 2014 From: csepulveda at mediastre.am (=?UTF-8?B?Q8Opc2FyIFNlcMO6bHZlZGE=?=) Date: Fri, 8 Aug 2014 10:31:27 -0400 Subject: varnishncsa bad Time taken to serve the request In-Reply-To: References: Message-ID: a video... my english is not too good. https://copy.com/uaaM3nMGVAuZ8ntk -- C?sar Sep?lveda Jefe de Plataforma Mediastream Chile Email: csepulveda at mediastre.am Tel?fono: +56 2 24029750 2014-08-08 10:21 GMT-04:00 C?sar Sep?lveda : > Hi guys!. > > we are using varnish 3.0.5 and have an issue with Time taken to serve the > request parameter (%D) > > we realized that the varnishncsa log is written before the download is > complete: example > > > the download: > imac-de-cesar:~ csepulveda$ wget --limit-rate=100k > http://xxx.xxx.com/assets/img/promo-win.jpg > --2014-08-08 09:55:27-- http://xxxx.xxxx.com/assets/img/promo-win.jpg > Resolving xxx.xxx.com... xx.xx.xx.xx. > Connecting to xxx. xxx.com|xx.xx.xx.xx|:80... connected. > HTTP request sent, awaiting response... 200 OK > Length: 383810 (375K) [image/jpeg] > Saving to: ?promo-win.jpg.2? > > 100%[=========================================================================>] > 383,810 101KB/s in 3.7s > > 2014-08-08 09:55:31 (101 KB/s) - ?promo-win.jpg.2? 
saved [383810/383810] > > > The log: > xx.xx.xx.xx "xx.xx.xx.xx" - [08/Aug/2014:13:55:33 +0000] "GET > http://xxx.xxxx.com/assets/img/promo-win.jpg HTTP/1.0" 200 383810 "-" > "Wget/UNKNOWN (darwin12.2.0)" 0.401839018 miss 0.785520 > > The download take 3.7 seconds but varnishncsa show 0.785520.watching this > with tail -f we see the log is writen when the download go on 5 or 8 > percent. > > The varnishncsa format is this: > FORMAT="%h \"%{X-Forwarded-For}i\" %u %t \"%r\" %s %b \"%{Referer}i\" > \"%{User-agent}i\" %{Varnish:time_firstbyte}x %{Varnish:handling}x %D" > > what we are doing wrong? > can you help us? > > Thanks!. > > -- > C?sar Sep?lveda > Jefe de Plataforma > Mediastream Chile > > Email: csepulveda at mediastre.am > Tel?fono: +56 2 24029750 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From csepulveda at mediastre.am Mon Aug 11 13:19:01 2014 From: csepulveda at mediastre.am (=?UTF-8?B?Q8Opc2FyIFNlcMO6bHZlZGE=?=) Date: Mon, 11 Aug 2014 09:19:01 -0400 Subject: varnishncsa bad Time taken to serve the request In-Reply-To: References: Message-ID: Hi guys. Anyone can help me, or has the same issue? Thanks!!. -- C?sar Sep?lveda Jefe de Plataforma Mediastream Chile Email: csepulveda at mediastre.am Tel?fono: +56 2 24029750 2014-08-08 10:31 GMT-04:00 C?sar Sep?lveda : > a video... my english is not too good. > > https://copy.com/uaaM3nMGVAuZ8ntk > > -- > C?sar Sep?lveda > Jefe de Plataforma > Mediastream Chile > > Email: csepulveda at mediastre.am > Tel?fono: +56 2 24029750 > > > 2014-08-08 10:21 GMT-04:00 C?sar Sep?lveda : > > Hi guys!. 
>> >> we are using varnish 3.0.5 and have an issue with Time taken to serve the >> request parameter (%D) >> >> we realized that the varnishncsa log is written before the download is >> complete: example >> >> >> the download: >> imac-de-cesar:~ csepulveda$ wget --limit-rate=100k >> http://xxx.xxx.com/assets/img/promo-win.jpg >> --2014-08-08 09:55:27-- http://xxxx.xxxx.com/assets/img/promo-win.jpg >> Resolving xxx.xxx.com... xx.xx.xx.xx. >> Connecting to xxx. xxx.com|xx.xx.xx.xx|:80... connected. >> HTTP request sent, awaiting response... 200 OK >> Length: 383810 (375K) [image/jpeg] >> Saving to: ?promo-win.jpg.2? >> >> 100%[=========================================================================>] >> 383,810 101KB/s in 3.7s >> >> 2014-08-08 09:55:31 (101 KB/s) - ?promo-win.jpg.2? saved [383810/383810] >> >> >> The log: >> xx.xx.xx.xx "xx.xx.xx.xx" - [08/Aug/2014:13:55:33 +0000] "GET >> http://xxx.xxxx.com/assets/img/promo-win.jpg HTTP/1.0" 200 383810 "-" >> "Wget/UNKNOWN (darwin12.2.0)" 0.401839018 miss 0.785520 >> >> The download take 3.7 seconds but varnishncsa show 0.785520.watching >> this with tail -f we see the log is writen when the download go on 5 or 8 >> percent. >> >> The varnishncsa format is this: >> FORMAT="%h \"%{X-Forwarded-For}i\" %u %t \"%r\" %s %b \"%{Referer}i\" >> \"%{User-agent}i\" %{Varnish:time_firstbyte}x %{Varnish:handling}x %D" >> >> what we are doing wrong? >> can you help us? >> >> Thanks!. >> >> -- >> C?sar Sep?lveda >> Jefe de Plataforma >> Mediastream Chile >> >> Email: csepulveda at mediastre.am >> Tel?fono: +56 2 24029750 >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aashisn at hotmail.com Mon Aug 11 15:39:42 2014 From: aashisn at hotmail.com (Ashish Nepal) Date: Mon, 11 Aug 2014 15:39:42 +0000 Subject: sticky session In-Reply-To: References: , , Message-ID: Hi, I am not 100% sure how client.identity = req.http.cookie works,neither i got good firm understanding from varnish-cache website itself. can someone explain me if with below config i can get all request from one user to same backend? If yes, My test gives me diff backend, if no, can someone explain me how this works? } elseif (req.http.host ~ "^test\.domain\.com$") { set client.identity = req.http.cookie; set req.backend = loc1; if(!req.backend.healthy) { set req.backend = loc2; } } (i am currently trying to run internal authentication based portal behind varnish.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Mon Aug 11 15:44:57 2014 From: perbu at varnish-software.com (Per Buer) Date: Mon, 11 Aug 2014 17:44:57 +0200 Subject: varnishncsa bad Time taken to serve the request In-Reply-To: References: Message-ID: Hi. I tested this on repo.varnish-cache.org: $ time wget --limit-rate=100k http://repo.varnish-cache.org/debian/pool/varnish-4.0/v/varnish/varnish_4.0.0.orig.tar.gz ... real 0m23.472s Then from Varnishlog: .. 22 ReqEnd c 315341679 1407768801.649216652 1407768818.704510689 0.000195265 0.037013054 *17.018280983* .. So. Your findings make sense. I think, but I'm not 100% certain, that the difference in time is due to buffering, especially on the server end. The kernel will swallow up a lot of writes that Varnish do and the thread then thinks it is actually done. Since the thread doesn't close the connection it won't block the worker. I think the kernel will block on connection close to make sure the client has gotten all the data but at that point the connection has been handed back to the pool. As I said, I'm not 100% that this is the correct explanation. 
But I'm pretty certain it works like this and if I'm right it is not a bug, strictly speaking. On Mon, Aug 11, 2014 at 3:19 PM, C?sar Sep?lveda wrote: > Hi guys. > > Anyone can help me, or has the same issue? > > Thanks!!. > > -- > C?sar Sep?lveda > Jefe de Plataforma > Mediastream Chile > > Email: csepulveda at mediastre.am > Tel?fono: +56 2 24029750 > > > 2014-08-08 10:31 GMT-04:00 C?sar Sep?lveda : > > a video... my english is not too good. >> >> https://copy.com/uaaM3nMGVAuZ8ntk >> >> -- >> C?sar Sep?lveda >> Jefe de Plataforma >> Mediastream Chile >> >> Email: csepulveda at mediastre.am >> Tel?fono: +56 2 24029750 >> >> >> 2014-08-08 10:21 GMT-04:00 C?sar Sep?lveda : >> >> Hi guys!. >>> >>> we are using varnish 3.0.5 and have an issue with Time taken to serve >>> the request parameter (%D) >>> >>> we realized that the varnishncsa log is written before the download is >>> complete: example >>> >>> >>> the download: >>> imac-de-cesar:~ csepulveda$ wget --limit-rate=100k >>> http://xxx.xxx.com/assets/img/promo-win.jpg >>> --2014-08-08 09:55:27-- http://xxxx.xxxx.com/assets/img/promo-win.jpg >>> Resolving xxx.xxx.com... xx.xx.xx.xx. >>> Connecting to xxx. xxx.com|xx.xx.xx.xx|:80... connected. >>> HTTP request sent, awaiting response... 200 OK >>> Length: 383810 (375K) [image/jpeg] >>> Saving to: ?promo-win.jpg.2? >>> >>> 100%[=========================================================================>] >>> 383,810 101KB/s in 3.7s >>> >>> 2014-08-08 09:55:31 (101 KB/s) - ?promo-win.jpg.2? saved [383810/383810] >>> >>> >>> The log: >>> xx.xx.xx.xx "xx.xx.xx.xx" - [08/Aug/2014:13:55:33 +0000] "GET >>> http://xxx.xxxx.com/assets/img/promo-win.jpg HTTP/1.0" 200 383810 "-" >>> "Wget/UNKNOWN (darwin12.2.0)" 0.401839018 miss 0.785520 >>> >>> The download take 3.7 seconds but varnishncsa show 0.785520.watching >>> this with tail -f we see the log is writen when the download go on 5 or 8 >>> percent. 
>>> >>> The varnishncsa format is this: >>> FORMAT="%h \"%{X-Forwarded-For}i\" %u %t \"%r\" %s %b \"%{Referer}i\" >>> \"%{User-agent}i\" %{Varnish:time_firstbyte}x %{Varnish:handling}x %D" >>> >>> what we are doing wrong? >>> can you help us? >>> >>> Thanks!. >>> >>> -- >>> C?sar Sep?lveda >>> Jefe de Plataforma >>> Mediastream Chile >>> >>> Email: csepulveda at mediastre.am >>> Tel?fono: +56 2 24029750 >>> >> >> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- *Per Buer* CTO | Varnish Software Phone: +47 958 39 117 | Skype: per.buer We Make Websites Fly! Winner of the Red Herring Top 100 Global Award 2013 -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Mon Aug 11 15:47:32 2014 From: perbu at varnish-software.com (Per Buer) Date: Mon, 11 Aug 2014 17:47:32 +0200 Subject: sticky session In-Reply-To: References: Message-ID: Hi Ashish, On Mon, Aug 11, 2014 at 5:39 PM, Ashish Nepal wrote: > Hi, I am not 100% sure how client.identity = req.http.cookie works, > neither i got good firm understanding from varnish-cache website itself. > > can someone explain me if with below config i can get all request from one > user to same backend? > > If yes, > My test gives me diff backend, if no, can someone explain me how this > works? > > } elseif (req.http.host ~ "^test\.domain\.com$") { > set client.identity = req.http.cookie; > set req.backend = loc1; > > if(!req.backend.healthy) { > set req.backend = loc2; > } > > } > > The client director should have multiple backends in one director. Have you set it up like this? I would also pick out just the cookie you need and not the whole Cookie header from the client. -- *Per Buer* CTO | Varnish Software Phone: +47 958 39 117 | Skype: per.buer We Make Websites Fly! 
Winner of the Red Herring Top 100 Global Award 2013 -------------- next part -------------- An HTML attachment was scrubbed... URL: From aashisn at hotmail.com Mon Aug 11 16:06:30 2014 From: aashisn at hotmail.com (Ashish Nepal) Date: Mon, 11 Aug 2014 16:06:30 +0000 Subject: sticky session In-Reply-To: References: , , , Message-ID: Hey per, Thanks for response, Yes multiple backend in both directors lo1 and loc2. I can see cookie being same but still goes to to diff backend. when i look into varnishlog rxheader. RegardsAshish From: perbu at varnish-software.com Date: Mon, 11 Aug 2014 17:47:32 +0200 Subject: Re: sticky session To: aashisn at hotmail.com CC: varnish-misc at varnish-cache.org Hi Ashish, On Mon, Aug 11, 2014 at 5:39 PM, Ashish Nepal wrote: Hi, I am not 100% sure how client.identity = req.http.cookie works,neither i got good firm understanding from varnish-cache website itself. can someone explain me if with below config i can get all request from one user to same backend? If yes, My test gives me diff backend, if no, can someone explain me how this works? } elseif (req.http.host ~ "^test\.domain\.com$") { set client.identity = req.http.cookie; set req.backend = loc1; if(!req.backend.healthy) { set req.backend = loc2; } } The client director should have multiple backends in one director. Have you set it up like this? I would also pick out just the cookie you need and not the whole Cookie header from the client. -- Per Buer CTO | Varnish Software Phone: +47 958 39 117 | Skype: per.buer We Make Websites Fly! Winner of the Red Herring Top 100 Global Award 2013 -------------- next part -------------- An HTML attachment was scrubbed... 
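To make Per's two suggestions concrete, here is a minimal Varnish 3 sketch of a single client director keyed on one cookie. The backend addresses and the cookie name `JSESSIONID` are placeholders, not details from the thread:

```vcl
backend b1 { .host = "192.0.2.10"; .port = "80"; }
backend b2 { .host = "192.0.2.11"; .port = "80"; }

# One client director holding both backends: it picks a backend by
# hashing client.identity, so requests with the same identity keep
# landing on the same backend while it is healthy.
director loc1 client {
    { .backend = b1; }
    { .backend = b2; }
}

sub vcl_recv {
    # Key on just the session cookie's value, not the whole Cookie
    # header, which varies whenever any other cookie changes.
    if (req.http.Cookie ~ "JSESSIONID=") {
        set client.identity = regsub(req.http.Cookie,
            ".*JSESSIONID=([^;]+).*", "\1");
    } else {
        set client.identity = client.ip;
    }
    set req.backend = loc1;
}
```

Setting client.identity to the entire Cookie header can defeat stickiness: if any unrelated cookie differs between two requests from the same user, the hash differs and the requests can be sent to different backends, which would match the behaviour Ashish reports.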
URL: From james at ifixit.com Mon Aug 11 18:24:22 2014 From: james at ifixit.com (James Pearson) Date: Mon, 11 Aug 2014 11:24:22 -0700 Subject: deliver vs hit_for_pass with ttl of 0 Message-ID: <1407781239-sup-2296@geror.local> There are some instances where we know in vcl_fetch that we don't want to cache an object, but we also don't want to cache the no-cache decision (that is, the next request should be sent to the backend). We're doing that by setting `beresp.ttl = 0s`, which works fine. What I'm wondering is as to whether it's better to return `deliver` or `hit_for_pass`, or if it really makes any difference at all. Both should be immediately removed from their respective caches, right? - James -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From csepulveda at mediastre.am Mon Aug 11 18:41:36 2014 From: csepulveda at mediastre.am (=?UTF-8?B?Q8Opc2FyIFNlcMO6bHZlZGE=?=) Date: Mon, 11 Aug 2014 14:41:36 -0400 Subject: varnishncsa bad Time taken to serve the request In-Reply-To: References: Message-ID: we use a fresh ubuntu install, with only one client making the request and we have the same results. Fresh Ubuntu 12.04 and last varnish 3.0.5. imac-de-cesar:~ csepulveda$ wget --limit-rate=1k http://10.0.1.78/thumbs/538907ff2982d00322387b34/2014711/thumb_53c0285317259d33326158f2_original_1768s.jpg --2014-08-11 14:32:36-- http://10.0.1.78/thumbs/538907ff2982d00322387b34/2014711/thumb_53c0285317259d33326158f2_original_1768s.jpg Connecting to 10.0.1.78:80... connected. HTTP request sent, awaiting response... 200 OK Length: 18111 (18K) [image/jpeg] Saving to: ?thumb_53c0285317259d33326158f2_original_1768s.jpg.9? 100%[=====================================================================================>] 18,111 1024B/s * in 18s* 2014-08-11 14:32:53 (1024 B/s) - ?thumb_53c0285317259d33326158f2_original_1768s.jpg.9? saved [18111/18111] ....... 
11 SessionOpen c 10.0.1.135 60525 :80 11 ReqStart c 10.0.1.135 60525 1340317141 11 RxRequest c GET 11 RxURL c /thumbs/538907ff2982d00322387b34/2014711/thumb_53c0285317259d33326158f2_original_1768s.jpg 11 RxProtocol c HTTP/1.1 11 RxHeader c User-Agent: Wget/UNKNOWN (darwin12.2.0) 11 RxHeader c Accept: */* 11 RxHeader c Host: 10.0.1.78 11 RxHeader c Connection: Keep-Alive 11 VCL_call c recv lookup 11 VCL_call c hash 11 Hash c /thumbs/538907ff2982d00322387b34/2014711/thumb_53c0285317259d33326158f2_original_1768s.jpg 11 Hash c 10.0.1.78 11 VCL_return c hash 11 Hit c 1340317139 11 VCL_call c hit deliver 11 VCL_call c deliver deliver 11 TxProtocol c HTTP/1.1 11 TxStatus c 200 11 TxResponse c OK 11 TxHeader c Content-Type: image/jpeg 11 TxHeader c Last-Modified: Fri, 11 Jul 2014 18:09:25 GMT 11 TxHeader c Expires: Thu, 31 Dec 2037 23:55:55 GMT 11 TxHeader c Cache-Control: max-age=315360000, public, must-revalidate, proxy-revalidate 11 TxHeader c Pragma: public 11 TxHeader c Content-Length: 18111 11 TxHeader c Accept-Ranges: bytes 11 TxHeader c Date: Mon, 11 Aug 2014 18:32:36 GMT 11 TxHeader c X-Varnish: 1340317141 1340317139 11 TxHeader c Age: 2451 11 TxHeader c Via: 1.1 varnish 11 TxHeader c Connection: keep-alive 11 Length c 18111 11 ReqEnd c 1340317141 1407781956.576860905 1407781956.577031851 0.000064373 0.000036240 *0.000134706* 11 Debug c herding 11 SessionClose c timeout 11 StatSess c 10.0.1.135 60525 0 1 1 0 0 0 393 18111 ubuntu 12.04 and last varnish 4.0.1-2 * << Request >> 2 - Begin req 1 rxreq - Timestamp Start: 1407782319.675061 0.000000 0.000000 - Timestamp Req: 1407782319.675061 0.000000 0.000000 - ReqStart 10.0.1.135 60616 - ReqMethod GET - ReqURL /thumbs/538907ff2982d00322387b34/2014711/thumb_53c0285317259d33326158f2_original_1768s.jpg - ReqProtocol HTTP/1.1 - ReqHeader User-Agent: Wget/UNKNOWN (darwin12.2.0) - ReqHeader Accept: */* - ReqHeader Host: 10.0.1.78 - ReqHeader Connection: Keep-Alive - ReqHeader X-Forwarded-For: 10.0.1.135 - VCL_call RECV - 
VCL_return hash - VCL_call HASH - VCL_return lookup - Debug "XXXX MISS" - VCL_call MISS - VCL_return fetch - Link bereq 3 fetch - Timestamp Fetch: 1407782319.683342 0.008281 0.008281 - RespProtocol HTTP/1.1 - RespStatus 200 - RespReason OK - RespHeader Content-Type: image/jpeg - RespHeader Last-Modified: Fri, 11 Jul 2014 18:09:25 GMT - RespHeader Expires: Thu, 31 Dec 2037 23:55:55 GMT - RespHeader Cache-Control: max-age=315360000, public, must-revalidate, proxy-revalidate - RespHeader Pragma: public - RespHeader Date: Mon, 11 Aug 2014 18:38:39 GMT - RespHeader X-Varnish: 2 - RespHeader Age: 2814 - RespHeader Via: 1.1 varnish-v4 - VCL_call DELIVER - VCL_return deliver - Timestamp Process: 1407782319.700854 0.025793 0.017512 - RespHeader Content-Length: 18111 - Debug "RES_MODE 2" - RespHeader Connection: keep-alive - RespHeader Accept-Ranges: bytes - Timestamp Resp: 1407782319.701260 0.026199 0.000406 - Debug "XXX REF 2" - ReqAcct 202 0 202 376 18111 18487 - End We need the most real Time taken to serve the request, do you know if i can get it from another place? Thanks!. -- C?sar Sep?lveda Jefe de Plataforma Mediastream Chile Email: csepulveda at mediastre.am Tel?fono: +56 2 24029750 2014-08-11 9:19 GMT-04:00 C?sar Sep?lveda : > Hi guys. > > Anyone can help me, or has the same issue? > > Thanks!!. > > -- > C?sar Sep?lveda > Jefe de Plataforma > Mediastream Chile > > Email: csepulveda at mediastre.am > Tel?fono: +56 2 24029750 > > > 2014-08-08 10:31 GMT-04:00 C?sar Sep?lveda : > > a video... my english is not too good. >> >> https://copy.com/uaaM3nMGVAuZ8ntk >> >> -- >> C?sar Sep?lveda >> Jefe de Plataforma >> Mediastream Chile >> >> Email: csepulveda at mediastre.am >> Tel?fono: +56 2 24029750 >> >> >> 2014-08-08 10:21 GMT-04:00 C?sar Sep?lveda : >> >> Hi guys!. 
>>> >>> we are using varnish 3.0.5 and have an issue with Time taken to serve >>> the request parameter (%D) >>> >>> we realized that the varnishncsa log is written before the download is >>> complete: example >>> >>> >>> the download: >>> imac-de-cesar:~ csepulveda$ wget --limit-rate=100k >>> http://xxx.xxx.com/assets/img/promo-win.jpg >>> --2014-08-08 09:55:27-- http://xxxx.xxxx.com/assets/img/promo-win.jpg >>> Resolving xxx.xxx.com... xx.xx.xx.xx. >>> Connecting to xxx. xxx.com|xx.xx.xx.xx|:80... connected. >>> HTTP request sent, awaiting response... 200 OK >>> Length: 383810 (375K) [image/jpeg] >>> Saving to: ?promo-win.jpg.2? >>> >>> 100%[=========================================================================>] >>> 383,810 101KB/s in 3.7s >>> >>> 2014-08-08 09:55:31 (101 KB/s) - ?promo-win.jpg.2? saved [383810/383810] >>> >>> >>> The log: >>> xx.xx.xx.xx "xx.xx.xx.xx" - [08/Aug/2014:13:55:33 +0000] "GET >>> http://xxx.xxxx.com/assets/img/promo-win.jpg HTTP/1.0" 200 383810 "-" >>> "Wget/UNKNOWN (darwin12.2.0)" 0.401839018 miss 0.785520 >>> >>> The download take 3.7 seconds but varnishncsa show 0.785520.watching >>> this with tail -f we see the log is writen when the download go on 5 or 8 >>> percent. >>> >>> The varnishncsa format is this: >>> FORMAT="%h \"%{X-Forwarded-For}i\" %u %t \"%r\" %s %b \"%{Referer}i\" >>> \"%{User-agent}i\" %{Varnish:time_firstbyte}x %{Varnish:handling}x %D" >>> >>> what we are doing wrong? >>> can you help us? >>> >>> Thanks!. >>> >>> -- >>> C?sar Sep?lveda >>> Jefe de Plataforma >>> Mediastream Chile >>> >>> Email: csepulveda at mediastre.am >>> Tel?fono: +56 2 24029750 >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bernard at sprybts.com Mon Aug 11 20:26:47 2014 From: bernard at sprybts.com (Bernard Gardner) Date: Mon, 11 Aug 2014 16:26:47 -0400 Subject: varnishncsa bad Time taken to serve the request In-Reply-To: References: Message-ID: Hi Cesar, I'm not sure if I've properly understood your question - you mention using %D as a format string for logging time taken to serve the request in varnishncsa, but I don't see that as one of the documented strings in the varnishncsa manual page. I do see %{Varnish:time_firstbyte}x - which is what I'm guessing you're looking at and which will only include the time taken for your backend to process the request and start to send the response, the network latency for varnish to receive that first packet and then start to send the response to the client (assuming a miss). %D is a format string for the apache logs, and there it's showing you the time to the last byte, but that time doesn't include the network path to the client if varnish is serving the request. Either of those would explain what you're seeing (and would likely be showing very similar values for the time in question if your backend and varnish instance are close on the network, or running on the same host). I think that the time that you want (time to last byte from varnish) could be calculated from the fields in the ReqEnd tag in the varnish log - I don't think you'll be able to access it via varnishncsa directly, you may need to use some other logging client, or modify varnishncsa to do exactly what you want. The fields in the ReqEnd tag are described here - https://www.varnish-cache.org/trac/wiki/Varnishlog Hope that helps, Bernard. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 4139 bytes Desc: not available URL: From riccardo.salamanna at netcentric.biz Tue Aug 12 10:03:16 2014 From: riccardo.salamanna at netcentric.biz (Riccardo Salamanna) Date: Tue, 12 Aug 2014 12:03:16 +0200 Subject: Issue on POST of big files Message-ID: <6CCA1196-43AA-43D0-8FA1-3124B64CA6DD@netcentric.biz> Good morning I hope you can help me with this: I am no Varnish expert and i am facing an issue where on big upload of files i get a 502 error. We have an apache in front of Varnish, which is of course in front of a backend. I have no issue when uploading directly to the backend server. Reading over the internet i found that it might be useful to pipe big uploads, and to verify i added to vcl_recv if (req.request == ?POST?) { return (pipe) } and then sub vcl_pipe { set bereq.http.connection = "close?; } Yet i incur in the same exact issue. can anybody help me debug this? Many thanks BR Riccardo Salamanna -------------- next part -------------- An HTML attachment was scrubbed... URL: From krjensen at ebay.com Wed Aug 13 15:40:58 2014 From: krjensen at ebay.com (Jensen, Kristian) Date: Wed, 13 Aug 2014 15:40:58 +0000 Subject: Upgrade from Varnish 3 to 4 Message-ID: <10DB2D69D5617B45AA54F1AA801DE34D47F681EA@DUB-EXDDA-S11.corp.ebay.com> Hi, We are about to upgrade from 3 to 4, and have discussed whether we should delete .bin file or not. Some believe that we should delete it so that varnish can "start from a fresh." I personally think it is a bad solution, since varnish then write the file from the beginning again, and auto expand it every time there is a need for more space. This will cause fragmentation on and unnecessary operations on the SSD disk. Is there any reason to delete the file during an upgrade? Kristian Jensen -------------- next part -------------- An HTML attachment was scrubbed... 
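Two things worth checking in the snippet Riccardo posted: the quotes around ?POST? and ?close? are curly quotes (probably introduced by a mail client) that will not compile, and `return (pipe)` is missing its semicolon. A plain-ASCII Varnish 3 sketch of the pipe approach; whether piping actually avoids the 502 is a guess, not a confirmed fix:

```vcl
sub vcl_recv {
    # Hand POST bodies straight through to the backend unprocessed.
    if (req.request == "POST") {
        return (pipe);
    }
}

sub vcl_pipe {
    # Close the backend connection after the piped request so no
    # keep-alive state lingers on the piped connection.
    set bereq.http.connection = "close";
    return (pipe);
}
```

If the 502 persists even with piping, the limit is probably being hit before Varnish, e.g. a body-size or timeout setting in the Apache that sits in front of it.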
URL: From perbu at varnish-software.com Wed Aug 13 15:57:13 2014 From: perbu at varnish-software.com (Per Buer) Date: Wed, 13 Aug 2014 17:57:13 +0200 Subject: Upgrade from Varnish 3 to 4 In-Reply-To: <10DB2D69D5617B45AA54F1AA801DE34D47F681EA@DUB-EXDDA-S11.corp.ebay.com> References: <10DB2D69D5617B45AA54F1AA801DE34D47F681EA@DUB-EXDDA-S11.corp.ebay.com> Message-ID: No. Varnish doesn't care about the contents of the file. On Wed, Aug 13, 2014 at 5:40 PM, Jensen, Kristian wrote: > Hi, > > > > We are about to upgrade from 3 to 4, and have discussed whether we should > delete .bin file or not. > > > > Some believe that we should delete it so that varnish can ?start from a > fresh.? > > > > I personally think it is a bad solution, since varnish then write the file > from the beginning again, and auto expand it every time there is a need for > more space. This will cause fragmentation on and unnecessary operations on > the SSD disk. > > > > Is there any reason to delete the file during an upgrade? > > > > Kristian Jensen > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- *Per Buer* CTO | Varnish Software Phone: +47 958 39 117 | Skype: per.buer We Make Websites Fly! Winner of the Red Herring Top 100 Global Award 2013 -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliand at aspedia.net Fri Aug 15 06:09:06 2014 From: juliand at aspedia.net (Julian De Marchi) Date: Fri, 15 Aug 2014 16:09:06 +1000 Subject: Varnish connection reset error Message-ID: <53EDA402.9070009@aspedia.net> Hey! I'm having an issue with Varnish 4.0. I have Varnish setup as my front-loadbalancer to my pool of web servers. I'm having an issue with Varnish were the pool of web servers return a 200 for a URI, but FF returns "The connection was reset". When I dial direct to a server in the pool for the domain, it works as expected. 
Below is the Varlishlog output for the request. http://pastebin.com/ztbDCzGB Here is my startup command for Varnish from systemd. "ExecStart=/usr/bin/varnishd -a 0.0.0.0:80,[::]:80 -f /etc/varnish/default.vcl -T 0.0.0.0:6082 -s malloc,3G -u nobody -g nobody -p thread_pools=2 -p thread_pool_min=400 -p thread_pool_max=4000 -p thread_pool_add_delay=2 -p http_max_hdr=1024 -S /etc/conf.d/varnish.secret -F" I know that the http_max_hdr is rather large, but that solves a bug I had with a site which produced a massive HTTP header. I'm basically after any troubleshooting steps I can take to diagnose this issue? I know it lie within Varnish, and I bet it's just me forgetting a "config option" somewhere. Thanks in advanced! --julian From geodni at free.fr Mon Aug 18 14:49:52 2014 From: geodni at free.fr (geodni at free.fr) Date: Mon, 18 Aug 2014 16:49:52 +0200 (CEST) Subject: Varnish 3.0.5 RAM/SWAP usage In-Reply-To: <63B1C548-64D1-4BAF-B05D-1D4EB7D02DE2@otoh.org> Message-ID: <941973498.50539583.1408373392315.JavaMail.root@zimbra59-e10.priv.proxad.net> Hi, sorry to reply so late, here are the results of my tests : 0 -- - system has 32GB of RAM, 8GB of SWAP and 3TB of available storage mounted on /var/cache/varnish - allocation : "file,/var/cache/varnish/limited100GB.bin,100GB" - 4.2 million of objects max in cache - 30GB of RAM used and 1.4GB of swap - varnishd child process not killed any more - 200k requests per minute before starting eating SWAP, 95k after eating some SWAP - oldest objects are replaced by newer ones, IO average is 53% on HDD Today, I ran some other tests after a memory upgrade from 32GB to 64GB, here are the results : 1 -- - system has 64GB of RAM, 8GB of SWAP and 3TB of available storage mounted on /var/cache/varnish - allocation : "file,/var/cache/varnish/limited200GB.bin,200GB" - 8.8 million of objects max in cache - 56GB of RAM used and 1.9GB of swap - varnishd child process not killed any more - starting at 250k requests per minute for 10 minutes then 
200k before starting eating SWAP, 145k after eating some SWAP - oldest objects are replaced by newer ones, IO average is 53% on HDD 2 -- - system has 64GB of RAM, 50MB of SWAP and 3TB of available storage mounted on /var/cache/varnish - allocation : "file,/var/cache/varnish/limited300GB.bin,300GB" - 6.8 million of objects max in cache before varnishd killed - 60GB of RAM used and 500MB of swap - varnishd child process was killed because running out of swap space - starting at 250k requests per minute for 10 minutes then 200k before starting eating SWAP, 95k after eating some SWAP - oldest objects are replaced by newer ones, IO average is 53% on HDD Does anybody have a large platform running Varnish ? I would like to raise 200 millions of objects but it seems it needs a very large amount of RAM or I have a problem with memory leaks using Varnish 3.0.5 under FreeBSD 10... Regards, Denis > De: "Paul Armstrong" > Envoy?: Mercredi 6 Ao?t 2014 17:20:38 > > Look at varnishstat and see if the temporary storage is large as this > can also be restricted (but isn't by default). For my workloads I > typically have malloc set to only 25-50% of RAM in order to avoid > running out. > > Paul > > On 6 Aug 2014, at 5:19, geodni at free.fr wrote: > > >> I run a similar configuration except instead of file I use > >> > >> malloc,26G > >> > >> Have you tried that? does it react the same? > > > > Hi, > > I tried that and the system became unstable after 3 minutes. I had > > to reboot because of processes appears after that. No > > hardware failure nor specific system log. After the reboot, I made > > another injection test with "file" storage. > > > > My conclusion is that Varnish really eats 1GB per 1 million objects > > but it also consume 22GB RAM for the rest of its activity !!! > > What the hell is it doing with 22GB of RAM ? 80MB for SHM logs, > > 100MB for threads ok but what about the rest ? > > > > The platform I use runs at 2k requests per second for one injection > > job. 
One thing I can do is limiting "file" storage to 100GB to be > > sure the system won't use SWAP. I am currently running this test. > > > > Regards, > > Denis From haazeloud at gmail.com Tue Aug 19 09:40:31 2014 From: haazeloud at gmail.com (David B.) Date: Tue, 19 Aug 2014 11:40:31 +0200 Subject: Varnish 3 | X-forwarded-for sometimes not set Message-ID: <53F31B8F.7070905@gmail.com> Hi Varnish users, I'm facing out an issue since several weeks and I can't figure how to correct this. :( I hope someone can explain me what's I'm doing wrong. Setup is quite simple : client request an web page though http => varnish (cache or pass) => backend (apache) Sometimes, x-forwarded-for was not set and apache recieve an empty x-forwarded-for header. This trigger an alarm on my monitoring system. Backend must have client-ip for legal reason. This happens only a few times per hour. :( This morning, theses errors were seen with very large URI. Perhaps large URI can hit a buffer length and there's no more room space left for x-forwarded-for ? Sometimes, this can happen with short uri as well. 
:( Varnishlog extract : 446 ReqStart c (client ip here) 63424 998725987 446 RxRequest c GET 446 RxURL c /forums/T2dnUwACAAAAAAAAAACPiNfnAAAAABCLD7gBHgF2b3JiaXMAAAAAAkSsAAD/////APQBAP////+4AU9nZ1MAAAAAAAAAAAAAj4jX5wEAAABuORkUE//q/////////////////////zwDdm9yYmlzDQAAAExhdmY1NS40NC4xMDAJAAAAHwAAAGVuY29kZXI9TGF2YzU1LjY4LjEwMCBsaWJ2b3JiaXMZAAAAdGl0bGU9U3ludGhldGl 446 RxProtocol c HTTP/1.1 446 RxHeader c Host: www.myhost.com 446 RxHeader c Connection: keep-alive 446 RxHeader c Accept-Encoding: identity;q=1, *;q=0 446 RxHeader c User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36 OPR/23.0.1522.75 446 RxHeader c Accept: */* 446 RxHeader c Referer: http://www.myhost.com/mypage.html 446 RxHeader c Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4 446 RxHeader c Cookie: (some cookie here, not much) 446 RxHeader c Range: bytes=0- 446 VCL_call c recv pass 446 VCL_call c hash 446 Hash c /forums/T2dnUwACAAAAAAAAAACPiNfnAAAAABCLD7gBHgF2b3JiaXMAAAAAAkSsAAD/////APQBAP////+4AU9nZ1MAAAAAAAAAAAAAj4jX5wEAAABuORkUE//q/////////////////////zwDdm9yYmlzDQAAAExhdmY1NS40NC4xMDAJAAAAHwAAAGVuY29kZXI9TGF2YzU1LjY4LjEwMCBsaWJ2b3JiaXMZAAAAdGl0bGU9U3ludGhldG 446 Hash c www.myhost.com 446 VCL_return c hash 446 VCL_call c pass pass 446 Backend c 39 www www3 446 TTL c 998725987 RFC -1 -1 -1 1408439635 0 1408439634 0 0 446 VCL_call c fetch 446 TTL c 998725987 VCL -1 3600 -1 1408439635 -0 446 TTL c 998725987 VCL 120 3600 -1 1408439635 -0 446 VCL_return c hit_for_pass 446 ObjProtocol c HTTP/1.1 446 ObjResponse c Request-URI Too Large 446 ObjHeader c Date: Tue, 19 Aug 2014 09:13:54 GMT 446 ObjHeader c Server: Apache 446 ObjHeader c Content-Length: 26 446 ObjHeader c Content-Type: text/html; charset=iso-8859-1 446 VCL_call c deliver deliver 446 TxProtocol c HTTP/1.1 446 TxStatus c 414 446 TxResponse c Request-URI Too Large 446 TxHeader c Content-Type: text/html; charset=iso-8859-1 446 TxHeader c Content-Length: 26 446 TxHeader c 
Accept-Ranges: bytes 446 TxHeader c Date: Tue, 19 Aug 2014 09:13:54 GMT 446 TxHeader c Age: 0 446 TxHeader c Connection: keep-alive 446 TxHeader c Cache-Control: , no-transform 446 TxHeader c X-Cache: MISS 446 Length c 26 446 ReqEnd c 998725987 1408439634.963270903 1408439634.964049816 2.867349625 0.000756264 0.000022650 Apache (backend) log extract : 192.168.0.3 - [19/Aug/2014:11:13:54 +0200] "GET /forums/T2dnUwACAAAAAAAAAACPiNfnAAAAABCLD7gBHgF2b3JiaXMAAAAAAkSsAAD/////APQBAP////+4AU9nZ1MAAAAAAAAAAAAAj4jX5wEAAABuORkUE//q/////////////////////zwDdm9yYmlzDQAAAExhdmY1NS40NC4xMDAJAAAAHwAAAGVuY29kZXI9TGF2YzU1LjY4LjEwMCBsaWJ2b3JiaXMZAAAAdGl0bGU9U3ludGhldGljIFdhdGVyZHJvcAwAAABJRU5HPVBvcnBoeXIkAAAAY29weXJpZ2h0PWJ5IFBvcnBoeXIgdW5kZXIgQ0MgQlkgMy4wEwAAAGdlbnJlPVNvdW5kIEVmZmVjdHMOAAAAYXJ0aXN0PVBvcnBoeXJCAAAASUtFWT13YXRlciB3YXRlcmRyb3AgZHJvcCBzeW50aGV0aWMgc3ludGhlc2l6ZWQgZ2FtZSBlZmZlY3QgYnV0dG9uDwAAAGRhdGU9MTUuMDYuMjAxM84AAABERVNDUklQVElPTj1UaGlzIHNvdW5kIHdhcyBtYWRlIGJ5IFBvcnBoeXIgKGh0dHA6Ly9mcmVlc291bmQub3JnL3Blb3BsZS9Qb3JwaHlyLykgYW5kIGlzIGxpY2Vuc2VkIHVuZGVyIENDIEJZIDMuMCAoaHR0cDovL2NyZWF0aXZlY29tbW9ucy5vcmcvbGljZW5zZXMvYnkvMy4wLykuDQpJdCB3YXMgb3JpZ2luYWxseSB1cGxvYWRlZCB0byBmcmVlc291bmQub3JnLgEFdm9yYmlzKUJDVgEACAAAgCJMGMSA0JBVAAAQAACgrDeWe8i99957gahHFHuIvffee+OsR9B6iLn33nvuvacae8u9995zIDRkFQAABACAKQiacuBC6r33HhnmEVEaKse99x4ZhYkwlBmFPZXaWushk9xC6j3nHggNWQUAAAIAQAghhBRSSCGFFFJIIYUUUkgppZhiiimmmGLKKaccc8wxxyCDDjropJNQQgkppFBKKqmklFJKLdZac+69B91z70H4IIQQQgghhBBCCCGEEEIIQkNWAQAgAAAEQgghZBBCCCGEFFJIIaaYYsopp4DQkFUAACAAgAAAAABJkRTLsRzN0RzN8RzPESVREiXRMi3TUjVTMz1VVEXVVFVXVV1dd23Vdm3Vlm3XVm3Vdm3VVm1Ztm3btm3btm3btm3btm3btm0gNGQVACABAKAjOZIjKZIiKZLjOJIEhIasAgBkAAAEAKAoiuM4juRIjiVpkmZ5lmeJmqiZmuipngqEhqwCAAABAAQAAAAAAOB4iud4jmd5kud4jmd5mqdpmqZpmqZpmqZpmqZpmqZpmqZpmqZpmqZpmqZpmqZpmqZpmqZpmqZpmqZpQGjIKgBAAgBAx3Ecx3Ecx3EcR3IkBwgNWQUAyAAACABAUiTHcixHczTHczxHdETHdEzJlFTJtVwLCA1ZBQAAAgAIAAAAAABAEyxFUzzHkzzPEzXP0zTNE01RNE3TNE3TNE3TNE3TNE3TNE3TNE3TNE3TNE3TNE3TNE3TNE3TNE1TFIHQkFUAAAQAACGdZpZqgAgzkG
EgNGQVAIAAAAAYoQhDDAgNWQUAAAQAAIih5CCa0JrzzTkOmuWgqRSb08GJVJsnuamYm3POOeecbM4Z45xzzinKmcWgmdCac85JDJqloJnQmnPOeRKbB62p0ppzzhnnnA7GGWGcc85p0poHqdlYm3POWdCa5qi5FJtzzomUmye1uVSbc84555xzzjnnnHPOqV6czsE54Zxzzonam2u5CV2cc875ZJzuzQnhnHPOOeecc84555xzzglCQ1YBAEAAAARh2BjGnYIgfY4GYhQhpiGTHnSPDpOgMcgppB6NjkZKqYNQUhknpXSC0JBVAAAgAACEEFJIIYUUUkghhRRSSCGGGGKIIaeccgoqqKSSiirKKLPMMssss8wyy6zDzjrrsMMQQwwxtNJKLDXVVmONteaec645SGultdZaK6WUUkoppSA0ZBUAAAIAQCBkkEEGGYUUUkghhphyyimnoIIKCA1ZBQAAAgAIAAAA8CTPER3RER3RER3RER3RER3P8RxREiVREiXRMi1TMz1VVFVXdm1Zl3Xbt4Vd2HXf133f141fF4ZlWZZlWZZlWZZlWZZlWZZlCUJDVgEAIAAAAEIIIYQUUkghhZRijDHHnINOQgmB0JBVAAAgAIAAAAAAR3EUx5EcyZEkS7IkTdIszfI0T/M00RNFUTRNUxVd0RV10xZlUzZd0zVl01Vl1XZl2bZlW7d9WbZ93/d93/d93/d93/d939d1IDRkFQAgAQCgIzmSIimSIjmO40iSBISGrAIAZAAABACgKI7iOI4jSZIkWZImeZZniZqpmZ7pqaIKhIasAgAAAQAEAAAAAACgaIqnmIqniIrniI4oiZZpiZqquaJsyq7ruq7ruq7ruq7ruq7ruq7ruq7ruq7ruq7ruq7ruq7ruq7rukBoyCoAQAIAQEdyJEdyJEVSJEVyJAcIDVkFAMgAAAgAwDEcQ1Ikx7IsTfM0T/M00RM90TM9VXRFFwgNWQUAAAIACAAAAAAAwJAMS7EczdEkUVIt1VI11VItVVQ9VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV1TRN0zSB0JCVAAAZAADDtOTScs+NoEgqR7XWklHlJMUcGoqgglZzDRU0iEmLIWIKISYxlg46ppzUGlMpGXNUc2whVIhJDTqmUikGLQhCQ1YIAKEZAA7HASTLAiRLAwAAAAAAAABJ0wDN8wDL8wAAAAAAAABA0jTA8jRA8zwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACRNAzTPAzTPAwAAAAAAAADN8wBPFAFPFAEAAAAAAADA8jzAEz3AE0UAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABxNAzTPAzTPAwAAAAAAAADL8wBPFAHPEwEAAAAAAABA8zzAE0XAE0UAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAABDgAAARZCoSErAoA4AQCHJEGSIEnQNIBkWdA0aBpMEyBZFjQNmgbTBAAAAAAAAAAAAEDyNGgaNA2iCJA0D5oGTYMoAgAAAAAAAAAAACBpGjQNmgZRBEiaBk2DpkEUAQAAAAAAAAAAANBME6IIUYRpAjzThChCFGGaAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAIABBwCAABPKQKEhKwKAOAEAh6JYFgAAOJJjWQAA4DiSZQEAgGVZoggAAJaliSIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAgAEHAIAAE8pAoSErAYAoAACHolgWcBzLAo5jWUCSLAtgWQDNA2gaQBQBgAAAgAIHAIAAGzQlFgcoNGQlABAFAOBQFMvSNFHkOJalaaLIkSxL00SRZWma55kmNM3zTBGi53mmCc/zPNOEaYqiqgJRNE0BAAAFDgAAATZoSiwOUGjISgAgJADA4TiW5Xmi6HmiaJqqynEsy/NEURRNU1VVleNolueJoiiapqqqKsvSNM8TRVE0TVVVXWia54miKJqmqrouPM/zRFEUTVNVXRee53miKIqmqaquC1EURdM0TVVVVdcFomiapqmqquq6QBRF0zRVVVVdF4iiKJqmqqqu6wLTNE1VVVXXlV2Aaaqqqrqu6wJUVVVd13VlGaCqquq6rivLANd1XdeVZVkG4Lqu68qyLAAA4MABACDACDrJqLIIG0248AAUGrIiAIgCAACMYUoxpQxjEkIKoWFMQkghZFJSKimlCkIqJZVSQUilpFIySi2lllIFIZWSSqkgpFJSKQUAgB04AIAdWAiFhqwEAPIAAAhjlGKMMeckQkox5pxzEiGlGHPOOakUY84555yUkjHnnHNOSumYc845J6VkzDnnnJNSOuecc85JKaV0zjnnpJRSQugcdFJKKZ1zDkIBAEAFDgAAATaKbE4wElRoyEoAIBUAwOA4lqVpnieKpmlJkqZ5nueJpqpqkqRpnieKpqmqPM/zRFEUTVNVeZ7niaIomqaqcl1RFEXTNE1VJcuiaIqmqaqqC9M0TdNUVdeFaZqmaaqq68K2VVVVXdd1Yduqqqqu68rAdV3XdWUZyK7ruq4sCwAAT3AAACqwYXWEk6KxwEJDVgIAGQAAhDEIKYQQUsggpBBCSCmFkAAAgAEHAIAAE8pAoSErAYBUAACAEGuttdZaaw1j1lprrbXWEuestdZaa6211lprrbXWWmuttdZaa6211lprrbXWWmuttdZaa6211lprrbXWWmuttdZaa6211lprrbXWWmuttdZaa6211lprrbXWWmuttdZaa6211lprrbVWACB2hQPAToQNqyOcFI0FFhqyEgAIBwAAjEGIMegklFJKhRBj0ElIpbUYK4QYg1BKSq21mDznHIRSWmotxuQ55yCk1FqMMSbXQkgppZZii7G4FkIqKbXWYqzJGJVSai22GGvtxaiUSksxxhhrMMbm1FqMMdZaizE6txJLjDHGWoQRxsUWY6y11yKMEbLF0lqttQZjjLG5tdhqzbkYI4yuLbVWa80FAJg8OABAJdg4w0rSWeFocKEhKwGA3AAAAiGlGGPMOeeccw5CCK
lSjDnnHIQQQgihlFJSpRhzzjkIIYRQQimlpIwx5hyEEEIIpZRSSmkpZcw5CCGEUEoppZTSUuuccxBCCKWUUkopJaXUOecghFBKKaWUUkpKLYQQQiihlFJKKaWUlFJKIYRQSimllFJKKamllEIIpZRSSimllFJSSimFEEIppZRSSimlpJRaK6WUUkoppZRSSkkttZRSKKWUUkoppZSSWkoppVJKKaWUUkopJaXUUkqllFJKKaWUUkpLqaWUSimllFJKKaWUlFJKKaVUSimllFJKKSml1FpKKaWUSimllFJaaymlllIqpZRSSimltNRaay21lEoppZRSSmmttZRSSimVUkoppZRSAADQgQMAQIARlRZipxlXHoEjChkmoEJDVgIAZAAADKOUUkktRYIipRiklkIlFXNQUooocw5SrKlCziDmJJWKMYSUg1QyB5VSzEEKIWVMKQatlRg6xpijmGoqoWMMAAAAQQAAgZAJBAqgwEAGABwgJEgBAIUFhg4RIkCMAgPj4tIGACAIkRkiEbEYJCZUA0XFdACwuMCQDwAZGhtpFxfQZYALurjrQAhBCEIQiwMoIAEHJ9zwxBuecIMTdIpKHQgAAAAAgAMAPAAAJBtAREQ0cxwdHh8gISIjJCUmJygCAAAAAOAGAB8AAEkKEBERzRxHh8cHSIjICEmJyQlKAAAggAAAAAAACCAAAQEBAAAAAIAAAAAAAQFPZ2dTAASAawAAAAAAAI+I1+cCAAAASTGGUh84NivetbfEtszRzMi/raykkZOGd3BtaGVfXlVST0cBhCU/+d9jyhKbsOQn/3tMVWKzwUcoWKuddrUxxbibFw3v169boyCIBA29/x+8aOj/GRHz5xsewX8UVhe/44PJVGF18Ts+mExtU2iAaVe7WrXIYqMBdqDYMQzO5ozvu199vV6v1y+vXkUiDhyH4gGccq+/tzP56Cn3+ns7kw+eCzCDCIACFJ96Y0NQegyjiRC1MYoIgRfgQ6MZuugdB57SYipl4WyxrpDoHQdfUpkK7A08I7KorVQrZVEWGQAA+0gMU5kpBtkAGAAAAAAAAEAtTjRBLU5ZAyIjqwgUWTjB9D7aGXe930bVOjDevu0+R+33vGaveXPfL/71693a6Ydubu5vDruYLx7+fdTifh2mzhlci7DToA7JKMzAmcU1JEXuG+0+l3FmoQHQqgJ6JE8hAZo1MYeGnQN1dv2SGir5Gr5X0lMne82umg2bG0rzWLkRX05QBTITm8vzz1Lk5syuhl3kT00yAAX3TAHNgKxaADzdxiAMcAEc/ug9ht9lLBf/Xw/OSQaN3XXwISNCQi+MDTyA59PFtsd6ePFTcWNquk9cARgMshkmMchGNgAAAAAAAAAAAACAD+Xy9XCu2BwnDNcltv1vIgeLi1Janey5d+7kfiSX8+bywapX38pRkHzN6+/rVx2VVeTiLXUsTOQ6zF35Y/SzMJlVRz1Y6kIBWAB8GaUZgMNSMfUD2PNxnfbAFPRKAjAM1vEtsLfqxeLvtm22B2ngbLjJAhrsGr7InQef0rMuwc9JOSkugdk++Ja66eL7a3B22hQ46KPvfIh12aO9X+6XW07IZphKDAMAAAAAAAAAAJtUJte5JN0p3QsAAJ/xHzd/Hv77kaxlsbIO2bKVpKb2YMjtfOOsx7q3h+JzSjRdk0U65VK79F1ZB1/AFAkAVH1RAMBZ3cqWCgLzu4niVftAbweavtJVA2SRZxiiIu+428VAAUlVUUVSXEXfMEDVctzbJnpwUZU1AMEYFSpABd5nnYae0q66eF+b+6QFfdY+8Ja6WeL/Z+A+qWvhYFjtda03/3bx7bvn7/fd6WGgquIMwzCVGAAAAAAAAAAAgPr3TuH+wbq3H671u1Et9mX5VlZ+trc0LlwDAICI5Pwvn1mqEVRfdC3v58zZwf9MgUjw2TsSGby374vDOhvdgsdVgwXwtAMAFgCAnZHERV+q03Z8N8D5GTvLz6wpFu1MYKAn68
9V5J5n6Lgmq09jYKise2b35C3W8tRB6/4/Yx1AoQ91wQWeR53zL5EBlPj7eHOf1CCPOg9/Sssb+//aPE9qcNCXi37fnH/TxNeXL18+nEB9l2tiGIZlGAAAAAAAAAAAgN6k3+OX1Jef4V1Hsvx8si7++e3n7uaoWwQAoOPi/fbw9z8nZ55D7/oe3Uu8X9TkXRQQeNUz6MnzRpzPFFzM1wBNJUfHsHRVGKgAKAAAAAsAgx57hqeCxUBIoweOcb/n1n5O5e2kGtInNRjVefe/eT7QQ6HBWg0AAP72nDZ4yxhu9v9iOKlBXHQeekq7aBL+2jwP8AB0Wcbaqz/vcXxRvpy8zfPxWLsfPeKV2MQwDLIBAAAAAAAAAAAAQIKUxaXts46LD/e/2jujM7T" 414 201 "-" "-" This is not a "normal" URI for us, but I'm still getting requests like theses. :( OS : Linux Debian # /usr/sbin/varnishd -V varnishd (varnish-3.0.5 revision 1a89b1f) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2011 Varnish Software AS varnish deamon command line : /usr/sbin/varnishd -P /var/run/varnishd.pid -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -w 25,2000 -s malloc,28G -p thread_pool_add_delay=2 -p thread_pools=2 -p session_linger=100 Varnish default.vcl : # cat /etc/varnish/default.vcl acl purge { "localhost"; "192.168.0.0"/24; } probe healthcheck { .request = "GET /up.txt HTTP/1.1" "Host: www.myhost.com" "Connection: close"; .timeout = 5s; .interval = 30s; .window = 10; .threshold = 8; } backend www2 { .host = "192.168.0.2"; .port = "80"; .probe = healthcheck; } backend www3 { .host = "192.168.0.3"; .port = "80"; .probe = healthcheck; } director www round-robin { { .backend = www2; } { .backend = www3; } } sub vcl_recv { if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return (lookup); } # X-Forwarded-For if (req.http.X-Forwarded-Proto !~ "https") { remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; } # no cookie if ( req.url == "/favicon.ico" || req.url ~ "^/css/" || req.url ~ "^/fonts/" ) { remove req.http.Cookie; } if (req.http.host ~ "www.myhost.com$") { set req.backend = www; } if (req.http.Authorization || req.http.Cookie) { return(pass); } if (req.request != "GET" && req.request != "HEAD") { return(pass); } if (req.backend.healthy) { set 
req.grace = 30s;
    } else {
        set req.grace = 1h;
    }
    return (lookup);
}

sub vcl_fetch {
    set beresp.grace = 1h;
    if (beresp.status == 404 && req.restarts == 0) {
        return (restart);
    }
    if (beresp.http.X-Esi) {
        set beresp.do_esi = true;
        unset beresp.http.X-Esi;
    }
    return (deliver);
}

sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
    return (deliver);
}

sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
    return (fetch);
}

sub vcl_pass {
    return (pass);
}

sub vcl_pipe {
    set bereq.http.connection = "close";
    return (pipe);
}

sub vcl_deliver {
    remove resp.http.Via;
    remove resp.http.X-Varnish;
    remove resp.http.Server;
    remove resp.http.X-Powered-By;
    set resp.http.Cache-Control = resp.http.Cache-Control + ", no-transform";
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
    return (deliver);
}

Best Regards David. From dridi.boukelmoune at zenika.com Wed Aug 20 08:55:42 2014 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Wed, 20 Aug 2014 10:55:42 +0200 Subject: VCL continuous integration In-Reply-To: References: Message-ID: Hi, I'm not currently doing CI stuff around VCL, but I have. If you're using Jenkins, or if your backend is a Java application (or both), you may be interested in varnishtest-exec[1]. One feature I figured could be useful is a Jenkins integration for test trends. The same kind of histogram graphs you get by default with JUnit test suites. It still needs to be implemented, so please do not hesitate to open an issue on github if you need anything. Cheers, Dridi [1] https://github.com/Zenika/varnishtest-exec On Tue, Aug 5, 2014 at 4:27 PM, L Cruzero wrote: > hi, > > is anyone on the list using any CI tools or have any ideas possible options > for implementing continuous integration with VLC. > > > > thanks.
> lcruzero > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From dridi.boukelmoune at zenika.com Wed Aug 20 09:34:03 2014 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Wed, 20 Aug 2014 11:34:03 +0200 Subject: Determine if grace object is available In-Reply-To: References: <7C954D35-328D-49B1-B854-302A26CC1981@dragondata.com> <20140723130343.GB11654@immer.varnish-software.com> Message-ID: Hi Mattias, Have you tried something with a restart? You could decide for instance to change the backend to maintenance and restart if you're in vcl_error and backend is not healthy. Cheers, Dridi On Wed, Jul 30, 2014 at 9:38 AM, Mattias Geniar wrote: > >>> I'd like to get fancy with grace stored objects, but I'm not sure how >>>to do this. Can I determine if there's a grace object I could deliver? >>>Basically I want my logic to be: > > As a follow-up to this thread, I'm wondering if the following is possible, > given that there is a director present with 2 servers, having health > probes configured. > > If the director has no healthy backends; > 1. See if a grace object is available, if so, deliver it > 2. If no grace object is available, change the backend to a "maintenance" > one to serve a static HTML page for maintenance purposes > > The struggle is in vcl_recv {}, how would this be able to work? If I use > req.backend.healthy to determine the backend health to set a new backend, > I lose the grace ability (as it'll be passed to the new, available, > backend?). Or I'm missing something here. > > # Use grace, when available and when needed > if (! req.backend.healthy) { > # Backends are sick, so fall back to a stale object, if possible > set req.grace = 5m; > > # If no stale object is available, how should we switch to a new backend > here?
> set req.backend = maintenance_backend; # This could serve static pages > with maintenance info > } > > > I'm thinking something like this, but it's not possible? > if (req.grace.available) { > set req.grace = 5m; > } else { > # No grace object available, set new backend > set req.backend = maintenance_backend; > } > > Thanks, > > Mattias > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From hernan at cmsmedios.com Wed Aug 20 14:02:47 2014 From: hernan at cmsmedios.com (=?UTF-8?Q?Hern=C3=A1n_Marsili?=) Date: Wed, 20 Aug 2014 11:02:47 -0300 Subject: another architecture question Message-ID: Hi, A couple of months ago I asked a question here regarding the best VARNISH architecture for a high-traffic site. We then did some testing and we are still unsure of the best scenario, so I ask again with a little more information. We have a site with 40.000 concurrent users. This is handled by 5 boxes. On each box we have TOMCAT, APACHE and VARNISH using malloc. What we want to determine is the impact VARNISH makes on each server, mostly at a 'connections' level, and whether it is best to: a) grow by adding the same kind of servers (boxes with all 3 services) b) separate Varnish from the boxes and have, for example, 2 Varnishes balancing 4 backend servers In scenario b, we can assign more memory to TOMCAT and handle more load. The CACHE HIT RATE is not an issue, since all the boxes have a very high hit rate. Any advice on this matter will be appreciated. We are now leaning toward the A option. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dridi.boukelmoune at zenika.com Wed Aug 20 16:52:02 2014 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Wed, 20 Aug 2014 18:52:02 +0200 Subject: another architecture question In-Reply-To: References: Message-ID: Hi, As a Java developer, I'd say that adding more RAM to a JVM may probably incur longer GC pauses while on the other hand I'd trust Varnish to gracefully scale with more RAM and CPU. Please do not interpret that as a bad opinion on modern JVMs or Tomcat, you'd be very wrong :-) Architecture-wise, my first advice would be to think about removing your httpd servers and put your Tomcat instances right behind Varnish. Unless you need it for something other than a reverse proxy for your webapp of course. Regarding solutions a and b, my intuition goes towards b. Instead of pinning one apache/tomcat pair for one Varnish instance, I'd put them all in a director. If one backend is sick, Varnish can try again with a healthy backend. Performance-wise, measure, don't guess. Cheers, Dridi On Wed, Aug 20, 2014 at 4:02 PM, Hernán Marsili wrote: > Hi, > > > A couple of month ago I asked a question here regarding the best VARNISH > architecture for a high traffic site. We then do some testing and we are > still unsure of the best scenario, so I ask again with a little more > information. > > We have a 40.000 concurrent users sites. This is handled by 5 boxes. On each > box we have TOMCAT, APACHE and VARNISH using malloc. > > What we want to determine, is the impact on each server VARNISH makes, > mostly at a 'connections' level and determine if is best to: > > a) growth adding same kind of servers (boxes with all 3 services) > b) separete Varnish from the boxes and have, for example, 2 varnishes > balacing with 4 backend servers > > On the b scenario, we can assign more memory to the TOMCAT and handle more > load. The CACHE HIT RATE is not an issue, since all the boxes have a very > high hitrate. > > Any advice on this matter will be appreciated.
We are inclining now for the > A option. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From bluethundr at gmail.com Wed Aug 20 18:13:49 2014 From: bluethundr at gmail.com (Tim Dunphy) Date: Wed, 20 Aug 2014 14:13:49 -0400 Subject: clearing the varnish cache Message-ID: Hey all, I've been asked to flush the varnish cache in our environment. So I used the following command to do that: varnishadm -T 127.0.0.1:2000 url.purge . However the command doesn't provide any feedback or output. How can we verify that the cache has indeed been flushed? We're using varnish 2.1.5. Thanks Tim -- GPG me!! gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B -------------- next part -------------- An HTML attachment was scrubbed... URL: From james at ifixit.com Wed Aug 20 18:27:27 2014 From: james at ifixit.com (James Pearson) Date: Wed, 20 Aug 2014 11:27:27 -0700 Subject: VCL continuous integration In-Reply-To: References: Message-ID: <1408559106-sup-465@geror.local> Excerpts from L Cruzero's message of 2014-08-05 07:27:15 -0700: > is anyone on the list using any CI tools or have any ideas possible options > for implementing continuous integration with VLC. I don't know what you're looking for, but one of the tests in our CI suite does a Varnish syntax check by calling `varnishd -C -f -n /tmp 2>&1 1>/dev/null` and checking the exit code. Doing much more is difficult because Varnish, like other services, is fairly global, as opposed to application code, which can be much more easily tested in parallel on a single machine. - P -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From dridi.boukelmoune at zenika.com Wed Aug 20 20:58:28 2014 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Wed, 20 Aug 2014 22:58:28 +0200 Subject: VCL continuous integration In-Reply-To: <1408559106-sup-465@geror.local> References: <1408559106-sup-465@geror.local> Message-ID: On Wed, Aug 20, 2014 at 8:27 PM, James Pearson wrote: > Excerpts from L Cruzero's message of 2014-08-05 07:27:15 -0700: >> is anyone on the list using any CI tools or have any ideas possible options >> for implementing continuous integration with VLC. > > I don't know what you're looking for, but one of the tests in our CI suite does > a Varnish syntax check by calling `varnishd -C -f -n /tmp 2>&1 You may want to add parameters like vcl_dir if like me you split your VCL in multiple files. I usually put at least the backend(s)/director(s) in a separate file, so that I can include the actual policy in test cases, or share the same policy for different environments (QA, preprod) without duplicating code (avoids divergence). > 1>/dev/null` and checking the exit code. Doing much more is difficult because > Varnish, like other services, is fairly global, as opposed to application code, > which can be much more easily tested in parallel on a single machine. But if you want to test your cache policy (namely your VCL) you can do so by mocking the backends and clients behaviors. Et voil?, you can run your varnishtest suite in parallel :-) Cheers, Dridi > - P > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From perbu at varnish-software.com Thu Aug 21 09:13:05 2014 From: perbu at varnish-software.com (Per Buer) Date: Thu, 21 Aug 2014 11:13:05 +0200 Subject: clearing the varnish cache In-Reply-To: References: Message-ID: Hi Tim. 
First of all, the version of Varnish you're running is pretty outdated. 3.0, which was released three years ago, replaced it, and since then we've had a 4.0 release. Please consider upgrading. To answer your question: the easiest way to verify that it does what it should do is to monitor varnishlog. You'll see a huge number of cache misses that naturally follows the purge of the cache. If you have no caches behind Varnish you can just grep out the Age header and you'll see it will initially be all zeroes. Per. On Wed, Aug 20, 2014 at 8:13 PM, Tim Dunphy wrote: > Hey all, > > I've been asked to flush the varnish cache in our environment. So I used > the following command to do that: > > varnishadm -T 127.0.0.1:2000 url.purge . > > However the command doesn't provide any feedback or output. How can we > verify that the cache has indeed been flushed? > > We're using varnish 2.1.5. > > Thanks > Tim > > -- > GPG me!! > > gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -- *Per Buer* CTO | Varnish Software Phone: +47 958 39 117 | Skype: per.buer We Make Websites Fly! Winner of the Red Herring Top 100 Global Award 2013 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lcruzero at gmail.com Thu Aug 21 12:12:18 2014 From: lcruzero at gmail.com (L Cruzero) Date: Thu, 21 Aug 2014 08:12:18 -0400 Subject: VCL continuous integration Message-ID: > is anyone on the list using any CI tools or have any ideas possible options > for implementing continuous integration with VLC. I don't know what you're looking for, but one of the tests in our CI suite does a Varnish syntax check by calling `varnishd -C -f -n /tmp 2>&1 1>/dev/null` and checking the exit code.
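A syntax check of this kind can be complemented by a small varnishtest case that mocks the backend and asserts on the delivered response. The following is only a sketch; the commented include path is hypothetical and would be replaced by the VCL policy under test:

```vcl
varnishtest "smoke test for a VCL policy"

# Mock backend: varnishtest plays the origin server.
server s1 {
    rxreq
    txresp -status 200 -body "ok"
} -start

# Start a varnishd with the VCL under test; -vcl+backend
# generates the backend definition from s1 automatically.
varnish v1 -vcl+backend {
    # include "/etc/varnish/policy.vcl";  (hypothetical path)
} -start

# Mock client: send a request and assert on the response.
client c1 {
    txreq -url "/"
    rxresp
    expect resp.status == 200
} -run
```

Because the backend and client are both mocked, several such test cases can run in parallel in a CI job without touching real infrastructure.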
Doing much more is difficult because Varnish, like other services, is fairly global, as opposed to application code, which can be much more easily tested in parallel on a single machine. - P thanks James, and others for some of the useful suggestions. while varnishtest and varnishD respectively, will be used as suggested, for testing cache and vcl syntax as part of unit tests, in a "bamboo" CI agent. it would also be useful to test, perhaps outside of a syntax/cache unit test, some of the external resources a vcl config depends on. ie. -defined backends connectivity. -response header from host header request to defined backends thanks. -LC On Thu, Aug 21, 2014 at 5:13 AM, wrote: > > Send varnish-misc mailing list submissions to > varnish-misc at varnish-cache.org > > To subscribe or unsubscribe via the World Wide Web, visit > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > or, via email, send a message with subject or body 'help' to > varnish-misc-request at varnish-cache.org > > You can reach the person managing the list at > varnish-misc-owner at varnish-cache.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of varnish-misc digest..." > > > Today's Topics: > > 1. another architecture question (Hern?n Marsili) > 2. Re: another architecture question (Dridi Boukelmoune) > 3. clearing the varnish cache (Tim Dunphy) > 4. Re: VCL continuous integration (James Pearson) > 5. Re: VCL continuous integration (Dridi Boukelmoune) > 6. Re: clearing the varnish cache (Per Buer) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Wed, 20 Aug 2014 11:02:47 -0300 > From: Hern?n Marsili > To: varnish-misc > Subject: another architecture question > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Hi, > > > A couple of month ago I asked a question here regarding the best VARNISH > architecture for a high traffic site. 
We then do some testing and we are > still unsure of the best scenario, so I ask again with a little more > information. > > We have a 40.000 concurrent users sites. This is handled by 5 boxes. On > each box we have TOMCAT, APACHE and VARNISH using malloc. > > What we want to determine, is the impact on each server VARNISH makes, > mostly at a 'connections' level and determine if is best to: > > a) growth adding same kind of servers (boxes with all 3 services) > b) separete Varnish from the boxes and have, for example, 2 varnishes > balacing with 4 backend servers > > On the b scenario, we can assign more memory to the TOMCAT and handle more > load. The CACHE HIT RATE is not an issue, since all the boxes have a very > high hitrate. > > Any advice on this matter will be appreciated. We are inclining now for the > A option. > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 2 > Date: Wed, 20 Aug 2014 18:52:02 +0200 > From: Dridi Boukelmoune > To: Hern?n Marsili > Cc: varnish-misc > Subject: Re: another architecture question > Message-ID: > > Content-Type: text/plain; charset=UTF-8 > > Hi, > > As a Java developer, I'd say that adding more RAM to a JVM may > probably incur longer GC pauses while on the other hand I'd trust > Varnish to gracefully scale with more RAM and CPU. Please do not > interpret that as a bad opinion on modern JVMs or Tomcat, you'd be > very wrong :-) > > Architecture-wise, my first advice would be to think about removing > your httpd servers and put your Tomcat instances right behind Varnish. > Unless you need it for something other than a reverse proxy for your > webapp of course. > > Regarding solutions a and b, my intuition goes towards b. Instead of > pinning one apache/tomcat pair for one Varnish instance, I'd put them > all in a director. If one backend is sick, Varnish can try again with > a healthy backend. > > Performance-wise, measure, don't guess. 
> > Cheers, > Dridi > > On Wed, Aug 20, 2014 at 4:02 PM, Hern?n Marsili wrote: > > Hi, > > > > > > A couple of month ago I asked a question here regarding the best VARNISH > > architecture for a high traffic site. We then do some testing and we are > > still unsure of the best scenario, so I ask again with a little more > > information. > > > > We have a 40.000 concurrent users sites. This is handled by 5 boxes. On each > > box we have TOMCAT, APACHE and VARNISH using malloc. > > > > What we want to determine, is the impact on each server VARNISH makes, > > mostly at a 'connections' level and determine if is best to: > > > > a) growth adding same kind of servers (boxes with all 3 services) > > b) separete Varnish from the boxes and have, for example, 2 varnishes > > balacing with 4 backend servers > > > > On the b scenario, we can assign more memory to the TOMCAT and handle more > > load. The CACHE HIT RATE is not an issue, since all the boxes have a very > > high hitrate. > > > > Any advice on this matter will be appreciated. We are inclining now for the > > A option. > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > ------------------------------ > > Message: 3 > Date: Wed, 20 Aug 2014 14:13:49 -0400 > From: Tim Dunphy > To: "varnish-misc at varnish-cache.org" > Subject: clearing the varnish cache > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Hey all, > > I've been asked to flush the varnish cache in our environment. So I used > the following command to do that: > > varnishadm -T 127.0.0.1:2000 url.purge . > > However the command doesn't provide any feedback or output. How can we > verify that the cache has indeed been flushed? > > Wer'e using varnish 2.1.5. > > Thanks > Tim > > -- > GPG me!! 
> > gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 4 > Date: Wed, 20 Aug 2014 11:27:27 -0700 > From: James Pearson > To: varnish-misc > Subject: Re: VCL continuous integration > Message-ID: <1408559106-sup-465 at geror.local> > Content-Type: text/plain; charset="utf-8" > > Excerpts from L Cruzero's message of 2014-08-05 07:27:15 -0700: > > is anyone on the list using any CI tools or have any ideas possible options > > for implementing continuous integration with VLC. > > I don't know what you're looking for, but one of the tests in our CI suite does > a Varnish syntax check by calling `varnishd -C -f -n /tmp 2>&1 > 1>/dev/null` and checking the exit code. Doing much more is difficult because > Varnish, like other services, is fairly global, as opposed to application code, > which can be much more easily tested in parallel on a single machine. > - P > -------------- next part -------------- > A non-text attachment was scrubbed... > Name: signature.asc > Type: application/pgp-signature > Size: 819 bytes > Desc: not available > URL: > > ------------------------------ > > Message: 5 > Date: Wed, 20 Aug 2014 22:58:28 +0200 > From: Dridi Boukelmoune > To: James Pearson > Cc: varnish-misc > Subject: Re: VCL continuous integration > Message-ID: > > Content-Type: text/plain; charset=UTF-8 > > On Wed, Aug 20, 2014 at 8:27 PM, James Pearson wrote: > > Excerpts from L Cruzero's message of 2014-08-05 07:27:15 -0700: > >> is anyone on the list using any CI tools or have any ideas possible options > >> for implementing continuous integration with VLC. > > > > I don't know what you're looking for, but one of the tests in our CI suite does > > a Varnish syntax check by calling `varnishd -C -f -n /tmp 2>&1 > > You may want to add parameters like vcl_dir if like me you split your > VCL in multiple files. 
I usually put at least the backend(s)/director(s) > in a separate file, so that I can include the actual policy in test cases, > or share the same policy for different environments (QA, preprod) > without duplicating code (avoids divergence). > > > 1>/dev/null` and checking the exit code. Doing much more is difficult because > > Varnish, like other services, is fairly global, as opposed to application code, > > which can be much more easily tested in parallel on a single machine. > > But if you want to test your cache policy (namely your VCL) you can do > so by mocking the backends and clients behaviors. Et voil?, you can run > your varnishtest suite in parallel :-) > > Cheers, > Dridi > > > - P > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > ------------------------------ > > Message: 6 > Date: Thu, 21 Aug 2014 11:13:05 +0200 > From: Per Buer > To: Tim Dunphy > Cc: "varnish-misc at varnish-cache.org" > Subject: Re: clearing the varnish cache > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Hi Tim. > > First of all, the version of Varnish you're running is pretty outdated. > 3.0, which was released three years replaced it and since then we've have a > 4.0 release. Please consider upgrading. > > To answer your question. The easiest way to verify that it does what it > should do is to monitor the varnishlog. You'll just a huge number of cache > misses that naturally follows the purge of the cache. If you have no caches > behind Varnish you can just grep out the Age header and you'll see it will > initially be all zeroes. > > Per. > > > > On Wed, Aug 20, 2014 at 8:13 PM, Tim Dunphy wrote: > > > Hey all, > > > > I've been asked to flush the varnish cache in our environment. So I used > > the following command to do that: > > > > varnishadm -T 127.0.0.1:2000 url.purge . 
> > > > However the command doesn't provide any feedback or output. How can we
> > verify that the cache has indeed been flushed?
> >
> > We're using varnish 2.1.5.
> >
> > Thanks
> > Tim
> >
> > --
> > GPG me!!
> >
> > gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
> >
> >
> > _______________________________________________
> > varnish-misc mailing list
> > varnish-misc at varnish-cache.org
> > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
> >
> >
>
> --
> *Per Buer*
> CTO | Varnish Software
> Phone: +47 958 39 117 | Skype: per.buer
> We Make Websites Fly!
>
> Winner of the Red Herring Top 100 Global Award 2013
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
>
> ------------------------------
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
> End of varnish-misc Digest, Vol 101, Issue 15
> *********************************************

From ojoa at vwd.com  Thu Aug 21 12:37:50 2014
From: ojoa at vwd.com (Oliver Joa)
Date: Thu, 21 Aug 2014 14:37:50 +0200
Subject: req.restarts in varnish 4.0.1. What to do?
Message-ID: <53F5E81E.2010809@vwd.com>

Hi,

I used to have this:

sub vcl_fetch {
    ...
    ...
    if ((beresp.status == 500) && (req.request == "GET")) {
        if (req.restarts < 2) {
            std.log("USF req restart 500 try #" + req.restarts);
            return(restart);
        } else {
            std.log("USF req restart 500 miss");
        }
    }
    ...
    ...
}

In vcl_backend_response it is not possible to use req.restarts. How to do
the restarts in 4.0.1? I don't want to change the default in max_restarts.

Thanks
Oliver Joa

*********************************************************************
Der Inhalt dieser E-Mail ist ausschließlich für den bezeichneten Adressaten bestimmt.
Wenn Sie nicht der vorgesehene Adressat dieser E-Mail oder dessen Vertreter sein sollten, so beachten Sie bitte, dass jede Form der Kenntnisnahme, Veröffentlichung, Vervielfältigung oder Weitergabe des Inhalts dieser E-Mail unzulässig ist. Wir bitten Sie, sich in diesem Fall mit dem Absender der E-Mail in Verbindung zu setzen.

The content of this e-mail is meant exclusively for the person to whom it is addressed. If you are not the person to whom this e-mail is addressed or his/her representative, please be informed that any form of knowledge, publication, duplication or distribution of the content of this e-mail is inadmissible. In such cases we kindly ask you to contact the sender of this e-mail. _
*********************************************************************

From dridi.boukelmoune at zenika.com  Thu Aug 21 12:38:40 2014
From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune)
Date: Thu, 21 Aug 2014 14:38:40 +0200
Subject: VCL continuous integration
In-Reply-To:
References:
Message-ID:

On Thu, Aug 21, 2014 at 2:12 PM, L Cruzero wrote:
>> is anyone on the list using any CI tools or have any ideas possible options
>> for implementing continuous integration with VCL.
>
> I don't know what you're looking for, but one of the tests in our CI suite does
> a Varnish syntax check by calling `varnishd -C -f -n /tmp 2>&1
> 1>/dev/null` and checking the exit code. Doing much more is difficult because
> Varnish, like other services, is fairly global, as opposed to application code,
> which can be much more easily tested in parallel on a single machine.
> - P
>
> thanks James, and others, for some of the useful suggestions. while
> varnishtest and varnishd, respectively, will be used as suggested for
> testing the cache and VCL syntax as part of unit tests in a "bamboo" CI
> agent, it would also be useful to test, perhaps outside of a syntax/cache
> unit test, some of the external resources a vcl config depends on.
>
> ie.
>
> -defined backends connectivity.
What do you mean exactly? > -response header from host header request to defined backends This is something you can do with varnishtest, use your real vcl on a test backend, and only mock the client. I actually have such an example in my test suite for the maven plugin: https://github.com/Zenika/varnishtest-exec/tree/master/varnishtest-maven-plugin/src/it/run-war You can see in the POM that I'm running the dummy webapp during the pre-integration-test phase, and shutting it down during the post-integration-test phase. The varnishtest-maven-plugin is automatically bound to the integration-test phase: https://github.com/Zenika/varnishtest-exec/blob/812d894/varnishtest-maven-plugin/src/it/run-war/pom.xml And a dummy varnishtest case: https://github.com/Zenika/varnishtest-exec/blob/812d894/varnishtest-maven-plugin/src/it/run-war/src/test/varnish/test.vtc If it's possible to do that with a twisted build system like maven (don't get me started;-) you should be able to do the same. Cheers, Dridi > thanks. > > -LC From ojoa at vwd.com Thu Aug 21 13:00:06 2014 From: ojoa at vwd.com (Oliver Joa) Date: Thu, 21 Aug 2014 15:00:06 +0200 Subject: req.restarts in varnish 4.0.1. What to do? In-Reply-To: References: Message-ID: <53F5ED56.8080506@vwd.com> Hi, On 21.08.2014 14:41, Sam Pegler wrote: > It's now set to return(retry); as per the documentation. I know this. The point is that I can not read req.restarts. I don't know how often the Request was restartet already. Thanks Oliver Joa > > """ > > Backend restarts are now retry > restarts-are-now-retry>In 3.0 it was possible to do return(restart) after > noticing that the backend response was wrong, to change to a different > backend. > This is now called return(retry), and jumps back up to vcl_backend_fetch. > This only influences the backend fetch thread, client-side handling is not > affected. 
> > > """ > > https://www.varnish-cache.org/docs/trunk/whats-new/upgrading.html > > > > ********************************************************************* Der Inhalt dieser E-Mail ist ausschlie?lich f?r den bezeichneten Adressaten bestimmt. Wenn Sie nicht der vorgesehene Adressat dieser E-Mail oder dessen Vertreter sein sollten, so beachten Sie bitte, dass jede Form der Kenntnisnahme, Ver?ffentlichung, Vervielf?ltigung oder Weitergabe des Inhalts dieser E-Mail unzul?ssig ist. Wir bitten Sie, sich in diesem Fall mit dem Absender der E-Mail in Verbindung zu setzen. The content of this e-mail is meant exclusively for the person to whom it is addressed. If you are not the person to whom this e-mail is addressed or his/her representative, please be informed that any form of knowledge, publication, duplication or distribution of the content of this e-mail is inadmissible. In such cases we kindly ask you to contact the sender of this e-mail. _ ********************************************************************* From apj at mutt.dk Thu Aug 21 13:33:03 2014 From: apj at mutt.dk (Andreas Plesner Jacobsen) Date: Thu, 21 Aug 2014 15:33:03 +0200 Subject: req.restarts in varnish 4.0.1. What to do? In-Reply-To: <53F5ED56.8080506@vwd.com> References: <53F5ED56.8080506@vwd.com> Message-ID: <20140821133303.GK19870@nerd.dk> On Thu, Aug 21, 2014 at 03:00:06PM +0200, Oliver Joa wrote: > > >It's now set to return(retry); as per the documentation. > > I know this. The point is that I can not read req.restarts. I don't > know how often the Request was restartet already. Restarts are client side. Retries are backend side. If you retry a backend request, bereq.retries will be incremented. -- Andreas From ojoa at vwd.com Thu Aug 21 14:10:48 2014 From: ojoa at vwd.com (Oliver Joa) Date: Thu, 21 Aug 2014 16:10:48 +0200 Subject: req.restarts in varnish 4.0.1. What to do? 
In-Reply-To: <20140821133303.GK19870@nerd.dk> References: <53F5ED56.8080506@vwd.com> <20140821133303.GK19870@nerd.dk> Message-ID: <53F5FDE8.4000400@vwd.com> Hi, On 21.08.2014 15:33, Andreas Plesner Jacobsen wrote: > On Thu, Aug 21, 2014 at 03:00:06PM +0200, Oliver Joa wrote: >> >>> It's now set to return(retry); as per the documentation. >> >> I know this. The point is that I can not read req.restarts. I don't >> know how often the Request was restartet already. > > Restarts are client side. Retries are backend side. If you retry a backend > request, bereq.retries will be incremented. it is working, thanks a lot. I didn't find it in the documentation. Oliver Joa ********************************************************************* Der Inhalt dieser E-Mail ist ausschlie?lich f?r den bezeichneten Adressaten bestimmt. Wenn Sie nicht der vorgesehene Adressat dieser E-Mail oder dessen Vertreter sein sollten, so beachten Sie bitte, dass jede Form der Kenntnisnahme, Ver?ffentlichung, Vervielf?ltigung oder Weitergabe des Inhalts dieser E-Mail unzul?ssig ist. Wir bitten Sie, sich in diesem Fall mit dem Absender der E-Mail in Verbindung zu setzen. The content of this e-mail is meant exclusively for the person to whom it is addressed. If you are not the person to whom this e-mail is addressed or his/her representative, please be informed that any form of knowledge, publication, duplication or distribution of the content of this e-mail is inadmissible. In such cases we kindly ask you to contact the sender of this e-mail. _ ********************************************************************* From viktor.villafuerte at optusnet.com.au Fri Aug 22 06:46:18 2014 From: viktor.villafuerte at optusnet.com.au (Viktor Villafuerte) Date: Fri, 22 Aug 2014 16:46:18 +1000 Subject: Varnish package build - tests fail Message-ID: <20140822064618.GA2076@optusnet.com.au> Hi all, I'm trying to build Varnish 4.0.1 on our build servers using Koji. 
The build fails on one of the tests varnish-4.0.1/bin/varnishtest/tests/c00017.vtc I think I could simply disable the test during the build but I don't really want to have to do that :) I found a post (varnish ticket trac) which says Don't worry of a single or two tests fail, some of the tests are a bit too timing sensitive (Please tell us which so we can fix it) but if a lot of them fails, and in particular if the b00000.vtc test fails, something is horribly wrong, and you will get nowhere without figuring out what. Is this so? Or do I need to make sure all the test do finish successfully? Anybody has any 'quick' (and good) tips on how to go about this? thanks v -- Regards Viktor Villafuerte Optus Internet Engineering t: 02 808-25265 From YARIV-H at yit.co.il Fri Aug 22 07:18:51 2014 From: YARIV-H at yit.co.il (=?utf-8?B?15nXqNeZ15Eg15TXqNeQ15w=?=) Date: Fri, 22 Aug 2014 07:18:51 +0000 Subject: changing backend and host header not working Message-ID: <1DC6764DE551494993B3C6C09A6045F3014B6299@mailbox10.yedioth.co.il> Hi , First some info : varnish-3.0.3-1.el5.centos.x86_64 CentOS release 6.3 (Final) Linux xxxxxx 2.6.32-279.9.1.el6.x86_64 #1 SMP Tue Sep 25 21:43:11 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux varnishd -P /var/run/varnish.pid -a :80 -f /etc/varnish/default.vcl -T 127.0.0.1:6082 -t 86400 -w 50,1000,120 -u varnish -g varnish -S /etc/varnish/secret -s file,/var/lib/varnish/varnish_storage.bin,256M -p sess_timeout 305 -p http_req_hdr_len 16384 -p http_req_size 65536 -p http_resp_hdr_len 16384 -p http_resp_size 65536 -p http_gzip_support off -p pipe_timeout 120 -p sess_workspace 131072 -p http_max_hdr 256 I?m trying to do virtual hosting via changing backend based on host header and at the same time normalize the host header : Two backend statements : backend example_qa { .host = "192.xxx.xxx.xxx"; .port = "80"; .first_byte_timeout = 1200s; .connect_timeout = 1200s; .between_bytes_timeout = 1200s; } backend test_other_sites_qa { .host = "192.xxx.xxx.xxx"; 
.port = "80"; .first_byte_timeout = 1200s; .connect_timeout = 1200s; .between_bytes_timeout = 1200s; } Selecting backend and normalizing host header : sub vcl_recv { if (req.http.host == "source-qa.example.co.il" || "source-qa-pp.example.co.il" || "qa.example.co.il") { set req.http.host = "qa.example.co.il"; set req.backend = example_qa; } } sub vcl_recv { if (req.http.host == "source-qa-test.co.il" || "qa-test.co.il" ) { set req.http.host = "qa-test.ynet.co.il"; set req.backend = test_other_sites_qa; } } Now the problem is that when I send a request with one of the host headers "source-qa.example.co.il" || "source-qa-pp.example.co.il" || "qa.example.co.il" The request is always sent to test_other_sites_qa which is the second backend and not the correct one . Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlos.abalde at gmail.com Fri Aug 22 07:28:03 2014 From: carlos.abalde at gmail.com (Carlos Abalde) Date: Fri, 22 Aug 2014 09:28:03 +0200 Subject: changing backend and host header not working In-Reply-To: <1DC6764DE551494993B3C6C09A6045F3014B6299@mailbox10.yedioth.co.il> References: <1DC6764DE551494993B3C6C09A6045F3014B6299@mailbox10.yedioth.co.il> Message-ID: <878AC0E9-10DA-49F8-A989-AD7EC09413DB@gmail.com> On 22 Aug 2014, at 09:18, ???? ???? 
wrote:

> Selecting backend and normalizing host header :
>
> sub vcl_recv {
>     if (req.http.host == "source-qa.example.co.il" || "source-qa-pp.example.co.il" || "qa.example.co.il") {
>         set req.http.host = "qa.example.co.il";
>         set req.backend = example_qa;
>     }
> }
>
> sub vcl_recv {
>     if (req.http.host == "source-qa-test.co.il" || "qa-test.co.il" ) {
>         set req.http.host = "qa-test.ynet.co.il";
>         set req.backend = test_other_sites_qa;
>     }
> }
>
> Now the problem is that when I send a request with one of the host headers "source-qa.example.co.il" || "source-qa-pp.example.co.il" || "qa.example.co.il"
>
> The request is always sent to test_other_sites_qa which is the second backend and not the correct one.

Hi,

It seems the second vcl_recv subroutine is matching for every request. You should write:

if (req.http.host == "source-qa.example.co.il" ||
    req.http.host == "source-qa-pp.example.co.il" ||
    req.http.host == "qa.example.co.il") {
...

if (req.http.host == "source-qa-test.co.il" ||
    req.http.host == "qa-test.co.il") {
...

Cheers,

-- Carlos Abalde.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 203 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From YARIV-H at yit.co.il  Fri Aug 22 07:47:20 2014
From: YARIV-H at yit.co.il (=?utf-8?B?15nXqNeZ15Eg15TXqNeQ15w=?=)
Date: Fri, 22 Aug 2014 07:47:20 +0000
Subject: changing backend and host header not working
In-Reply-To: <878AC0E9-10DA-49F8-A989-AD7EC09413DB@gmail.com>
References: <1DC6764DE551494993B3C6C09A6045F3014B6299@mailbox10.yedioth.co.il>
 <878AC0E9-10DA-49F8-A989-AD7EC09413DB@gmail.com>
Message-ID: <1DC6764DE551494993B3C6C09A6045F3014B68BF@mailbox10.yedioth.co.il>

Thanks Carlos, that solved the issue.

[yariv_sig_outlook]

From: Carlos Abalde [mailto:carlos.abalde at gmail.com]
Sent: Friday, August 22, 2014 10:28 AM
To: ???? ????
Cc: varnish-misc at varnish-cache.org Subject: Re: changing backend and host header not working On 22 Aug 2014, at 09:18, ???? ???? > wrote: Selecting backend and normalizing host header : sub vcl_recv { if (req.http.host == "source-qa.example.co.il" || "source-qa-pp.example.co.il" || "qa.example.co.il") { set req.http.host = "qa.example.co.il"; set req.backend = example_qa; } } sub vcl_recv { if (req.http.host == "source-qa-test.co.il" || "qa-test.co.il" ) { set req.http.host = "qa-test.ynet.co.il"; set req.backend = test_other_sites_qa; } } Now the problem is that when I send a request with one of the host headers "source-qa.example.co.il" || "source-qa-pp.example.co.il" || "qa.example.co.il" The request is always sent to test_other_sites_qa which is the second backend and not the correct one . Hi, It seems the second vcl_recv subroutine is matching for every request. You should write: if (req.http.host == "source-qa.example.co.il" || req.http.host == "source-qa-pp.example.co.il" || req.http.host == "qa.example.co.il") { ... if (req.http.host == "source-qa-test.co.il" || req.http.host = "qa-test.co.il? ) { ... Cheers, ? Carlos Abalde. [Banner] Powered by U?BTech XTRABANNER -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 10506 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Banner1-d236.jpg Type: image/jpeg Size: 50706 bytes Desc: Banner1-d236.jpg URL: From ojoa at vwd.com Fri Aug 22 13:14:30 2014 From: ojoa at vwd.com (Oliver Joa) Date: Fri, 22 Aug 2014 15:14:30 +0200 Subject: generate custom error in vcl_backend_response Message-ID: <53F74236.5010505@vwd.com> Hi, in version 3 I was able to generate a custom error message in vcl_fetch. In version 4 I there seems to be only the possibility to abandon. 
How can I generate a custom error message after I got a wrong response? Thanks Oliver Joa ********************************************************************* Der Inhalt dieser E-Mail ist ausschlie?lich f?r den bezeichneten Adressaten bestimmt. Wenn Sie nicht der vorgesehene Adressat dieser E-Mail oder dessen Vertreter sein sollten, so beachten Sie bitte, dass jede Form der Kenntnisnahme, Ver?ffentlichung, Vervielf?ltigung oder Weitergabe des Inhalts dieser E-Mail unzul?ssig ist. Wir bitten Sie, sich in diesem Fall mit dem Absender der E-Mail in Verbindung zu setzen. The content of this e-mail is meant exclusively for the person to whom it is addressed. If you are not the person to whom this e-mail is addressed or his/her representative, please be informed that any form of knowledge, publication, duplication or distribution of the content of this e-mail is inadmissible. In such cases we kindly ask you to contact the sender of this e-mail. _ ********************************************************************* From phk at phk.freebsd.dk Mon Aug 25 08:39:52 2014 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 25 Aug 2014 08:39:52 +0000 Subject: Updating VMODs with zero downtime In-Reply-To: References: <27950.1406542880@critter.freebsd.dk> <2EB395BA-A8A4-4458-AD5F-C5F7EF0E3960@gmail.com> <16375.1406707125@critter.freebsd.dk> Message-ID: <75602.1408955992@critter.freebsd.dk> -------- In message , Carlos Abalde writ es: >> So one murky bit here is if your OS's dlopen(3) implementation discovers >> that the file has changed. It sounds like it doesn't, and just reuses >> the old already loaded copy :-( >> >> The surefire way to fix that is to append a version number to the >> vmod filename, but it's kind of ugly... > >Hi, > >I've been checking the sources of Varnish and doing some more testing, >and, as you suggest, this definitely seems related to dlopen(3) / >dlclose(3) and how they are used when reloading Varnish. 
I have changed the vmod management code in -trunk to stat the compiled vmod and check the st_dev/st_ino fields against the previously opened vmods and to use fdlopen(3) to bypass the dlopen(3) name check. Hope this helps. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From geoff at uplex.de Mon Aug 25 08:51:26 2014 From: geoff at uplex.de (Geoff Simmons) Date: Mon, 25 Aug 2014 10:51:26 +0200 Subject: Updating VMODs with zero downtime In-Reply-To: <75602.1408955992@critter.freebsd.dk> References: <27950.1406542880@critter.freebsd.dk> <2EB395BA-A8A4-4458-AD5F-C5F7EF0E3960@gmail.com> <16375.1406707125@critter.freebsd.dk> <75602.1408955992@critter.freebsd.dk> Message-ID: <53FAF90E.7010107@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 08/25/2014 10:39 AM, Poul-Henning Kamp wrote: > > I have changed the vmod management code in -trunk to stat the > compiled vmod and check the st_dev/st_ino fields against the > previously opened vmods and to use fdlopen(3) to bypass the > dlopen(3) name check. AFAICT, fdlopen() is only available on FreeBSD; at any rate it's not on Linux. Should we append "... where fdlopen(3) is supported" to this explanation? 
%^) - -- ** * * UPLEX - Nils Goroll Systemoptimierung Scheffelstra?e 32 22301 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.12 (GNU/Linux) Comment: Using GnuPG with Icedove - http://www.enigmail.net/ iQIcBAEBCAAGBQJT+vkDAAoJEOUwvh9pJNURSesQAIxv+mDgM5V8bE6vKO/VL/zh DPHNnmGlT1FWyxlvmwSvau1KxD8rstCM16kovU71f4CENbdtJ47xy2/CS24z+5x0 GQCjlGwyh6VEvo3yixUNSH32JRSuhlnQNFHo/uNjUiHNJIkw0vr3DVN6SC36KNMD KhGi/up8LQa1U892uJ9gLy3T9p7MWnGmThq2UQyf3/gaKweuCuXEsi+l0k6/RgqW uYiYcmNh0Tsbr+DVuP0r27maCrg2EOR4oKlEB3yjhwZt6+1Z69ZJyD2VGcFlFIZ0 7lFo+s32ZAyy8+JBoxXa7KzZ23zpCwzBhfoM7PagJ/RwuMa9QyrUouhQO+e5c2zZ uRlhKqNm98mOdkLYonFXlvLZp0q8k/uOv4B/cCxKx97MWYgtyYHj/lkjlY+K/eJ8 4cpvMnFdCZWX9oShDYdAC+3coyhdo12qFjJLlhAkRullGr3ybmpYzBTpi5W/Tdhj gNgZTlrTrnky+rLYVHAzImanfvPELmjpXePwFxgUW2iyNcj5DhxOLm+0kpJgZu9b wO/FILakc7pSEM0bwxVQ+/nGSerjAZnIbArz5gnPSfctzSGRGk7rvfMLXzPkjnyG eH5SDBnoHNsgW/nLv0pmok+LbFAshhChiynfA5/vmmNZTw1pbLilvjNrdYAFRzuG 79Jq4nD5XYodwQZxWZHR =LHVM -----END PGP SIGNATURE----- From phk at phk.freebsd.dk Mon Aug 25 09:13:04 2014 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Mon, 25 Aug 2014 09:13:04 +0000 Subject: Updating VMODs with zero downtime In-Reply-To: <75602.1408955992@critter.freebsd.dk> References: <27950.1406542880@critter.freebsd.dk> <2EB395BA-A8A4-4458-AD5F-C5F7EF0E3960@gmail.com> <16375.1406707125@critter.freebsd.dk> <75602.1408955992@critter.freebsd.dk> Message-ID: <75760.1408957984@critter.freebsd.dk> -------- In message <75602.1408955992 at critter.freebsd.dk>, "Poul-Henning Kamp" writes: >-------- >In message , Carlos Abalde writ >es: > >>> So one murky bit here is if your OS's dlopen(3) implementation discovers >>> that the file has changed. It sounds like it doesn't, and just reuses >>> the old already loaded copy :-( >>> >>> The surefire way to fix that is to append a version number to the >>> vmod filename, but it's kind of ugly... 
>> >>Hi, >> >>I've been checking the sources of Varnish and doing some more testing, >>and, as you suggest, this definitely seems related to dlopen(3) / >>dlclose(3) and how they are used when reloading Varnish. > >I have changed the vmod management code in -trunk to stat the compiled >vmod and check the st_dev/st_ino fields against the previously opened >vmods and to use fdlopen(3) to bypass the dlopen(3) name check. > >Hope this helps. Well, just learned that Linux doesn't have fldopen(3) like FreeBSD does, I'll have to revert this change :-( -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From dridi.boukelmoune at zenika.com Mon Aug 25 09:15:25 2014 From: dridi.boukelmoune at zenika.com (Dridi Boukelmoune) Date: Mon, 25 Aug 2014 05:15:25 -0400 Subject: Updating VMODs with zero downtime In-Reply-To: <75760.1408957984@critter.freebsd.dk> References: <27950.1406542880@critter.freebsd.dk> <2EB395BA-A8A4-4458-AD5F-C5F7EF0E3960@gmail.com> <16375.1406707125@critter.freebsd.dk> <75602.1408955992@critter.freebsd.dk> <75760.1408957984@critter.freebsd.dk> Message-ID: On Mon, Aug 25, 2014 at 5:13 AM, Poul-Henning Kamp wrote: > -------- > In message <75602.1408955992 at critter.freebsd.dk>, "Poul-Henning Kamp" writes: >>-------- >>In message , Carlos Abalde writ >>es: >> >>>> So one murky bit here is if your OS's dlopen(3) implementation discovers >>>> that the file has changed. It sounds like it doesn't, and just reuses >>>> the old already loaded copy :-( >>>> >>>> The surefire way to fix that is to append a version number to the >>>> vmod filename, but it's kind of ugly... >>> >>>Hi, >>> >>>I've been checking the sources of Varnish and doing some more testing, >>>and, as you suggest, this definitely seems related to dlopen(3) / >>>dlclose(3) and how they are used when reloading Varnish. 
>> >>I have changed the vmod management code in -trunk to stat the compiled >>vmod and check the st_dev/st_ino fields against the previously opened >>vmods and to use fdlopen(3) to bypass the dlopen(3) name check. >> >>Hope this helps. > > Well, just learned that Linux doesn't have fldopen(3) like FreeBSD does, > I'll have to revert this change :-( What about some cpp macros to enable it on relevant platforms? > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc From james at ifixit.com Mon Aug 25 23:31:15 2014 From: james at ifixit.com (James Pearson) Date: Mon, 25 Aug 2014 16:31:15 -0700 Subject: Varnish package build - tests fail In-Reply-To: <20140822064618.GA2076@optusnet.com.au> References: <20140822064618.GA2076@optusnet.com.au> Message-ID: <1409009251-sup-2857@geror.local> Excerpts from Viktor Villafuerte's message of 2014-08-21 23:46:18 -0700: > Hi all, > > I'm trying to build Varnish 4.0.1 on our build servers using Koji. The > build fails on one of the tests > > varnish-4.0.1/bin/varnishtest/tests/c00017.vtc > > I think I could simply disable the test during the build but I don't > really want to have to do that :) > > > I found a post (varnish ticket trac) which says > > Don't worry of a single or two tests fail, some of the tests are a bit > too timing sensitive (Please tell us which so we can fix it) but if a > lot of them fails, and in particular if the b00000.vtc test fails, > something is horribly wrong, and you will get nowhere without figuring > out what. > > > Is this so? Or do I need to make sure all the test do finish > successfully? > > Anybody has any 'quick' (and good) tips on how to go about this? 
An approach we've used for (non-Varnish) timing-sensitive tests is to rerun
them a few times. At its most basic, that's something like this:

#!/bin/sh
run_tests || run_tests || run_tests

Note that this will rerun *all* the tests, not just the failing one(s).

HTH.
- P
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL:

From cdgraff at gmail.com  Tue Aug 26 02:14:27 2014
From: cdgraff at gmail.com (Alejandro)
Date: Mon, 25 Aug 2014 23:14:27 -0300
Subject: Some are using WAF vmod ?
Message-ID:

Hi guys, we are thinking of implementing a WAF module in Varnish; while
searching we found this: https://github.com/comotion/VSF

Does anyone know of others, or is anyone using this in production?

Any comment will be welcome!

Thanks
Alejandro
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From phk at phk.freebsd.dk  Tue Aug 26 08:24:03 2014
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Tue, 26 Aug 2014 08:24:03 +0000
Subject: Updating VMODs with zero downtime
In-Reply-To: <75760.1408957984@critter.freebsd.dk>
References: <27950.1406542880@critter.freebsd.dk>
 <2EB395BA-A8A4-4458-AD5F-C5F7EF0E3960@gmail.com>
 <16375.1406707125@critter.freebsd.dk>
 <75602.1408955992@critter.freebsd.dk>
 <75760.1408957984@critter.freebsd.dk>
Message-ID: <99201.1409041443@critter.freebsd.dk>

--------
>>>Hi,
>>>
>>>I've been checking the sources of Varnish and doing some more testing,
>>>and, as you suggest, this definitely seems related to dlopen(3) /
>>>dlclose(3) and how they are used when reloading Varnish.

Ok, after a lot of thinking and prototyping I have given up.

There is no sensible way to work around this, and we cannot hope to get
the dlopen(3) API changed any time soon. I have updated the reference
manual to explain and reflect this.
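[Editor's note: the "append a version number to the vmod filename" workaround
mentioned earlier in this thread can be scripted at deploy time. A minimal
sketch, assuming a locally built vmod; "libvmod_example.so" and the "vmods"
destination directory are hypothetical names, not part of the thread:]

```shell
#!/bin/sh
# Sketch of the filename-versioning workaround: copy the vmod under a
# content-hashed name, so dlopen(3) sees a brand-new path on every reload
# instead of reusing the already-loaded copy.
set -e
src="libvmod_example.so"   # hypothetical vmod build artifact
dst_dir="vmods"            # hypothetical directory on varnishd's vmod path
[ -f "$src" ] || printf 'stand-in vmod' > "$src"   # stand-in so the sketch runs as-is
mkdir -p "$dst_dir"
hash=$(sha256sum "$src" | cut -c1-12)
cp "$src" "$dst_dir/libvmod_example.$hash.so"
echo "installed $dst_dir/libvmod_example.$hash.so"
```

[The VCL import would then have to reference the hashed filename, so that a
reload after a rebuild loads the new object rather than the cached one.]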
--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From milot.jean at gmail.com  Tue Aug 26 15:36:37 2014
From: milot.jean at gmail.com (Jean Milot)
Date: Tue, 26 Aug 2014 17:36:37 +0200
Subject: Error at boot with varnishncsa
Message-ID:

Hello,

I have a problem with varnishncsa. I modified the /etc/default/varnishncsa
file to launch varnishncsa at boot, but it seems that varnishncsa is
launched before varnish. I get this error:

Can't open VSM file (Abandoned VSM file (Varnish not running?)
/var/lib/varnish/myhost/_.vsm)

For information: I use Debian 7 and

varnishd -V
varnishd (varnish-4.0.1 revision 4354e5e)
Copyright (c) 2006 Verdens Gang AS
Copyright (c) 2006-2011 Varnish Software AS

If I add sleep 10 in /etc/init.d/varnishncsa, it works. How can I make it
work without changing the init.d file?

Sincerely,
--
MILOT Jean
Tél. : 0659514624
milot.jean at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From viktor.villafuerte at optusnet.com.au  Tue Aug 26 22:30:35 2014
From: viktor.villafuerte at optusnet.com.au (Viktor Villafuerte)
Date: Wed, 27 Aug 2014 08:30:35 +1000
Subject: Varnish package build - tests fail
In-Reply-To: <1409009251-sup-2857@geror.local>
References: <20140822064618.GA2076@optusnet.com.au>
 <1409009251-sup-2857@geror.local>
Message-ID: <20140826223035.GB2076@optusnet.com.au>

Hi James,

thanks for your answer. Excuse my ignorance though.. :) What exactly do
you mean by running the test several times? In the spec file there's a
section which runs the tests. I can comment that out and skip them all..
I'm just not sure how to run them a few times as you suggest. Could you
elaborate on the 'tests rerun' a little more?
thanks v On Mon 25 Aug 2014 16:31:15, James Pearson wrote: > Excerpts from Viktor Villafuerte's message of 2014-08-21 23:46:18 -0700: > > Hi all, > > > > I'm trying to build Varnish 4.0.1 on our build servers using Koji. The > > build fails on one of the tests > > > > varnish-4.0.1/bin/varnishtest/tests/c00017.vtc > > > > I think I could simply disable the test during the build but I don't > > really want to have to do that :) > > > > > > I found a post (varnish ticket trac) which says > > > > Don't worry of a single or two tests fail, some of the tests are a bit > > too timing sensitive (Please tell us which so we can fix it) but if a > > lot of them fails, and in particular if the b00000.vtc test fails, > > something is horribly wrong, and you will get nowhere without figuring > > out what. > > > > > > Is this so? Or do I need to make sure all the test do finish > > successfully? > > > > Anybody has any 'quick' (and good) tips on how to go about this? > > An approach we've used for (non-Varnish) timing-sensitive tests is to rerun > them a few times. At its most basic, that's something like this: > > #!/bin/sh > run_tests || run_tests || run_tests > > Note that this will rerun *all* the tests, not just the failing one(s). > > HTH. 
> - P > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -- Regards Viktor Villafuerte Optus Internet Engineering t: 02 808-25265 From james at ifixit.com Tue Aug 26 22:41:42 2014 From: james at ifixit.com (James Pearson) Date: Tue, 26 Aug 2014 15:41:42 -0700 Subject: Varnish package build - tests fail In-Reply-To: <20140826223035.GB2076@optusnet.com.au> References: <20140822064618.GA2076@optusnet.com.au> <1409009251-sup-2857@geror.local> <20140826223035.GB2076@optusnet.com.au> Message-ID: <1409092718-sup-2484@geror.local> Excerpts from Viktor Villafuerte's message of 2014-08-26 15:30:35 -0700: > Hi James, > > thanks for your answer. Excuse my ignorance though.. :) What exactly do > mean by running the test several times? In the spec file there's a > section which runs the tests. I can comment out that and skip them all.. > I'm just not sure how to run them few times as you suggest. Could you > elaborate on the 'tests rerun' a little more? I don't have much experience with spec files, so perhaps what I'm suggesting doesn't make much sense. Are the tests run automatically as part of the command that builds an rpm? Can you paste the snippet that you've commented out? - James -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From viktor.villafuerte at optusnet.com.au Tue Aug 26 23:10:18 2014 From: viktor.villafuerte at optusnet.com.au (Viktor Villafuerte) Date: Wed, 27 Aug 2014 09:10:18 +1000 Subject: Varnish package build - tests fail In-Reply-To: <1409092718-sup-2484@geror.local> References: <20140822064618.GA2076@optusnet.com.au> <1409009251-sup-2857@geror.local> <20140826223035.GB2076@optusnet.com.au> <1409092718-sup-2484@geror.local> Message-ID: <20140826231017.GC2076@optusnet.com.au> this is the spec file section. 
Please note the 'make check' line. I haven't actually tried to comment it out, but I found an article online that says to take it out if tests fail etc. I don't feel very comfortable taking them all out, as I'm sure at least some of them are important. Also, as the code snippet below shows, I could simply delete the one test in question from the source, but again I'm not sure that's the best thing to do. ... ... # The redhat ppc builders seem to have some ulimit problems? # These tests work on a rhel4 ppc/ppc64 instance outside the builders %ifarch ppc64 ppc %if 0%{?rhel} == 4 rm bin/varnishtest/tests/c00031.vtc rm bin/varnishtest/tests/r00387.vtc %endif %endif make check %{?_smp_mflags} LD_LIBRARY_PATH="../../lib/libvarnish/.libs:../../lib/libvarnishcompat/.libs:../../lib/libvarnishapi/.libs:../../lib/libvcc/.libs:../../lib/libvgz/.libs" VERBOSE=1 %install rm -rf %{buildroot} make install DESTDIR=%{buildroot} INSTALL="install -p" ... ... On Tue 26 Aug 2014 15:41:42, James Pearson wrote: > Excerpts from Viktor Villafuerte's message of 2014-08-26 15:30:35 -0700: > > Hi James, > > > > thanks for your answer. Excuse my ignorance though.. :) What exactly do > > mean by running the test several times? In the spec file there's a > > section which runs the tests. I can comment out that and skip them all.. > > I'm just not sure how to run them few times as you suggest. Could you > > elaborate on the 'tests rerun' a little more? > > I don't have much experience with spec files, so perhaps what I'm suggesting > doesn't make much sense. > > Are the tests run automatically as part of the command that builds an rpm? Can > you paste the snippet that you've commented out? 
> - James -- Regards Viktor Villafuerte Optus Internet Engineering t: 02 808-25265 From lkarsten at varnish-software.com Thu Aug 28 12:07:25 2014 From: lkarsten at varnish-software.com (Lasse Karstensen) Date: Thu, 28 Aug 2014 14:07:25 +0200 Subject: Error at boot with varnishncsa In-Reply-To: References: Message-ID: <20140828120724.GM19219@immer.varnish-software.com> On Tue, Aug 26, 2014 at 05:36:37PM +0200, Jean Milot wrote: > I have a problem with varnishncsa. > I modified the /etc/default/varnishncsa file to launch varnishncsa at the > boot but it seems that varnishncsa is launched before varnish. > I have this error : > Can't open VSM file (Abandoned VSM file (Varnish not running?) > /var/lib/varnish/myhost/_.vsm) > For information : i use Debian 7 and > varnishd -V > varnishd (varnish-4.0.1 revision 4354e5e) > Copyright (c) 2006 Verdens Gang AS > Copyright (c) 2006-2011 Varnish Software AS > If i add sleep 10 in the /etc/init.d/varnishncsa, it works. > How i can make it work without change the init.d file ? Hi. I don't think you can, this looks like a bug in the init script. diff --git a/varnish.varnishncsa.init b/varnish.varnishncsa.init index 8504fce..1d098ad 100644 --- a/varnish.varnishncsa.init +++ b/varnish.varnishncsa.init @@ -2,8 +2,8 @@ ### BEGIN INIT INFO # Provides: varnishncsa -# Required-Start: $local_fs $remote_fs $network -# Required-Stop: $local_fs $remote_fs $network +# Required-Start: $local_fs $remote_fs $network varnish +# Required-Stop: $local_fs $remote_fs $network varnish # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Start HTTP accelerator log daemon Does it work without sleep 10 if you apply this tiny change? 
(you may have to do update-rc.d to get the ordering recomputed) -- Lasse Karstensen Varnish Software AS From jdh132 at psu.edu Thu Aug 28 15:42:29 2014 From: jdh132 at psu.edu (Jason Heffner) Date: Thu, 28 Aug 2014 11:42:29 -0400 Subject: varnishlog -m -w Message-ID: <1304E667-5031-4ADB-A18D-942E9FEF25F5@psu.edu> We are running Varnish 3.0.5-1 from the el6 repo. When trying to use varnishlog with the -w option we are noticing that it ignores the -m option. For instance varnishlog -a -w /var/log/varnish/varnish.log -m TxStatus:503 will log all entries. I tried to rearrange the options but with the same result. I've tried varnishncsa and it works as expected. Is this a bug or the expected behavior? Thanks, Jason From geoff at uplex.de Thu Aug 28 16:17:45 2014 From: geoff at uplex.de (Geoff Simmons) Date: Thu, 28 Aug 2014 18:17:45 +0200 Subject: varnishlog -m -w In-Reply-To: <1304E667-5031-4ADB-A18D-942E9FEF25F5@psu.edu> References: <1304E667-5031-4ADB-A18D-942E9FEF25F5@psu.edu> Message-ID: <53FF5629.50907@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 08/28/2014 05:42 PM, Jason Heffner wrote: > We are running Varnish 3.0.5-1 from the el6 repo. When trying to > use varnishlog with the -w option we are noticing that it ignores > the -m option. > > For instance > > varnishlog -a -w /var/log/varnish/varnish.log -m TxStatus:503 In my experience, -m in Varnish 3 does not filter at all unless you also provide -b or -c for backend or client transactions, respectively. It's unrelated to -w. Since you're evidently looking for client transactions including responses with status 503, try this: $ varnishlog -c -a -w /var/log/varnish/varnish.log -m TxStatus:503 (BTW, this has all been greatly improved in Varnish 4.) 
HTH, Geoff - -- ** * * UPLEX - Nils Goroll Systemoptimierung Scheffelstraße 32 22301 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.12 (GNU/Linux) Comment: Using GnuPG with Icedove - http://www.enigmail.net/ iQIcBAEBCAAGBQJT/1YpAAoJEOUwvh9pJNURnLwP/3ugGQKcBlqCvg00Hy6/eaR6 5KugupgyQNCRNYqPuP0tykUJuDwqqoI+aF+RMa8N10AyW06yBCHuvqnxCmJ31GWp ohEVIOSsl5THcbCJHq9R6TWnOCnudC2kCB+QwdJATRbaNchlaJdwwEuigSNK1tQk mX9dyLG1FF2GQCVVPiQ3NtsZCQfkVlp6B/FSy1NZoDkiGrvr51hb/dJMokCyx7fv Zs1ci8HDITjX6dFms4WH+rzppvogaZj0fCbzhxlqkNWF+MnCAeMiyuiJT9oEayyN NUQWWz2gb9s24sN8SkVPLkMo1oSw9Wk4ZLYN/KstSe7EM1KAzeQ6le+wJigNOCwE xvqRsiMCyqoqo68rIFtfJunZYkxB0gpMpLHxGo9wk6/9mSxpPbI43rI8u8YiJCTb 79BVBmJXH4297tWksdawWJcfjDuSEqz5Wv7NYD/AI3lj519UFT0XLrmzzh80HiGg MGCJKtQD3k2XQvL8CxZ5ikQBXGhmFXcOox0LLvgwb9qH1TsfRCyQrcz73GJ7OBt4 yw2bniPzFF8bIirxpTv2VY9uNMLLe2Ua/g6IjWF109EfQiPcF/mmiuw49KKu57hQ cy20c3vB1UGlJlFvADoqLM5ImkgoQMkEiZVKNMwPl8aPFviJCCwkyjppevOv7oWV OIeJfXprjMHBTzXa3JhD =fJXN -----END PGP SIGNATURE----- From ruben at varnish-software.com Fri Aug 29 12:15:15 2014 From: ruben at varnish-software.com (=?UTF-8?Q?Rub=C3=A9n_Romero?=) Date: Fri, 29 Aug 2014 14:15:15 +0200 Subject: VDD14Q3 in Oslo on September 17th Message-ID: Hello everyone everywhere, SSIA. More details> https://www.varnish-cache.org/trac/wiki/VDD14Q3 Best regards, -- *Rubén Romero* Community & Sales | Varnish Software AS Cell: +47 95964088 / Office: +47 21989260 Skype, Twitter & IRC: ruben_varnish We Make Websites Fly! [image: Varnish Summits Autumn 2014] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From m.ramanna at ymail.com Fri Aug 29 20:05:31 2014 From: m.ramanna at ymail.com (Madhusudan Ramanna) Date: Fri, 29 Aug 2014 13:05:31 -0700 Subject: Varnish 4.0.1 and IMS Message-ID: <1409342731.97519.YahooMailNeo@web121704.mail.ne1.yahoo.com> Hello, Per this blog https://www.varnish-software.com/blog/varnish-40-qa-performance-vmods-ssl-ims-swr-and-more I thought Varnish 4.0.1 has support for IMS. But looks like varnish is not sending the IMS header to the backend. Or I'm missing something. * << Request >> 32820 - ReqHeader User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8y zlib/1.2.5 - ReqHeader Host: <> - ReqHeader Accept: */* - ReqHeader If-Modified-Since: Sat, 05 Oct 2013 10:19:56 GMT - ReqHeader X-Forwarded-For: <> - BereqHeader User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8y zlib/1.2.5 - BereqHeader Host: <> - BereqHeader Accept: */* - BereqHeader X-Forwarded-For: <> - BereqHeader Accept-Encoding: gzip - BereqHeader X-Varnish: 65570 Is IMS enabled in Varnish, or is it still in an experimental branch? Given varnish is being used to serve images, we'd like to cut down on the traffic between varnish cache and backend server as much as possible. thank you, Madhu -------------- next part -------------- An HTML attachment was scrubbed... URL: From perbu at varnish-software.com Sat Aug 30 08:20:37 2014 From: perbu at varnish-software.com (Per Buer) Date: Sat, 30 Aug 2014 10:20:37 +0200 Subject: Varnish 4.0.1 and IMS In-Reply-To: <1409342731.97519.YahooMailNeo@web121704.mail.ne1.yahoo.com> References: <1409342731.97519.YahooMailNeo@web121704.mail.ne1.yahoo.com> Message-ID: Hi Madhu, On Fri, Aug 29, 2014 at 10:05 PM, Madhusudan Ramanna wrote: > Hello, > > Per this blog > > > https://www.varnish-software.com/blog/varnish-40-qa-performance-vmods-ssl-ims-swr-and-more > > I thought Varnish 4.0.1 has support for IMS. But looks like varnish is > not sending the IMS header to the backend. 
Or I'm missing something. > It is probably missing the object. When there is no object there can be no IMS. Extend beresp.keep to keep objects longer past their TTL and you'll see more IMS stuff going on. Per. -- *Per Buer* CTO | Varnish Software AS Cell: +47 95839117 We Make Websites Fly! www.varnish-software.com [image: Register now] -------------- next part -------------- An HTML attachment was scrubbed... URL: From geoff at uplex.de Sat Aug 30 08:37:57 2014 From: geoff at uplex.de (Geoff Simmons) Date: Sat, 30 Aug 2014 10:37:57 +0200 Subject: Varnish 4.0.1 and IMS In-Reply-To: <1409342731.97519.YahooMailNeo@web121704.mail.ne1.yahoo.com> References: <1409342731.97519.YahooMailNeo@web121704.mail.ne1.yahoo.com> Message-ID: <54018D65.6020207@uplex.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 8/29/14 10:05 PM, Madhusudan Ramanna wrote: > > I thought Varnish 4.0.1 has support for IMS. But looks like > varnish is not sending the IMS header to the backend. Or I'm > missing something. > > * << Request >> 32820 - ReqHeader User-Agent: curl/7.24.0 > (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8y zlib/1.2.5 > - ReqHeader Host: <> - ReqHeader Accept: */* - > ReqHeader If-Modified-Since: Sat, 05 Oct 2013 10:19:56 GMT - > ReqHeader X-Forwarded-For: <> > > - BereqHeader User-Agent: curl/7.24.0 > (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8y zlib/1.2.5 > - BereqHeader Host: <> - BereqHeader Accept: */* - > BereqHeader X-Forwarded-For: <> - BereqHeader > Accept-Encoding: gzip - BereqHeader X-Varnish: 65570 You have IMS in the client request, which Varnish has handled for some time. If Varnish finds a cache hit, and the object in cache has a Last-Modified header, then Varnish will respond accordingly (no backend request necessary). That's not new in version 4. 
What's new is that Varnish will add IMS to backend requests to refresh objects from its cache with expired TTL, for those objects that have a Last-Modified header, and if the keep timer has not expired. What you're missing is that you apparently expected the IMS header in the client request to be forwarded to the backend request. But Varnish won't do that for a cache miss -- the response might be cacheable, in which case we'll want the whole thing. Once you have the response in cache, then it can be refreshed via IMS. Best, Geoff - -- UPLEX Systemoptimierung Scheffelstraße 32 22301 Hamburg http://uplex.de/ Mob: +49-176-63690917 -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.14 (Darwin) iQIcBAEBCAAGBQJUAY1kAAoJEOUwvh9pJNURoNcP/1N7NBZxpySWVnpY0Bzew1gb uXIuaxL/YDIE1kGVMB6XeHX0ChDCehswMRgdQKMmNjQhQfrXIRnwJT83s4NX3cIf y8j2IjrjSEaniQS9K1uhPs1HlCg+6M7Xsz/wfaRokhsVZTXNmQo9jcSAYJHkfUjo nLO27VGObO7lknCcHPObMj77XLs5EgMxyaVk5TOvYkakLQ5cLJbVaUmz/GR5fqU7 A8uHQms9xYsxiffinTxC2WkKL7n/T3VBvZ/KavFje/CoXt66+Ggc9rc/pBbBY9SJ 8BYzOrJVYDvXL8EiNhpEk/DJHdGrmbANX7S/rihAXoDuRZKihRfDSzJrT7d1/949 KtoOXbns45PxXvDkDaU7lPTFfwVjxE5yC3ewDCXlmzTKd+KsOmov85yylJNpw29T 7WPTAHIsm14Y/BKRhNOUtjNzrg51dDvVS/IQHo3t+M0T2VJrTkWWEf3oWtYWTpJ2 wuBKdHZD5j4x0KSbvd4sPU6KaAPf7Nz/8Xlrg/2Cg3jjZm0B7xkA2q0wQUzGG83c oNtoExllCOdUg0LFo5F5mYzSG6ROKJTnucCrk9VgevYRfxNHnZdCMpIcGJ+8jp0u Rn/rdlPxwbY0eC1zilExafpHLBuaa70uuUqPRtWG7yJtgBwtQyQTLyPBr37bw7Mk POXZqXET1JdC+cJKulIU =sItt -----END PGP SIGNATURE-----
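
The beresp.keep tuning that Per and Geoff describe can be sketched in a few lines of VCL 4.0. This is a minimal illustration only; the backend address and the one-hour keep value are placeholder assumptions, not recommendations:

```vcl
vcl 4.0;

# Placeholder backend -- substitute the real origin server.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_backend_response {
    # Keep expired objects for an extra hour beyond their TTL.
    # Within that window, a fetch for the object becomes a conditional
    # backend request: if the stored response carried Last-Modified,
    # Varnish sends If-Modified-Since, and a 304 from the backend
    # refreshes the cached object without re-transferring the body.
    set beresp.keep = 1h;
}
```

With a config along these lines, varnishlog should show an If-Modified-Since BereqHeader on fetches that revalidate an object whose TTL has expired but whose keep has not.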