From teohhanhui at gmail.com Tue May 2 07:41:43 2017 From: teohhanhui at gmail.com (Teoh Han Hui) Date: Tue, 2 May 2017 15:41:43 +0800 Subject: Alpine Linux / musl libc support Message-ID: Building varnish on Alpine Linux currently requires a few patches: https://git.alpinelinux.org/cgit/aports/tree/main/varnish?h=3.5-stable It'd be great if this compatibility could be incorporated upstream, especially for the benefit of those of us who need to build from source. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Tue May 2 08:11:20 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Tue, 2 May 2017 10:11:20 +0200 Subject: Alpine Linux / musl libc support In-Reply-To: References: Message-ID: On Tue, May 2, 2017 at 9:41 AM, Teoh Han Hui wrote: > Building varnish on Alpine Linux currently requires a few patches: > https://git.alpinelinux.org/cgit/aports/tree/main/varnish?h=3.5-stable > > It'd be great if this compatibility could be incorporated upstream, especially > for the benefit of those of us who need to build from source. Hello, Thanks for letting us know. I'm currently taking care of the mode_t patch. Regarding the Werror patch, it would be more helpful to know which warnings are failing the build instead of disabling them. Also, the comment says it was for Varnish 4.1.3; is it still needed with later releases? I don't know any better about the libvarnishcompat patch. And finally, the epoll patch looks OK, but maybe we could make sure we create the thread with a large-enough stack in the first place. Thanks, Dridi From jmathiesen at tripadvisor.com Wed May 3 18:04:29 2017 From: jmathiesen at tripadvisor.com (James Mathiesen) Date: Wed, 3 May 2017 18:04:29 +0000 Subject: Bottleneck on connection accept rates Message-ID: I'm running the epel build of varnish on CentOS 7 inside a kubernetes pod (docker container).
# rpm -qa | grep varnish varnish-4.1.5-1.el7.x86_64 # uname -a Linux media-cdn-2400684925-p137g 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux Currently I seem to be hitting a bottleneck with connection accept rates in varnish. Or maybe I have another problem and it's manifesting like this. Commands like "netstat -nt" show many connections to varnish in the SYN_SENT state, which is why I think varnish can't keep up with the listen backlog. I believe I've ruled out the acceptor_sleep scenario (no debug messages that would accompany it are logged), but I'm going to try to disable it explicitly and see if that helps. I'm also going to try using the accept-filter feature, although I'm not sure how supported it is. And maybe try reducing timeout_linger. My goal is to have 1-2K simultaneous connections with an establish rate of 1-2K/second. Cache miss rate will be 100%, so there will be lots of backend connection management going on. Is this a realistic goal? Below I have examples of param.show, "varnishtop -i Debug" and "varnishstat -1". I've been assuming that I need roughly one thread per simultaneous client connection. Is that reasonable or do I need to factor in backend connections too? james accept_filter off [bool] (default) acceptor_sleep_decay 0.9 (default) acceptor_sleep_incr 0.000 [seconds] (default) acceptor_sleep_max 0.050 [seconds] (default) auto_restart on [bool] (default) backend_idle_timeout 60.000 [seconds] (default) ban_dups on [bool] (default) ban_lurker_age 60.000 [seconds] (default) ban_lurker_batch 1000 (default) ban_lurker_sleep 0.010 [seconds] (default) between_bytes_timeout 60.000 [seconds] (default) cc_command "exec gcc -std=gnu99 -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches
-m64 -mtune=generic -Wall -Werror -Wno-error=unused-result -pthread -fpic -shared -Wl,-x -o %o %s" (default) cli_buffer 8k [bytes] (default) cli_limit 48k [bytes] (default) cli_timeout 60.000 [seconds] (default) clock_skew 10 [seconds] (default) clock_step 1.000 [seconds] (default) connect_timeout 3.500 [seconds] (default) critbit_cooloff 180.000 [seconds] (default) debug none (default) default_grace 10.000 [seconds] (default) default_keep 0.000 [seconds] (default) default_ttl 120.000 [seconds] (default) feature none (default) fetch_chunksize 16k [bytes] (default) fetch_maxchunksize 0.25G [bytes] (default) first_byte_timeout 60.000 [seconds] (default) gzip_buffer 32k [bytes] (default) gzip_level 6 (default) gzip_memlevel 8 (default) http_gzip_support on [bool] (default) http_max_hdr 64 [header lines] (default) http_range_support on [bool] (default) http_req_hdr_len 8k [bytes] (default) http_req_size 32k [bytes] (default) http_resp_hdr_len 8k [bytes] (default) http_resp_size 32k [bytes] (default) idle_send_timeout 60.000 [seconds] (default) listen_depth 1024 [connections] (default) lru_interval 2.000 [seconds] (default) max_esi_depth 5 [levels] (default) max_restarts 4 [restarts] (default) max_retries 4 [retries] (default) nuke_limit 50 [allocations] (default) pcre_match_limit 10000 (default) pcre_match_limit_recursion 20 (default) ping_interval 3 [seconds] (default) pipe_timeout 60.000 [seconds] (default) pool_req 10,100,10 (default) pool_sess 10,100,10 (default) pool_vbo
10,100,10 (default) prefer_ipv6 off [bool] (default) rush_exponent 3 [requests per request] (default) send_timeout 5.000 [seconds] session_max 100000 [sessions] (default) shm_reclen 255b [bytes] (default) shortlived 10.000 [seconds] (default) sigsegv_handler on [bool] (default) syslog_cli_traffic on [bool] (default) tcp_fastopen off [bool] (default) tcp_keepalive_intvl 75.000 [seconds] (default) tcp_keepalive_probes 9 [probes] (default) tcp_keepalive_time 7200.000 [seconds] (default) thread_pool_add_delay 0.000 [seconds] (default) thread_pool_destroy_delay 1.000 [seconds] (default) thread_pool_fail_delay 0.200 [seconds] (default) thread_pool_max 1000 [threads] thread_pool_min 1000 [threads] thread_pool_reserve 0 [threads] (default) thread_pool_stack 48k [bytes] (default) thread_pool_timeout 300.000 [seconds] (default) thread_pools 2 [pools] (default) thread_queue_limit 10 thread_stats_rate 10 [requests] (default) timeout_idle 5.000 [seconds] (default) timeout_linger 0.050 [seconds] (default) vcc_allow_inline_c off [bool] (default) vcc_err_unref on [bool] (default) vcc_unsafe_path on [bool] (default) vcl_cooldown 600.000 [seconds] (default) vcl_dir /etc/varnish (default) vmod_dir /usr/lib64/varnish/vmods (default) vsl_buffer 4k [bytes] (default) vsl_mask -VCL_trace,-WorkThread,-Hash,-VfpAcct (default) vsl_reclen 255b [bytes] (default) vsl_space 1G [bytes] vsm_free_cooldown 60.000 [seconds] (default) vsm_space 10M [bytes] workspace_backend 64k [bytes] (default) workspace_client
64k [bytes] (default) workspace_session 0.50k [bytes] (default) workspace_thread 2k [bytes] (default) varnishtop -i Debug list length 82 20982.70 Debug RES_MODE 2 58.56 Debug RES_MODE 0 4.05 Debug Write error, retval = -1, len = 17408, errno = Broken pipe 2.32 Debug Write error, retval = -1, len = 34816, errno = Broken pipe 2.31 Debug Write error, retval = -1, len = 52224, errno = Broken pipe 1.00 Debug Write error, retval = -1, len = 922, errno = Broken pipe 1.00 Debug Write error, retval = -1, len = 1580, errno = Broken pipe 0.99 Debug Write error, retval = -1, len = 1413, errno = Broken pipe 0.75 Debug Write error, retval = -1, len = 5192, errno = Broken pipe 0.71 Debug Write error, retval = -1, len = 3067, errno = Broken pipe 0.71 Debug Write error, retval = -1, len = 4604, errno = Broken pipe 0.62 Debug Write error, retval = -1, len = 6228, errno = Broken pipe 0.61 Debug Write error, retval = -1, len = 4900, errno = Broken pipe 0.61 Debug Write error, retval = -1, len = 7327, errno = Broken pipe 0.60 Debug Write error, retval = -1, len = 8311, errno = Broken pipe 0.60 Debug Write error, retval = -1, len = 10087, errno = Broken pipe 0.60 Debug Write error, retval = -1, len = 11, errno = Broken pipe 0.57 Debug Write error, retval = -1, len = 5129, errno = Broken pipe 0.55 Debug Write error, retval = -1, len = 4821, errno = Broken pipe 0.54 Debug Write error, retval = -1, len = 11224, errno = Broken pipe 0.54 Debug Write error, retval = -1, len = 5044, errno = Broken pipe 0.54 Debug Write error, retval = -1, len = 2474, errno = Broken pipe 0.54 Debug Write error, retval = -1, len = 3257, errno = Broken pipe 0.54 Debug Write error, retval = -1, len = 11788, errno = Broken pipe 0.54 Debug Write error, retval = -1, len = 29455, errno = Broken pipe 0.53 Debug Write error, retval = -1, len = 3346, errno = Broken pipe 0.53 Debug Write error, retval = -1, len = 6534, errno = Broken pipe 0.53 Debug Write error, retval = -1, len = 6552, errno = Broken
pipe 0.53 Debug Write error, retval = -1, len = 9850, errno = Broken pipe varnishstat -1 MAIN.uptime 7546 1.00 Child process uptime MAIN.sess_conn 277145 36.73 Sessions accepted MAIN.sess_drop 0 0.00 Sessions dropped MAIN.sess_fail 0 0.00 Session accept failures MAIN.client_req_400 0 0.00 Client requests received, subject to 400 errors MAIN.client_req_417 0 0.00 Client requests received, subject to 417 errors MAIN.client_req 4307812 570.87 Good client requests received MAIN.cache_hit 0 0.00 Cache hits MAIN.cache_hitpass 0 0.00 Cache hits for pass MAIN.cache_miss 0 0.00 Cache misses MAIN.backend_conn 61597 8.16 Backend conn. success MAIN.backend_unhealthy 0 0.00 Backend conn. not attempted MAIN.backend_busy 0 0.00 Backend conn. too many MAIN.backend_fail 0 0.00 Backend conn. failures MAIN.backend_reuse 4250487 563.28 Backend conn. reuses MAIN.backend_recycle 4279797 567.16 Backend conn. recycles MAIN.backend_retry 23 0.00 Backend conn. retry MAIN.fetch_head 219 0.03 Fetch no body (HEAD) MAIN.fetch_length 4302294 570.14 Fetch with Length MAIN.fetch_chunked 0 0.00 Fetch chunked MAIN.fetch_eof 0 0.00 Fetch EOF MAIN.fetch_bad 0 0.00 Fetch bad T-E MAIN.fetch_none 146 0.02 Fetch no body MAIN.fetch_1xx 0 0.00 Fetch no body (1xx) MAIN.fetch_204 0 0.00 Fetch no body (204) MAIN.fetch_304 7624 1.01 Fetch no body (304) MAIN.fetch_failed 67 0.01 Fetch failed (all causes) MAIN.fetch_no_thread 0 0.00 Fetch failed (no thread) MAIN.pools 2 . Number of thread pools MAIN.threads 2000 . Total number of threads MAIN.threads_limited 0 0.00 Threads hit max MAIN.threads_created 2000 0.27 Threads created MAIN.threads_destroyed 0 0.00 Threads destroyed MAIN.threads_failed 0 0.00 Thread creation failed MAIN.thread_queue_len 234 . 
Length of session queue MAIN.busy_sleep 0 0.00 Number of requests sent to sleep on busy objhdr MAIN.busy_wakeup 0 0.00 Number of requests woken after sleep on busy objhdr MAIN.busy_killed 0 0.00 Number of requests killed after sleep on busy objhdr MAIN.sess_queued 0 0.00 Sessions queued for thread MAIN.sess_dropped 0 0.00 Sessions dropped for thread MAIN.n_object 2490 . object structs made MAIN.n_vampireobject 0 . unresurrected objects MAIN.n_objectcore 18446744073709551613 . objectcore structs made MAIN.n_objecthead 0 . objecthead structs made MAIN.n_waitinglist 0 . waitinglist structs made MAIN.n_backend 1 . Number of backends MAIN.n_expired 0 . Number of expired objects MAIN.n_lru_nuked 0 . Number of LRU nuked objects MAIN.n_lru_moved 0 . Number of LRU moved objects MAIN.losthdr 0 0.00 HTTP header overflows MAIN.s_sess 277145 36.73 Total sessions seen MAIN.s_req 4307812 570.87 Total requests seen MAIN.s_pipe 0 0.00 Total pipe sessions seen MAIN.s_pass 4307812 570.87 Total pass-ed requests seen MAIN.s_fetch 4307812 570.87 Total backend fetches initiated MAIN.s_synth 0 0.00 Total synthethic responses made MAIN.s_req_hdrbytes 8366066701 1108675.68 Request header bytes MAIN.s_req_bodybytes 0 0.00 Request body bytes MAIN.s_resp_hdrbytes 1874113868 248358.58 Response header bytes MAIN.s_resp_bodybytes 197969033005 26234963.29 Response body bytes MAIN.s_pipe_hdrbytes 0 0.00 Pipe request header bytes MAIN.s_pipe_in 0 0.00 Piped bytes from client MAIN.s_pipe_out 0 0.00 Piped bytes to client MAIN.sess_closed 1625 0.22 Session Closed MAIN.sess_closed_err 178659 23.68 Session Closed with error MAIN.sess_readahead 0 0.00 Session Read Ahead MAIN.sess_herd 1372346 181.86 Session herd MAIN.sc_rem_close 97498 12.92 Session OK REM_CLOSE MAIN.sc_req_close 0 0.00 Session OK REQ_CLOSE MAIN.sc_req_http10 0 0.00 Session Err REQ_HTTP10 MAIN.sc_rx_bad 0 0.00 Session Err RX_BAD MAIN.sc_rx_body 0 0.00 Session Err RX_BODY MAIN.sc_rx_junk 0 0.00 Session Err RX_JUNK MAIN.sc_rx_overflow 0 
0.00 Session Err RX_OVERFLOW MAIN.sc_rx_timeout 178660 23.68 Session Err RX_TIMEOUT MAIN.sc_tx_pipe 0 0.00 Session OK TX_PIPE MAIN.sc_tx_error 0 0.00 Session Err TX_ERROR MAIN.sc_tx_eof 0 0.00 Session OK TX_EOF MAIN.sc_resp_close 0 0.00 Session OK RESP_CLOSE MAIN.sc_overload 0 0.00 Session Err OVERLOAD MAIN.sc_pipe_overflow 0 0.00 Session Err PIPE_OVERFLOW MAIN.sc_range_short 0 0.00 Session Err RANGE_SHORT MAIN.shm_records 574687872 76157.95 SHM records MAIN.shm_writes 14046734 1861.48 SHM writes MAIN.shm_flushes 164 0.02 SHM flushes due to overflow MAIN.shm_cont 127243 16.86 SHM MTX contention MAIN.shm_cycles 22 0.00 SHM cycles through buffer MAIN.backend_req 4311083 571.31 Backend requests made MAIN.n_vcl 1 0.00 Number of loaded VCLs in total MAIN.n_vcl_avail 1 0.00 Number of VCLs available MAIN.n_vcl_discard 0 0.00 Number of discarded VCLs MAIN.bans 1 . Count of bans MAIN.bans_completed 1 . Number of bans marked 'completed' MAIN.bans_obj 0 . Number of bans using obj.* MAIN.bans_req 0 . Number of bans using req.* MAIN.bans_added 1 0.00 Bans added MAIN.bans_deleted 0 0.00 Bans deleted MAIN.bans_tested 0 0.00 Bans tested against objects (lookup) MAIN.bans_obj_killed 0 0.00 Objects killed by bans (lookup) MAIN.bans_lurker_tested 0 0.00 Bans tested against objects (lurker) MAIN.bans_tests_tested 0 0.00 Ban tests tested against objects (lookup) MAIN.bans_lurker_tests_tested 0 0.00 Ban tests tested against objects (lurker) MAIN.bans_lurker_obj_killed 0 0.00 Objects killed by bans (lurker) MAIN.bans_dups 0 0.00 Bans superseded by other bans MAIN.bans_lurker_contention 0 0.00 Lurker gave way for lookup MAIN.bans_persisted_bytes 16 . Bytes used by the persisted ban lists MAIN.bans_persisted_fragmentation 0 . Extra bytes in persisted ban lists due to fragmentation MAIN.n_purges 0 . Number of purge operations executed MAIN.n_obj_purged 0 . 
Number of purged objects MAIN.exp_mailed 0 0.00 Number of objects mailed to expiry thread MAIN.exp_received 0 0.00 Number of objects received by expiry thread MAIN.hcb_nolock 0 0.00 HCB Lookups without lock MAIN.hcb_lock 0 0.00 HCB Lookups with lock MAIN.hcb_insert 0 0.00 HCB Inserts MAIN.esi_errors 0 0.00 ESI parse errors (unlock) MAIN.esi_warnings 0 0.00 ESI parse warnings (unlock) MAIN.vmods 0 . Loaded VMODs MAIN.n_gzip 0 0.00 Gzip operations MAIN.n_gunzip 723 0.10 Gunzip operations MAIN.vsm_free 10410896 . Free VSM space MAIN.vsm_used 1073816640 . Used VSM space MAIN.vsm_cooling 0 . Cooling VSM space MAIN.vsm_overflow 0 . Overflow VSM space MAIN.vsm_overflowed 0 0.00 Overflowed VSM space MGT.uptime 7547 1.00 Management process uptime MGT.child_start 1 0.00 Child process started MGT.child_exit 0 0.00 Child process normal exit MGT.child_stop 0 0.00 Child process unexpected exit MGT.child_died 0 0.00 Child process died (signal) MGT.child_dump 0 0.00 Child process core dumped MGT.child_panic 0 0.00 Child process panic MEMPOOL.busyobj.live 949 . In use MEMPOOL.busyobj.pool 9 . In Pool MEMPOOL.busyobj.sz_wanted 65536 . Size requested MEMPOOL.busyobj.sz_actual 65504 . Size allocated MEMPOOL.busyobj.allocs 4312087 571.44 Allocations MEMPOOL.busyobj.frees 4311138 571.31 Frees MEMPOOL.busyobj.recycle 4275161 566.55 Recycled from pool MEMPOOL.busyobj.timeout 11975 1.59 Timed out from pool MEMPOOL.busyobj.toosmall 0 0.00 Too small to recycle MEMPOOL.busyobj.surplus 30375 4.03 Too many for pool MEMPOOL.busyobj.randry 36926 4.89 Pool ran dry MEMPOOL.req0.live 478 . In use MEMPOOL.req0.pool 12 . In Pool MEMPOOL.req0.sz_wanted 65536 . Size requested MEMPOOL.req0.sz_actual 65504 . 
Size allocated MEMPOOL.req0.allocs 736958 97.66 Allocations MEMPOOL.req0.frees 736480 97.60 Frees MEMPOOL.req0.recycle 723240 95.84 Recycled from pool MEMPOOL.req0.timeout 9382 1.24 Timed out from pool MEMPOOL.req0.toosmall 0 0.00 Too small to recycle MEMPOOL.req0.surplus 14481 1.92 Too many for pool MEMPOOL.req0.randry 13718 1.82 Pool ran dry MEMPOOL.sess0.live 641 . In use MEMPOOL.sess0.pool 100 . In Pool MEMPOOL.sess0.sz_wanted 512 . Size requested MEMPOOL.sess0.sz_actual 480 . Size allocated MEMPOOL.sess0.allocs 138665 18.38 Allocations MEMPOOL.sess0.frees 138024 18.29 Frees MEMPOOL.sess0.recycle 116552 15.45 Recycled from pool MEMPOOL.sess0.timeout 12728 1.69 Timed out from pool MEMPOOL.sess0.toosmall 0 0.00 Too small to recycle MEMPOOL.sess0.surplus 26627 3.53 Too many for pool MEMPOOL.sess0.randry 22113 2.93 Pool ran dry MEMPOOL.req1.live 476 . In use MEMPOOL.req1.pool 15 . In Pool MEMPOOL.req1.sz_wanted 65536 . Size requested MEMPOOL.req1.sz_actual 65504 . Size allocated MEMPOOL.req1.allocs 734145 97.29 Allocations MEMPOOL.req1.frees 733669 97.23 Frees MEMPOOL.req1.recycle 720439 95.47 Recycled from pool MEMPOOL.req1.timeout 9303 1.23 Timed out from pool MEMPOOL.req1.toosmall 0 0.00 Too small to recycle MEMPOOL.req1.surplus 14442 1.91 Too many for pool MEMPOOL.req1.randry 13706 1.82 Pool ran dry MEMPOOL.sess1.live 617 . In use MEMPOOL.sess1.pool 108 . In Pool MEMPOOL.sess1.sz_wanted 512 . Size requested MEMPOOL.sess1.sz_actual 480 . Size allocated MEMPOOL.sess1.allocs 138757 18.39 Allocations MEMPOOL.sess1.frees 138140 18.31 Frees MEMPOOL.sess1.recycle 117157 15.53 Recycled from pool MEMPOOL.sess1.timeout 13106 1.74 Timed out from pool MEMPOOL.sess1.toosmall 0 0.00 Too small to recycle MEMPOOL.sess1.surplus 26405 3.50 Too many for pool MEMPOOL.sess1.randry 21600 2.86 Pool ran dry SMA.s0.c_req 0 0.00 Allocator requests SMA.s0.c_fail 0 0.00 Allocator failures SMA.s0.c_bytes 0 0.00 Bytes allocated SMA.s0.c_freed 0 0.00 Bytes freed SMA.s0.g_alloc 0 . 
Allocations outstanding SMA.s0.g_bytes 0 . Bytes outstanding SMA.s0.g_space 268435456 . Bytes available SMA.Transient.c_req 8614289 1141.57 Allocator requests SMA.Transient.c_fail 0 0.00 Allocator failures SMA.Transient.c_bytes 202527906070 26839107.62 Bytes allocated SMA.Transient.c_freed 202527906070 26839107.62 Bytes freed SMA.Transient.g_alloc 0 . Allocations outstanding SMA.Transient.g_bytes 0 . Bytes outstanding SMA.Transient.g_space 0 . Bytes available VBE.boot.default.happy 0 . Happy health probes VBE.boot.default.bereq_hdrbytes 3087002285 409091.21 Request header bytes VBE.boot.default.bereq_bodybytes 0 0.00 Request body bytes VBE.boot.default.beresp_hdrbytes 1702354674 225596.96 Response header bytes VBE.boot.default.beresp_bodybytes 198106537669 26253185.48 Response body bytes VBE.boot.default.pipe_hdrbytes 0 0.00 Pipe request header bytes VBE.boot.default.pipe_out 0 0.00 Piped bytes to backend VBE.boot.default.pipe_in 0 0.00 Piped bytes from backend VBE.boot.default.conn 949 . 
Concurrent connections to backend VBE.boot.default.req 4312088 571.44 Backend requests sent LCK.backend.creat 2 0.00 Created locks LCK.backend.destroy 0 0.00 Destroyed locks LCK.backend.locks 8623228 1142.75 Lock Operations LCK.backend_tcp.creat 1 0.00 Created locks LCK.backend_tcp.destroy 0 0.00 Destroyed locks LCK.backend_tcp.locks 17152562 2273.07 Lock Operations LCK.ban.creat 1 0.00 Created locks LCK.ban.destroy 0 0.00 Destroyed locks LCK.ban.locks 4311758 571.40 Lock Operations LCK.busyobj.creat 4311923 571.42 Created locks LCK.busyobj.destroy 4311117 571.31 Destroyed locks LCK.busyobj.locks 44955817 5957.57 Lock Operations LCK.cli.creat 1 0.00 Created locks LCK.cli.destroy 0 0.00 Destroyed locks LCK.cli.locks 2529 0.34 Lock Operations LCK.exp.creat 1 0.00 Created locks LCK.exp.destroy 0 0.00 Destroyed locks LCK.exp.locks 2404 0.32 Lock Operations LCK.hcb.creat 1 0.00 Created locks LCK.hcb.destroy 0 0.00 Destroyed locks LCK.hcb.locks 42 0.01 Lock Operations LCK.lru.creat 2 0.00 Created locks LCK.lru.destroy 0 0.00 Destroyed locks LCK.lru.locks 0 0.00 Lock Operations LCK.mempool.creat 5 0.00 Created locks LCK.mempool.destroy 0 0.00 Destroyed locks LCK.mempool.locks 12374550 1639.88 Lock Operations LCK.objhdr.creat 1 0.00 Created locks LCK.objhdr.destroy 0 0.00 Destroyed locks LCK.objhdr.locks 50106608 6640.15 Lock Operations LCK.pipestat.creat 1 0.00 Created locks LCK.pipestat.destroy 0 0.00 Destroyed locks LCK.pipestat.locks 0 0.00 Lock Operations LCK.sess.creat 277251 36.74 Created locks LCK.sess.destroy 276164 36.60 Destroyed locks LCK.sess.locks 0 0.00 Lock Operations LCK.smp.creat 0 0.00 Created locks LCK.smp.destroy 0 0.00 Destroyed locks LCK.smp.locks 0 0.00 Lock Operations LCK.vbe.creat 1 0.00 Created locks LCK.vbe.destroy 0 0.00 Destroyed locks LCK.vbe.locks 2519 0.33 Lock Operations LCK.vcapace.creat 1 0.00 Created locks LCK.vcapace.destroy 0 0.00 Destroyed locks LCK.vcapace.locks 0 0.00 Lock Operations LCK.vcl.creat 1 0.00 Created locks 
LCK.vcl.destroy 0 0.00 Destroyed locks LCK.vcl.locks 8648114 1146.05 Lock Operations LCK.vxid.creat 1 0.00 Created locks LCK.vxid.destroy 0 0.00 Destroyed locks LCK.vxid.locks 1930 0.26 Lock Operations LCK.waiter.creat 2 0.00 Created locks LCK.waiter.destroy 0 0.00 Destroyed locks LCK.waiter.locks 16897202 2239.23 Lock Operations LCK.wq.creat 3 0.00 Created locks LCK.wq.destroy 0 0.00 Destroyed locks LCK.wq.locks 17093091 2265.19 Lock Operations LCK.wstat.creat 1 0.00 Created locks LCK.wstat.destroy 0 0.00 Destroyed locks LCK.wstat.locks 5415146 717.62 Lock Operations LCK.sma.creat 2 0.00 Created locks LCK.sma.destroy 0 0.00 Destroyed locks LCK.sma.locks 17228582 2283.14 Lock Operations From dridi at varni.sh Wed May 3 22:38:06 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Thu, 4 May 2017 00:38:06 +0200 Subject: Bottleneck on connection accept rates In-Reply-To: References: Message-ID: > I believe I've ruled out the acceptor_sleep scenario (no debug messages that would accompany it are logged), but I'm going to try and disable it explicitly and see if that helps. I'm also going to try using the accept-filter feature, although I'm not sure how supported it is. And maybe try reducing timeout_linger. > > My goal is to have 1-2K simultaneous connections with an establish rate of 1-2K/second. Cache miss rate will be 100% so there will be lots of backend connection management going on. Is this a realistic goal? Not sure about the bottleneck but the 2000 workers will become one if you reach 2K concurrent connections. 
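As a back-of-the-envelope sketch (my own numbers, not measurements from your setup): with a 100% miss/pass workload, each in-flight request pins one worker thread for its whole client-plus-backend round trip, so Little's law gives the steady-state thread demand:

```python
# Rough worker sizing for a 100%-pass workload: each in-flight request
# occupies one worker thread for its entire service time, so by
# Little's law the steady-state thread demand is rate x residency.
# The headroom factor is an assumed safety margin, not a Varnish value.
def workers_needed(req_per_s, avg_service_s, headroom=1.5):
    """Arrival rate x per-request residency x safety headroom."""
    return int(req_per_s * avg_service_s * headroom)

# 2000 req/s at ~1 s per request needs ~3000 workers,
# well above a 2-pool x 1000-thread ceiling.
print(workers_needed(2000, 1.0))  # prints 3000
```

With thread_pools set to 2, that demand is split across both pools, so each pool's thread_pool_max has to cover roughly half of it.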
> thread_pool_add_delay 0.000 [seconds] (default) > thread_pool_destroy_delay 1.000 [seconds] (default) > thread_pool_fail_delay 0.200 [seconds] (default) > thread_pool_max 1000 [threads] > thread_pool_min 1000 [threads] > thread_pool_reserve 0 [threads] (default) > thread_pool_stack 48k [bytes] (default) > thread_pool_timeout 300.000 [seconds] (default) > thread_pools 2 [pools] (default) Bump thread_pool_max back to 5000 (the default value) to get enough room to handle the traffic you are expecting. > MAIN.uptime 7546 1.00 Child process uptime > MAIN.sess_conn 277145 36.73 Sessions accepted > MAIN.sess_drop 0 0.00 Sessions dropped > MAIN.sess_fail 0 0.00 Session accept failures No failures, and no session dropped, looking good. > MAIN.thread_queue_len 234 . Length of session queue And here you are running out of workers apparently. Dridi From jmathiesen at tripadvisor.com Wed May 3 22:55:09 2017 From: jmathiesen at tripadvisor.com (James Mathiesen) Date: Wed, 3 May 2017 22:55:09 +0000 Subject: Bottleneck on connection accept rates In-Reply-To: References: , Message-ID: <09a001c310174dac9c0cc6ec458776bc@tripadvisor.com> Thank you Dridi. I will bump that tomorrow morning and test again. Premature optimization on my part apparently. I had thought I would be fine with 2 thread pools of 2,000 threads each. Do I need a thread for each backend connection as well as for each client connection? james ________________________________ From: Dridi Boukelmoune Sent: Wednesday, May 3, 2017 6:38 PM To: James Mathiesen Cc: varnish-misc at varnish-cache.org Subject: Re: Bottleneck on connection accept rates > I believe I've ruled out the acceptor_sleep scenario (no debug messages that would accompany it are logged), but I'm going to try and disable it explicitly and see if that helps. I'm also going to try using the accept-filter feature, although I'm not sure how supported it is. And maybe try reducing timeout_linger. 
> > My goal is to have 1-2K simultaneous connections with an establish rate of 1-2K/second. Cache miss rate will be 100% so there will be lots of backend connection management going on. Is this a realistic goal? Not sure about the bottleneck but the 2000 workers will become one if you reach 2K concurrent connections. > thread_pool_add_delay 0.000 [seconds] (default) > thread_pool_destroy_delay 1.000 [seconds] (default) > thread_pool_fail_delay 0.200 [seconds] (default) > thread_pool_max 1000 [threads] > thread_pool_min 1000 [threads] > thread_pool_reserve 0 [threads] (default) > thread_pool_stack 48k [bytes] (default) > thread_pool_timeout 300.000 [seconds] (default) > thread_pools 2 [pools] (default) Bump thread_pool_max back to 5000 (the default value) to get enough room to handle the traffic you are expecting. > MAIN.uptime 7546 1.00 Child process uptime > MAIN.sess_conn 277145 36.73 Sessions accepted > MAIN.sess_drop 0 0.00 Sessions dropped > MAIN.sess_fail 0 0.00 Session accept failures No failures, and no session dropped, looking good. > MAIN.thread_queue_len 234 . Length of session queue And here you are running out of workers apparently. Dridi -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Wed May 3 23:06:23 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Thu, 4 May 2017 01:06:23 +0200 Subject: Bottleneck on connection accept rates In-Reply-To: <09a001c310174dac9c0cc6ec458776bc@tripadvisor.com> References: <09a001c310174dac9c0cc6ec458776bc@tripadvisor.com> Message-ID: > I had thought I would be fine with 2 thread pools of 2,000 threads each. Do According to your parameters you have 2 pools of 1000 threads each. > I need a thread for each backend connection as well as for each client > connection? I'm not in the mood to go down that rabbit hole, so let's say that for a miss or a pass it's the case. Keep the default max, monitor/graph the number of threads and see for yourself what you need. 
It may depend on how the clients and backends typically behave so there's no magic recipe. Dridi From kurtanatlus at gmail.com Thu May 4 12:33:38 2017 From: kurtanatlus at gmail.com (Kurt Sultana) Date: Thu, 4 May 2017 14:33:38 +0200 Subject: Backend definition in multiple VCL files in Varnish 5 Message-ID: Hi all, I'm a bit newish to Varnish though I do have some background. I have a Varnish 5 instance connected to 2 backend servers (Magento 2 applications). I'm using the new Varnish 5 feature of loading multiple VCL files. My ultimate problem is during purging; however, I'd like to ensure things are set up correctly because documentation regarding multiple VCL files in Varnish 5 is somewhat lacking. To keep things very simple for now, I'm going to use 1 backend server in my example. So, I have a magento.vcl defined as follows:

vcl 4.0;

import std;

# The minimal Varnish version is 4.0
# For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https'

backend default {
    .host = "127.0.0.1";
    .port = "8088";
}

include "/etc/varnish/common.vcl";

And a top.vcl:

vcl 4.0;

import std;

backend default { .host = "127.0.0.1"; }

sub vcl_recv {
    if (req.http.host == "magento2.dev") {
        return (vcl(magento_vcl));
    }
}

Then I run:

service varnish restart

varnishadm
vcl.load magento /etc/varnish/conf.d/magento.vcl
vcl.label magento_vcl magento
vcl.load top /etc/varnish/top.vcl
vcl.use top
quit

When I browse to magento2.dev, I get a backend fetch error after some seconds. It's only when I go into magento.vcl and change the name of the backend and set a backend hint that it works.
See below:

vcl 4.0;

import std;

# The minimal Varnish version is 4.0
# For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https'

backend magento {
    .host = "127.0.0.1";
    .port = "8088";
}

sub vcl_recv {
    set req.backend_hint = magento;
}

include "/etc/varnish/common.vcl";

Why should I be specifying a backend hint? Shouldn't Varnish be loading a different VCL according to the host specified in top.vcl? Or is there something wrong? Thanks in advance, Kurt -------------- next part -------------- An HTML attachment was scrubbed... URL: From jmathiesen at tripadvisor.com Thu May 4 14:54:56 2017 From: jmathiesen at tripadvisor.com (James Mathiesen) Date: Thu, 4 May 2017 14:54:56 +0000 Subject: Bottleneck on connection accept rates In-Reply-To: References: <09a001c310174dac9c0cc6ec458776bc@tripadvisor.com> Message-ID: <63E81934-1493-4C04-A20D-98225BBA37FA@tripadvisor.com> Yes, that was it. Everything is scaling up much better now. Thank you for your help! james On 5/3/17, 7:06 PM, "Dridi Boukelmoune" wrote: > I had thought I would be fine with 2 thread pools of 2,000 threads each. Do According to your parameters you have 2 pools of 1000 threads each. > I need a thread for each backend connection as well as for each client > connection? I'm not in the mood to go down that rabbit hole, so let's say that for a miss or a pass it's the case. Keep the default max, monitor/graph the number of threads and see for yourself what you need. It may depend on how the clients and backends typically behave so there's no magic recipe. Dridi From rnickb731 at gmail.com Sat May 6 02:15:30 2017 From: rnickb731 at gmail.com (Ryan Burn) Date: Fri, 5 May 2017 22:15:30 -0400 Subject: Any way to invoke code from VCL after a request has been serviced? Message-ID: Hello, From VCL, is it possible to execute code that runs after a request has been processed?
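(To make the question concrete, here's a plain-Python mock, nothing Varnish-specific, of the kind of hook I'm after: a per-request private slot whose cleanup callback fires once the request is done. My understanding, which may be wrong, is that a VMOD's per-task private pointer and its free callback behave roughly like this:)

```python
# Mock of a per-request private slot with a cleanup callback: the proxy
# would call start_span() when the request arrives, and invoke the
# slot's free callback when the request task is torn down.
finished = []

class TaskPriv:
    """Stand-in for a per-task private slot (a priv pointer plus a free callback)."""
    def __init__(self):
        self.priv = None
        self.free = None

def start_span(priv, name):
    priv.priv = {"span": name}
    priv.free = lambda p: finished.append(p["span"])  # runs at task end

task = TaskPriv()
start_span(task, "GET /index.html")
# ... request is processed ...
if task.free is not None:  # the proxy would do this at teardown
    task.free(task.priv)
print(finished)  # prints ['GET /index.html']
```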
I'm looking into writing a module that enables distributed tracing in Varnish using the OpenTracing project [opentracing.io]. This requires invoking code at the beginning of a request to start a span and insert tracing context into the request's headers, and invoking code after a request's been processed to finish the span and measure how long it took to process. I recently did a similar project for nginx [github.com/rnburn/nginx-opentracing]. Nginx provides an NGX_HTTP_LOG_PHASE [www.nginxguts.com/2011/01/phases/] that allows you to set up handlers that run after requests are serviced. Can anything equivalent be done using VCL? I imagine you could accomplish this by subscribing to and regularly reading from Varnish's shared memory log, but I'd much rather do it directly if possible. Thanks, Ryan From kurtanatlus at gmail.com Sat May 6 19:40:01 2017 From: kurtanatlus at gmail.com (Kurt Sultana) Date: Sat, 6 May 2017 21:40:01 +0200 Subject: Backend definition in multiple VCL files in Varnish 5 In-Reply-To: References: Message-ID: Ideas anyone? Any help is appreciated. Thanks On 4 May 2017 2:33 p.m., "Kurt Sultana" wrote: > Hi all, > > I'm a bit newish to Varnish though I do have some background. I have a > Varnish 5 instance connected to 2 backend servers (Magento 2 applications). > > I'm using the new Varnish 5 feature of loading multiple VCL files. My > ultimate problem is during purging; however, I'd like to ensure things are > set up correctly because documentation regarding multiple VCL files in > Varnish 5 is somewhat lacking. > > To keep things very simple for now, I'm going to use 1 backend server in > my example.
>
> So, I have a magento.vcl defined as follows:
>
> vcl 4.0;
>
> import std;
>
> # The minimal Varnish version is 4.0
> # For SSL offloading, pass the following header in your proxy server or
> load balancer: 'X-Forwarded-Proto: https'
>
> backend default {
>     .host = "127.0.0.1";
>     .port = "8088";
> }
>
> include "/etc/varnish/common.vcl";
>
> And a top.vcl
>
> vcl 4.0;
>
> import std;
>
> backend default { .host = "127.0.0.1"; }
>
> sub vcl_recv {
>     if (req.http.host == "magento2.dev") {
>         return (vcl(magento_vcl));
>     }
> }
>
> Then I run
>
> service varnish restart
>
> varnishadm
>
> vcl.load magento /etc/varnish/conf.d/magento.vcl
>
> vcl.label magento_vcl magento
>
> vcl.load top /etc/varnish/top.vcl
>
> vcl.use top
>
> quit
>
> When I browse to magento2.dev, I get a backend fetch error after some
> seconds. It's only when I go in magento.vcl and change the name of the
> backend and make a backend hint that it works. See below:
>
> vcl 4.0;
>
> import std;
>
> # The minimal Varnish version is 4.0
> # For SSL offloading, pass the following header in your proxy server or
> load balancer: 'X-Forwarded-Proto: https'
>
> backend magento {
>     .host = "127.0.0.1";
>     .port = "8088";
> }
>
> sub vcl_recv { set req.backend_hint = magento; }
>
> include "/etc/varnish/common.vcl";
>
> Why should I be specifying a backend hint? Shouldn't Varnish be loading a
> different VCL according to the host specified in top.vcl? Or is there
> something wrong?
>
> Thanks in advance,
> Kurt
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From np.lists at sharphosting.uk Sat May 6 19:45:45 2017 From: np.lists at sharphosting.uk (Nigel Peck) Date: Sat, 6 May 2017 14:45:45 -0500 Subject: Backend definition in multiple VCL files in Varnish 5 In-Reply-To: References: Message-ID: <82131a88-215c-4672-585c-b83e18424f4f@sharphosting.uk> You might try Stack Overflow if you're not having any luck on this list. There are a few folks who answer Varnish questions on there. Just tag it "varnish" and "varnish-vcl". Nigel On 06/05/2017 14:40, Kurt Sultana wrote: > Ideas anyone? Any help is appreciated. > > Thanks > > On 4 May 2017 2:33 p.m., "Kurt Sultana" > wrote: > > Hi all, > > I'm a bit newish to Varnish though I do have some background. I have > a Varnish 5 instance connected to 2 backend servers (Magento 2 > applications). > > I'm using the new Varnish 5 feature of loading multiple VCL files. > My ultimate problem is during purging however I'd like to ensure > things are set up correctly because documentation regarding multiple > VCL files in Varnish 5 is somewhat lacking. > > To keep things very simple for now, I'm going to use 1 backend > server in my example. 
> > So, I have a magento.vcl defined as follows: > > *vcl 4.0;* > > *import std;* > * > * > *# The minimal Varnish version is 4.0* > *# For SSL offloading, pass the following header in your proxy > server or load balancer: 'X-Forwarded-Proto: https'* > * > * > *backend default {* > * .host = "127.0.0.1";* > * .port = "8088";* > *}* > * > * > *include "/etc/varnish/common.vcl";* > > And a top.vcl > > *vcl 4.0;* > * > * > *import std;* > * > * > *backend default { .host = "127.0.0.1"; }* > * > * > *sub vcl_recv {* > * if (req.http.host == "magento2.dev") {* > * return (vcl(magento_vcl));* > * }* > *}* > > Then I run > > *service varnish restart* > > *varnishadm* > > *vcl.load magento /etc/varnish/conf.d/magento.vc l* > > *vcl.label magento_vcl magento* > > *vcl.load top /etc/varnish/top.vcl* > > *vcl.use top* > > *quit* > > > When I browse to magento2.dev, I get a backend fetch error after > some seconds. It's only when I go in magento.vcl and change the name > of the backend and make a backend hint that it works. See below: > > *vcl 4.0;* > > *import std;* > * > * > *# The minimal Varnish version is 4.0* > *# For SSL offloading, pass the following header in your proxy > server or load balancer: 'X-Forwarded-Proto: https'* > * > * > *backend magento {* > * .host = "127.0.0.1";* > * .port = "8088";* > *}* > * > * > * > sub vcl_recv { > set req.backend_hint = magento; > } > * > * > * > *include "/etc/varnish/common.vcl";* > * > * > Why should I be specifying a backend hint? Shouldn't Varnish be > loading a different VCL according to the host specified in top.vcl? > Or is there something wrong? 
> > Thanks in advance, > Kurt > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > From np.lists at sharphosting.uk Sun May 7 20:34:11 2017 From: np.lists at sharphosting.uk (Nigel Peck) Date: Sun, 7 May 2017 15:34:11 -0500 Subject: PURGE and restart() Without Increasing Miss Count? Message-ID: <62232ff9-04a8-6c66-d786-686855b9d2fe@sharphosting.uk> Hi all, Is there anything I can do in VCL to prevent a particular request from increasing the miss count? This is because I use the PURGE method to keep the cache up to date, issuing a restart from vcl_purge. I would like to not increase the miss count for those requests, if it's possible? Thanks Nigel From reza at varnish-software.com Mon May 8 15:10:23 2017 From: reza at varnish-software.com (Reza Naghibi) Date: Mon, 8 May 2017 11:10:23 -0400 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: You can do this in a VMOD via PRIV_TASK: -- Reza Naghibi Varnish Software On Fri, May 5, 2017 at 10:15 PM, Ryan Burn wrote: > Hello, > From VCL, is it possible to execute code that runs after a request has > been processed? > > I'm looking into writing a module that enables Varnish for distributed > tracing using the OpenTracing project [opentracing.io]. This requires > invoking code at the beginning of a request to start a span and insert > tracing context into the request's headers and invoking code after a > request's been processed to finish the span and measure how long it > took to process. > > I recently did a similar project for nginx > [github.com/rnburn/nginx-opentracing]. Nginx provides an > NGX_HTTP_LOG_PHASE [www.nginxguts.com/2011/01/phases/] that allows you > to set up handlers run after requests are serviced. Can anything > equivalent be done using VCL? 
> > I image you could accomplish this by subscribing and regularly reading > from Varnish's shared memory log, but I'd much rather do it directly > if possible. > > Thanks, Ryan > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reza at varnish-software.com Mon May 8 15:13:25 2017 From: reza at varnish-software.com (Reza Naghibi) Date: Mon, 8 May 2017 11:13:25 -0400 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: Sorry, email misfire. You can do this in a VMOD via PRIV_TASK: https://varnish-cache.org/docs/trunk/reference/vmod.html#private-pointers It might make sense to track this stuff in some kind of struct, in which case, put it into *priv and then register a *free callback. Otherwise, just put a dummy value into the *priv. *free will get called after the request is done and you can put your custom code in there. -- Reza Naghibi Varnish Software On Mon, May 8, 2017 at 11:10 AM, Reza Naghibi wrote: > You can do this in a VMOD via PRIV_TASK: > > > -- > Reza Naghibi > Varnish Software > > On Fri, May 5, 2017 at 10:15 PM, Ryan Burn wrote: > >> Hello, >> From VCL, is it possible to execute code that runs after a request has >> been processed? >> >> I'm looking into writing a module that enables Varnish for distributed >> tracing using the OpenTracing project [opentracing.io]. This requires >> invoking code at the beginning of a request to start a span and insert >> tracing context into the request's headers and invoking code after a >> request's been processed to finish the span and measure how long it >> took to process. >> >> I recently did a similar project for nginx >> [github.com/rnburn/nginx-opentracing]. 
Nginx provides an
>> NGX_HTTP_LOG_PHASE [www.nginxguts.com/2011/01/phases/] that allows you
>> to set up handlers run after requests are serviced. Can anything
>> equivalent be done using VCL?
>>
>> I image you could accomplish this by subscribing and regularly reading
>> from Varnish's shared memory log, but I'd much rather do it directly
>> if possible.
>>
>> Thanks, Ryan
>>
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc at varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From reza at varnish-software.com  Mon May  8 15:19:58 2017
From: reza at varnish-software.com (Reza Naghibi)
Date: Mon, 8 May 2017 11:19:58 -0400
Subject: PURGE and restart() Without Increasing Miss Count?
In-Reply-To: <62232ff9-04a8-6c66-d786-686855b9d2fe@sharphosting.uk>
References: <62232ff9-04a8-6c66-d786-686855b9d2fe@sharphosting.uk>
Message-ID: 

This isn't possible.

--
Reza Naghibi
Varnish Software

On Sun, May 7, 2017 at 4:34 PM, Nigel Peck wrote:

>
> Hi all,
>
> Is there anything I can do in VCL to prevent a particular request from
> increasing the miss count?
>
> This is because I use the PURGE method to keep the cache up to date,
> issuing a restart from vcl_purge. I would like to not increase the miss
> count for those requests, if it's possible?
>
> Thanks
> Nigel
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From reza at varnish-software.com Mon May 8 15:24:07 2017 From: reza at varnish-software.com (Reza Naghibi) Date: Mon, 8 May 2017 11:24:07 -0400 Subject: Backend definition in multiple VCL files in Varnish 5 In-Reply-To: <82131a88-215c-4672-585c-b83e18424f4f@sharphosting.uk> References: <82131a88-215c-4672-585c-b83e18424f4f@sharphosting.uk> Message-ID: Can you provide a varnishlog of the request where you get the backend errors? -- Reza Naghibi Varnish Software On Sat, May 6, 2017 at 3:45 PM, Nigel Peck wrote: > > You might try Stack Overflow if you're not having any luck on this list. > There are a few folks who answer Varnish questions on there. Just tag it > "varnish" and "varnish-vcl". > > Nigel > > On 06/05/2017 14:40, Kurt Sultana wrote: > >> Ideas anyone? Any help is appreciated. >> >> Thanks >> >> On 4 May 2017 2:33 p.m., "Kurt Sultana" > kurtanatlus at gmail.com>> wrote: >> >> Hi all, >> >> I'm a bit newish to Varnish though I do have some background. I have >> a Varnish 5 instance connected to 2 backend servers (Magento 2 >> applications). >> >> I'm using the new Varnish 5 feature of loading multiple VCL files. >> My ultimate problem is during purging however I'd like to ensure >> things are set up correctly because documentation regarding multiple >> VCL files in Varnish 5 is somewhat lacking. >> >> To keep things very simple for now, I'm going to use 1 backend >> server in my example. 
>> >> So, I have a magento.vcl defined as follows: >> >> *vcl 4.0;* >> >> *import std;* >> * >> * >> *# The minimal Varnish version is 4.0* >> *# For SSL offloading, pass the following header in your proxy >> server or load balancer: 'X-Forwarded-Proto: https'* >> * >> * >> *backend default {* >> * .host = "127.0.0.1";* >> * .port = "8088";* >> *}* >> * >> * >> *include "/etc/varnish/common.vcl";* >> >> And a top.vcl >> >> *vcl 4.0;* >> * >> * >> *import std;* >> * >> * >> *backend default { .host = "127.0.0.1"; }* >> * >> * >> *sub vcl_recv {* >> * if (req.http.host == "magento2.dev") {* >> * return (vcl(magento_vcl));* >> * }* >> *}* >> >> Then I run >> >> *service varnish restart* >> >> *varnishadm* >> >> *vcl.load magento /etc/varnish/conf.d/magento.vc > >l* >> >> *vcl.label magento_vcl magento* >> >> *vcl.load top /etc/varnish/top.vcl* >> >> *vcl.use top* >> >> *quit* >> >> >> When I browse to magento2.dev, I get a backend fetch error after >> some seconds. It's only when I go in magento.vcl and change the name >> of the backend and make a backend hint that it works. See below: >> >> *vcl 4.0;* >> >> *import std;* >> * >> * >> *# The minimal Varnish version is 4.0* >> *# For SSL offloading, pass the following header in your proxy >> server or load balancer: 'X-Forwarded-Proto: https'* >> * >> * >> *backend magento {* >> * .host = "127.0.0.1";* >> * .port = "8088";* >> *}* >> * >> * >> * >> sub vcl_recv { >> set req.backend_hint = magento; >> } >> * >> * >> * >> *include "/etc/varnish/common.vcl";* >> * >> * >> Why should I be specifying a backend hint? Shouldn't Varnish be >> loading a different VCL according to the host specified in top.vcl? >> Or is there something wrong? 
>> >> Thanks in advance, >> Kurt >> >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From np.lists at sharphosting.uk Mon May 8 15:31:07 2017 From: np.lists at sharphosting.uk (Nigel Peck) Date: Mon, 8 May 2017 10:31:07 -0500 Subject: PURGE and restart() Without Increasing Miss Count? In-Reply-To: References: <62232ff9-04a8-6c66-d786-686855b9d2fe@sharphosting.uk> Message-ID: <2ed67150-52f9-1034-21fa-0fda0192f74f@sharphosting.uk> Thanks for that, good to know. Nigel On 08/05/2017 10:19, Reza Naghibi wrote: > This isnt possible. > > -- > Reza Naghibi > Varnish Software > > On Sun, May 7, 2017 at 4:34 PM, Nigel Peck > wrote: > > > Hi all, > > Is there anything I can do in VCL to prevent a particular request > from increasing the miss count? > > This is because I use the PURGE method to keep the cache up to date, > issuing a restart from vcl_purge. I would like to not increase the > miss count for those requests, if it's possible? > > Thanks > Nigel > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > From guillaume at varnish-software.com Mon May 8 16:13:50 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Mon, 8 May 2017 18:13:50 +0200 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: That's the way to do it in a vmod, indeed. However Ryan, I don't get why you are reluctant to use the logs. 
By using the C API, you can just define callbacks and get called every
time a request/transaction ends, so you don't need to read regularly.

--
Guillaume Quintard

On May 8, 2017 17:45, "Reza Naghibi" wrote:

You can do this in a VMOD via PRIV_TASK:

--
Reza Naghibi
Varnish Software

On Fri, May 5, 2017 at 10:15 PM, Ryan Burn wrote:

> Hello,
> From VCL, is it possible to execute code that runs after a request has
> been processed?
>
> I'm looking into writing a module that enables Varnish for distributed
> tracing using the OpenTracing project [opentracing.io]. This requires
> invoking code at the beginning of a request to start a span and insert
> tracing context into the request's headers and invoking code after a
> request's been processed to finish the span and measure how long it
> took to process.
>
> I recently did a similar project for nginx
> [github.com/rnburn/nginx-opentracing]. Nginx provides an
> NGX_HTTP_LOG_PHASE [www.nginxguts.com/2011/01/phases/] that allows you
> to set up handlers run after requests are serviced. Can anything
> equivalent be done using VCL?
>
> I image you could accomplish this by subscribing and regularly reading
> from Varnish's shared memory log, but I'd much rather do it directly
> if possible.
>
> Thanks, Ryan
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>

_______________________________________________
varnish-misc mailing list
varnish-misc at varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rnickb731 at gmail.com  Mon May  8 23:08:56 2017
From: rnickb731 at gmail.com (Ryan Burn)
Date: Mon, 8 May 2017 19:08:56 -0400
Subject: Any way to invoke code from VCL after a request has been serviced?
In-Reply-To: 
References: 
Message-ID: 

Thanks Reza and Guillaume.
I didn't realize there was a way to set up callbacks on the VSM. I think either of the approaches will work for me. On Mon, May 8, 2017 at 12:13 PM, Guillaume Quintard wrote: > > That's the way to do it in a vmod, indeed. > > However Ryan, I don't get why you are reluctant to use the logs. By using > the c api, you can just define callbacks and get called everything a > request/transaction ends, so you don't need to read regularly. > -- > Guillaume Quintard > > > On May 8, 2017 17:45, "Reza Naghibi" wrote: > > You can do this in a VMOD via PRIV_TASK: > > > -- > Reza Naghibi > Varnish Software > > On Fri, May 5, 2017 at 10:15 PM, Ryan Burn wrote: >> >> Hello, >> From VCL, is it possible to execute code that runs after a request has >> been processed? >> >> I'm looking into writing a module that enables Varnish for distributed >> tracing using the OpenTracing project [opentracing.io]. This requires >> invoking code at the beginning of a request to start a span and insert >> tracing context into the request's headers and invoking code after a >> request's been processed to finish the span and measure how long it >> took to process. >> >> I recently did a similar project for nginx >> [github.com/rnburn/nginx-opentracing]. Nginx provides an >> NGX_HTTP_LOG_PHASE [www.nginxguts.com/2011/01/phases/] that allows you >> to set up handlers run after requests are serviced. Can anything >> equivalent be done using VCL? >> >> I image you could accomplish this by subscribing and regularly reading >> from Varnish's shared memory log, but I'd much rather do it directly >> if possible. 
>> >> Thanks, Ryan >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From guillaume at varnish-software.com Tue May 9 07:40:42 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Tue, 9 May 2017 09:40:42 +0200 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: Have a look at varnishncsa and varnishlog, and more specifically to the function they set VUT.dispatch_f to, that should put you on the right tracks. If not, ping me on IRC, or here. -- Guillaume Quintard On Tue, May 9, 2017 at 1:08 AM, Ryan Burn wrote: > Thanks Reza and Guillaume. I didn't realize there was a way to set up > callbacks on the VSM. I think either of the approaches will work for > me. > > On Mon, May 8, 2017 at 12:13 PM, Guillaume Quintard > wrote: > > > > That's the way to do it in a vmod, indeed. > > > > However Ryan, I don't get why you are reluctant to use the logs. By using > > the c api, you can just define callbacks and get called everything a > > request/transaction ends, so you don't need to read regularly. > > -- > > Guillaume Quintard > > > > > > On May 8, 2017 17:45, "Reza Naghibi" wrote: > > > > You can do this in a VMOD via PRIV_TASK: > > > > > > -- > > Reza Naghibi > > Varnish Software > > > > On Fri, May 5, 2017 at 10:15 PM, Ryan Burn wrote: > >> > >> Hello, > >> From VCL, is it possible to execute code that runs after a request has > >> been processed? > >> > >> I'm looking into writing a module that enables Varnish for distributed > >> tracing using the OpenTracing project [opentracing.io]. 
This requires > >> invoking code at the beginning of a request to start a span and insert > >> tracing context into the request's headers and invoking code after a > >> request's been processed to finish the span and measure how long it > >> took to process. > >> > >> I recently did a similar project for nginx > >> [github.com/rnburn/nginx-opentracing]. Nginx provides an > >> NGX_HTTP_LOG_PHASE [www.nginxguts.com/2011/01/phases/] that allows you > >> to set up handlers run after requests are serviced. Can anything > >> equivalent be done using VCL? > >> > >> I image you could accomplish this by subscribing and regularly reading > >> from Varnish's shared memory log, but I'd much rather do it directly > >> if possible. > >> > >> Thanks, Ryan > >> > >> _______________________________________________ > >> varnish-misc mailing list > >> varnish-misc at varnish-cache.org > >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > > > > _______________________________________________ > > varnish-misc mailing list > > varnish-misc at varnish-cache.org > > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Tue May 9 10:13:58 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Tue, 9 May 2017 12:13:58 +0200 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: On Tue, May 9, 2017 at 9:40 AM, Guillaume Quintard wrote: > Have a look at varnishncsa and varnishlog, and more specifically to the > function they set VUT.dispatch_f to, that should put you on the right > tracks. If not, ping me on IRC, or here. The shared memory log is the best place to do that, however the VUT (Varnish UTility) API is not yet public [1] so for now you have to carry a copy of the headers in your project. 
For Varnish 5.1.2+ I started a tutorial-as-a-repo [2] although progress on the tutorial side is around 0% at the moment. You still get a working project for a VUT (and you can get rid of the VMOD or VCL stuff if you don't need them). You can probably find libvarnishapi bindings for other languages but that's not something we support. Dridi [1] https://github.com/varnishcache/varnish-cache/pull/2314 [2] https://github.com/Dridi/varnish-template From dridi at varni.sh Tue May 9 10:17:10 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Tue, 9 May 2017 12:17:10 +0200 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: > I'm looking into writing a module that enables Varnish for distributed > tracing using the OpenTracing project [opentracing.io]. This requires > invoking code at the beginning of a request to start a span and insert > tracing context into the request's headers and invoking code after a > request's been processed to finish the span and measure how long it > took to process. You can see this kind of tracing in action in the Zipnish [1] project. It's Varnish's equivalent to Zipkin and if I'm not wrong was or is compatible with the latter. Dridi [1] https://github.com/varnish/zipnish From sreeranj4droid at gmail.com Tue May 9 10:48:05 2017 From: sreeranj4droid at gmail.com (sreeranj s) Date: Tue, 9 May 2017 16:18:05 +0530 Subject: Set cache headers for browsers by varnish. Message-ID: Hi, Is it possible for varnish to send the "Cache-Control: no-cache, no-store, must-revalidate" cache control response to browser, while varnish caches the response. Scenario is like this. 1) Backend sends cache control :Cache-Control: no-cache, no-store, must-revalidate 2) Varnish should cache the response. 
3) Browser should not cache the contents, so in the response from varnish it
should show cache control: Cache-Control: no-cache, no-store, must-revalidate

I have tried using Cache-Control: no-cache, no-store, must-revalidate in
set beresp.http.Cache-Control, but this causes varnish not to cache the
responses.

Given below is the vcl_backend_response
=============================
sub vcl_backend_response {
    if (bereq.url == "/") {

        unset beresp.http.expires;
        unset beresp.http.set-cookie;
        set beresp.ttl = 3600s;
        set beresp.http.Cache-Control = "max-age=0";
        # Prevent caching of 404 and 500 errors
        if (beresp.status >= 400 && beresp.status <= 599) {
            set beresp.ttl = 0s;
        }
    }
}
=============================

Any help is highly appreciated.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guillaume at varnish-software.com  Tue May  9 13:07:21 2017
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Tue, 9 May 2017 15:07:21 +0200
Subject: Set cache headers for browsers by varnish.
In-Reply-To: 
References: 
Message-ID: 

Easy fix is: set it in vcl_deliver instead of vcl_backend_response.

Have a look at builtin.vcl (varnishadm vcl.show -v boot) that gets
automatically appended to your vcl. If you wish to bypass it,
return(deliver) from your code and the built-in code won't be run.

--
Guillaume Quintard

On May 9, 2017 14:00, "sreeranj s" wrote:

> Hi,
>
>
> Is it possible for varnish to send the "Cache-Control: no-cache, no-store,
> must-revalidate" cache control response to browser, while varnish caches
> the response.
>
>
> Scenario is like this.
>
> 1) Backend sends cache control :Cache-Control: no-cache, no-store,
> must-revalidate
> 2) Varnish should cache the response. 
> 3) Browser should not cache the contents so in response from varnish it > should show > cache control :Cache-Control: no-cache, no-store, must-revalidate > > > I have tried using Cache-Control: no-cache, no-store, must-revalidate in > set beresp.http.Cache-Control, but this causes varnish not to cache the > responses. > > > Given below is the vcl_backend_response > ============================= > sub vcl_backend_response { > if (bereq.url == "/") { > > unset beresp.http.expires; > unset beresp.http.set-cookie; > set beresp.ttl = 3600s; > set beresp.http.Cache-Control = "max-age=0"; > #Prevent caching of 404 and 500 errors > if (beresp.status >= 400 && beresp.status <= 599) { > set beresp.ttl = 0s; > } > } > } > ============================= > > Any help is highly appreciated. > > > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From np.lists at sharphosting.uk Thu May 11 23:40:05 2017 From: np.lists at sharphosting.uk (Nigel Peck) Date: Thu, 11 May 2017 18:40:05 -0500 Subject: Hit for Pass after "return(pass)" in vcl_recv Message-ID: <2bdfb20e-0830-0ab0-3732-579fafa25a1e@sharphosting.uk> Hi, I'm wondering what happens with requests that were marked return(pass) by vcl_recv, when they get to vcl_backend_response. If I do the following: set beresp.http.Cache-Control = "private, max-age=0, no-cache, no-store"; set beresp.http.Expires = "Mon, 01 Jan 2001 00:00:00 GMT"; set beresp.ttl = 120s; set beresp.uncacheable = true; return (deliver); Is that going to mark them hit-for-pass? Or can they not be cached in any way now? (even hit for pass) Is the "uncacheable" redundant because they were already marked for pass? I would appreciate any further info on this. 
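For concreteness, a minimal sketch of the two cases in question, assuming
Varnish 4.x semantics (where a hit-for-pass object is created only when a
fetch that began as a cacheable lookup is flagged uncacheable during
vcl_backend_response; the header conditions below are purely illustrative):

```vcl
vcl 4.0;

backend default {
    .host = "127.0.0.1";
}

sub vcl_recv {
    if (req.http.Authorization) {
        # Case 1: pass decided at request time. The request skips the
        # cache lookup entirely, nothing is stored (not even a
        # hit-for-pass object), and beresp.uncacheable is already true
        # during the fetch, so setting it again changes nothing.
        return (pass);
    }
}

sub vcl_backend_response {
    if (beresp.http.Set-Cookie) {
        # Case 2: a normal lookup (miss) whose response turns out to be
        # uncacheable. Here uncacheable together with a positive TTL
        # stores a hit-for-pass marker, so matching requests during the
        # next 120s are passed without waiting in the coalescing queue.
        set beresp.ttl = 120s;
        set beresp.uncacheable = true;
        return (deliver);
    }
}
```

Under that reading, for a request already passed from vcl_recv the extra
uncacheable line would be redundant, and no hit-for-pass entry is created
for it.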
Thanks
Nigel

From rnickb731 at gmail.com  Fri May 12 00:33:57 2017
From: rnickb731 at gmail.com (Ryan Burn)
Date: Thu, 11 May 2017 20:33:57 -0400
Subject: Any way to invoke code from VCL after a request has been serviced?
In-Reply-To: 
References: 
Message-ID: 

From the free function, is there any way to get the status code or
other properties of the request? I tried using VRT_r_obj_status with a
stored reference to the context, but that doesn't seem to work since
some of the request's resources have already been reclaimed:

https://github.com/rnburn/varnish-opentracing/blob/master/opentracing/src/trace.cpp#L22

Is there any other place something like the status would be stored?


On Mon, May 8, 2017 at 11:13 AM, Reza Naghibi wrote:
> Sorry, email misfire.
>
> You can do this in a VMOD via PRIV_TASK:
>
> https://varnish-cache.org/docs/trunk/reference/vmod.html#private-pointers
>
> It might make sense to track this stuff in some kind of struct, in which
> case, put it into *priv and then register a *free callback. Otherwise, just
> put a dummy value into the *priv. *free will get called after the request is
> done and you can put your custom code in there.
>
> --
> Reza Naghibi
> Varnish Software
>
> On Mon, May 8, 2017 at 11:10 AM, Reza Naghibi
> wrote:
>>
>> You can do this in a VMOD via PRIV_TASK:
>>
>>
>> --
>> Reza Naghibi
>> Varnish Software
>>
>> On Fri, May 5, 2017 at 10:15 PM, Ryan Burn wrote:
>>>
>>> Hello,
>>> From VCL, is it possible to execute code that runs after a request has
>>> been processed?
>>>
>>> I'm looking into writing a module that enables Varnish for distributed
>>> tracing using the OpenTracing project [opentracing.io]. This requires
>>> invoking code at the beginning of a request to start a span and insert
>>> tracing context into the request's headers and invoking code after a
>>> request's been processed to finish the span and measure how long it
>>> took to process.
>>> >>> I recently did a similar project for nginx >>> [github.com/rnburn/nginx-opentracing]. Nginx provides an >>> NGX_HTTP_LOG_PHASE [www.nginxguts.com/2011/01/phases/] that allows you >>> to set up handlers run after requests are serviced. Can anything >>> equivalent be done using VCL? >>> >>> I image you could accomplish this by subscribing and regularly reading >>> from Varnish's shared memory log, but I'd much rather do it directly >>> if possible. >>> >>> Thanks, Ryan >>> >>> _______________________________________________ >>> varnish-misc mailing list >>> varnish-misc at varnish-cache.org >>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> > From rnickb731 at gmail.com Fri May 12 00:48:49 2017 From: rnickb731 at gmail.com (Ryan Burn) Date: Thu, 11 May 2017 20:48:49 -0400 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: Is it possible to set up a VUT.dispatch_f function in a varnish module? The OpenTracing API for managing a span requires operating on a single object; and since I need to modify varnish backend requests to inject tracing context into their headers, I would think it has to run from a module. Are the functions VUT_Init, VUT_Setup, VUT_Main, etc allowed to be invoked from a module, or are they only meant to work in a stand-alone process? On Tue, May 9, 2017 at 3:40 AM, Guillaume Quintard wrote: > Have a look at varnishncsa and varnishlog, and more specifically to the > function they set VUT.dispatch_f to, that should put you on the right > tracks. If not, ping me on IRC, or here. > > -- > Guillaume Quintard > > On Tue, May 9, 2017 at 1:08 AM, Ryan Burn wrote: >> >> Thanks Reza and Guillaume. I didn't realize there was a way to set up >> callbacks on the VSM. I think either of the approaches will work for >> me. >> >> On Mon, May 8, 2017 at 12:13 PM, Guillaume Quintard >> wrote: >> > >> > That's the way to do it in a vmod, indeed. 
>> > >> > However Ryan, I don't get why you are reluctant to use the logs. By >> > using >> > the c api, you can just define callbacks and get called everything a >> > request/transaction ends, so you don't need to read regularly. >> > -- >> > Guillaume Quintard >> > >> > >> > On May 8, 2017 17:45, "Reza Naghibi" wrote: >> > >> > You can do this in a VMOD via PRIV_TASK: >> > >> > >> > -- >> > Reza Naghibi >> > Varnish Software >> > >> > On Fri, May 5, 2017 at 10:15 PM, Ryan Burn wrote: >> >> >> >> Hello, >> >> From VCL, is it possible to execute code that runs after a request has >> >> been processed? >> >> >> >> I'm looking into writing a module that enables Varnish for distributed >> >> tracing using the OpenTracing project [opentracing.io]. This requires >> >> invoking code at the beginning of a request to start a span and insert >> >> tracing context into the request's headers and invoking code after a >> >> request's been processed to finish the span and measure how long it >> >> took to process. >> >> >> >> I recently did a similar project for nginx >> >> [github.com/rnburn/nginx-opentracing]. Nginx provides an >> >> NGX_HTTP_LOG_PHASE [www.nginxguts.com/2011/01/phases/] that allows you >> >> to set up handlers run after requests are serviced. Can anything >> >> equivalent be done using VCL? >> >> >> >> I image you could accomplish this by subscribing and regularly reading >> >> from Varnish's shared memory log, but I'd much rather do it directly >> >> if possible. 
>> >> >> >> Thanks, Ryan >> >> >> >> _______________________________________________ >> >> varnish-misc mailing list >> >> varnish-misc at varnish-cache.org >> >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > >> > >> > >> > _______________________________________________ >> > varnish-misc mailing list >> > varnish-misc at varnish-cache.org >> > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > >> > > > From guillaume at varnish-software.com Fri May 12 01:38:02 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Thu, 11 May 2017 18:38:02 -0700 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: you can put anything in the priv field of the task, but the issue is that you have to put that data in there, meaning a call to your vmod from the vcl. the VUT.dispatch_f function isn't to be called from a vmod, and I don't think you need to. Maybe it's time to take a step back, can you fill us in the whole workflow, notably: - what data do you inject, and how do you create it? - what do you need to know about the req/resp/bereq/beresp? I almost have the feeling that this could be solved through pure vcl+shell. -- Guillaume Quintard On Thu, May 11, 2017 at 5:33 PM, Ryan Burn wrote: > From the free function, is there any way to get the status code or > other properties of the request? I tried using VRT_r_obj_status with a > stored reference to the context, but that doesn't seem to work since > some of the request's resources have already been reclaimed: > > https://github.com/rnburn/varnish-opentracing/blob/ > master/opentracing/src/trace.cpp#L22 > > Is there any other place something like the status would be stored? > > > On Mon, May 8, 2017 at 11:13 AM, Reza Naghibi > wrote: > > Sorry, email misfire. > > > > You can do this in a VMOD via PRIV_TASK: > > > > https://varnish-cache.org/docs/trunk/reference/vmod. 
> html#private-pointers > > > > It might make sense to track this stuff in some kind of struct, in which > > case, put it into *priv and then register a *free callback. Otherwise, > just > > put a dummy value into the *priv. *free will get called after the > request is > > done and you can put your custom code in there. > > > > -- > > Reza Naghibi > > Varnish Software > > > > On Mon, May 8, 2017 at 11:10 AM, Reza Naghibi > > > wrote: > >> > >> You can do this in a VMOD via PRIV_TASK: > >> > >> > >> -- > >> Reza Naghibi > >> Varnish Software > >> > >> On Fri, May 5, 2017 at 10:15 PM, Ryan Burn wrote: > >>> > >>> Hello, > >>> From VCL, is it possible to execute code that runs after a request has > >>> been processed? > >>> > >>> I'm looking into writing a module that enables Varnish for distributed > >>> tracing using the OpenTracing project [opentracing.io]. This requires > >>> invoking code at the beginning of a request to start a span and insert > >>> tracing context into the request's headers and invoking code after a > >>> request's been processed to finish the span and measure how long it > >>> took to process. > >>> > >>> I recently did a similar project for nginx > >>> [github.com/rnburn/nginx-opentracing]. Nginx provides an > >>> NGX_HTTP_LOG_PHASE [www.nginxguts.com/2011/01/phases/] that allows you > >>> to set up handlers run after requests are serviced. Can anything > >>> equivalent be done using VCL? > >>> > >>> I image you could accomplish this by subscribing and regularly reading > >>> from Varnish's shared memory log, but I'd much rather do it directly > >>> if possible. 
> >>> > >>> Thanks, Ryan > >>> > >>> _______________________________________________ > >>> varnish-misc mailing list > >>> varnish-misc at varnish-cache.org > >>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > >> > >> > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Fri May 12 01:40:13 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Thu, 11 May 2017 18:40:13 -0700 Subject: Hit for Pass after "return(pass)" in vcl_recv In-Reply-To: <2bdfb20e-0830-0ab0-3732-579fafa25a1e@sharphosting.uk> References: <2bdfb20e-0830-0ab0-3732-579fafa25a1e@sharphosting.uk> Message-ID: IIRC, no it won't because it's a pass, so the new object won't enter the cache at all. -- Guillaume Quintard On Thu, May 11, 2017 at 4:40 PM, Nigel Peck wrote: > > Hi, > > I'm wondering what happens with requests that were marked return(pass) by > vcl_recv, when they get to vcl_backend_response. If I do the following: > > set beresp.http.Cache-Control = "private, max-age=0, no-cache, no-store"; > set beresp.http.Expires = "Mon, 01 Jan 2001 00:00:00 GMT"; > set beresp.ttl = 120s; > set beresp.uncacheable = true; > return (deliver); > > Is that going to mark them hit-for-pass? Or can they not be cached in any > way now? (even hit for pass) Is the "uncacheable" redundant because they > were already marked for pass? > > I would appreciate any further info on this. > > Thanks > Nigel > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From np.lists at sharphosting.uk Fri May 12 03:44:24 2017 From: np.lists at sharphosting.uk (Nigel Peck) Date: Thu, 11 May 2017 22:44:24 -0500 Subject: Hit for Pass after "return(pass)" in vcl_recv In-Reply-To: References: <2bdfb20e-0830-0ab0-3732-579fafa25a1e@sharphosting.uk> Message-ID: <0376ad2b-ba5d-d320-9654-16e6cb205cca@sharphosting.uk> Thanks for this. I did some testing and can confirm it won't enter the cache at all as you say. Nigel On 11/05/2017 20:40, Guillaume Quintard wrote: > IIRC, no it won't because it's a pass, so the new object won't enter the > cache at all. > > -- > Guillaume Quintard > > On Thu, May 11, 2017 at 4:40 PM, Nigel Peck > wrote: > > > Hi, > > I'm wondering what happens with requests that were marked > return(pass) by vcl_recv, when they get to vcl_backend_response. If > I do the following: > > set beresp.http.Cache-Control = "private, max-age=0, no-cache, > no-store"; > set beresp.http.Expires = "Mon, 01 Jan 2001 00:00:00 GMT"; > set beresp.ttl = 120s; > set beresp.uncacheable = true; > return (deliver); > > Is that going to mark them hit-for-pass? Or can they not be cached > in any way now? (even hit for pass) Is the "uncacheable" redundant > because they were already marked for pass? > > I would appreciate any further info on this. > > Thanks > Nigel > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > From rnickb731 at gmail.com Fri May 12 05:20:03 2017 From: rnickb731 at gmail.com (Ryan Burn) Date: Fri, 12 May 2017 01:20:03 -0400 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: Sure. The intention with OpenTracing is to provide a common API that can be used to instrument frameworks and libraries. What happens when, for example, a span is created or its context injected into a request header isn't specified by the standard.
It's up to the particular tracing implementation used (e.g. LightStep, Zipkin, Jaeger, etc) to decide what specifically to do. So, if a user wants to enable varnish for OpenTracing, I'd expect them to do something like the following in their VCL: ### # This is distributed as part of the varnish-opentracing project. It imports a varnish module # that exposes VCL functions to interact with the C++ OpenTracing API # https://github.com/opentracing/opentracing-cpp # and adds commands to the VCL built-in subroutines so that the module's functions will # be invoked when certain events occur. include "opentracing.vcl"; # A user also needs to select a tracing implementation to use. This is done by importing # the implementation's module and initializing the tracer in vcl_init. For example, if they're # using LightStep they might do something like this import lightstep; sub vcl_init { lightstep.collector_host("..."); lightstep.collector_port("..."); lightstep.init_tracer(); } # Tracing is then explicitly turned on for a particular request with logic in the vcl_recv subroutine. # This means that a span will be created for the request and any backend requests that result from it. # The trace's context will also be propagated into the backend request headers, so that any tracing done # by the backend server can be linked to it. sub vcl_recv { # This turns on tracing for all requests. opentracing.trace_request(); } ### Though all the pieces aren't together, I have an example set up here https://github.com/rnburn/varnish-opentracing/blob/master/example/library/varnish/library.vcl To go back to the questions: - what data do you inject, and how do you create it? You would be injecting a list of key-value pairs that represent the context of the active span (https://github.com/opentracing/specification/blob/master/specification.md#spancontext).
Specifically what that means is up to the tracing implementation, but it would look something like this: 'ot-tracer-traceid': 'f84de504f0287bbc' // An ID used to identify the Trace. 'ot-tracer-spanid': 'e34088878e7f0ce8' // An ID used to identify the active span within the Trace. 'ot-tracer-sampled': 'true' // Heuristic used by the Tracer - what do you need to know about the req/resp/bereq/beresp? Knowing whether the request resulted in an error is pretty important to record. Other data usually added are the URI, http method, ip addresses of the client, server, and backend servers. Some of the guidelines on what to include are documented here: https://github.com/opentracing/specification/blob/master/semantic_conventions.md An example might make this clearer. This shows the breakdown of a trace representing the action of a user submitting a profile form: https://github.com/rnburn/nginx-opentracing/blob/master/doc/data/nginx-upload-trace5.png The server (described in more detail here https://github.com/rnburn/nginx-opentracing/blob/master/doc/Tutorial.md) uses nginx as a reverse proxy in front of Node.js servers that update a database and perform image manipulation. You can see spans created on the nginx side to track the duration of the request and how long it passes through various location blocks as well as spans created from the Node.js server to represent the database activity and image manipulation. Injecting context into the request headers is what allows the spans to be linked together so that the entire trace can be formed. On Thu, May 11, 2017 at 9:38 PM, Guillaume Quintard wrote: > you can put anything in the priv field of the task, but the issue is that > you have to put that data in there, meaning a call to your vmod from the > vcl. > > the VUT.dispatch_f function isn't to be called from a vmod, and I don't > think you need to. 
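As a rough illustration of the context propagation discussed above, plain VCL can already forward span-context headers to the backend; the header names follow the ot-tracer-* example earlier in this thread, and the ID values below are placeholders, not generated by any real tracer:

```vcl
vcl 4.0;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # If the client sent no span context, start a new one. A real
    # tracer vmod would generate these IDs; the values here are
    # placeholders for illustration only.
    if (!req.http.ot-tracer-traceid) {
        set req.http.ot-tracer-traceid = "f84de504f0287bbc";
        set req.http.ot-tracer-sampled = "true";
    }
    # Client request headers are copied onto the backend request, so
    # the context reaches the backend without further VCL.
}
```

Creating per-fetch span IDs and reporting timings still needs a vmod, which is where the PRIV_TASK discussion above comes in.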
> > Maybe it's time to take a step back, can you fill us in the whole workflow, > notably: > - what data do you inject, and how do you create it? > - what do you need to know about the req/resp/bereq/beresp? > > I almost have the feeling that this could be solved through pure vcl+shell. > > -- > Guillaume Quintard > > On Thu, May 11, 2017 at 5:33 PM, Ryan Burn wrote: >> >> From the free function, is there any way to get the status code or >> other properties of the request? I tried using VRT_r_obj_status with a >> stored reference to the context, but that doesn't seem to work since >> some of the request's resources have already been reclaimed: >> >> >> https://github.com/rnburn/varnish-opentracing/blob/master/opentracing/src/trace.cpp#L22 >> >> Is there any other place something like the status would be stored? >> >> >> On Mon, May 8, 2017 at 11:13 AM, Reza Naghibi >> wrote: >> > Sorry, email misfire. >> > >> > You can do this in a VMOD via PRIV_TASK: >> > >> > >> > https://varnish-cache.org/docs/trunk/reference/vmod.html#private-pointers >> > >> > It might make sense to track this stuff in some kind of struct, in which >> > case, put it into *priv and then register a *free callback. Otherwise, >> > just >> > put a dummy value into the *priv. *free will get called after the >> > request is >> > done and you can put your custom code in there. >> > >> > -- >> > Reza Naghibi >> > Varnish Software >> > >> > On Mon, May 8, 2017 at 11:10 AM, Reza Naghibi >> > >> > wrote: >> >> >> >> You can do this in a VMOD via PRIV_TASK: >> >> >> >> >> >> -- >> >> Reza Naghibi >> >> Varnish Software >> >> >> >> On Fri, May 5, 2017 at 10:15 PM, Ryan Burn wrote: >> >>> >> >>> Hello, >> >>> From VCL, is it possible to execute code that runs after a request has >> >>> been processed? >> >>> >> >>> I'm looking into writing a module that enables Varnish for distributed >> >>> tracing using the OpenTracing project [opentracing.io]. 
This requires >> >>> invoking code at the beginning of a request to start a span and insert >> >>> tracing context into the request's headers and invoking code after a >> >>> request's been processed to finish the span and measure how long it >> >>> took to process. >> >>> >> >>> I recently did a similar project for nginx >> >>> [github.com/rnburn/nginx-opentracing]. Nginx provides an >> >>> NGX_HTTP_LOG_PHASE [www.nginxguts.com/2011/01/phases/] that allows you >> >>> to set up handlers run after requests are serviced. Can anything >> >>> equivalent be done using VCL? >> >>> >> >>> I image you could accomplish this by subscribing and regularly reading >> >>> from Varnish's shared memory log, but I'd much rather do it directly >> >>> if possible. >> >>> >> >>> Thanks, Ryan >> >>> >> >>> _______________________________________________ >> >>> varnish-misc mailing list >> >>> varnish-misc at varnish-cache.org >> >>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> >> >> > >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > From guillaume at varnish-software.com Mon May 15 21:19:50 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Mon, 15 May 2017 14:19:50 -0700 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: Thanks Ryan, I think I have a clearer picture now. So, indeed, I think you only need a light vmod and a log analyzer.
You may want a specific function to set the component name per-request, but that can easily be done through std.log, so I wouldn't worry about it at first. Am I completely off, or is that at least remotely sensible? -- Guillaume Quintard On Thu, May 11, 2017 at 10:20 PM, Ryan Burn wrote: > Sure. The intention with OpenTracing is to provide a common API that > can be used to instrument frameworks and libraries. What happens when, > for example, a span is created or its context injected into a request > header isn?t specified by the standard. It?s up to the particular > tracing implementation used (e.g. LightStep, Zipkin, Jaeger, etc) to > decide what specifically to do. > > So, if a user wants to enable varnish for OpenTracing, I?d expect them > do something like the following in their VCL: > > ### > # This is distributed as part of the varnish-opentracing project. It > imports a varnish module > # that exposes VCL functions to interact with the C++ OpenTracing API > # https://github.com/opentracing/opentracing-cpp > # and adds commands to the VCL built-in subroutines so that the > module?s functions will > # be invoked when certain events occur. > include ?opentracing.vcl?; > > > # A user also needs to select a tracing implementation to use. This is > done by importing > # the implementation?s module and initializing the tracer in vcl_init. > For example, if they?re > # using LightStep they might do something like this > import lightstep; > > sub vcl_init { > lightstep.collector_host(???); > lightstep.collector_port(???); > lightstep.init_tracer(); > } > > > # Tracing is then explicitly turned on for a particular request with > logic in the vcl_recv subroutine. > # This means that a span will be created for the request and any > backend requests that result from it. > # The trace?s context will also be propagated into the backend request > headers, so that any tracing done > # by the backend server can be linked to it. 
> sub vcl_recv { > # This turns on tracing for all requests. > opentracing.trace_request(); > } > ### > > > Though all the pieces aren?t together, I have an example set up here > > https://github.com/rnburn/varnish-opentracing/blob/master/example/library/ > varnish/library.vcl > > To go back to the questions: > > - what data do you inject, and how do you create it? > You would be injecting a list of key-value pairs that represent the > context of the active span > (https://github.com/opentracing/specification/ > blob/master/specification.md#spancontext). > Specifically what that means is up to the tracing implementation, but > it would look something like this: > > > 'ot-tracer-traceid': 'f84de504f0287bbc' // An ID used to > identify the Trace. > 'ot-tracer-spanid': 'e34088878e7f0ce8' // An ID used to identify > the active span within the Trace. > 'ot-tracer-sampled': 'true' // Heuristic > used by the Tracer > > - what do you need to know about the req/resp/bereq/beresp? > Knowing whether the request resulted in an error is pretty important > to record. Other data usually added > are the URI, http method, ip addresses of the client, server, and > backend servers. Some of the guidelines on what to include are > documented here: > > https://github.com/opentracing/specification/blob/master/semantic_ > conventions.md > > > An example might make this clearer. This shows the breakdown of a > trace representing the action of a user submitting a profile form: > > https://github.com/rnburn/nginx-opentracing/blob/master/ > doc/data/nginx-upload-trace5.png > > The server (described in more detail here > https://github.com/rnburn/nginx-opentracing/blob/master/doc/Tutorial.md) > uses nginx as a reverse proxy in front of Node.js servers that update > a database and perform image manipulation. 
You can see spans created > on the nginx side to track the duration of the request and how long it > passes through various location blocks as well as spans created from > the Node.js server to represent the database activity and image > manipulation. Injecting context into the request headers is what > allows the spans to be linked together so that the entire trace can be > formed. > > On Thu, May 11, 2017 at 9:38 PM, Guillaume Quintard > wrote: > > you can put anything in the priv field of the task, but the issue is that > > you have to put that data in there, meaning a call to your vmod from the > > vcl. > > > > the VUT.dispatch_f function isn't to be called from a vmod, and I don't > > think you need to. > > > > Maybe it's time to take a step back, can you fill us in the whole > workflow, > > notably: > > - what data do you inject, and how do you create it? > > - what do you need to know about the req/resp/bereq/beresp? > > > > I almost have the feeling that this could be solved through pure > vcl+shell. > > > > -- > > Guillaume Quintard > > > > On Thu, May 11, 2017 at 5:33 PM, Ryan Burn wrote: > >> > >> From the free function, is there any way to get the status code or > >> other properties of the request? I tried using VRT_r_obj_status with a > >> stored reference to the context, but that doesn't seem to work since > >> some of the request's resources have already been reclaimed: > >> > >> > >> https://github.com/rnburn/varnish-opentracing/blob/ > master/opentracing/src/trace.cpp#L22 > >> > >> Is there any other place something like the status would be stored? > >> > >> > >> On Mon, May 8, 2017 at 11:13 AM, Reza Naghibi < > reza at varnish-software.com> > >> wrote: > >> > Sorry, email misfire. > >> > > >> > You can do this in a VMOD via PRIV_TASK: > >> > > >> > > >> > https://varnish-cache.org/docs/trunk/reference/vmod. 
> html#private-pointers > >> > > >> > It might make sense to track this stuff in some kind of struct, in > which > >> > case, put it into *priv and then register a *free callback. Otherwise, > >> > just > >> > put a dummy value into the *priv. *free will get called after the > >> > request is > >> > done and you can put your custom code in there. > >> > > >> > -- > >> > Reza Naghibi > >> > Varnish Software > >> > > >> > On Mon, May 8, 2017 at 11:10 AM, Reza Naghibi > >> > > >> > wrote: > >> >> > >> >> You can do this in a VMOD via PRIV_TASK: > >> >> > >> >> > >> >> -- > >> >> Reza Naghibi > >> >> Varnish Software > >> >> > >> >> On Fri, May 5, 2017 at 10:15 PM, Ryan Burn > wrote: > >> >>> > >> >>> Hello, > >> >>> From VCL, is it possible to execute code that runs after a request > has > >> >>> been processed? > >> >>> > >> >>> I'm looking into writing a module that enables Varnish for > distributed > >> >>> tracing using the OpenTracing project [opentracing.io]. This > requires > >> >>> invoking code at the beginning of a request to start a span and > insert > >> >>> tracing context into the request's headers and invoking code after a > >> >>> request's been processed to finish the span and measure how long it > >> >>> took to process. > >> >>> > >> >>> I recently did a similar project for nginx > >> >>> [github.com/rnburn/nginx-opentracing]. Nginx provides an > >> >>> NGX_HTTP_LOG_PHASE [www.nginxguts.com/2011/01/phases/] that allows > you > >> >>> to set up handlers run after requests are serviced. Can anything > >> >>> equivalent be done using VCL? > >> >>> > >> >>> I image you could accomplish this by subscribing and regularly > reading > >> >>> from Varnish's shared memory log, but I'd much rather do it directly > >> >>> if possible. 
> >> >>> > >> >>> Thanks, Ryan > >> >>> > >> >>> _______________________________________________ > >> >>> varnish-misc mailing list > >> >>> varnish-misc at varnish-cache.org > >> >>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > >> >> > >> >> > >> > > >> > >> _______________________________________________ > >> varnish-misc mailing list > >> varnish-misc at varnish-cache.org > >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rnickb731 at gmail.com Tue May 16 03:23:52 2017 From: rnickb731 at gmail.com (Ryan Burn) Date: Mon, 15 May 2017 23:23:52 -0400 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: Definitely closer. But I'm not sure how that approach would work without having the log analyzer portion built into the VMOD. A restriction of the OpenTracing API is that the functions to start, attach tags, and finish a span all need to act on the same object; it looks roughly like this: span = tracer.StartSpan(/* start time */, /* parent-span if exists */); span.SetTag(/* key */, /* value */); span.Finish(/* finish time */); You couldn't, for example, have the span created in a VMOD and then have a separate process analyzing the logs attach the tags and specify the span's duration. Not sure if it's possible, but if I could use the free function set in a PRIV_TOP structure to query the status code of the response sent, that would, I think, work well since I could avoid the complexity of setting up a VSM reader in a VMOD and pulling out the data from the log hierarchy. On Mon, May 15, 2017 at 5:19 PM, Guillaume Quintard wrote: > Thanks Ryan, I think I have a clearer picture now. > > So, indeed, I think you only need a light vmod and a log analyzer.
> > If I get what's happening, you need a vmod call in vcl_recv to create a > trace if it doesn't exist yet, and in vcl_backend_fetch to create a span. > Then, you can simply look at the log, and you'll have all the meta data you > need (including IP, ports, timings and such). > > You may want a specific function to set the component name per-request, but > that can easily be done through std.log, so I wouldn't worry about it at > first. > > Am I completely off, or is that at least remotely sensible? > > > -- > Guillaume Quintard > > On Thu, May 11, 2017 at 10:20 PM, Ryan Burn wrote: >> >> Sure. The intention with OpenTracing is to provide a common API that >> can be used to instrument frameworks and libraries. What happens when, >> for example, a span is created or its context injected into a request >> header isn?t specified by the standard. It?s up to the particular >> tracing implementation used (e.g. LightStep, Zipkin, Jaeger, etc) to >> decide what specifically to do. >> >> So, if a user wants to enable varnish for OpenTracing, I?d expect them >> do something like the following in their VCL: >> >> ### >> # This is distributed as part of the varnish-opentracing project. It >> imports a varnish module >> # that exposes VCL functions to interact with the C++ OpenTracing API >> # https://github.com/opentracing/opentracing-cpp >> # and adds commands to the VCL built-in subroutines so that the >> module?s functions will >> # be invoked when certain events occur. >> include ?opentracing.vcl?; >> >> >> # A user also needs to select a tracing implementation to use. This is >> done by importing >> # the implementation?s module and initializing the tracer in vcl_init. 
>> For example, if they?re >> # using LightStep they might do something like this >> import lightstep; >> >> sub vcl_init { >> lightstep.collector_host(???); >> lightstep.collector_port(???); >> lightstep.init_tracer(); >> } >> >> >> # Tracing is then explicitly turned on for a particular request with >> logic in the vcl_recv subroutine. >> # This means that a span will be created for the request and any >> backend requests that result from it. >> # The trace?s context will also be propagated into the backend request >> headers, so that any tracing done >> # by the backend server can be linked to it. >> sub vcl_recv { >> # This turns on tracing for all requests. >> opentracing.trace_request(); >> } >> ### >> >> >> Though all the pieces aren?t together, I have an example set up here >> >> >> https://github.com/rnburn/varnish-opentracing/blob/master/example/library/varnish/library.vcl >> >> To go back to the questions: >> >> - what data do you inject, and how do you create it? >> You would be injecting a list of key-value pairs that represent the >> context of the active span >> >> (https://github.com/opentracing/specification/blob/master/specification.md#spancontext). >> Specifically what that means is up to the tracing implementation, but >> it would look something like this: >> >> >> 'ot-tracer-traceid': 'f84de504f0287bbc' // An ID used to >> identify the Trace. >> 'ot-tracer-spanid': 'e34088878e7f0ce8' // An ID used to identify >> the active span within the Trace. >> 'ot-tracer-sampled': 'true' // Heuristic >> used by the Tracer >> >> - what do you need to know about the req/resp/bereq/beresp? >> Knowing whether the request resulted in an error is pretty important >> to record. Other data usually added >> are the URI, http method, ip addresses of the client, server, and >> backend servers. 
Some of the guidelines on what to include are >> documented here: >> >> >> https://github.com/opentracing/specification/blob/master/semantic_conventions.md >> >> >> An example might make this clearer. This shows the breakdown of a >> trace representing the action of a user submitting a profile form: >> >> >> https://github.com/rnburn/nginx-opentracing/blob/master/doc/data/nginx-upload-trace5.png >> >> The server (described in more detail here >> https://github.com/rnburn/nginx-opentracing/blob/master/doc/Tutorial.md) >> uses nginx as a reverse proxy in front of Node.js servers that update >> a database and perform image manipulation. You can see spans created >> on the nginx side to track the duration of the request and how long it >> passes through various location blocks as well as spans created from >> the Node.js server to represent the database activity and image >> manipulation. Injecting context into the request headers is what >> allows the spans to be linked together so that the entire trace can be >> formed. >> >> On Thu, May 11, 2017 at 9:38 PM, Guillaume Quintard >> wrote: >> > you can put anything in the priv field of the task, but the issue is >> > that >> > you have to put that data in there, meaning a call to your vmod from the >> > vcl. >> > >> > the VUT.dispatch_f function isn't to be called from a vmod, and I don't >> > think you need to. >> > >> > Maybe it's time to take a step back, can you fill us in the whole >> > workflow, >> > notably: >> > - what data do you inject, and how do you create it? >> > - what do you need to know about the req/resp/bereq/beresp? >> > >> > I almost have the feeling that this could be solved through pure >> > vcl+shell. >> > >> > -- >> > Guillaume Quintard >> > >> > On Thu, May 11, 2017 at 5:33 PM, Ryan Burn wrote: >> >> >> >> From the free function, is there any way to get the status code or >> >> other properties of the request? 
I tried using VRT_r_obj_status with a >> >> stored reference to the context, but that doesn't seem to work since >> >> some of the request's resources have already been reclaimed: >> >> >> >> >> >> >> >> https://github.com/rnburn/varnish-opentracing/blob/master/opentracing/src/trace.cpp#L22 >> >> >> >> Is there any other place something like the status would be stored? >> >> >> >> >> >> On Mon, May 8, 2017 at 11:13 AM, Reza Naghibi >> >> >> >> wrote: >> >> > Sorry, email misfire. >> >> > >> >> > You can do this in a VMOD via PRIV_TASK: >> >> > >> >> > >> >> > >> >> > https://varnish-cache.org/docs/trunk/reference/vmod.html#private-pointers >> >> > >> >> > It might make sense to track this stuff in some kind of struct, in >> >> > which >> >> > case, put it into *priv and then register a *free callback. >> >> > Otherwise, >> >> > just >> >> > put a dummy value into the *priv. *free will get called after the >> >> > request is >> >> > done and you can put your custom code in there. >> >> > >> >> > -- >> >> > Reza Naghibi >> >> > Varnish Software >> >> > >> >> > On Mon, May 8, 2017 at 11:10 AM, Reza Naghibi >> >> > >> >> > wrote: >> >> >> >> >> >> You can do this in a VMOD via PRIV_TASK: >> >> >> >> >> >> >> >> >> -- >> >> >> Reza Naghibi >> >> >> Varnish Software >> >> >> >> >> >> On Fri, May 5, 2017 at 10:15 PM, Ryan Burn >> >> >> wrote: >> >> >>> >> >> >>> Hello, >> >> >>> From VCL, is it possible to execute code that runs after a request >> >> >>> has >> >> >>> been processed? >> >> >>> >> >> >>> I'm looking into writing a module that enables Varnish for >> >> >>> distributed >> >> >>> tracing using the OpenTracing project [opentracing.io]. This >> >> >>> requires >> >> >>> invoking code at the beginning of a request to start a span and >> >> >>> insert >> >> >>> tracing context into the request's headers and invoking code after >> >> >>> a >> >> >>> request's been processed to finish the span and measure how long it >> >> >>> took to process. 
>> >> >>> >> >> >>> I recently did a similar project for nginx >> >> >>> [github.com/rnburn/nginx-opentracing]. Nginx provides an >> >> >>> NGX_HTTP_LOG_PHASE [www.nginxguts.com/2011/01/phases/] that allows >> >> >>> you >> >> >>> to set up handlers run after requests are serviced. Can anything >> >> >>> equivalent be done using VCL? >> >> >>> >> >> >>> I image you could accomplish this by subscribing and regularly >> >> >>> reading >> >> >>> from Varnish's shared memory log, but I'd much rather do it >> >> >>> directly >> >> >>> if possible. >> >> >>> >> >> >>> Thanks, Ryan >> >> >>> >> >> >>> _______________________________________________ >> >> >>> varnish-misc mailing list >> >> >>> varnish-misc at varnish-cache.org >> >> >>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> >> >> >> >> >> >> >> > >> >> >> >> _______________________________________________ >> >> varnish-misc mailing list >> >> varnish-misc at varnish-cache.org >> >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > >> > > > From dridi at varni.sh Tue May 16 07:53:38 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Tue, 16 May 2017 09:53:38 +0200 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: On Tue, May 16, 2017 at 5:23 AM, Ryan Burn wrote: > Definitely closer. But I?m not sure how that approach would work > without having the log analyzer portion built into the VMOD. A > restriction of the OpenTracing API is that the functions to start, > attach tags, and finish a span all need to act on the same object ? it > looks roughly like this: > > span = tracer.StartSpan(/* start time */, /* parent-span if exists /*); > span.SetTag(/* key */, /* value */); > span.Finish(/* finish time */); > > You couldn?t, for example, have the span created in a VMOD and then > have a separate process analyzing the logs attach the tags and specify > the span?s duration. 
How about a VMOD doing only the StartSpan, std.log for logging keys and whatnot, and the VUT doing all the rest when it processes the transaction's logs? The Finish operation appears to take a timestamp anyway, so it doesn't need to happen exactly when the transaction finishes, does it? You could also let your VMOD do the logging (SetTag) to not bother users with the span id for every log statement by putting the id in the relevant PRIV_ scope. The syntax you are looking for is not available in VCL. > Not sure if it's possible, but if I could use the free function set in > a PRIV_TOP structure to query the status code of the response sent, > that would, I think, work well since I could avoid the complexity of > setting up a VSM reader in a VMOD and pulling out the data from the > log hierarchy. Accessing the VSM in a VMOD is possible but I'd recommend not to. Dridi From rnickb731 at gmail.com Wed May 17 21:07:08 2017 From: rnickb731 at gmail.com (Ryan Burn) Date: Wed, 17 May 2017 17:07:08 -0400 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: > How about a VMOD doing only the StartSpan, std.log for logging keys > and whatnot, and the VUT doing all the rest when it processes the > transaction's logs? But the VUT would be running in a separate process, no? If so, how does it access the span object returned when you start the span? The OpenTracing API doesn't support augmenting a span with information later. The functions to set the tags, finish time, etc. all have to act on the same object that's returned when you first start the span. From sreeranj4droid at gmail.com Thu May 18 06:56:14 2017 From: sreeranj4droid at gmail.com (sreeranj s) Date: Thu, 18 May 2017 12:26:14 +0530 Subject: Any option to identify if a request is pass in browser headers. Message-ID: Hi All, The following is added in the vcl_deliver section to identify if the content is served from the cache or from the backend. 
Similarly, is there an option that we can use to identify if the request was a PASS?

==========================================

sub vcl_deliver {

    # Sometimes it's nice to see when content has been served from the cache.
    if (obj.hits > 0) {
        # If the object came from the cache, set an HTTP header to say so
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }

    # For security and aesthetic reasons, remove some HTTP headers before final delivery...
    remove resp.http.Server;
    remove resp.http.X-Powered-By;
    remove resp.http.Via;
    remove resp.http.X-Varnish;
}

==========================================

-------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Thu May 18 08:45:07 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Thu, 18 May 2017 10:45:07 +0200 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: On Wed, May 17, 2017 at 11:07 PM, Ryan Burn wrote: >> How about a VMOD doing only the StartSpan, std.log for logging keys >> and whatnot, and the VUT doing all the rest when it processes the >> transaction's logs? > > But the VUT would be running in a separate process, no? If so, how > does it access the span object returned when you start the span. The > OpenTracing API doesn't support augmenting information to a span > later. The functions to set the tags, finish time, etc all have to act > on the same object that's returned when you first start the span. You log whatever needs to ultimately be transmitted to the OT server and let the VUT pick the logs and do the actual API calls. Performing blocking (API) calls from VCL is also a performance killer, that's why Varnish logs in memory and can let VUTs find what they need to do the dirty work. Take varnishncsa for example. 
The varnishd process that is technically the HTTP server doesn't do access logs and instead it's a separate VUT that picks up the relevant information from the shmlog to produce them. Dridi From colas.delmas at gmail.com Thu May 18 10:00:28 2017 From: colas.delmas at gmail.com (Nicolas Delmas) Date: Thu, 18 May 2017 12:00:28 +0200 Subject: Any option to identify if a request is pass in browser headers. In-Reply-To: References: Message-ID: Hi, Yes, it's possible. I did something similar in my configuration: I use the header req.http.X-Pass to set the reason why I get a MISS / PASS:

*Example*

sub vcl_recv {
    if (req.url ~ "\?s=" || req.url ~ "/feed" || req.url ~ "/mu-.*" ||
        req.url ~ "/wp-(login|admin)" ||
        req.url ~ "/(cart|my-account|checkout|addons|/?add-to-cart=)") {
        set req.http.X-Pass = "Wordpress Urls";
        return (pass);
    }
}

In the vcl_miss subroutine, used when your object is not in cache, I add the header too.

sub vcl_miss {
    set req.http.X-Pass = "Not-in-cache";
    return (fetch);
}

I changed the condition in vcl_deliver to display the reason for the MISS (and you could differentiate whether it was a real MISS or a PASS), and use the condition (*obj.uncacheable*) that allows me to know if the object has been marked as Hit-For-Pass:

sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else if (obj.uncacheable && req.http.X-Pass ~ "Not-in-cache") {
        set resp.http.X-Cache = "MISS : Hit-For-Pass";
    } else {
        set resp.http.X-Cache = "MISS : " + req.http.X-Pass;
    }
}

So you could add a header in vcl_pass and test it in order to display PASS instead of MISS in vcl_deliver. Nicolas *Nicolas Delmas* http://tutoandco.colas-delmas.fr/ 2017-05-18 8:56 GMT+02:00 sreeranj s : > Hi All, > > > Following is added in vcl-deliver section to identify if the content is > served from cache or from backend. 
Similarly is there an option that we can > identify if the request was a PASS > > ========================================== > > sub vcl_deliver { > > # Sometimes it's nice to see when content has been served from the cache. > if (obj.hits > 0) { > # If the object came from the cache, set an HTTP header to say so > set resp.http.X-Cache = "HIT"; > } else { > set resp.http.X-Cache = "MISS"; > } > > # For security and aesthetic reasons, remove some HTTP headers before final delivery... > remove resp.http.Server; > remove resp.http.X-Powered-By; > remove resp.http.Via; > remove resp.http.X-Varnish;} > > > > ========================================== > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rnickb731 at gmail.com Thu May 18 19:20:28 2017 From: rnickb731 at gmail.com (Ryan Burn) Date: Thu, 18 May 2017 15:20:28 -0400 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: I don't see how that could work. The context for the active span needs to be injected into the headers of any backend requests. The headers need to be modified from the varnishd process, right? And you can't inject a span context unless the span has been created, so I would think a VMOD would have to start the span as well; then since the OpenTracing API to specify other properties of the span requires acting on the object returned when you start the span, those would all need to be called from the VMOD. Also, none of the OpenTracing API functions should block for very long, if at all. The more expensive work such as uploading the tracing data would happen in a separate thread. 
On Thu, May 18, 2017 at 4:45 AM, Dridi Boukelmoune wrote: > On Wed, May 17, 2017 at 11:07 PM, Ryan Burn wrote: >>> How about a VMOD doing only the StartSpan, std.log for logging keys >>> and whatnot, and the VUT doing all the rest when it processes the >>> transaction's logs? >> >> But the VUT would be running in a separate process, no? If so, how >> does it access the span object returned when you start the span. The >> OpenTracing API doesn't support augmenting information to a span >> later. The functions to set the tags, finish time, etc all have to act >> on the same object that's returned when you first start the span. > > You log whatever needs to ultimately be transmitted to the OT server > and let the VUT pick the logs and do the actual API calls. Performing > blocking (API) calls from VCL is also a performance killer, that's why > Varnish logs in memory and can let VUTs find what they need to do the > dirty work. > > Take varnishncsa for example. The varnishd process that is technically > the HTTP server doesn't do access logs and instead it's a separate VUT > that picks up the relevant information from the shmlog to produce > them. > > Dridi From guillaume at varnish-software.com Sun May 21 15:56:11 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Sun, 21 May 2017 11:56:11 -0400 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: Best way I see for now is to capture a pointer to req in PRIV_TASK and use it during the release/free operation. You should be okay since that operation comes before the release of the req, but don't keep anything after that step. You'll lose some timing information, as well as the original http objects (you'll get the vcl-transformed ones) though. Would that suffice? -- Guillaume Quintard On Thu, May 18, 2017 at 3:20 PM, Ryan Burn wrote: > I don't see how that could work. 
The context for the active span needs > to be injected into the headers of any backend requests. The headers > need to be modified from the varnishd process, right? And you can't > inject a span context unless the span has been created, so I would > think a VMOD would have to start the span as well; then since the > OpenTracing API to specify other properties of the span requires > acting on the object returned when you start the span, those would all > need to called from the VMOD. > > Also, none of the OpenTracing API functions should block for very > long, if at all. The more expensive work such as uploading the tracing > data would happen in a separate thread. > > On Thu, May 18, 2017 at 4:45 AM, Dridi Boukelmoune wrote: > > On Wed, May 17, 2017 at 11:07 PM, Ryan Burn wrote: > >>> How about a VMOD doing the only StartSpan, std.log for logging keys > >>> and whatnot, and the VUT doing all the rest when it processes the > >>> transaction's logs? > >> > >> But the VUT would be running in a separate process, no? If so, how > >> does it access the span object returned when you start the span. The > >> OpenTracing API doesn't support augmenting information to a span > >> later. The functions to set the tags, finish time, etc all have to act > >> on the same object that's returned when you first start the span. > > > > You log whatever needs to ultimately be transmitted to the OT server > > and let the VUT pick the logs and do the actual API calls. Performing > > blocking (API) calls from VCL is also a performance killer, that's why > > Varnish logs in memory and can let VUTs find what they need to do the > > dirty work. > > > > Take varnishncsa for example. The varnishd process that is technically > > the HTTP server doesn't do access logs and instead it's a separate VUT > > that picks up the relevant information from the shmlog to produce > > them. > > > > Dridi > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rnickb731 at gmail.com Mon May 22 22:31:33 2017 From: rnickb731 at gmail.com (Ryan Burn) Date: Mon, 22 May 2017 18:31:33 -0400 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: I think that might be what I'm looking for. Is there any function like VRT_r_resp_status, but that operates on the request object instead of the context, to let me extract the status code sent? On Sun, May 21, 2017 at 11:56 AM, Guillaume Quintard wrote: > Best way I see for now is to capture a pointer to req in PRIV_TASK and use > is during the release/free operation. You should be okay since that > operation comes before the release of the req, but don't keep anything after > that step. > > You'll lose some timing information, as well as the orignal http objects > (you'll get the vcl-transformed ones) though. Would that suffice? > > -- > Guillaume Quintard > > On Thu, May 18, 2017 at 3:20 PM, Ryan Burn wrote: >> >> I don't see how that could work. The context for the active span needs >> to be injected into the headers of any backend requests. The headers >> need to be modified from the varnishd process, right? And you can't >> inject a span context unless the span has been created, so I would >> think a VMOD would have to start the span as well; then since the >> OpenTracing API to specify other properties of the span requires >> acting on the object returned when you start the span, those would all >> need to called from the VMOD. >> >> Also, none of the OpenTracing API functions should block for very >> long, if at all. The more expensive work such as uploading the tracing >> data would happen in a separate thread. >> >> On Thu, May 18, 2017 at 4:45 AM, Dridi Boukelmoune wrote: >> > On Wed, May 17, 2017 at 11:07 PM, Ryan Burn wrote: >> >>> How about a VMOD doing the only StartSpan, std.log for logging keys >> >>> and whatnot, and the VUT doing all the rest when it processes the >> >>> transaction's logs? 
>> >> >> >> But the VUT would be running in a separate process, no? If so, how >> >> does it access the span object returned when you start the span. The >> >> OpenTracing API doesn't support augmenting information to a span >> >> later. The functions to set the tags, finish time, etc all have to act >> >> on the same object that's returned when you first start the span. >> > >> > You log whatever needs to ultimately be transmitted to the OT server >> > and let the VUT pick the logs and do the actual API calls. Performing >> > blocking (API) calls from VCL is also a performance killer, that's why >> > Varnish logs in memory and can let VUTs find what they need to do the >> > dirty work. >> > >> > Take varnishncsa for example. The varnishd process that is technically >> > the HTTP server doesn't do access logs and instead it's a separate VUT >> > that picks up the relevant information from the shmlog to produce >> > them. >> > >> > Dridi > > From dridi at varni.sh Tue May 23 11:12:37 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Tue, 23 May 2017 13:12:37 +0200 Subject: Any way to invoke code from VCL after a request has been serviced? In-Reply-To: References: Message-ID: On Thu, May 18, 2017 at 9:20 PM, Ryan Burn wrote: > I don't see how that could work. The context for the active span needs > to be injected into the headers of any backend requests. The headers > need to be modified from the varnishd process, right? And you can't > inject a span context unless the span has been created, so I would > think a VMOD would have to start the span as well; then since the > OpenTracing API to specify other properties of the span requires > acting on the object returned when you start the span, those would all > need to called from the VMOD. You can have a VMOD that both populates whatever value you need to pass through requests and std.log whatever needs to be sent back to the OpenTracing server. 
Then the VUT can collect that information, plus Timestamp records, and do the sending (it could even send them in batches if the OT API supports it). When it comes to header manipulations (including method, status etc.) they are already logged so a VUT can already pick that up and save some work from the VMOD. I'm not familiar with OT but what I described is how Zipnish does its tracing. Except that Zipnish relies on the X-Varnish header to get a unique-ish id, so no blocking call needs to be made. So maybe they are highly different systems, I chimed in because I saw a Zipkin example while briefly skimming through the docs. > Also, none of the OpenTracing API functions should block for very > long, if at all. The nice thing with blocking calls is that you can add "until they do". And even if they only take a handful of milliseconds, that's still orders of magnitude slower than regular VCL code. If I understand correctly only getting an id should be necessary in your case. > The more expensive work such as uploading the tracing > data would happen in a separate thread. And your VUT can be that separate thread (well, in a separate process too) so that you don't need to care about synchronizing anything, it's all done by libvarnishapi. And it can work both in "real time" on a live varnish instance or on a log dump [1] if the OpenTracing API allows deferring span submission after the fact (or for testing purposes). A VUT can survive a crash of the varnishd child, a thread created by a VMOD can't. Cheers, Dridi [1] see varnishlog -w and -r 
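A minimal VCL sketch of the split described above (the `trace` VMOD and the log-key format are hypothetical stand-ins; only `import std` and `std.log()` are stock Varnish):

```vcl
import std;
import trace;   # hypothetical VMOD: starts the span and injects its
                # context into request headers for backend propagation

sub vcl_recv {
    # The only tracing work done in-process: create the span / get an id.
    trace.start_span();

    # Everything else is merely logged; a VUT reading the shmlog turns
    # these records (plus the built-in Timestamp records) into the
    # SetTag/Finish calls, off the request-handling threads.
    std.log("ot.tag: http.url=" + req.url);
    std.log("ot.tag: http.host=" + req.http.Host);
}
```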
It doesn't need to just view the headers; it needs to add headers to encode the active span context. See http://opentracing.io/documentation/pages/api/cross-process-tracing.html And the active span needs to be started before those headers can be added. > I'm not familiar with OT but what I described is how Zipnish does its > tracing. Except that Zipnish relies on the X-Varnish header to get a > unique-ish id, so no blocking call needs to be made. So maybe they are > highly different systems, I chimed in because I saw a Zipkin example > while briefly skimming through the docs. Zipkin is a distributed tracing system that provides OpenTracing implementations, but Zipnish is just using it as a monitor for varnish. It's not doing context propagation. If you're only interested in monitoring varnish, that's fine; but if you want to see how a request is processed in an entire distributed system (i.e. not just varnish, but the backend servers it's sitting in front of or any other service that might be in front of varnish), then you need to do context propagation. From np.lists at sharphosting.uk Tue May 23 22:58:31 2017 From: np.lists at sharphosting.uk (Nigel Peck) Date: Tue, 23 May 2017 17:58:31 -0500 Subject: Unexplained Cache MISSes Message-ID: <211c667a-ce70-6373-c840-4482c159e38c@sharphosting.uk> Hi, I have an issue where some requests are not getting served from the cache when they should be. "should be" as in it's my intention that they should be, and I'm not sure what's going wrong to cause them not to be. I had some discussion about this issue before, when I was on 4.0.4, and the suggestion was to upgrade, which I've now done and am on 4.1.6. Below is a full varnishlog report for an image URL. My VCL sets a one-week TTL on every backend response that is going to be stored, and is then kept up to date with PURGEs from a script when needed or every four days otherwise, so everything that can be cached should be coming out of the cache at all times. 
The first two entries below are the PURGE/restart and the subsequent entry is a MISS. There are no other entries; this is a complete report from: sudo varnishlog -d -q 'ReqURL eq "/example/image/URI/image.jpg"' Names have been changed to protect the guilty. Nothing has been lru_nuked at all, there is no entry for it in varnishstat. There is an "Age" header on the restarted response after the purge, which seems strange. I can't see a "TTL" record on the restarted response either. All insights greatly appreciated. Please let me know if any further info is needed. Nigel --

* << Request >> 230779
- Begin req 230778 rxreq
- Timestamp Start: 1495505749.920337 0.000000 0.000000
- Timestamp Req: 1495505749.920337 0.000000 0.000000
- ReqStart 192.168.0.1 33530
- ReqMethod PURGE
- ReqURL /example/image/URI/image.jpg
- ReqProtocol HTTP/1.1
- ReqHeader TE: deflate,gzip;q=0.3
- ReqHeader Connection: TE, close
- ReqHeader Accept-Encoding: gzip
- ReqHeader Host: www.example.com
- ReqHeader User-Agent: SuperDuperApps-Cache-Purger/0.1
- ReqHeader X-Forwarded-For: 192.168.0.1
- VCL_call RECV
- ReqHeader X-Processed-By: Melian
- VCL_acl MATCH purgers "192.168.0.1"
- VCL_return purge
- VCL_call HASH
- VCL_return lookup
- VCL_call PURGE
- ReqMethod GET
- VCL_return restart
- Timestamp Restart: 1495505749.920445 0.000108 0.000108
- Link req 230780 restart
- End

* << Request >> 230780
- Begin req 230779 restart
- Timestamp Start: 1495505749.920445 0.000108 0.000000
- ReqStart 192.168.0.1 33530
- ReqMethod GET
- ReqURL /example/image/URI/image.jpg
- ReqProtocol HTTP/1.1
- ReqHeader TE: deflate,gzip;q=0.3
- ReqHeader Connection: TE, close
- ReqHeader Accept-Encoding: gzip
- ReqHeader Host: www.example.com
- ReqHeader User-Agent: SuperDuperApps-Cache-Purger/0.1
- ReqHeader X-Forwarded-For: 192.168.0.1
- ReqHeader X-Processed-By: Melian
- VCL_call RECV
- VCL_return hash
- VCL_call HASH
- VCL_return lookup
- Hit 168785
- VCL_call HIT
- VCL_return deliver
- RespProtocol HTTP/1.1
- RespStatus 200
- RespReason OK
- RespHeader Date: Mon, 22 May 2017 18:41:37 GMT
- RespHeader Server: Apache/2
- RespHeader Last-Modified: Thu, 02 Mar 2017 02:22:25 GMT
- RespHeader ETag: "1e40-549b6198b7a40"
- RespHeader Content-Length: 7744
- RespHeader Content-Type: image/jpeg
- RespHeader X-Host: www.example.com
- RespHeader X-URL: /example/image/URI/image.jpg
- RespHeader Cache-Control: max-age=3600
- RespHeader X-Varnish: 230780 168785
- RespHeader Age: 27252
- RespHeader Via: 1.1 varnish-v4
- VCL_call DELIVER
- RespUnset Age: 27252
- RespHeader Age: 0
- RespHeader X-Cache: HIT (1)
- RespUnset X-Host: www.example.com
- RespUnset X-URL: /example/image/URI/image.jpg
- RespUnset X-Varnish: 230780 168785
- RespUnset Via: 1.1 varnish-v4
- RespHeader Via: Varnish
- VCL_return deliver
- Timestamp Process: 1495505749.920481 0.000144 0.000036
- RespHeader Accept-Ranges: bytes
- Debug "RES_MODE 2"
- RespHeader Connection: close
- Timestamp Resp: 1495505749.920542 0.000204 0.000060
- ReqAcct 227 0 227 306 7744 8050
- End

* << Request >> 2031637
- Begin req 131739 rxreq
- Timestamp Start: 1495573815.866983 0.000000 0.000000
- Timestamp Req: 1495573815.866983 0.000000 0.000000
- ReqStart 86.153.27.10 48595
- ReqMethod GET
- ReqURL /example/image/URI/image.jpg
- ReqProtocol HTTP/1.1
- ReqHeader Host: www.example.com
- ReqHeader Connection: keep-alive
- ReqHeader User-Agent: Mozilla/5.0 (Linux; Android 6.0.1; SAMSUNG SM-G900F Build/MMB29M) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/5.4 Chrome/51.0.2704.106 Mobile Safari/537.36
- ReqHeader Accept: image/webp,image/*,*/*;q=0.8
- ReqHeader Referer: http://www.example.com/jcb-parts-catalogue
- ReqHeader Accept-Encoding: gzip, deflate
- ReqHeader Accept-Language: en-GB,en-US,en
- ReqHeader Cookie: _gat=1; hgkjb432k5vb4k35vb4k35n32vbmn423=bjjb423jbr2bkj234bjk324bkj; PRODUCT_BROWSING_FEATURES_USED=1; _ga=GA1.2.1234567890.1234567890; _gid=GA1.2.1234567890.1234567890
- ReqHeader X-Forwarded-For: 86.153.27.10
- VCL_call RECV
- ReqHeader X-Processed-By: Melian
- ReqUnset Cookie: _gat=1; hgkjb432k5vb4k35vb4k35n32vbmn423=bjjb423jbr2bkj234bjk324bkj; PRODUCT_BROWSING_FEATURES_USED=1; _ga=GA1.2.1234567890.1234567890; _gid=GA1.2.1234567890.1234567890
- VCL_return hash
- ReqUnset Accept-Encoding: gzip, deflate
- ReqHeader Accept-Encoding: gzip
- VCL_call HASH
- VCL_return lookup
- VCL_call MISS
- VCL_return fetch
- Link bereq 2031638 fetch
- Timestamp Fetch: 1495573815.869083 0.002100 0.002100
- RespProtocol HTTP/1.1
- RespStatus 200
- RespReason OK
- RespHeader Date: Tue, 23 May 2017 21:10:15 GMT
- RespHeader Server: Apache/2
- RespHeader Last-Modified: Thu, 02 Mar 2017 02:22:25 GMT
- RespHeader ETag: "1e40-549b6198b7a40"
- RespHeader Content-Length: 7744
- RespHeader Content-Type: image/jpeg
- RespHeader X-Host: www.example.com
- RespHeader X-URL: /example/image/URI/image.jpg
- RespHeader Cache-Control: max-age=3600
- RespHeader X-Varnish: 2031637
- RespHeader Age: 0
- RespHeader Via: 1.1 varnish-v4
- VCL_call DELIVER
- RespUnset Age: 0
- RespHeader Age: 0
- RespHeader X-Cache: MISS
- RespUnset X-Host: www.example.com
- RespUnset X-URL: /example/image/URI/image.jpg
- RespUnset X-Varnish: 2031637
- RespUnset Via: 1.1 varnish-v4
- RespHeader Via: Varnish
- VCL_return deliver
- Timestamp Process: 1495573815.869164 0.002180 0.000080
- RespHeader Accept-Ranges: bytes
- Debug "RES_MODE 2"
- RespHeader Connection: keep-alive
- Timestamp Resp: 1495573815.869250 0.002266 0.000086
- ReqAcct 655 0 655 308 7744 8052
- End

-- From rnickb731 at gmail.com Tue May 23 23:58:20 2017 From: rnickb731 at gmail.com (Ryan Burn) Date: Tue, 23 May 2017 19:58:20 -0400 Subject: Any way for a VMOD to iterate over all the header values? Message-ID: Does varnish expose any API that would let a VMOD iterate over all of the header key-value pairs? I know there's the VRT_GetHdr that allows you to lookup a single header, but I haven't found anything for accessing all the headers. 
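There does not appear to be a supported public API for this in the Varnish of this era; VMODs that need it walk the header array of varnishd's internal struct http, where each slot is a txt begin/end pair and the slots below HTTP_HDR_FIRST hold the request/status line. A reduced, self-contained sketch of that walk follows; the struct layout, the HTTP_HDR_FIRST value, and the function names here are illustrative stand-ins, not the real cache.h definitions:

```c
/* Sketch only: stand-in definitions mirroring the shape of varnishd's
 * internal header storage.  A real VMOD would include cache.h and use
 * the actual struct http / HTTP_HDR_FIRST instead. */
#include <stddef.h>
#include <stdio.h>

typedef struct { const char *b, *e; } txt;  /* begin/end of a header line */

#define HTTP_HDR_FIRST 5   /* stand-in; take the real value from cache.h */

struct http {              /* reduced stand-in for varnishd's struct http */
    unsigned nhd;          /* number of slots in use */
    txt hd[32];            /* hd[HTTP_HDR_FIRST..nhd-1]: "Name: value" */
};

/* Visit every "Name: value" header; returns how many were seen. */
static unsigned
iterate_headers(const struct http *hp, void (*cb)(txt))
{
    unsigned u, n = 0;

    for (u = HTTP_HDR_FIRST; u < hp->nhd; u++) {
        if (hp->hd[u].b == NULL)
            continue;      /* slot left behind by a deleted header */
        cb(hp->hd[u]);
        n++;
    }
    return (n);
}

static void
print_hdr(txt t)
{
    /* txt is not NUL-terminated; print by length. */
    printf("%.*s\n", (int)(t.e - t.b), t.b);
}
```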
From sreeranj4droid at gmail.com Wed May 24 11:23:13 2017 From: sreeranj4droid at gmail.com (sreeranj s) Date: Wed, 24 May 2017 16:53:13 +0530 Subject: Custom variables in varnish 5.1.2 Message-ID: Hi, Is there custom variable in varnish 5.1.2, where I can store the regex like ^/?(.+?)?/([A-Z0-9]+([_x\ -][A-Z0-9]+)*)/details/?(tab)?/?(features|pictures|specifications|compatible|compatibleaccessories|manuals -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Wed May 24 12:28:28 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Wed, 24 May 2017 08:28:28 -0400 Subject: Custom variables in varnish 5.1.2 In-Reply-To: References: Message-ID: Hi, you can use vmod-var + vmod-re. -- Guillaume Quintard On Wed, May 24, 2017 at 7:23 AM, sreeranj s wrote: > Hi, > > Is there custom variable in varnish 5.1.2, where I can store the regex > like ^/?(.+?)?/([A-Z0-9]+([_x\ -][A-Z0-9]+)*)/details/?(tab)? > /?(features|pictures|specifications|compatible| > compatibleaccessories|manuals > > > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sreeranj4droid at gmail.com Wed May 24 13:16:56 2017 From: sreeranj4droid at gmail.com (sreeranj s) Date: Wed, 24 May 2017 18:46:56 +0530 Subject: Custom variables in varnish 5.1.2 In-Reply-To: References: Message-ID: Thanks Guillaume. I will try it out. On Wed, May 24, 2017 at 5:58 PM, Guillaume Quintard < guillaume at varnish-software.com> wrote: > Hi, > > you can use vmod-var + vmod-re. > > -- > Guillaume Quintard > > On Wed, May 24, 2017 at 7:23 AM, sreeranj s > wrote: > >> Hi, >> >> Is there custom variable in varnish 5.1.2, where I can store the regex >> like ^/?(.+?)?/([A-Z0-9]+([_x\ -][A-Z0-9]+)*)/details/?(tab)? 
>> /?(features|pictures|specifications|compatible|compatibleacc >> essories|manuals >> >> >> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From np.lists at sharphosting.uk Wed May 24 23:59:39 2017 From: np.lists at sharphosting.uk (Nigel Peck) Date: Wed, 24 May 2017 18:59:39 -0500 Subject: Unexplained Cache MISSes In-Reply-To: <211c667a-ce70-6373-c840-4482c159e38c@sharphosting.uk> References: <211c667a-ce70-6373-c840-4482c159e38c@sharphosting.uk> Message-ID: I've been looking at this some more, and it seems the problem is the same as it was before. The PURGE does not seem to work immediately. The restarted request gets a HIT, then the next request gets a MISS and after that the HITs start again. Here's a sequence from varnishlog showing: - the PURGE, - the restart, - the MISS, - the HIT Again this is a full output for everything in the log on that URL. Names and IP addresses have been changed. 
Thanks Nigel

* << Request >> 266604
- Begin req 266603 rxreq
- Timestamp Start: 1495662133.465511 0.000000 0.000000
- Timestamp Req: 1495662133.465511 0.000000 0.000000
- ReqStart xxx.xxx.xxx.xx2 57250
- ReqMethod PURGE
- ReqURL /example/url
- ReqProtocol HTTP/1.1
- ReqHeader TE: deflate,gzip;q=0.3
- ReqHeader Connection: TE, close
- ReqHeader Accept-Encoding: gzip
- ReqHeader Host: www.example.com
- ReqHeader User-Agent: SuperDuperApps-Cache-Purger/0.1
- ReqHeader X-Forwarded-For: xxx.xxx.xxx.xx2
- VCL_call RECV
- ReqHeader X-Processed-By: Melian
- VCL_acl MATCH purgers "xxx.xxx.xxx.xx2"
- VCL_return purge
- VCL_call HASH
- VCL_return lookup
- VCL_call PURGE
- ReqMethod GET
- VCL_return restart
- Timestamp Restart: 1495662133.465563 0.000052 0.000052
- Link req 266605 restart
- End

* << Request >> 266605
- Begin req 266604 restart
- Timestamp Start: 1495662133.465563 0.000052 0.000000
- ReqStart xxx.xxx.xxx.xx2 57250
- ReqMethod GET
- ReqURL /example/url
- ReqProtocol HTTP/1.1
- ReqHeader TE: deflate,gzip;q=0.3
- ReqHeader Connection: TE, close
- ReqHeader Accept-Encoding: gzip
- ReqHeader Host: www.example.com
- ReqHeader User-Agent: SuperDuperApps-Cache-Purger/0.1
- ReqHeader X-Forwarded-For: xxx.xxx.xxx.xx2
- ReqHeader X-Processed-By: Melian
- VCL_call RECV
- VCL_return hash
- VCL_call HASH
- VCL_return lookup
- Hit 132102
- VCL_call HIT
- VCL_return deliver
- RespProtocol HTTP/1.1
- RespStatus 200
- RespReason OK
- RespHeader Date: Wed, 24 May 2017 02:37:14 GMT
- RespHeader Server: Apache/2
- RespHeader P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM"
- RespHeader Last-Modified: Wed, 24 May 2017 02:37:15 GMT
- RespHeader Content-Type: text/html; charset=utf-8
- RespHeader X-Host: www.example.com
- RespHeader X-URL: /example/url
- RespHeader Cache-Control: max-age=3600
- RespHeader Content-Encoding: gzip
- RespHeader Vary: Accept-Encoding
- RespHeader X-Varnish: 266605 132102
- RespHeader Age: 68698
- RespHeader Via: 1.1 varnish-v4
- VCL_call DELIVER
- RespUnset Age: 68698
- RespHeader Age: 0
- RespHeader X-Cache: HIT (7)
- RespUnset X-Host: www.example.com
- RespUnset X-URL: /example/url
- RespUnset X-Varnish: 266605 132102
- RespUnset Via: 1.1 varnish-v4
- RespHeader Via: Varnish
- VCL_return deliver
- Timestamp Process: 1495662133.465618 0.000107 0.000055
- RespHeader Accept-Ranges: bytes
- RespHeader Content-Length: 7493
- Debug "RES_MODE 2"
- RespHeader Connection: close
- Timestamp Resp: 1495662133.465660 0.000149 0.000042
- ReqAcct 225 0 225 396 7493 7889
- End

* << Request >> 3017
- Begin req 3016 rxreq
- Timestamp Start: 1495664394.000921 0.000000 0.000000
- Timestamp Req: 1495664394.000921 0.000000 0.000000
- ReqStart xxx.xxx.xxx.xx3 45771
- ReqMethod GET
- ReqURL /example/url
- ReqProtocol HTTP/1.1
- ReqHeader Host: www.example.com
- ReqHeader Connection: Keep-alive
- ReqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
- ReqHeader From: googlebot(at)googlebot.com
- ReqHeader User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
- ReqHeader Accept-Encoding: gzip,deflate,br
- ReqHeader If-Modified-Since: Wed, 24 May 2017 15:16:04 GMT
- ReqHeader X-Forwarded-For: xxx.xxx.xxx.xx3
- VCL_call RECV
- ReqHeader X-Processed-By: Melian
- VCL_return hash
- ReqUnset Accept-Encoding: gzip,deflate,br
- ReqHeader Accept-Encoding: gzip
- VCL_call HASH
- VCL_return lookup
- VCL_call MISS
- VCL_return fetch
- Link bereq 3018 fetch
- Timestamp Fetch: 1495664394.381188 0.380267 0.380267
- RespProtocol HTTP/1.1
- RespStatus 200
- RespReason OK
- RespHeader Date: Wed, 24 May 2017 22:19:54 GMT
- RespHeader Server: Apache/2
- RespHeader P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM"
- RespHeader Last-Modified: Wed, 24 May 2017 22:19:54 GMT
- RespHeader Content-Type: text/html; charset=utf-8
- RespHeader X-Host: www.example.com
- RespHeader X-URL: /example/url
- RespHeader Cache-Control: max-age=3600
- RespHeader Content-Encoding: gzip
- RespHeader Vary: Accept-Encoding
- RespHeader X-Varnish: 3017
- RespHeader Age: 0
- RespHeader Via: 1.1 varnish-v4
- VCL_call DELIVER
- RespHeader X-Cache: MISS
- RespUnset X-Host: www.example.com
- RespUnset X-URL: /example/url
- RespUnset X-Varnish: 3017
- RespUnset Via: 1.1 varnish-v4
- RespHeader Via: Varnish
- VCL_return deliver
- Timestamp Process: 1495664394.381214 0.380294 0.000026
- RespHeader Accept-Ranges: bytes
- RespHeader Transfer-Encoding: chunked
- Debug "RES_MODE 8"
- RespHeader Connection: keep-alive
- Timestamp Resp: 1495664394.396562 0.395641 0.015347
- ReqAcct 409 0 409 404 7493 7897
- End

* << Request >> 35821
- Begin req 35820 rxreq
- Timestamp Start: 1495668065.207785 0.000000 0.000000
- Timestamp Req: 1495668065.207785 0.000000 0.000000
- ReqStart xxx.xxx.xxx.xx1 33904
- ReqMethod GET
- ReqURL /example/url
- ReqProtocol HTTP/1.1
- ReqHeader Host: www.example.com
- ReqHeader Connection: Keep-alive
- ReqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
- ReqHeader From: googlebot(at)googlebot.com
- ReqHeader User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
- ReqHeader Accept-Encoding: gzip,deflate,br
- ReqHeader If-Modified-Since: Wed, 24 May 2017 22:19:54 GMT
- ReqHeader X-Forwarded-For: xxx.xxx.xxx.xx1
- VCL_call RECV
- ReqHeader X-Processed-By: Melian
- VCL_return hash
- ReqUnset Accept-Encoding: gzip,deflate,br
- ReqHeader Accept-Encoding: gzip
- VCL_call HASH
- VCL_return lookup
- Hit 3018
- VCL_call HIT
- VCL_return deliver
- RespProtocol HTTP/1.1
- RespStatus 200
- RespReason OK
- RespHeader Date: Wed, 24 May 2017 22:19:54 GMT
- RespHeader Server: Apache/2
- RespHeader P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM"
- RespHeader Last-Modified: Wed, 24 May 2017 22:19:54 GMT
- RespHeader Content-Type: text/html; charset=utf-8
- RespHeader X-Host: www.example.com
- RespHeader X-URL: /example/url
- RespHeader Cache-Control: max-age=3600
- RespHeader Content-Encoding: gzip
- RespHeader 
Vary: Accept-Encoding - RespHeader X-Varnish: 35821 3018 - RespHeader Age: 3670 - RespHeader Via: 1.1 varnish-v4 - VCL_call DELIVER - RespUnset Age: 3670 - RespHeader Age: 0 - RespHeader X-Cache: HIT (1) - RespUnset X-Host: www.example.com - RespUnset X-URL: /example/url - RespUnset X-Varnish: 35821 3018 - RespUnset Via: 1.1 varnish-v4 - RespHeader Via: Varnish - VCL_return deliver - Timestamp Process: 1495668065.207927 0.000142 0.000142 - RespProtocol HTTP/1.1 - RespStatus 304 - RespReason Not Modified - RespReason Not Modified - Debug "RES_MODE 0" - RespHeader Connection: keep-alive - Timestamp Resp: 1495668065.208002 0.000217 0.000075 - ReqAcct 409 0 409 367 0 367 - End On 23/05/2017 17:58, Nigel Peck wrote: > > Hi, > > I have an issue where some requests are not getting served from the > cache when they should be. "should be" as in it's my intention they > should be, and not sure what's going wrong to cause them not to be. I > had some discussion about this issue before, when I was on 4.0.4, and > the suggestion was to upgrade, which I've now done and am on 4.1.6. > > Below is a full varnishlog report for an image URL. My VCL sets a week > TTL on every backend response that is going to be stored, and is then > kept up to date with PURGEs from a script when needed or every four days > otherwise, so everything that can be cached should be coming out of the > cache at all times. The first two entries below are the PURGE/restart > and then the subsequent entry a MISS. There are no other entries, this > is a complete report from: > > sudo varnishlog -d -q 'ReqURL eq "/example/image/URI/image.jpg"' > > Names have been changed to protect the guilty. Nothing has been > lru_nuked at all, there is no entry for it in varnishstat. > > There is an "Age" header on the restarted response after the purge, > which seems strange. I can't see a "TTL" record on the restarted > response either. > > All insights greatly appreciated. Please let me know if any further info > needed. 
>
> Nigel
>
> --
>
> * << Request >> 230779
> - Begin req 230778 rxreq
> - Timestamp Start: 1495505749.920337 0.000000 0.000000
> - Timestamp Req: 1495505749.920337 0.000000 0.000000
> - ReqStart 192.168.0.1 33530
> - ReqMethod PURGE
> - ReqURL /example/image/URI/image.jpg
> - ReqProtocol HTTP/1.1
> - ReqHeader TE: deflate,gzip;q=0.3
> - ReqHeader Connection: TE, close
> - ReqHeader Accept-Encoding: gzip
> - ReqHeader Host: www.example.com
> - ReqHeader User-Agent: SuperDuperApps-Cache-Purger/0.1
> - ReqHeader X-Forwarded-For: 192.168.0.1
> - VCL_call RECV
> - ReqHeader X-Processed-By: Melian
> - VCL_acl MATCH purgers "192.168.0.1"
> - VCL_return purge
> - VCL_call HASH
> - VCL_return lookup
> - VCL_call PURGE
> - ReqMethod GET
> - VCL_return restart
> - Timestamp Restart: 1495505749.920445 0.000108 0.000108
> - Link req 230780 restart
> - End
>
> * << Request >> 230780
> - Begin req 230779 restart
> - Timestamp Start: 1495505749.920445 0.000108 0.000000
> - ReqStart 192.168.0.1 33530
> - ReqMethod GET
> - ReqURL /example/image/URI/image.jpg
> - ReqProtocol HTTP/1.1
> - ReqHeader TE: deflate,gzip;q=0.3
> - ReqHeader Connection: TE, close
> - ReqHeader Accept-Encoding: gzip
> - ReqHeader Host: www.example.com
> - ReqHeader User-Agent: SuperDuperApps-Cache-Purger/0.1
> - ReqHeader X-Forwarded-For: 192.168.0.1
> - ReqHeader X-Processed-By: Melian
> - VCL_call RECV
> - VCL_return hash
> - VCL_call HASH
> - VCL_return lookup
> - Hit 168785
> - VCL_call HIT
> - VCL_return deliver
> - RespProtocol HTTP/1.1
> - RespStatus 200
> - RespReason OK
> - RespHeader Date: Mon, 22 May 2017 18:41:37 GMT
> - RespHeader Server: Apache/2
> - RespHeader Last-Modified: Thu, 02 Mar 2017 02:22:25 GMT
> - RespHeader ETag: "1e40-549b6198b7a40"
> - RespHeader Content-Length: 7744
> - RespHeader Content-Type: image/jpeg
> - RespHeader X-Host: www.example.com
> - RespHeader X-URL: /example/image/URI/image.jpg
> - RespHeader Cache-Control: max-age=3600
> - RespHeader X-Varnish: 230780 168785
> - RespHeader Age: 27252
> - RespHeader Via: 1.1 varnish-v4
> - VCL_call DELIVER
> - RespUnset Age: 27252
> - RespHeader Age: 0
> - RespHeader X-Cache: HIT (1)
> - RespUnset X-Host: www.example.com
> - RespUnset X-URL: /example/image/URI/image.jpg
> - RespUnset X-Varnish: 230780 168785
> - RespUnset Via: 1.1 varnish-v4
> - RespHeader Via: Varnish
> - VCL_return deliver
> - Timestamp Process: 1495505749.920481 0.000144 0.000036
> - RespHeader Accept-Ranges: bytes
> - Debug "RES_MODE 2"
> - RespHeader Connection: close
> - Timestamp Resp: 1495505749.920542 0.000204 0.000060
> - ReqAcct 227 0 227 306 7744 8050
> - End
>
> * << Request >> 2031637
> - Begin req 131739 rxreq
> - Timestamp Start: 1495573815.866983 0.000000 0.000000
> - Timestamp Req: 1495573815.866983 0.000000 0.000000
> - ReqStart 86.153.27.10 48595
> - ReqMethod GET
> - ReqURL /example/image/URI/image.jpg
> - ReqProtocol HTTP/1.1
> - ReqHeader Host: www.example.com
> - ReqHeader Connection: keep-alive
> - ReqHeader User-Agent: Mozilla/5.0 (Linux; Android 6.0.1; SAMSUNG SM-G900F Build/MMB29M) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/5.4 Chrome/51.0.2704.106 Mobile Safari/537.36
> - ReqHeader Accept: image/webp,image/*,*/*;q=0.8
> - ReqHeader Referer: http://www.example.com/jcb-parts-catalogue
> - ReqHeader Accept-Encoding: gzip, deflate
> - ReqHeader Accept-Language: en-GB,en-US,en
> - ReqHeader Cookie: _gat=1; hgkjb432k5vb4k35vb4k35n32vbmn423=bjjb423jbr2bkj234bjk324bkj; PRODUCT_BROWSING_FEATURES_USED=1; _ga=GA1.2.1234567890.1234567890; _gid=GA1.2.1234567890.1234567890
> - ReqHeader X-Forwarded-For: 86.153.27.10
> - VCL_call RECV
> - ReqHeader X-Processed-By: Melian
> - ReqUnset Cookie: _gat=1; hgkjb432k5vb4k35vb4k35n32vbmn423=bjjb423jbr2bkj234bjk324bkj; PRODUCT_BROWSING_FEATURES_USED=1; _ga=GA1.2.1234567890.1234567890; _gid=GA1.2.1234567890.1234567890
> - VCL_return hash
> - ReqUnset Accept-Encoding: gzip, deflate
> - ReqHeader Accept-Encoding: gzip
> - VCL_call HASH
> - VCL_return lookup
> - VCL_call MISS
> - VCL_return fetch
> - Link bereq 2031638 fetch
> - Timestamp Fetch: 1495573815.869083 0.002100 0.002100
> - RespProtocol HTTP/1.1
> - RespStatus 200
> - RespReason OK
> - RespHeader Date: Tue, 23 May 2017 21:10:15 GMT
> - RespHeader Server: Apache/2
> - RespHeader Last-Modified: Thu, 02 Mar 2017 02:22:25 GMT
> - RespHeader ETag: "1e40-549b6198b7a40"
> - RespHeader Content-Length: 7744
> - RespHeader Content-Type: image/jpeg
> - RespHeader X-Host: www.example.com
> - RespHeader X-URL: /example/image/URI/image.jpg
> - RespHeader Cache-Control: max-age=3600
> - RespHeader X-Varnish: 2031637
> - RespHeader Age: 0
> - RespHeader Via: 1.1 varnish-v4
> - VCL_call DELIVER
> - RespUnset Age: 0
> - RespHeader Age: 0
> - RespHeader X-Cache: MISS
> - RespUnset X-Host: www.example.com
> - RespUnset X-URL: /example/image/URI/image.jpg
> - RespUnset X-Varnish: 2031637
> - RespUnset Via: 1.1 varnish-v4
> - RespHeader Via: Varnish
> - VCL_return deliver
> - Timestamp Process: 1495573815.869164 0.002180 0.000080
> - RespHeader Accept-Ranges: bytes
> - Debug "RES_MODE 2"
> - RespHeader Connection: keep-alive
> - Timestamp Resp: 1495573815.869250 0.002266 0.000086
> - ReqAcct 655 0 655 308 7744 8052
> - End
>
> --
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From guillaume at varnish-software.com Thu May 25 00:25:30 2017
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Wed, 24 May 2017 20:25:30 -0400
Subject: Unexplained Cache MISSes
In-Reply-To: 
References: <211c667a-ce70-6373-c840-4482c159e38c@sharphosting.uk>
Message-ID: 

How reliably can you reproduce? Would you mind sharing your VCL (either pastebin, or just to me)?

-------------- next part --------------
An HTML attachment was scrubbed...
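The PURGE/restart sequence recorded in the logs above (VCL_call PURGE, ReqMethod rewritten to GET, VCL_return restart) typically comes from VCL along these lines. This is only a sketch: the poster's actual VCL was not shared, and the acl name and its contents are taken from the VCL_acl MATCH record.

```vcl
acl purgers {
    "192.168.0.1";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip !~ purgers) {
            return (synth(405, "Not allowed"));
        }
        # Drop the object from the cache, then continue in vcl_purge
        return (purge);
    }
}

sub vcl_purge {
    # Immediately re-fetch the purged object with a normal GET
    set req.method = "GET";
    return (restart);
}
```

With this pattern the restarted request should be a MISS, which is why the HITs being discussed in this thread are surprising.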
URL: 

From np.lists at sharphosting.uk Thu May 25 17:34:37 2017
From: np.lists at sharphosting.uk (Nigel Peck)
Date: Thu, 25 May 2017 12:34:37 -0500
Subject: Unexplained Cache MISSes
In-Reply-To: 
References: <211c667a-ce70-6373-c840-4482c159e38c@sharphosting.uk>
Message-ID: 

I searched the log for restarted purge requests that have received a HIT, and there are quite a lot of them. Many are OK though, and get a MISS. There are also entries that have this sequence:

- VCL_call RECV
- VCL_return hash
- VCL_call HASH
- VCL_return lookup
- Hit 886033
- VCL_call HIT
- VCL_return miss
- VCL_Error vcl_hit{} returns miss without busy object. Doing pass.
- VCL_call PASS
- VCL_return fetch

I'm happy to share my VCL, and yes, I would prefer to send it just to you at this point. I could look at creating a cut-down version later to share on the list if that's useful. I'll email it to you separately now.

Thanks for this.

Nigel

On 24/05/2017 19:25, Guillaume Quintard wrote:
> How reliably can you reproduce? Would you mind sharing your VCL (either pastebin, or just to me)?

From np.lists at sharphosting.uk Thu May 25 18:20:44 2017
From: np.lists at sharphosting.uk (Nigel Peck)
Date: Thu, 25 May 2017 13:20:44 -0500
Subject: Unexplained Cache MISSes
In-Reply-To: 
References: <211c667a-ce70-6373-c840-4482c159e38c@sharphosting.uk>
Message-ID: <4c093bc8-34ce-c251-b9a3-27f96cad5d74@sharphosting.uk>

Just a bit more info on this. I got some numbers for you.

Restarted requests that got a MISS: 2433
Restarted requests that got a HIT: 375
Got the error "vcl_hit{} returns miss without busy object": 28

On 25/05/2017 12:34, Nigel Peck wrote:
> I searched the log for restarted purge requests that have received a HIT, and there are quite a lot of them. Many are OK though, and get a MISS. There are also entries that have this sequence:
>
> - VCL_call RECV
> - VCL_return hash
> - VCL_call HASH
> - VCL_return lookup
> - Hit 886033
> - VCL_call HIT
> - VCL_return miss
> - VCL_Error vcl_hit{} returns miss without busy object. Doing pass.
> - VCL_call PASS
> - VCL_return fetch
>
> I'm happy to share my VCL, and yes, I would prefer to send it just to you at this point. I could look at creating a cut-down version later to share on the list if that's useful. I'll email it to you separately now.
>
> Thanks for this.
>
> Nigel
>
> On 24/05/2017 19:25, Guillaume Quintard wrote:
>> How reliably can you reproduce? Would you mind sharing your VCL (either pastebin, or just to me)?
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From carlos.abalde at gmail.com Thu May 25 19:25:39 2017
From: carlos.abalde at gmail.com (Carlos Abalde)
Date: Thu, 25 May 2017 21:25:39 +0200
Subject: Unexplained Cache MISSes
In-Reply-To: <4c093bc8-34ce-c251-b9a3-27f96cad5d74@sharphosting.uk>
References: <211c667a-ce70-6373-c840-4482c159e38c@sharphosting.uk> <4c093bc8-34ce-c251-b9a3-27f96cad5d74@sharphosting.uk>
Message-ID: <27B31339-01D5-49AA-8400-D67952B056A1@gmail.com>

Hi,

Just in case this is useful, the 'vcl_hit{} returns miss without busy object' error is documented in https://github.com/varnishcache/varnish-cache/issues/1799. It happens when obj.ttl + obj.grace > 0 and 'vcl_hit' is left with 'return (miss)'. While the object is being re-fetched, those errors will be logged because request coalescing won't work as expected for later requests for the same object. That means later requests will enter 'vcl_hit' and then be handled as pass requests. Once the object is re-fetched, further requests will behave as expected.

Best,

-- 
Carlos Abalde

-------------- next part --------------
An HTML attachment was scrubbed...
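The condition Carlos describes corresponds to the grace handling in the built-in vcl_hit of Varnish 4.1, sketched below. The 'return (miss)' at the end is exactly the branch that triggers the logged error when there is no busy object for the request to coalesce onto.

```vcl
sub vcl_hit {
    if (obj.ttl >= 0s) {
        # The object is still fresh
        return (deliver);
    }
    if (obj.ttl + obj.grace > 0s) {
        # The object is stale but within grace: deliver it while
        # a background fetch refreshes the cache
        return (deliver);
    }
    # Neither fresh nor within grace: try a synchronous fetch,
    # which needs a busy object and otherwise degrades to pass
    return (miss);
}
```

This mirrors the builtin.vcl shipped with 4.1; the comments are editorial.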
URL: 

From np.lists at sharphosting.uk Thu May 25 19:32:03 2017
From: np.lists at sharphosting.uk (Nigel Peck)
Date: Thu, 25 May 2017 14:32:03 -0500
Subject: Unexplained Cache MISSes
In-Reply-To: <27B31339-01D5-49AA-8400-D67952B056A1@gmail.com>
References: <211c667a-ce70-6373-c840-4482c159e38c@sharphosting.uk> <4c093bc8-34ce-c251-b9a3-27f96cad5d74@sharphosting.uk> <27B31339-01D5-49AA-8400-D67952B056A1@gmail.com>
Message-ID: <1c202679-982d-3f0c-3d4f-6bbaa264aaa4@sharphosting.uk>

Thanks Carlos, I did see that issue on GitHub, and it's very helpful to have the short version :)

It seems that in some cases hits are being attempted on objects that are in the process of being purged, or something like that, which is causing this error.

Thanks,

Nigel

On 25/05/2017 14:25, Carlos Abalde wrote:
> Hi,
>
> Just in case this is useful, the 'vcl_hit{} returns miss without busy object' error is documented in https://github.com/varnishcache/varnish-cache/issues/1799. It happens when obj.ttl + obj.grace > 0 and 'vcl_hit' is left with 'return (miss)'. While the object is being re-fetched, those errors will be logged because request coalescing won't work as expected for later requests for the same object. That means later requests will enter 'vcl_hit' and then be handled as pass requests. Once the object is re-fetched, further requests will behave as expected.
>
> Best,
>
> --
> Carlos Abalde

From cherian.in at gmail.com Sat May 27 00:46:59 2017
From: cherian.in at gmail.com (Cherian Thomas)
Date: Fri, 26 May 2017 17:46:59 -0700
Subject: Understanding why Varnish decided to clean the memory
Message-ID: 

Hi all,

Screenshot of all the varnish metrics: http://d.pr/i/MtwCJx

Every once in a while varnish throws all the objects in the cache out, and I have to re-warm it, despite a year of TTL on all the objects and a large grace period. This happened yesterday at roughly 3:45 AM. What am I doing wrong?

Other machine metrics: http://d.pr/i/0j01vC

- Cherian

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dward at townnews.com Sat May 27 01:15:27 2017
From: dward at townnews.com (Dustin Ward)
Date: Fri, 26 May 2017 21:15:27 -0400
Subject: Understanding why Varnish decided to clean the memory
In-Reply-To: 
References: 
Message-ID: <15c4778f196-1377-66b6@webprd-m88.mail.aol.com>

It's likely that varnish panicked. Try running "varnishadm panic.show".

On Friday, May 26, 2017 at 7:59 PM Cherian Thomas wrote:

> Hi all,
>
> Screenshot of all the varnish metrics: http://d.pr/i/MtwCJx
>
> Every once in a while varnish throws all the objects in the cache out and I have to re-warm it despite a year of TTL for all the objects and a large grace period. This happened yesterday at roughly 3:45 AM. What am I doing wrong?
>
> Other machine metrics: http://d.pr/i/0j01vC
>
> - Cherian

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cherian.in at gmail.com Sat May 27 01:18:25 2017
From: cherian.in at gmail.com (Cherian Thomas)
Date: Fri, 26 May 2017 18:18:25 -0700
Subject: Understanding why Varnish decided to clean the memory
In-Reply-To: <15c4778f196-1377-66b6@webprd-m88.mail.aol.com>
References: <15c4778f196-1377-66b6@webprd-m88.mail.aol.com>
Message-ID: 

Ran:

# varnishadm panic.show
Child has not panicked or panic has been cleared
Command failed with error code 300

On Fri, May 26, 2017 at 6:15 PM, Dustin Ward wrote:
> varnishadm panic.show

- Cherian

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From np.lists at sharphosting.uk Sat May 27 19:30:36 2017
From: np.lists at sharphosting.uk (Nigel Peck)
Date: Sat, 27 May 2017 14:30:36 -0500
Subject: Unexplained Cache MISSes
In-Reply-To: 
References: <211c667a-ce70-6373-c840-4482c159e38c@sharphosting.uk>
Message-ID: 

Just another update on this, regarding how reliably I can reproduce.

I have now updated my code for purging the cache, so it reports which purged objects received a HIT when they were retrieved with the restarted request after the purge. It happens every time I purge a significant number of items from the cache, for perhaps 10% of them or something like that. So I can reproduce reliably.

Thanks

Nigel

On 24/05/2017 19:25, Guillaume Quintard wrote:
> How reliably can you reproduce? Would you mind sharing your VCL (either pastebin, or just to me)?

From daghf at varnish-software.com Mon May 29 09:18:09 2017
From: daghf at varnish-software.com (Dag Haavi Finstad)
Date: Mon, 29 May 2017 11:18:09 +0200
Subject: Any way for a VMOD to iterate over all the header values?
In-Reply-To: 
References: 
Message-ID: 

Hi Ryan,

For an example, have a look at what vmod-header does:
https://github.com/varnish/varnish-modules/blob/master/src/vmod_header.c#L134-L140

hp->hd[u].b will point to a string containing "header: value" for each of the headers.

On Wed, May 24, 2017 at 1:58 AM, Ryan Burn wrote:
> Does varnish expose any API that would let a VMOD iterate over all of the header key-value pairs? I know there's VRT_GetHdr, which allows you to look up a single header, but I haven't found anything for accessing all the headers.
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

-- 
Dag Haavi Finstad
Software Developer | Varnish Software
Mobile: +47 476 64 134

We Make Websites Fly!

-------------- next part --------------
An HTML attachment was scrubbed...
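Dag's description of the header array can be exercised outside varnishd with a small mock. The structs below only imitate the layout he describes (hd[u].b pointing at each "Header: value" string); the field names, array size, and the number of pseudo-header slots are assumptions for illustration, not the real varnishd definitions.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <strings.h>

/* Mock of the layout described above.  NOT the real struct http from
 * varnishd; it only mimics the hd[u].b part Dag mentions. */
struct txt_mock { const char *b; const char *e; };

#define MOCK_MAX_HDR 16
#define MOCK_HDR_FIRST 3 /* assumed: first slots hold request line etc. */

struct http_mock {
    unsigned nhd;                       /* number of used slots */
    struct txt_mock hd[MOCK_MAX_HDR];   /* "Header: value" strings */
};

/* Count real headers, skipping pseudo-header slots and deleted
 * (NULL) entries, the way a VMOD iteration loop would. */
static unsigned
count_headers(const struct http_mock *hp)
{
    unsigned u, n = 0;

    for (u = MOCK_HDR_FIRST; u < hp->nhd; u++)
        if (hp->hd[u].b != NULL)
            n++;
    return (n);
}

/* Find a header by name, returning a pointer to the text after the
 * colon (note: any leading space of the value is preserved). */
static const char *
find_header(const struct http_mock *hp, const char *name)
{
    unsigned u;
    size_t l = strlen(name);

    for (u = MOCK_HDR_FIRST; u < hp->nhd; u++) {
        if (hp->hd[u].b == NULL)
            continue;
        if (strncasecmp(hp->hd[u].b, name, l) == 0 && hp->hd[u].b[l] == ':')
            return (hp->hd[u].b + l + 1);
    }
    return (NULL);
}
```

In a real VMOD the same loop shape applies, but the struct, the bounds, and the first-header offset come from the varnishd headers rather than these mocks.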
URL: 

From sreeranj4droid at gmail.com Mon May 29 16:15:28 2017
From: sreeranj4droid at gmail.com (sreeranj s)
Date: Mon, 29 May 2017 21:45:28 +0530
Subject: Retaining some cookies and receiving hits
Message-ID: 

Hi,

As per the link https://varnish-cache.org/docs/4.0/users-guide/increasing-your-hitrate.html, the following code will help us to retain COOKIE1 and COOKIE2 but strip other cookies, so only COOKIE1 and COOKIE2 are sent to the backend. I have the following questions.

1) By default varnish will not cache if there is a cookie present in the request or a set-cookie value in the server response. In the following case we have retained COOKIE1 and COOKIE2, but I still see varnish cache the responses (I have unset the cookie from backend responses). Could you please let me know the reason?

2) If the approach is OK, please advise on any issues related to this approach.

3) I am not adding any specific value in the hash block, so requests are cached only based on req-url or IP. I hope that is right.

=================================

sub vcl_recv {
    if (req.http.Cookie) {
        set req.http.Cookie = ";" + req.http.Cookie;
        set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
        set req.http.Cookie = regsuball(req.http.Cookie, ";(COOKIE1|COOKIE2)=", "; \1=");
        set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
        set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");

        if (req.http.Cookie == "") {
            unset req.http.Cookie;
        }
    }
}

=================================

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sreeranj4droid at gmail.com Tue May 30 04:40:18 2017
From: sreeranj4droid at gmail.com (sreeranj s)
Date: Tue, 30 May 2017 10:10:18 +0530
Subject: Retaining some cookies and receiving hits
In-Reply-To: 
References: 
Message-ID: 

Let me reiterate the question.

By default varnish will not cache if there is a cookie present in the request or a set-cookie value in the server response. In the following case we have retained COOKIE1 and COOKIE2, but can varnish still cache the responses (I have unset the cookie from backend responses) by returning hash in vcl_recv? No changes to vcl_hash are made, so caching is based on req_url.

Please advise on any issues with this approach.

***************************************

sub vcl_recv {
    if (req.http.Cookie) {
        set req.http.Cookie = ";" + req.http.Cookie;
        set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
        set req.http.Cookie = regsuball(req.http.Cookie, ";(COOKIE1|COOKIE2)=", "; \1=");
        set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
        set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");

        if (req.http.Cookie == "") {
            unset req.http.Cookie;
            return (pass);
        }
        return (hash);
    }
}

***************************************

On Mon, May 29, 2017 at 9:45 PM, sreeranj s wrote:
> Hi,
>
> As per the link https://varnish-cache.org/docs/4.0/users-guide/increasing-your-hitrate.html, the following code will help us to retain COOKIE1 and COOKIE2 but strip other cookies, so only COOKIE1 and COOKIE2 are sent to the backend. I have the following questions.
>
> 1) By default varnish will not cache if there is a cookie present in the request or a set-cookie value in the server response. In the following case we have retained COOKIE1 and COOKIE2, but I still see varnish cache the responses (I have unset the cookie from backend responses). Could you please let me know the reason?
>
> 2) If the approach is OK, please advise on any issues related to this approach.
>
> 3) I am not adding any specific value in the hash block, so requests are cached only based on req-url or IP. I hope that is right.
>
> =================================
>
> sub vcl_recv {
>     if (req.http.Cookie) {
>         set req.http.Cookie = ";" + req.http.Cookie;
>         set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
>         set req.http.Cookie = regsuball(req.http.Cookie, ";(COOKIE1|COOKIE2)=", "; \1=");
>         set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
>         set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
>
>         if (req.http.Cookie == "") {
>             unset req.http.Cookie;
>         }
>     }
> }
>
> =================================

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guillaume at varnish-software.com Tue May 30 07:38:42 2017
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Tue, 30 May 2017 09:38:42 +0200
Subject: Retaining some cookies and receiving hits
In-Reply-To: 
References: 
Message-ID: 

Hi,

As you said, the hashing is only based on the URL (and host/IP). That means Varnish can cache "/account.html" with cookie "user=alice" and deliver it to the request "/account.html" with cookie "user=bob"; is that an issue?

I highly recommend using the cookie vmod to avoid the regex madness. I'd also extract the cookies into their own headers and hash them unconditionally, giving something like:

sub vcl_recv {
    cookie.parse(req.http.cookie);
    set req.http.cookie1 = cookie.get("COOKIE1");
    set req.http.cookie2 = cookie.get("COOKIE2");
    unset req.http.cookie;
}

sub vcl_hash {
    hash_data(req.http.cookie1);
    hash_data(req.http.cookie2);
}

-- 
Guillaume Quintard

On Tue, May 30, 2017 at 6:40 AM, sreeranj s wrote:
> Let me reiterate the question.
>
> By default varnish will not cache if there is a cookie present in the request or a set-cookie value in the server response. In the following case we have retained COOKIE1 and COOKIE2, but can varnish still cache the responses (I have unset the cookie from backend responses) by returning hash in vcl_recv? No changes to vcl_hash are made, so caching is based on req_url.
>
> Please advise on any issues with this approach.
>
> ***************************************
> sub vcl_recv {
>     if (req.http.Cookie) {
>         set req.http.Cookie = ";" + req.http.Cookie;
>         set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
>         set req.http.Cookie = regsuball(req.http.Cookie, ";(COOKIE1|COOKIE2)=", "; \1=");
>         set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
>         set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
>
>         if (req.http.Cookie == "") {
>             unset req.http.Cookie;
>             return (pass);
>         }
>         return (hash);
>     }
> }
> ***************************************
>
> On Mon, May 29, 2017 at 9:45 PM, sreeranj s wrote:
>> Hi,
>>
>> As per the link https://varnish-cache.org/docs/4.0/users-guide/increasing-your-hitrate.html, the following code will help us to retain COOKIE1 and COOKIE2 but strip other cookies, so only COOKIE1 and COOKIE2 are sent to the backend. I have the following questions.
>>
>> 1) By default varnish will not cache if there is a cookie present in the request or a set-cookie value in the server response. In the following case we have retained COOKIE1 and COOKIE2, but I still see varnish cache the responses (I have unset the cookie from backend responses). Could you please let me know the reason?
>>
>> 2) If the approach is OK, please advise on any issues related to this approach.
>>
>> 3) I am not adding any specific value in the hash block, so requests are cached only based on req-url or IP. I hope that is right.
>>
>> =================================
>> sub vcl_recv {
>>     if (req.http.Cookie) {
>>         set req.http.Cookie = ";" + req.http.Cookie;
>>         set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
>>         set req.http.Cookie = regsuball(req.http.Cookie, ";(COOKIE1|COOKIE2)=", "; \1=");
>>         set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
>>         set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
>>
>>         if (req.http.Cookie == "") {
>>             unset req.http.Cookie;
>>         }
>>     }
>> }
>> =================================
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ciapnz at gmail.com Tue May 30 11:06:26 2017
From: ciapnz at gmail.com (Danila Vershinin)
Date: Tue, 30 May 2017 14:06:26 +0300
Subject: Retaining some cookies and receiving hits
In-Reply-To: 
References: 
Message-ID: 

Hi,

Why unset req.http.cookie;?

Wouldn't this require applications to read $_SERVER['HTTP_COOKIE1'] and $_SERVER['HTTP_COOKIE2'] as opposed to $_COOKIE? (I can see this might break PHP sessions working out of the box.) Or did I misunderstand?

Best Regards,
Danila

> On 30 May 2017, at 10:38, Guillaume Quintard wrote:
>
> Hi,
>
> As you said, the hashing is only based on the URL (and host/IP). That means Varnish can cache "/account.html" with cookie "user=alice" and deliver it to the request "/account.html" with cookie "user=bob"; is that an issue?
>
> I highly recommend using the cookie vmod to avoid the regex madness.
I'd also extract the cookies into their own headers and hash them inconditionally, giving something like: > > sub vcl_recv { > cookie.parse(req.http.cookie); > set req.http.cookie1 = cookie.get("COOKIE1"); > set req.http.cookie2 = cookie.get("COOKIE2"); > unset req.http.cookie; > } > > sub vcl_hash { > hash_data(req.http.cookie1); > hash_data(req.http.cookie2); > } > > -- > Guillaume Quintard > > On Tue, May 30, 2017 at 6:40 AM, sreeranj s > wrote: > Let me reiterate the question. > > By default varnish will not cache if there is cookie present in request or a set-cookie value is there in server response. In the following case we have retained COOKIE1 and |COOKIE2, but can varnish still caches the responses(I have the unset cookie from backend responses), by returning hash in vcl recv. No changes in vcl_hash is made, so caching is based on req_url. > > Please advise on any issues on this approach. > > *************************************** > sub vcl_recv { > if (req.http.Cookie) { > set req.http.Cookie = ";" + req.http.Cookie; > set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";"); > set req.http.Cookie = regsuball(req.http.Cookie, ";(COOKIE1|COOKIE2)=", "; \1="); > set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", ""); > set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", ""); > > if (req.http.Cookie == "") { > unset req.http.Cookie; > return(pass) > } > return(hash) > } > } > *************************************** > > > > On Mon, May 29, 2017 at 9:45 PM, sreeranj s > wrote: > Hi, > > As per the link https://varnish-cache.org/docs/4.0/users-guide/increasing-your-hitrate.html , following code will help us to retain COOKIE1 and |COOKIE2, but strip other cookies. So COOKIE1 and |COOKIE2 is send to backend. I have the following questions. > > 1) By default varnish will not cache if there is cookie present in request or a set-cookie value is there in server response. 
In the following case we have retained COOKIE1 and |COOKIE2, but I still have varnish caches the responses(I have the unset cookie from backend responses). Could you please let me know the reason. > > 2) If the approach is ok, please advise on any issues are related to this approach. > > 3) I am not adding any specific value in hash block, so requests are cached only based on req-url or IP. hope that is right. > > > ================================= > sub vcl_recv { > if (req.http.Cookie) { > set req.http.Cookie = ";" + req.http.Cookie; > set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";"); > set req.http.Cookie = regsuball(req.http.Cookie, ";(COOKIE1|COOKIE2)=", "; \1="); > set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", ""); > set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", ""); > > if (req.http.Cookie == "") { > unset req.http.Cookie; > } > } > } > > ================================= > > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Tue May 30 11:27:10 2017 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Tue, 30 May 2017 13:27:10 +0200 Subject: Retaining some cookies and receiving hits In-Reply-To: References: Message-ID: Hello Danila, Since hashing only the url, I assumed the cookie was of no interest to the backend, so I'm nuking it to benefit from builtin.vcl, ie. I don't have to return(hash). -- Guillaume Quintard On Tue, May 30, 2017 at 1:06 PM, Danila Vershinin wrote: > Hi, > > Why unset req.http.cookie; ? 
> > Wouldn?t this require applications to read $_SERVER[?HTTP_COOKIE1?] > and $_SERVER[?HTTP_COOKIE2?]. as opposed to $_COOKIE. (I can see this might > break PHP sessions working out of the box). > Or I misunderstood? > > Best Regards, > Danila > > On 30 May 2017, at 10:38, Guillaume Quintard com> wrote: > > Hi, > > As you said, the hashing is only based on the URL (and host/ip), that > means that Varnish can cache "/account.html" with cookie "user=alice" and > deliver it to the request "/account.html" with cookie "user=bob", is that > an issue? > > I highly recommend using vmod cookie to avoid the regex madness. I'd also > extract the cookies into their own headers and hash them inconditionally, > giving something like: > > sub vcl_recv { > cookie.parse(req.http.cookie); > set req.http.cookie1 = cookie.get("COOKIE1"); > set req.http.cookie2 = cookie.get("COOKIE2"); > unset req.http.cookie; > } > > sub vcl_hash { > hash_data(req.http.cookie1); > hash_data(req.http.cookie2); > } > > -- > Guillaume Quintard > > On Tue, May 30, 2017 at 6:40 AM, sreeranj s > wrote: > >> Let me reiterate the question. >> >> By default varnish will not cache if there is cookie present in request >> or a set-cookie value is there in server response. In the following case we >> have retained COOKIE1 and |COOKIE2, but can varnish still caches the >> responses(I have the unset cookie from backend responses), by returning >> hash in vcl recv. No changes in vcl_hash is made, so caching is based on >> req_url. >> >> Please advise on any issues on this approach. 
>> >> *************************************** >> sub vcl_recv { >> >> if (req.http.Cookie) { >> set req.http.Cookie = ";" + req.http.Cookie; >> set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";"); >> set req.http.Cookie = regsuball(req.http.Cookie, ";(COOKIE1|COOKIE2)=", "; \1="); >> set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", ""); >> set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", ""); >> >> if (req.http.Cookie == "") { >> unset req.http.Cookie; >> >> return(pass) >> } >> return(hash) >> >> }} >> >> *************************************** >> >> >> >> >> On Mon, May 29, 2017 at 9:45 PM, sreeranj s >> wrote: >> >>> Hi, >>> >>> As per the link https://varnish-cache.org/docs >>> /4.0/users-guide/increasing-your-hitrate.html, following code will help >>> us to retain COOKIE1 and |COOKIE2, but strip other cookies. So COOKIE1 and >>> |COOKIE2 is send to backend. I have the following questions. >>> >>> 1) By default varnish will not cache if there is cookie present in >>> request or a set-cookie value is there in server response. In the following >>> case we have retained COOKIE1 and |COOKIE2, but I still have varnish caches >>> the responses(I have the unset cookie from backend responses). Could you >>> please let me know the reason. >>> >>> 2) If the approach is ok, please advise on any issues are related to >>> this approach. >>> >>> 3) I am not adding any specific value in hash block, so requests are >>> cached only based on req-url or IP. hope that is right. 
>>> >>> >>> ================================= >>> >>> sub vcl_recv { >>> if (req.http.Cookie) { >>> set req.http.Cookie = ";" + req.http.Cookie; >>> set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";"); >>> set req.http.Cookie = regsuball(req.http.Cookie, ";(COOKIE1|COOKIE2)=", "; \1="); >>> set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", ""); >>> set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", ""); >>> >>> if (req.http.Cookie == "") { >>> unset req.http.Cookie; >>> } >>> }} >>> >>> >>> ================================= >>> >>> >> >> _______________________________________________ >> varnish-misc mailing list >> varnish-misc at varnish-cache.org >> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc >> > > _______________________________________________ > varnish-misc mailing list > varnish-misc at varnish-cache.org > https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dridi at varni.sh Tue May 30 12:49:52 2017 From: dridi at varni.sh (Dridi Boukelmoune) Date: Tue, 30 May 2017 14:49:52 +0200 Subject: Retaining some cookies and receiving hits In-Reply-To: References: Message-ID: On Tue, May 30, 2017 at 1:27 PM, Guillaume Quintard wrote: > Hello Danila, > > Since hashing only the url, I assumed the cookie was of no interest to the > backend, so I'm nuking it to benefit from builtin.vcl, ie. I don't have to > return(hash). On that note, a shameless plug: https://info.varnish-software.com/blog/yet-another-post-on-caching-vs-cookies Part 2 is written, still a draft that I need to revisit once my cookie madness gauge leaves the red zone (the gauge is currently over 9000 and may fail to block strong language). 
On the same topic:

https://info.varnish-software.com/blog/sticky-session-with-cookies

Cheers

From dridi at varni.sh Tue May 30 12:53:19 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Tue, 30 May 2017 14:53:19 +0200
Subject: Unexplained Cache MISSes
In-Reply-To:
References: <211c667a-ce70-6373-c840-4482c159e38c@sharphosting.uk>
Message-ID:

On Sat, May 27, 2017 at 9:30 PM, Nigel Peck wrote:
>
> Just another update on this, regarding how reliably I can reproduce. I have
> now updated my code for purging the cache, so it reports which purged
> objects received a HIT when they were retrieved with the restarted request
> after the purge, and it happens every time I purge a significant number of
> items from the cache, perhaps 10% of them or something like that. So I can
> reproduce reliably.

Dunno how you define "which purged objects received a HIT" but I would
enable Hash records to be able to compare them in the logs to first
make sure they actually are the same. See man varnishd, look for
vsl_mask.

Dridi

From dridi at varni.sh Wed May 31 14:49:24 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Wed, 31 May 2017 16:49:24 +0200
Subject: Unexplained Cache MISSes
In-Reply-To: <35dfe986-72dc-95f5-0319-9d0743aebbe4@sharphosting.uk>
References: <211c667a-ce70-6373-c840-4482c159e38c@sharphosting.uk>
 <35dfe986-72dc-95f5-0319-9d0743aebbe4@sharphosting.uk>
Message-ID:

On Tue, May 30, 2017 at 8:06 PM, Nigel Peck wrote:
>
> Thanks, I'll look at that to make sure. The situation is, if I purge an
> object, and that request is restarted in vcl_purge, it is by definition the
> same object, is it not? Since it is the exact same request. So what I'm
> saying is, about 10% of requests that I purge, receive a HIT on the restart
> of that same request.
>
> It's not because of intervening requests and I've not made any changes to
> vcl_hash, so it's a very simple hash that isn't being changed before the
> restart in vcl_purge.
It is possible that while the purge is happening another client
requests the same object and once the purge restarts into a GET or HIT
it gets a hit from the other client's request.

Grouping logs (by session or request) might help better understand
what's happening. I haven't read this thread in detail. Also please
keep the list in CC.

Cheers,
Dridi

From guillaume at varnish-software.com Wed May 31 16:25:35 2017
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Wed, 31 May 2017 18:25:35 +0200
Subject: Unexplained Cache MISSes
In-Reply-To:
References: <211c667a-ce70-6373-c840-4482c159e38c@sharphosting.uk>
 <35dfe986-72dc-95f5-0319-9d0743aebbe4@sharphosting.uk>
Message-ID:

On Wed, May 31, 2017 at 4:49 PM, Dridi Boukelmoune wrote:
>
> It is possible that while the purge is happening another client
> requests the same object and once the purge restarts into a GET or HIT
> it gets a hit from the other client's request.
>

I got that idea too, but the HIT after the purge returns an object with
a large age.

--
Guillaume Quintard

From np.lists at sharphosting.uk Wed May 31 17:45:14 2017
From: np.lists at sharphosting.uk (Nigel Peck)
Date: Wed, 31 May 2017 12:45:14 -0500
Subject: Unexplained Cache MISSes
In-Reply-To:
References: <211c667a-ce70-6373-c840-4482c159e38c@sharphosting.uk>
 <35dfe986-72dc-95f5-0319-9d0743aebbe4@sharphosting.uk>
Message-ID: <3fdcafb5-4000-3d64-478b-fb60baa9a783@sharphosting.uk>

On 31/05/2017 09:49, Dridi Boukelmoune wrote:
> On Tue, May 30, 2017 at 8:06 PM, Nigel Peck wrote:
>>
>> Thanks, I'll look at that to make sure. The situation is, if I purge an
>> object, and that request is restarted in vcl_purge, it is by definition the
>> same object, is it not? Since it is the exact same request. So what I'm
>> saying is, about 10% of requests that I purge, receive a HIT on the restart
>> of that same request.
>>
>> It's not because of intervening requests and I've not made any changes to
>> vcl_hash, so it's a very simple hash that isn't being changed before the
>> restart in vcl_purge.
>
> It is possible that while the purge is happening another client
> requests the same object and once the purge restarts into a GET or HIT
> it gets a hit from the other client's request.
>
> Grouping logs (by session or request) might help better understand
> what's happening. I haven't read this thread in detail. Also please
> keep the list in CC.

Sorry about missing the list off the CC, it was an oversight on my
part; I must have hit the wrong button and missed that I did that.

It's not due to other requests happening in between. As Guillaume says
the age is high, and also I checked many of these in varnishlog by
looking at all entries for the URL and there is never a request in
between.

Perhaps a good next step would be for me to set up a minimal install on
a fresh CentOS 7 instance and see if I can reproduce there with minimal
VCL? Although there is nothing in the VCL I have that should cause this
intermittent behaviour.

Nigel

From dridi at varni.sh Wed May 31 23:21:54 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Thu, 1 Jun 2017 01:21:54 +0200
Subject: Unexplained Cache MISSes
In-Reply-To:
References: <211c667a-ce70-6373-c840-4482c159e38c@sharphosting.uk>
 <35dfe986-72dc-95f5-0319-9d0743aebbe4@sharphosting.uk>
Message-ID:

On Wed, May 31, 2017 at 6:25 PM, Guillaume Quintard wrote:
>
>
> On Wed, May 31, 2017 at 4:49 PM, Dridi Boukelmoune wrote:
>>
>> It is possible that while the purge is happening another client
>> requests the same object and once the purge restarts into a GET or HIT
>> it gets a hit from the other client's request.
>
>
> I got that idea too, but the HIT after the purge returns an object with
> a large age.

The age is something that could come from the backend. Does the VXID
match the one that was just purged when a restart gets a hit?
Dridi

From dridi at varni.sh Wed May 31 23:33:01 2017
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Thu, 1 Jun 2017 01:33:01 +0200
Subject: Unexplained Cache MISSes
In-Reply-To: <3fdcafb5-4000-3d64-478b-fb60baa9a783@sharphosting.uk>
References: <211c667a-ce70-6373-c840-4482c159e38c@sharphosting.uk>
 <35dfe986-72dc-95f5-0319-9d0743aebbe4@sharphosting.uk>
 <3fdcafb5-4000-3d64-478b-fb60baa9a783@sharphosting.uk>
Message-ID:

> Sorry about missing the list off the CC, was an oversight on my part, must
> have hit the wrong button and missed that I did that.

No worries, reading my previous email I must say that remark doesn't
look nice. I wrote it in a rush like I often do on this list.

> It's not due to other requests happening between. As Guillaume says the age
> is high, and also I checked many of these in varnishlog by looking at all
> entries for the URL and there is never a request in between.

There's no ordering guarantee in the varnishlog output, although they
should likely be ordered since they share the same hash. You'd need to
check the Timestamp records to get a grasp of chronology.

> Perhaps a good next step would be for me to set up a minimal install on a
> fresh CentOS 7 instance and see if I can reproduce there with minimal VCL?
> Although there is nothing in the VCL I have that should cause this
> intermittent behaviour.

If it's a bug, it might be one of those hard to reproduce... Amazingly
enough I never looked at the logs of a purge, maybe ExpKill could give
us a VXID to then check against the hit. If only SomeoneElse(tm) could
spare me the time and look at it themselves and tell us (wink wink=).

Cheers
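
[For readers who want to reproduce the setup being debugged in this
thread, here is a minimal VCL sketch of the purge-and-restart pattern
under discussion. This is an illustration, not Nigel's actual
configuration: the `purgers` ACL and its address are assumptions, and
the fragment presumes Varnish 4.x with a backend defined elsewhere in
the VCL.]

```vcl
vcl 4.0;

# Hypothetical ACL restricting who may send PURGE requests.
acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "Not allowed"));
        }
        return (purge);
    }
}

sub vcl_purge {
    # After the object is evicted, restart the same request as a GET so
    # a fresh copy is fetched immediately. The restart is expected to
    # MISS; the intermittent HITs reported above are the anomaly.
    set req.method = "GET";
    return (restart);
}
```

With request grouping (`varnishlog -g request`), the PURGE and its
restarted GET appear in the same transaction group, which makes it
easier to follow the chronology via Timestamp records and to compare
the hit's object against the ExpKill records, as suggested above.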