From varnish-bugs at varnish-cache.org Mon Sep 3 10:02:02 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 03 Sep 2012 10:02:02 -0000 Subject: [Varnish] #1193: varnishstat displays values for wrong attribute In-Reply-To: <046.b8e3d0b37e65980902d5b23b5c58a637@varnish-cache.org> References: <046.b8e3d0b37e65980902d5b23b5c58a637@varnish-cache.org> Message-ID: <061.c2a04e47fc87c804287d9e3aee18bd45@varnish-cache.org> #1193: varnishstat displays values for wrong attribute -------------------------+--------------------- Reporter: macquist | Owner: martin Type: defect | Status: new Priority: normal | Milestone: Component: varnishstat | Version: 3.0.3 Severity: normal | Resolution: Keywords: | -------------------------+--------------------- Changes (by martin): * owner: => martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 3 10:08:48 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 03 Sep 2012 10:08:48 -0000 Subject: [Varnish] #1193: varnishstat displays values for wrong attribute In-Reply-To: <046.b8e3d0b37e65980902d5b23b5c58a637@varnish-cache.org> References: <046.b8e3d0b37e65980902d5b23b5c58a637@varnish-cache.org> Message-ID: <061.58e72fda58ff532cd299fbe4f6cef595@varnish-cache.org> #1193: varnishstat displays values for wrong attribute -------------------------+------------------------- Reporter: macquist | Owner: martin Type: defect | Status: closed Priority: normal | Milestone: Component: varnishstat | Version: 3.0.3 Severity: normal | Resolution: worksforme Keywords: | -------------------------+------------------------- Changes (by martin): * status: new => closed * resolution: => worksforme Comment: This looks like a binary mismatch problem, where you are using the old binaries to read the values from the newer varnishd. This can cause the wrong values to be returned because the value offsets have shifted. Please check the versions and make sure the programs have been restarted.
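For anyone hitting the same symptom, the diagnosis above can be checked quickly by comparing the versions the tools report and restarting everything after an upgrade. A minimal sketch (assuming the Varnish binaries are on PATH; the package-query commands are illustrative and vary by distribution):

```shell
# Both tools print their version banner; after an upgrade they must match,
# since varnishstat reads counters from varnishd's shared memory by offset.
varnishd -V
varnishstat -V

# On Debian-style systems, check that the library and daemon packages agree
# (use "rpm -qa | grep varnish" on RHEL-style systems).
dpkg -l | grep -i varnish

# Restart varnishd and any long-running varnishstat/varnishlog processes
# so that every program maps the shared memory segment with the same layout.
service varnish restart
```
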
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 3 10:13:50 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 03 Sep 2012 10:13:50 -0000 Subject: [Varnish] #1174: Varnish should warn about timeouts exceeded in varnishlog In-Reply-To: <045.4cb88af9abb92ce981897a441af11508@varnish-cache.org> References: <045.4cb88af9abb92ce981897a441af11508@varnish-cache.org> Message-ID: <060.63f228feb4527b4b85643748a2c94a86@varnish-cache.org> #1174: Varnish should warn about timeouts exceeded in varnishlog -------------------------+------------------------------ Reporter: derjohn | Owner: Type: enhancement | Status: closed Priority: normal | Milestone: Varnish 3.0 dev Component: build | Version: 3.0.0 Severity: normal | Resolution: fixed Keywords: | -------------------------+------------------------------ Changes (by martin): * status: new => closed * resolution: => fixed Comment: Timeout debug messages have been implemented in the log. This is present in the latest 3.0.3 release. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 3 10:34:32 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 03 Sep 2012 10:34:32 -0000 Subject: [Varnish] #940: ETag for gzip'd variant identical to ETag of ungzipped variant. In-Reply-To: <043.d3f0122c5f2e8a8e1ed36783de6aff0a@varnish-cache.org> References: <043.d3f0122c5f2e8a8e1ed36783de6aff0a@varnish-cache.org> Message-ID: <058.6c618ab23c3c9f934d8dfb76d446b683@varnish-cache.org> #940: ETag for gzip'd variant identical to ETag of ungzipped variant.
----------------------+--------------------- Reporter: david | Owner: martin Type: defect | Status: new Priority: high | Milestone: Component: build | Version: 3.0.0 Severity: critical | Resolution: Keywords: | ----------------------+--------------------- Changes (by martin): * owner: => martin Comment: First mission: document some different scenarios and the expected behavior, so we can all agree on that behavior. Look specifically at mangling the Etag - and see that the worst case outcome will be a 200 where a 304 could've been possible. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 3 15:55:15 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 03 Sep 2012 15:55:15 -0000 Subject: [Varnish] #1119: Varnish 3.0.2 freezed, Pushing vcls failed:#012CLI communication error (hdr) In-Reply-To: <047.bcc0e94d7c2ef8aaaa3912631b629b9e@varnish-cache.org> References: <047.bcc0e94d7c2ef8aaaa3912631b629b9e@varnish-cache.org> Message-ID: <062.66c6e4fd68eb814cfd3429dfbbeeda01@varnish-cache.org> #1119: Varnish 3.0.2 freezed, Pushing vcls failed:#012CLI communication error (hdr) -------------------------------------------------+------------------------- Reporter: campisano | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.2 Severity: blocker | Resolution: worksforme Keywords: child died pushing vcls failed | #012CLI communication error (hdr) | -------------------------------------------------+------------------------- Comment (by campisano): Replying to [comment:12 lampe]: > the child blocks on a write to the shared memory file when cached disk writes exceed a certain amount and the disk is busy. With large RAM, the disk sync can take several seconds. If the child blocks long enough, the VCL upload from the master process times out and the child is terminated but not restarted. > Putting the shm log file on tmpfs resolved it for me. 
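The tmpfs workaround quoted above can be sketched as follows. The mount point and sizes are illustrative, not from the ticket; the actual working directory depends on how varnishd was started (its -n option), and the tmpfs must be larger than the shm log file (roughly 80 MB by default in the 3.0 series):

```shell
# Mount tmpfs over the Varnish working directory that holds the shm log,
# so log writes never touch a physical disk:
mount -t tmpfs -o size=128m tmpfs /var/lib/varnish

# Or persist it across reboots with an /etc/fstab entry:
#   tmpfs  /var/lib/varnish  tmpfs  defaults,size=128m  0  0

# The kernel writeback knobs discussed later in this thread are set with:
sysctl -w vm.dirty_background_bytes=536870912   # start background writeback at 512 MB
sysctl -w vm.dirty_ratio=50                     # allow dirty pages up to 50% of RAM
```
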
No physical disk, thus no waiting on the I/O scheduler. The Varnish Book explicitly recommends that the log must not cause physical disk I/O. I understand, so the best approach appears to be using RAM directly, or tmpfs. But I don't have sufficient RAM to give to the Varnish log, so what happens when Varnish wants to store something and has no more space to write? I suppose it discards something old, makes the call to Apache, and everything keeps working; am I right? However, why use tmpfs when RAM can be used directly? Varnish could have a little unnecessary overhead with tmpfs, I suppose, or am I wrong? > I'll also experiment with lowering /proc/sys/vm/dirty_background_bytes to <1GB and increasing dirty_ratio to 50%. See http://www.kernel.org/doc/Documentation/sysctl/vm.txt for a description of these linux kernel parameters. Nice, but that applies to all processes on the server... However, did you get good results with these settings (without using tmpfs)? Thank you, lampe. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 3 22:07:45 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 03 Sep 2012 22:07:45 -0000 Subject: [Varnish] #1194: segfault on newer fedoras, ppc64 Message-ID: <044.6db4509878cd300aa5f6bf2224bb7e71@varnish-cache.org> #1194: segfault on newer fedoras, ppc64 --------------------+------------------- Reporter: ingvar | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Keywords: --------------------+------------------- On newer versions of Fedora (f17, f18, rawhide), on ppc64, varnishd segfaults during the test suite as early as tests/b00000.vtc. Note that the full test suite runs without problems on epel6/ppc64 (based on Fedora 12/13). Also note that 32-bit ppc runs the full test suite on those recent Fedora versions.
Test suite fail example: http://ppc.koji.fedoraproject.org/koji/getfile?taskID=694549&name=build.log gdb backtraces http://users.linpro.no/ingvar/varnish/3.0.3/3.0.3-1/ I have a running fedora 17/ppc64 environment for testing at the lab. Ingvar -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 4 06:18:10 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 04 Sep 2012 06:18:10 -0000 Subject: [Varnish] #1191: pcre jit does not work on i386 In-Reply-To: <044.1dde28fc30c5b94e77c2c0f12d0415da@varnish-cache.org> References: <044.1dde28fc30c5b94e77c2c0f12d0415da@varnish-cache.org> Message-ID: <059.9b1298904b2b88581e6ec073c3efd734@varnish-cache.org> #1191: pcre jit does not work on i386 --------------------+-------------------- Reporter: ingvar | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: Keywords: | --------------------+-------------------- Comment (by Tollef Fog Heen ): In [791706d8e535fe1573a6ea5cd515113ddacbac52]: {{{ #!CommitTicketReference repository="" revision="791706d8e535fe1573a6ea5cd515113ddacbac52" Disable the PCRE JIT compiler by default The JIT compiler is broken on some versions of PCRE, at least on i386, so disable it by default. It can be enabled using --enable-pcre-jit to configure. 
Fixes #1191 }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 4 06:18:12 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 04 Sep 2012 06:18:12 -0000 Subject: [Varnish] #1191: pcre jit does not work on i386 In-Reply-To: <044.1dde28fc30c5b94e77c2c0f12d0415da@varnish-cache.org> References: <044.1dde28fc30c5b94e77c2c0f12d0415da@varnish-cache.org> Message-ID: <059.530355fac0aa581dbde948cfbe41afe5@varnish-cache.org> #1191: pcre jit does not work on i386 --------------------+--------------------- Reporter: ingvar | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: fixed Keywords: | --------------------+--------------------- Changes (by Tollef Fog Heen ): * status: new => closed * resolution: => fixed Comment: (In [791706d8e535fe1573a6ea5cd515113ddacbac52]) Disable the PCRE JIT compiler by default The JIT compiler is broken on some versions of PCRE, at least on i386, so disable it by default. It can be enabled using --enable-pcre-jit to configure. 
Fixes #1191 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 6 12:19:26 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 06 Sep 2012 12:19:26 -0000 Subject: [Varnish] #1167: 3.0.3rc1 Compile Error on Solaris 10 with gcc 4.3.3 In-Reply-To: <044.677261b72c12472e07a7cff6c933fe66@varnish-cache.org> References: <044.677261b72c12472e07a7cff6c933fe66@varnish-cache.org> Message-ID: <059.2a81e5703cd06f5851a4fbbf9e11b07f@varnish-cache.org> #1167: 3.0.3rc1 Compile Error on Solaris 10 with gcc 4.3.3 --------------------------+-------------------- Reporter: Dommas | Owner: slink Type: defect | Status: new Priority: normal | Milestone: Component: port:solaris | Version: 3.0.3 Severity: normal | Resolution: Keywords: | --------------------------+-------------------- Changes (by phk): * version: trunk => 3.0.3 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 6 12:23:03 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 06 Sep 2012 12:23:03 -0000 Subject: [Varnish] #856: nm -an hack does not work on solaris In-Reply-To: <043.5c20ff94dcf8d062877180d2c9fb33f0@varnish-cache.org> References: <043.5c20ff94dcf8d062877180d2c9fb33f0@varnish-cache.org> Message-ID: <058.73e0313776616f443a9dcb00e3e951cb@varnish-cache.org> #856: nm -an hack does not work on solaris --------------------------+------------------------- Reporter: slink | Owner: slink Type: defect | Status: closed Priority: lowest | Milestone: Component: port:solaris | Version: trunk Severity: trivial | Resolution: worksforme Keywords: Solaris | --------------------------+------------------------- Changes (by phk): * status: new => closed * resolution: => worksforme Comment: Just checked backtrace in OmnitiOS using gcc compiler and it looks as sane as one can hope for. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Thu Sep 6 15:51:44 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Thu, 06 Sep 2012 15:51:44 -0000 Subject: [Varnish] #1193: varnishstat displays values for wrong attribute In-Reply-To: <046.b8e3d0b37e65980902d5b23b5c58a637@varnish-cache.org> References: <046.b8e3d0b37e65980902d5b23b5c58a637@varnish-cache.org> Message-ID: <061.c8f51e242a2527b1390702cc019f655f@varnish-cache.org> #1193: varnishstat displays values for wrong attribute -------------------------+------------------------- Reporter: macquist | Owner: martin Type: defect | Status: closed Priority: normal | Milestone: Component: varnishstat | Version: 3.0.3 Severity: normal | Resolution: worksforme Keywords: | -------------------------+------------------------- Comment (by macquist): Replying to [comment:3 martin]: > This looks like a binary mismatch problem, where you are using the old binaries to read the values from the newer varnishd. This can cause the wrong values to be returned because the value offsets have shifted. Please check the versions and make sure the programs have been restarted.
You were right, we had a version mismatch between lib and program package: * libvarnishapi1 3.0.2-1 * varnish 3.0.3-1 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Sep 14 20:10:02 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 14 Sep 2012 20:10:02 -0000 Subject: [Varnish] #1195: varnishd child will not start Message-ID: <044.48db3ff1db939e16a6de9b07a5cee214@varnish-cache.org> #1195: varnishd child will not start ---------------------------------------+---------------------- Reporter: chrcol | Type: defect Status: new | Priority: high Milestone: | Component: varnishd Version: 3.0.2 | Severity: critical Keywords: child process start crash | ---------------------------------------+---------------------- Hi I run varnish successfully on multiple servers; I then tried to install varnish on a virtual machine for a dev environment. The VCL config is the same as what works on the other machines, with only the backends changed. Upon starting varnishd we discovered that it wasn't responding to requests even though the process was up, and after some debugging it turns out the child process is crashing immediately on start. Here is the output from debug. start child (20749) Started Pushing vcls failed: CLI communication error (hdr) Stopping Child 200 0 Child (20749) died signal=11 Child (-1) said Child starts Child cleanup complete Current param.show output follows; these aren't all normal settings: here I raised various timeouts and reduced malloc and thread sizes to try to get it working.
param.show 200 3094 acceptor_sleep_decay 0.900000 [] acceptor_sleep_incr 0.001000 [s] acceptor_sleep_max 0.050000 [s] auto_restart on [bool] ban_dups on [bool] ban_lurker_sleep 0.010000 [s] between_bytes_timeout 60.000000 [s] cc_command "exec gcc -std=gnu99 -pthread -fpic -shared -Wl,-x -o %o %s" cli_buffer 8192 [bytes] cli_timeout 60 [seconds] clock_skew 10 [s] connect_timeout 10.000000 [s] critbit_cooloff 180.000000 [s] default_grace 10.000000 [seconds] default_keep 0.000000 [seconds] default_ttl 14400.000000 [seconds] diag_bitmap 0x0 [bitmap] esi_syntax 0 [bitmap] expiry_sleep 1.000000 [seconds] fetch_chunksize 128 [kilobytes] fetch_maxchunksize 262144 [kilobytes] first_byte_timeout 10.000000 [s] group root (0) gzip_level 6 [] gzip_memlevel 8 [] gzip_stack_buffer 32768 [Bytes] gzip_tmp_space 0 [] gzip_window 15 [] http_gzip_support on [bool] http_max_hdr 64 [header lines] http_range_support on [bool] http_req_hdr_len 8192 [bytes] http_req_size 32768 [bytes] http_resp_hdr_len 8192 [bytes] http_resp_size 32768 [bytes] listen_address 0.0.0.0:80 listen_depth 1024 [connections] log_hashstring off [bool] log_local_address off [bool] lru_interval 2 [seconds] max_esi_depth 5 [levels] max_restarts 4 [restarts] nuke_limit 50 [allocations] ping_interval 3 [seconds] pipe_timeout 10 [seconds] prefer_ipv6 off [bool] queue_max 100 [%] rush_exponent 3 [requests per request] saintmode_threshold 10 [objects] send_timeout 60 [seconds] sess_timeout 10 [seconds] sess_workspace 16384 [bytes] session_linger 50 [ms] session_max 100000 [sessions] shm_reclen 255 [bytes] shm_workspace 8192 [bytes] shortlived 10.000000 [s] syslog_cli_traffic on [bool] thread_pool_add_delay 20 [milliseconds] thread_pool_add_threshold 2 [requests] thread_pool_fail_delay 200 [milliseconds] thread_pool_max 500 [threads] thread_pool_min 100 [threads] thread_pool_purge_delay 1000 [milliseconds] thread_pool_stack 16384 [bytes] thread_pool_timeout 300 [seconds] thread_pool_workspace 65536 [bytes] thread_pools 1 
[pools] thread_stats_rate 10 [requests] user nobody (65534) vcc_err_unref on [bool] vcl_dir /usr/local/etc/varnish vcl_trace off [bool] vmod_dir /usr/local/lib/varnish/vmods waiter default (epoll, poll) The OS is Debian, and I gave the virtual machine 2 GB of RAM. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Sep 15 11:25:05 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 15 Sep 2012 11:25:05 -0000 Subject: [Varnish] #1196: TEST ./tests/b00028.vtc FAILED Message-ID: <050.7dfb1133942f8dc4a563cd156da4525a@varnish-cache.org> #1196: TEST ./tests/b00028.vtc FAILED --------------------------+-------------------- Reporter: plamenpetrov | Type: defect Status: new | Priority: normal Milestone: | Component: build Version: 3.0.3 | Severity: normal Keywords: | --------------------------+-------------------- Following the docs install-from-source guide: Before you install, you may want to run the regression tests; make a cup of tea while it runs, it takes some minutes: {{{ make check }}} Don't worry if a single test or two fails; some of the tests are a bit too timing sensitive (please tell us which, so we can fix it), but if a lot of them fail, and in particular if the b00000.vtc test fails, something is horribly wrong, and you will get nowhere without figuring out what. .......
The "make check" command results in this: {{{ make[2]: Entering directory `/root/compile/varnish-3.0.3/bin/varnishtest' ./varnishtest -i -j3 ./tests/*.vtc # top TEST ./tests/a00000.vtc passed (0.021) # top TEST ./tests/a00002.vtc passed (0.027) # top TEST ./tests/a00001.vtc passed (0.052) # top TEST ./tests/a00003.vtc passed (0.043) # top TEST ./tests/a00005.vtc passed (0.022) # top TEST ./tests/a00004.vtc passed (0.040) # top TEST ./tests/a00006.vtc passed (0.038) # top TEST ./tests/a00007.vtc passed (0.025) # top TEST ./tests/a00010.vtc passed (0.020) # top TEST ./tests/a00011.vtc passed (0.011) # top TEST ./tests/a00012.vtc passed (0.010) # top TEST ./tests/a00009.vtc passed (0.661) # top TEST ./tests/b00000.vtc passed (1.098) # top TEST ./tests/b00001.vtc passed (0.773) # top TEST ./tests/a00008.vtc passed (1.815) # top TEST ./tests/b00002.vtc passed (0.956) # top TEST ./tests/b00003.vtc passed (0.871) # top TEST ./tests/b00005.vtc passed (0.888) # top TEST ./tests/b00006.vtc passed (0.761) # top TEST ./tests/b00004.vtc passed (1.679) # top TEST ./tests/b00007.vtc passed (0.836) # top TEST ./tests/b00009.vtc passed (0.889) # top TEST ./tests/b00008.vtc passed (1.383) # top TEST ./tests/b00010.vtc passed (0.760) # top TEST ./tests/b00011.vtc passed (0.850) # top TEST ./tests/b00012.vtc passed (0.935) # top TEST ./tests/b00013.vtc passed (0.892) # top TEST ./tests/b00014.vtc passed (1.065) # top TEST ./tests/b00016.vtc passed (1.102) # top TEST ./tests/b00017.vtc passed (0.746) # top TEST ./tests/b00018.vtc passed (0.739) # top TEST ./tests/b00015.vtc passed (2.607) # top TEST ./tests/b00019.vtc passed (0.958) # top TEST ./tests/b00020.vtc passed (3.355) # top TEST ./tests/b00022.vtc passed (3.298) # top TEST ./tests/b00023.vtc passed (2.728) # top TEST ./tests/b00024.vtc passed (2.717) # top TEST ./tests/b00025.vtc passed (2.695) # top TEST ./tests/b00027.vtc passed (0.689) # top TEST ./tests/b00021.vtc passed (9.777) **** top 0.0 macro def 
varnishd=../varnishd/varnishd **** top 0.0 macro def pwd=/root/compile/varnish-3.0.3/bin/varnishtest **** top 0.0 macro def topbuild=/root/compile/varnish-3.0.3/bin/varnishtest/../.. **** top 0.0 macro def bad_ip=10.255.255.255 **** top 0.0 macro def tmpdir=/tmp/vtc.26365.1908605f * top 0.0 TEST ./tests/b00028.vtc starting *** top 0.0 varnishtest * top 0.0 TEST regexp match and no-match *** top 0.0 server ** s1 0.0 Starting server **** s1 0.0 macro def s1_addr=127.0.0.1 **** s1 0.0 macro def s1_port=44657 **** s1 0.0 macro def s1_sock=127.0.0.1 44657 * s1 0.0 Listen on 127.0.0.1 44657 *** top 0.0 varnish ** s1 0.0 Started on 127.0.0.1 44657 ** v1 0.0 Launch *** v1 0.0 CMD: cd ${pwd} && ${varnishd} -d -d -n /tmp/vtc.26365.1908605f/v1 -l 10m,1m,- -p auto_restart=off -p syslog_cli_traffic=off -a '127.0.0.1:0' -S /tmp/vtc.26365.1908605f/v1/_S -M '127.0.0.1 45289' -P /tmp/vtc.26365.1908605f/v1/varnishd.pid -sfile,/tmp/vtc.26365.1908605f/v1,10M *** v1 0.0 CMD: cd /root/compile/varnish-3.0.3/bin/varnishtest && ../varnishd/varnishd -d -d -n /tmp/vtc.26365.1908605f/v1 -l 10m,1m,- -p auto_restart=off -p syslog_cli_traffic=off -a '127.0.0.1:0' -S /tmp/vtc.26365.1908605f/v1/_S -M '127.0.0.1 45289' -P /tmp/vtc.26365.1908605f/v1/varnishd.pid -sfile,/tmp/vtc.26365.1908605f/v1,10M *** v1 0.0 PID: 28082 *** v1 0.2 debug| Platform: Linux,3.5.2-VMware,i686,-sfile,-smalloc,-hcritbit\n *** v1 0.2 debug| 200 236 \n *** v1 0.2 debug| -----------------------------\n *** v1 0.2 debug| Varnish Cache CLI 1.0\n *** v1 0.2 debug| -----------------------------\n *** v1 0.2 debug| Linux,3.5.2-VMware,i686,-sfile,-smalloc,-hcritbit\n *** v1 0.2 debug| \n *** v1 0.2 debug| Type 'help' for command list.\n *** v1 0.2 debug| Type 'quit' to close CLI session.\n *** v1 0.2 debug| Type 'start' to launch worker process.\n *** v1 0.2 debug| \n **** v1 0.3 CLIPOLL 1 0x1 0x0 *** v1 0.3 CLI connection fd = 9 *** v1 0.3 CLI RX 107 **** v1 0.3 CLI RX| mzrgpylmkcvryutwapbnzwocdxeirvbd\n **** v1 0.3 CLI RX| \n 
**** v1 0.3 CLI RX| Authentication required.\n **** v1 0.3 CLI TX| auth da18274e07a44b47640737b36e8d080e679e7f8d287b91c68770aaf649ef9093\n *** v1 0.3 CLI RX 200 **** v1 0.3 CLI RX| -----------------------------\n **** v1 0.3 CLI RX| Varnish Cache CLI 1.0\n **** v1 0.3 CLI RX| -----------------------------\n **** v1 0.3 CLI RX| Linux,3.5.2-VMware,i686,-sfile,-smalloc,-hcritbit\n **** v1 0.3 CLI RX| \n **** v1 0.3 CLI RX| Type 'help' for command list.\n **** v1 0.3 CLI RX| Type 'quit' to close CLI session.\n **** v1 0.3 CLI RX| Type 'start' to launch worker process.\n **** v1 0.3 CLI TX| vcl.inline vcl1 << %XJEIFLH|)Xspa8P\n **** v1 0.3 CLI TX| backend s1 { .host = "127.0.0.1"; .port = "44657"; }\n **** v1 0.3 CLI TX| \n **** v1 0.3 CLI TX| \n **** v1 0.3 CLI TX| \n **** v1 0.3 CLI TX| \tsub vcl_fetch {\n **** v1 0.3 CLI TX| \t\tif (beresp.http.foo ~ "bar") {\n **** v1 0.3 CLI TX| \t\t\tset beresp.http.foo1 = "1";\n **** v1 0.3 CLI TX| \t\t} else {\n **** v1 0.3 CLI TX| \t\t\terror 999;\n **** v1 0.3 CLI TX| \t\t}\n **** v1 0.3 CLI TX| \t\tif (beresp.http.bar !~ "bar") {\n **** v1 0.3 CLI TX| \t\t\tset beresp.http.bar1 = "2";\n **** v1 0.3 CLI TX| \t\t} else {\n **** v1 0.3 CLI TX| \t\t\terror 999;\n **** v1 0.3 CLI TX| \t\t}\n **** v1 0.3 CLI TX| \t}\n **** v1 0.3 CLI TX| \n **** v1 0.3 CLI TX| \n **** v1 0.3 CLI TX| %XJEIFLH|)Xspa8P\n *** v1 0.4 CLI RX 200 **** v1 0.4 CLI RX| VCL compiled. 
**** v1 0.4 CLI TX| vcl.use vcl1 *** v1 0.4 CLI RX 200 ** v1 0.4 Start **** v1 0.4 CLI TX| start *** v1 0.5 debug| child (28108) Started\n **** v1 0.5 vsl| 0 CLI - Rd vcl.load "vcl1" ./vcl.wqSbvggu.so **** v1 0.5 vsl| 0 CLI - Wr 200 36 Loaded "./vcl.wqSbvggu.so" as "vcl1" **** v1 0.5 vsl| 0 WorkThread - 0xb3efb010 start **** v1 0.5 vsl| 0 CLI - Rd vcl.use "vcl1" **** v1 0.5 vsl| 0 CLI - Wr 200 0 **** v1 0.5 vsl| 0 CLI - Rd start **** v1 0.5 vsl| 0 Debug - Acceptor is epoll **** v1 0.5 vsl| 0 CLI - Wr 200 0 **** v1 0.5 vsl| 0 WorkThread - 0xb22e8010 start *** v1 0.5 CLI RX 200 *** v1 0.5 wait-running **** v1 0.5 CLI TX| status *** v1 0.5 debug| Child (28108) said Child starts\n *** v1 0.5 debug| Child (28108) said SMF.s0 mmap'ed 10485760 bytes of 10485760\n **** v1 0.5 vsl| 0 WorkThread - 0xb1ad6010 start **** v1 0.5 vsl| 0 WorkThread - 0xb1ac5010 start **** v1 0.5 vsl| 0 WorkThread - 0xb1ab4010 start **** v1 0.5 vsl| 0 WorkThread - 0xb1aa3010 start **** v1 0.5 vsl| 0 WorkThread - 0xb1a92010 start **** v1 0.5 vsl| 0 WorkThread - 0xb1a81010 start **** v1 0.5 vsl| 0 WorkThread - 0xb1a70010 start **** v1 0.5 vsl| 0 WorkThread - 0xb1a5f010 start *** v1 0.5 CLI RX 200 **** v1 0.5 CLI RX| Child in state running **** v1 0.5 CLI TX| debug.xid 1000 *** v1 0.6 CLI RX 200 **** v1 0.6 CLI RX| XID is 1000 **** v1 0.6 CLI TX| debug.listen_address *** v1 0.6 CLI RX 200 **** v1 0.6 CLI RX| 127.0.0.1 53607\n ** v1 0.6 Listen on 127.0.0.1 53607 **** v1 0.6 macro def v1_addr=127.0.0.1 **** v1 0.6 macro def v1_port=53607 **** v1 0.6 macro def v1_sock=127.0.0.1 53607 *** top 0.6 client ** c1 0.6 Starting client ** c1 0.6 Waiting for client *** c1 0.6 Connect to 127.0.0.1 53607 *** c1 0.6 connected fd 10 from 127.0.0.1 48889 to 127.0.0.1 53607 *** c1 0.6 txreq **** c1 0.6 txreq| GET / HTTP/1.1\r\n **** c1 0.6 txreq| \r\n *** s1 0.6 accepted fd 4 *** c1 0.6 rxresp *** s1 0.6 rxreq **** s1 0.6 rxhdr| GET / HTTP/1.1\r\n **** s1 0.6 rxhdr| X-Forwarded-For: 127.0.0.1\r\n **** s1 0.6 rxhdr| 
X-Varnish: 1001\r\n **** s1 0.6 rxhdr| Accept-Encoding: gzip\r\n **** s1 0.6 rxhdr| Host: 127.0.0.1\r\n **** s1 0.6 rxhdr| \r\n **** s1 0.6 http[ 0] | GET **** s1 0.6 http[ 1] | / **** s1 0.6 http[ 2] | HTTP/1.1 **** s1 0.6 http[ 3] | X-Forwarded-For: 127.0.0.1 **** s1 0.6 http[ 4] | X-Varnish: 1001 **** s1 0.6 http[ 5] | Accept-Encoding: gzip **** s1 0.6 http[ 6] | Host: 127.0.0.1 **** s1 0.6 bodylen = 0 *** s1 0.6 txresp **** s1 0.6 txresp| HTTP/1.1 200 Ok\r\n **** s1 0.6 txresp| Foo: bar\r\n **** s1 0.6 txresp| Bar: foo\r\n **** s1 0.6 txresp| Content-Length: 5\r\n **** s1 0.6 txresp| \r\n **** s1 0.6 txresp| 1111\n *** s1 0.6 shutting fd 4 ** s1 0.6 Ending **** v1 0.6 vsl| 0 CLI - Rd debug.xid 1000 **** v1 0.6 vsl| 0 CLI - Wr 200 11 XID is 1000 **** v1 0.6 vsl| 0 CLI - Rd debug.listen_address **** v1 0.6 vsl| 0 CLI - Wr 200 16 127.0.0.1 53607 **** v1 0.6 vsl| 11 SessionOpen c 127.0.0.1 48889 127.0.0.1:0 **** v1 0.6 vsl| 11 ReqStart c 127.0.0.1 48889 1001 **** v1 0.6 vsl| 11 RxRequest c GET **** v1 0.6 vsl| 11 RxURL c / **** v1 0.6 vsl| 11 RxProtocol c HTTP/1.1 **** v1 0.6 vsl| 11 VCL_call c recv **** v1 0.6 vsl| 11 VCL_return c lookup **** v1 0.6 vsl| 11 VCL_call c hash **** v1 0.6 vsl| 11 Hash c / **** v1 0.6 vsl| 11 Hash c 127.0.0.1 **** v1 0.6 vsl| 11 VCL_return c hash **** v1 0.6 vsl| 11 VCL_call c miss **** v1 0.6 vsl| 11 VCL_return c fetch **** v1 0.6 vsl| 13 BackendOpen b s1 127.0.0.1 33413 127.0.0.1 44657 **** v1 0.6 vsl| 11 Backend c 13 s1 s1 **** v1 0.6 vsl| 13 TxRequest b GET **** v1 0.6 vsl| 13 TxURL b / **** v1 0.6 vsl| 13 TxProtocol b HTTP/1.1 **** v1 0.6 vsl| 13 TxHeader b X-Forwarded-For: 127.0.0.1 **** v1 0.6 vsl| 13 TxHeader b X-Varnish: 1001 **** v1 0.6 vsl| 13 TxHeader b Accept-Encoding: gzip **** v1 0.6 vsl| 13 TxHeader b Host: 127.0.0.1 ---- c1 0.6 HTTP rx EOF (fd:10 read: Success) *** v1 0.6 debug| Child (28108) died signal=11\n *** v1 0.6 debug| Child cleanup complete\n * top 0.6 RESETTING after ./tests/b00028.vtc ** s1 0.6 Waiting for 
server **** s1 0.6 macro undef s1_addr **** s1 0.6 macro undef s1_port **** s1 0.6 macro undef s1_sock ** v1 1.6 Wait ** v1 1.6 R 28082 Status: 0000 * top 1.7 TEST ./tests/b00028.vtc FAILED # top TEST ./tests/b00028.vtc FAILED (1.660) exit=1 make[2]: *** [check] Error 2 make[2]: Leaving directory `/root/compile/varnish-3.0.3/bin/varnishtest' make[1]: *** [check-recursive] Error 1 make[1]: Leaving directory `/root/compile/varnish-3.0.3/bin' make: *** [check-recursive] Error 1 }}} The host system is 32-bit CRUX linux 2.7, running on a VMware ESXi server virtual machine. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 17 07:54:43 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Sep 2012 07:54:43 -0000 Subject: [Varnish] #1129: ./configure does not check for curses.h In-Reply-To: <044.6650d3ebd4b7cc3d434c12fe191c4eab@varnish-cache.org> References: <044.6650d3ebd4b7cc3d434c12fe191c4eab@varnish-cache.org> Message-ID: <059.bae8f09d18ad0f8ca85b4bc7b6994dfd@varnish-cache.org> #1129: ./configure does not check for curses.h --------------------+---------------------- Reporter: ismell | Owner: tfheen Type: defect | Status: closed Priority: low | Milestone: Component: build | Version: 3.0.2 Severity: minor | Resolution: invalid Keywords: | --------------------+---------------------- Changes (by tfheen): * status: new => closed * resolution: => invalid Comment: No response from submitter, closing. 
-- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 17 07:55:35 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Sep 2012 07:55:35 -0000 Subject: [Varnish] #1049: Unbalanced {} in varnishncsa init script In-Reply-To: <043.b2723c63d08ceda97a3718abd8394083@varnish-cache.org> References: <043.b2723c63d08ceda97a3718abd8394083@varnish-cache.org> Message-ID: <058.ca9a19b23d2113e40a9c32f26e2fc5db@varnish-cache.org> #1049: Unbalanced {} in varnishncsa init script -----------------------+--------------------- Reporter: scoof | Owner: tfheen Type: defect | Status: closed Priority: low | Milestone: Component: packaging | Version: 3.0.2 Severity: normal | Resolution: fixed Keywords: | -----------------------+--------------------- Changes (by tfheen): * status: new => closed * resolution: => fixed Comment: .. and actually closing the ticket too. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 17 08:42:11 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Sep 2012 08:42:11 -0000 Subject: [Varnish] #1192: RHEL6: Init-script not giving correct startup In-Reply-To: <044.184ff331e3ed28f8eba95a1c3e7eeaf7@varnish-cache.org> References: <044.184ff331e3ed28f8eba95a1c3e7eeaf7@varnish-cache.org> Message-ID: <059.2a20d356de9ddf9a65760f16824e0d2e@varnish-cache.org> #1192: RHEL6: Init-script not giving correct startup ----------------------+----------------------- Reporter: Ueland | Owner: tfheen Type: defect | Status: assigned Priority: normal | Milestone: Component: varnishd | Version: 3.0.2 Severity: normal | Resolution: Keywords: | ----------------------+----------------------- Changes (by tfheen): * owner: => tfheen * status: new => assigned Old description: > Running Varnish 3.0.2 from RPM on a RHEL6 box, after a crash, varnish > would not start anymore, even then the init script says that it has > started. 
I have not yet figured out why varnish does not start. But the > main issue here is that the init script says "Ok" when it should say > "failed". > > > [root at dev ~]# rpm -qa|grep varnish > varnish-3.0.2-1.el5.x86_64 > varnish-libs-3.0.2-1.el5.x86_64 > (PS: Running the RHEL5-package on RHEL6, not sure if this is the problem > itself.) > > [root at dev ~]# cat /etc/redhat-release > Red Hat Enterprise Linux Server release 6.3 (Santiago) > > [root at dev ~]# service varnish start > Starting Varnish Cache: [ OK ] > [root at dev ~]# ps -ef|grep varnish > root 7718 4694 0 19:25 pts/0 00:00:00 grep varnish > > (nothing running from Varnish) > > dmesg: > Aug 30 19:25:22 dev varnishd[7715]: Platform: > Linux,2.6.32-279.el6.x86_64,x86_64,-sfile,-smalloc,-hcritbit > Aug 30 19:25:22 dev varnishd[7715]: Child start failed: could not open > sockets New description: Running Varnish 3.0.2 from RPM on a RHEL6 box, after a crash, varnish would not start anymore, even though the init script says that it has started. I have not yet figured out why varnish does not start. But the main issue here is that the init script says "Ok" when it should say "failed". {{{ [root at dev ~]# rpm -qa|grep varnish varnish-3.0.2-1.el5.x86_64 varnish-libs-3.0.2-1.el5.x86_64 }}} (PS: Running the RHEL5-package on RHEL6, not sure if this is the problem itself.)
{{{ [root at dev ~]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 6.3 (Santiago) [root at dev ~]# service varnish start Starting Varnish Cache: [ OK ] [root at dev ~]# ps -ef|grep varnish root 7718 4694 0 19:25 pts/0 00:00:00 grep varnish }}} (nothing running from Varnish) dmesg: {{{ Aug 30 19:25:22 dev varnishd[7715]: Platform: Linux,2.6.32-279.el6.x86_64,x86_64,-sfile,-smalloc,-hcritbit Aug 30 19:25:22 dev varnishd[7715]: Child start failed: could not open sockets }}} -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 17 08:43:44 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Sep 2012 08:43:44 -0000 Subject: [Varnish] #1189: configure script didn't check rst2man binary In-Reply-To: <050.727cc84c45ed73d4e5a4d5e3ce0441e5@varnish-cache.org> References: <050.727cc84c45ed73d4e5a4d5e3ce0441e5@varnish-cache.org> Message-ID: <065.1779b1eb0e8303a36237cb12161287d3@varnish-cache.org> #1189: configure script didn't check rst2man binary --------------------------+---------------------- Reporter: JonathanHuot | Owner: Type: defect | Status: closed Priority: low | Milestone: Component: build | Version: trunk Severity: minor | Resolution: invalid Keywords: | --------------------------+---------------------- Changes (by tfheen): * status: new => closed * resolution: => invalid Comment: Which it does. Read the configure output. 
:-) -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 17 10:06:01 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Sep 2012 10:06:01 -0000 Subject: [Varnish] #1195: varnishd child will not start In-Reply-To: <044.48db3ff1db939e16a6de9b07a5cee214@varnish-cache.org> References: <044.48db3ff1db939e16a6de9b07a5cee214@varnish-cache.org> Message-ID: <059.769a20d9186c2cb8cab30ca9a92e2b6d@varnish-cache.org> #1195: varnishd child will not start ---------------------------------------+-------------------- Reporter: chrcol | Owner: Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.2 Severity: critical | Resolution: Keywords: child process start crash | ---------------------------------------+-------------------- Description changed by martin: Old description: > Hi > > I run varnish successfully on multiple server's, I then tried to install > varnish on a virtual machine for a dev environment. > > The vcl config is the same as what works on other machines with only > changes for backends. > > Upon starting varnishd we discovered that it wasnt responding to requests > even tho the process was up and after some debugging it turns out the > child process is crashing immediatly on start. > > Here is the output from debug. > > start > child (20749) Started > Pushing vcls failed: > CLI communication error (hdr) > Stopping Child > 200 0 > > Child (20749) died signal=11 > Child (-1) said Child starts > Child cleanup complete > > Current param.show although these aren't all normal settings, here I > raised various timeouts and reduced malloc and thread size to try and get > it working. 
> > param.show > 200 3094 > acceptor_sleep_decay 0.900000 [] > acceptor_sleep_incr 0.001000 [s] > acceptor_sleep_max 0.050000 [s] > auto_restart on [bool] > ban_dups on [bool] > ban_lurker_sleep 0.010000 [s] > between_bytes_timeout 60.000000 [s] > cc_command "exec gcc -std=gnu99 -pthread -fpic -shared > -Wl,-x -o %o %s" > cli_buffer 8192 [bytes] > cli_timeout 60 [seconds] > clock_skew 10 [s] > connect_timeout 10.000000 [s] > critbit_cooloff 180.000000 [s] > default_grace 10.000000 [seconds] > default_keep 0.000000 [seconds] > default_ttl 14400.000000 [seconds] > diag_bitmap 0x0 [bitmap] > esi_syntax 0 [bitmap] > expiry_sleep 1.000000 [seconds] > fetch_chunksize 128 [kilobytes] > fetch_maxchunksize 262144 [kilobytes] > first_byte_timeout 10.000000 [s] > group root (0) > gzip_level 6 [] > gzip_memlevel 8 [] > gzip_stack_buffer 32768 [Bytes] > gzip_tmp_space 0 [] > gzip_window 15 [] > http_gzip_support on [bool] > http_max_hdr 64 [header lines] > http_range_support on [bool] > http_req_hdr_len 8192 [bytes] > http_req_size 32768 [bytes] > http_resp_hdr_len 8192 [bytes] > http_resp_size 32768 [bytes] > listen_address 0.0.0.0:80 > listen_depth 1024 [connections] > log_hashstring off [bool] > log_local_address off [bool] > lru_interval 2 [seconds] > max_esi_depth 5 [levels] > max_restarts 4 [restarts] > nuke_limit 50 [allocations] > ping_interval 3 [seconds] > pipe_timeout 10 [seconds] > prefer_ipv6 off [bool] > queue_max 100 [%] > rush_exponent 3 [requests per request] > saintmode_threshold 10 [objects] > send_timeout 60 [seconds] > sess_timeout 10 [seconds] > sess_workspace 16384 [bytes] > session_linger 50 [ms] > session_max 100000 [sessions] > shm_reclen 255 [bytes] > shm_workspace 8192 [bytes] > shortlived 10.000000 [s] > syslog_cli_traffic on [bool] > thread_pool_add_delay 20 [milliseconds] > thread_pool_add_threshold 2 [requests] > thread_pool_fail_delay 200 [milliseconds] > thread_pool_max 500 [threads] > thread_pool_min 100 [threads] > thread_pool_purge_delay 
1000 [milliseconds] > thread_pool_stack 16384 [bytes] > thread_pool_timeout 300 [seconds] > thread_pool_workspace 65536 [bytes] > thread_pools 1 [pools] > thread_stats_rate 10 [requests] > user nobody (65534) > vcc_err_unref on [bool] > vcl_dir /usr/local/etc/varnish > vcl_trace off [bool] > vmod_dir /usr/local/lib/varnish/vmods > waiter default (epoll, poll) > > The os is debian and I gave the virtual machine 2 gigs of ram. New description: Hi I run varnish successfully on multiple servers. I then tried to install varnish on a virtual machine for a dev environment. The vcl config is the same as what works on other machines, with only changes for backends. Upon starting varnishd we discovered that it wasn't responding to requests even though the process was up, and after some debugging it turned out the child process was crashing immediately on start. Here is the output from debug. {{{ start child (20749) Started Pushing vcls failed: CLI communication error (hdr) Stopping Child 200 0 Child (20749) died signal=11 Child (-1) said Child starts Child cleanup complete }}} Current param.show output, although these aren't all normal settings; here I raised various timeouts and reduced malloc and thread sizes to try to get it working. 
{{{ param.show 200 3094 acceptor_sleep_decay 0.900000 [] acceptor_sleep_incr 0.001000 [s] acceptor_sleep_max 0.050000 [s] auto_restart on [bool] ban_dups on [bool] ban_lurker_sleep 0.010000 [s] between_bytes_timeout 60.000000 [s] cc_command "exec gcc -std=gnu99 -pthread -fpic -shared -Wl,-x -o %o %s" cli_buffer 8192 [bytes] cli_timeout 60 [seconds] clock_skew 10 [s] connect_timeout 10.000000 [s] critbit_cooloff 180.000000 [s] default_grace 10.000000 [seconds] default_keep 0.000000 [seconds] default_ttl 14400.000000 [seconds] diag_bitmap 0x0 [bitmap] esi_syntax 0 [bitmap] expiry_sleep 1.000000 [seconds] fetch_chunksize 128 [kilobytes] fetch_maxchunksize 262144 [kilobytes] first_byte_timeout 10.000000 [s] group root (0) gzip_level 6 [] gzip_memlevel 8 [] gzip_stack_buffer 32768 [Bytes] gzip_tmp_space 0 [] gzip_window 15 [] http_gzip_support on [bool] http_max_hdr 64 [header lines] http_range_support on [bool] http_req_hdr_len 8192 [bytes] http_req_size 32768 [bytes] http_resp_hdr_len 8192 [bytes] http_resp_size 32768 [bytes] listen_address 0.0.0.0:80 listen_depth 1024 [connections] log_hashstring off [bool] log_local_address off [bool] lru_interval 2 [seconds] max_esi_depth 5 [levels] max_restarts 4 [restarts] nuke_limit 50 [allocations] ping_interval 3 [seconds] pipe_timeout 10 [seconds] prefer_ipv6 off [bool] queue_max 100 [%] rush_exponent 3 [requests per request] saintmode_threshold 10 [objects] send_timeout 60 [seconds] sess_timeout 10 [seconds] sess_workspace 16384 [bytes] session_linger 50 [ms] session_max 100000 [sessions] shm_reclen 255 [bytes] shm_workspace 8192 [bytes] shortlived 10.000000 [s] syslog_cli_traffic on [bool] thread_pool_add_delay 20 [milliseconds] thread_pool_add_threshold 2 [requests] thread_pool_fail_delay 200 [milliseconds] thread_pool_max 500 [threads] thread_pool_min 100 [threads] thread_pool_purge_delay 1000 [milliseconds] thread_pool_stack 16384 [bytes] thread_pool_timeout 300 [seconds] thread_pool_workspace 65536 [bytes] thread_pools 
1 [pools] thread_stats_rate 10 [requests] user nobody (65534) vcc_err_unref on [bool] vcl_dir /usr/local/etc/varnish vcl_trace off [bool] vmod_dir /usr/local/lib/varnish/vmods waiter default (epoll, poll) }}} The os is debian and I gave the virtual machine 2 gigs of ram. -- -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 17 10:09:30 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Sep 2012 10:09:30 -0000 Subject: [Varnish] #1195: varnishd child will not start In-Reply-To: <044.48db3ff1db939e16a6de9b07a5cee214@varnish-cache.org> References: <044.48db3ff1db939e16a6de9b07a5cee214@varnish-cache.org> Message-ID: <059.06ccddf554e10b80d4c644a3cd99df3d@varnish-cache.org> #1195: varnishd child will not start ---------------------------------------+--------------------- Reporter: chrcol | Owner: martin Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.2 Severity: critical | Resolution: Keywords: child process start crash | ---------------------------------------+--------------------- Changes (by martin): * owner: => martin -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Mon Sep 17 10:15:29 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Mon, 17 Sep 2012 10:15:29 -0000 Subject: [Varnish] #1195: varnishd child will not start In-Reply-To: <044.48db3ff1db939e16a6de9b07a5cee214@varnish-cache.org> References: <044.48db3ff1db939e16a6de9b07a5cee214@varnish-cache.org> Message-ID: <059.91ada330462bf097810cb9207dee4e60@varnish-cache.org> #1195: varnishd child will not start ---------------------------------------+--------------------- Reporter: chrcol | Owner: martin Type: defect | Status: new Priority: high | Milestone: Component: varnishd | Version: 3.0.2 Severity: critical | Resolution: Keywords: child process start crash | ---------------------------------------+--------------------- Comment (by martin): Hi, Exactly 
which version of Varnish are you using? Is it version 3.0.3? Also, the parameters suggest that you are running this on a 32-bit system. Is that correct? A backtrace of the segfault would also be most helpful in determining what is happening. Regards, Martin Blix Grydeland -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Tue Sep 18 11:14:49 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Tue, 18 Sep 2012 11:14:49 -0000 Subject: [Varnish] #1194: segfault on newer fedoras, ppc64 In-Reply-To: <044.6db4509878cd300aa5f6bf2224bb7e71@varnish-cache.org> References: <044.6db4509878cd300aa5f6bf2224bb7e71@varnish-cache.org> Message-ID: <059.36dbfdbb54d657682d4e1efea25c8644@varnish-cache.org> #1194: segfault on newer fedoras, ppc64 --------------------+--------------------- Reporter: ingvar | Owner: Type: defect | Status: closed Priority: normal | Milestone: Component: build | Version: 3.0.3 Severity: normal | Resolution: fixed Keywords: | --------------------+--------------------- Changes (by tfheen): * status: new => closed * resolution: => fixed Comment: commit 2587b1b973b1c4e019ee74da7a93fe31af5a6186 Author: Tollef Fog Heen Date: Tue Sep 18 11:21:09 2012 +0200 Mark ban list as volatile Fixes #1194 -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 19 13:47:48 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 19 Sep 2012 13:47:48 -0000 Subject: [Varnish] #1197: Source package inconsistencies in current stable release Message-ID: <059.587e862dec180f2b16ae954bc957f6f5@varnish-cache.org> #1197: Source package inconsistencies in current stable release -----------------------+----------------------- Reporter: varnish@? | Type: defect Status: new | Priority: normal Milestone: | Component: packaging Version: 3.0.3 | Severity: major Keywords: | -----------------------+----------------------- Currently we're widely using varnish-3.0.2 in our projects. 
Last week we hit bug #1120 (Assertion error in VRY_Match), so we updated some projects to varnish-3.0.3: {{{ [root at server ~]# varnishd -V varnishd (varnish-3.0.3 revision 9e6a70f) Copyright (c) 2006 Verdens Gang AS Copyright (c) 2006-2011 Varnish Software AS }}} Yesterday we hit the same bug again, so I've looked into the code (we compile from source), and what I've found is that the patchset that fixed bug #1120 seemingly did not make it into the current stable release, or that the source packages I've downloaded from repo.varnish-cache.org are not in sync. I also could not find in trac the revision that is printed in the available source packages. If you need more information, please feel free to contact me. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Wed Sep 26 09:21:43 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Wed, 26 Sep 2012 09:21:43 -0000 Subject: [Varnish] #1198: Make check takes much longer after we started to sync the shmem file Message-ID: <044.9aa3d0a3e8fb7c21731fdfe1e38a61ff@varnish-cache.org> #1198: Make check takes much longer after we started to sync the shmem file --------------------+------------------- Reporter: martin | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: build | Version: trunk Severity: normal | Keywords: --------------------+------------------- In commit cc710d56021a4d1927ceedf426cebfff48ef1860 we started to sync the shmem file to disk during startup, as a fix for kernels without coherent VM/buf (OpenBSD?). This makes a make check on my computer take almost 3 times longer. Minor issue, but still a bit annoying to spend the extra time on systems that don't have the VM issues. 
{{{ With the msync: $ time make -j8 check real 2m56.515s user 0m26.498s sys 0m26.870s Without the msync: $ time make -j8 check real 1m10.838s user 0m22.737s sys 0m24.970s }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Sep 28 09:30:32 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 28 Sep 2012 09:30:32 -0000 Subject: [Varnish] #1177: Child will not start on Solaris died signal=10 (core dumped) In-Reply-To: <044.047b8252b083fc44811d5aba045edcbd@varnish-cache.org> References: <044.047b8252b083fc44811d5aba045edcbd@varnish-cache.org> Message-ID: <059.812365957d6b5ad3331390d3652542fe@varnish-cache.org> #1177: Child will not start on Solaris died signal=10 (core dumped) --------------------------+-------------------- Reporter: karnak | Owner: slink Type: defect | Status: new Priority: normal | Milestone: Component: port:solaris | Version: 3.0.2 Severity: normal | Resolution: Keywords: Solaris | --------------------------+-------------------- Comment (by hieubkav): Replying to [comment:1 phk]: Does anybody have a workaround for this issue? It also happens with Varnish 3.0.3 on Solaris 10 SPARC. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Sep 28 09:50:58 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 28 Sep 2012 09:50:58 -0000 Subject: [Varnish] #1177: Child will not start on Solaris died signal=10 (core dumped) In-Reply-To: <044.047b8252b083fc44811d5aba045edcbd@varnish-cache.org> References: <044.047b8252b083fc44811d5aba045edcbd@varnish-cache.org> Message-ID: <059.517f701ad636c3e6051d74d06ebc3430@varnish-cache.org> #1177: Child will not start on Solaris died signal=10 (core dumped) --------------------------+-------------------- Reporter: karnak | Owner: slink Type: defect | Status: new Priority: normal | Milestone: Component: port:solaris | Version: 3.0.2 Severity: normal | Resolution: Keywords: Solaris | 
--------------------------+-------------------- Comment (by slink): This issue is most probably not about Solaris, but about sparc. Last time I had a look at Varnish on Sparc, I would get bus errors all over the place, because sparc has strict memory alignment rules, but Varnish hasn't. A major step towards supporting sparc was that WS allocations have been pointer aligned for some time now, but there probably are many other places which would need attention. So the brief answer is: To support sparc or other platforms with strict memory alignment requirements, we'd need to check all dynamic memory allocations for alignment requirements. Nils -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Sep 28 09:56:55 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 28 Sep 2012 09:56:55 -0000 Subject: [Varnish] #1177: Child will not start on Solaris died signal=10 (core dumped) In-Reply-To: <044.047b8252b083fc44811d5aba045edcbd@varnish-cache.org> References: <044.047b8252b083fc44811d5aba045edcbd@varnish-cache.org> Message-ID: <059.98b57650a8c29371fb614d15e41a8971@varnish-cache.org> #1177: Child will not start on Solaris died signal=10 (core dumped) --------------------------+-------------------- Reporter: karnak | Owner: slink Type: defect | Status: new Priority: normal | Milestone: Component: port:solaris | Version: 3.0.2 Severity: normal | Resolution: Keywords: Solaris | --------------------------+-------------------- Comment (by hieubkav): Thanks for your quick reply. How long will it take to investigate this? If you need more debugging information, I will get it and coordinate with you to fix it! 
Sorry for my bad English. Hieu -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Fri Sep 28 23:48:57 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Fri, 28 Sep 2012 23:48:57 -0000 Subject: [Varnish] #1199: c00045.vtc may fail intermittently depending on compiler and optimization Message-ID: <040.142f68a1a38937f0fa2e389fb4c02db6@varnish-cache.org> #1199: c00045.vtc may fail intermittently depending on compiler and optimization -------------------+------------------------- Reporter: mi | Type: defect Status: new | Priority: normal Milestone: | Component: varnishtest Version: 3.0.3 | Severity: minor Keywords: | -------------------+------------------------- Compiling with gcc-4.4.4 on RedHat here (64bit) I sometimes observe test c00045.vtc failing -- it expects n_lru_nuked to be 1, but the actual value ''sometimes'' is zero. Simply rerunning `make check' may pass the test. We'd like to incorporate `make check' into the build process for the RPMs, but that's difficult when some of the test failures are bogus... -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Sep 29 00:06:14 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 29 Sep 2012 00:06:14 -0000 Subject: [Varnish] #1200: On systems with INET6 disabled, c00005.vtc test fails Message-ID: <040.e3d9367dba5ffa1540668a97afc0b35e@varnish-cache.org> #1200: On systems with INET6 disabled, c00005.vtc test fails -------------------+------------------------- Reporter: mi | Type: defect Status: new | Priority: normal Milestone: | Component: varnishtest Version: 3.0.3 | Severity: normal Keywords: | -------------------+------------------------- The below patch fixes the problem for me. Perhaps VCL has some way of checking whether INET6 is enabled -- then the patch would not be necessary and an additional if could be used instead. 
{{{ --- bin/varnishtest/tests/c00005.vtc 2012-08-20 05:20:39.000000000 -0400 +++ bin/varnishtest/tests/c00005.vtc 2012-09-26 12:09:59.000000000 -0400 @@ -32,4 +32,3 @@ ! "localhost"; "0.0.0.0" / 0; - "::" / 0; } }}} -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Sep 29 00:06:43 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 29 Sep 2012 00:06:43 -0000 Subject: [Varnish] #1201: Test r01109.vtc reliably fails on FreeBSD (32bit) Message-ID: <040.82b06dc48a52240ce65050f7e25e7021@varnish-cache.org> #1201: Test r01109.vtc reliably fails on FreeBSD (32bit) -------------------+------------------------- Reporter: mi | Type: defect Status: new | Priority: normal Milestone: | Component: varnishtest Version: 3.0.3 | Severity: normal Keywords: | -------------------+------------------------- varnishd, as built by the www/varnish port, crashes on SIGFAULT, when running the r01109.vtc test. Attaching full log. -- Ticket URL: Varnish The Varnish HTTP Accelerator From varnish-bugs at varnish-cache.org Sat Sep 29 17:56:08 2012 From: varnish-bugs at varnish-cache.org (Varnish) Date: Sat, 29 Sep 2012 17:56:08 -0000 Subject: [Varnish] #1200: On systems with INET6 disabled, c00005.vtc test fails In-Reply-To: <040.e3d9367dba5ffa1540668a97afc0b35e@varnish-cache.org> References: <040.e3d9367dba5ffa1540668a97afc0b35e@varnish-cache.org> Message-ID: <055.7af4410d454570de8cb201a223364358@varnish-cache.org> #1200: On systems with INET6 disabled, c00005.vtc test fails -------------------------+-------------------- Reporter: mi | Owner: Type: defect | Status: new Priority: normal | Milestone: Component: varnishtest | Version: 3.0.3 Severity: normal | Resolution: Keywords: | -------------------------+-------------------- Comment (by bz): Doesn't the reverse mean that my systems without IPv4 support will fail the test (as well, now)? -- Ticket URL: Varnish The Varnish HTTP Accelerator